
TypeORM with NestJS

December 1, 2022

This post covers TypeORM examples with the NestJS framework, from setting up the connection with the Postgres database to working with custom repositories. The following snippets can be adjusted and reused with other frameworks, like Express, and with other SQL databases.

Prerequisites

  • NestJS app bootstrapped
  • Postgres database running
  • @nestjs/typeorm, typeorm and pg packages installed

Database connection

The database connection requires initializing a DataSource with the appropriate configuration.

// app.module.ts
// adjust the import paths to the project layout
import { Module } from '@nestjs/common';
import { ConfigModule, ConfigService } from '@nestjs/config';
import { TypeOrmModule } from '@nestjs/typeorm';
import { DataSource } from 'typeorm';
import databaseConfig from './config/database';

const typeOrmConfig = {
  imports: [
    ConfigModule.forRoot({
      load: [databaseConfig]
    })
  ],
  inject: [ConfigService],
  useFactory: async (configService: ConfigService) =>
    configService.get('database'),
  dataSourceFactory: async (options) => new DataSource(options).initialize()
};

@Module({
  imports: [TypeOrmModule.forRootAsync(typeOrmConfig)]
})
export class AppModule {}

The DataSource configuration contains the connection details, migration settings, etc.

// config/database.ts
import path from 'path';
import { registerAs } from '@nestjs/config';
import { PostgresConnectionOptions } from 'typeorm/driver/postgres/PostgresConnectionOptions';

export default registerAs(
  'database',
  (): PostgresConnectionOptions =>
    ({
      logging: false,
      entities: [path.resolve(`${__dirname}/../../**/**.entity{.ts,.js}`)],
      migrations: [
        path.resolve(`${__dirname}/../../../database/migrations/*{.ts,.js}`)
      ],
      migrationsRun: true,
      migrationsTableName: 'migrations',
      keepConnectionAlive: true,
      synchronize: false,
      type: 'postgres',
      host: process.env.DATABASE_HOSTNAME,
      port: Number(process.env.DATABASE_PORT),
      username: process.env.DATABASE_USERNAME,
      password: process.env.DATABASE_PASSWORD,
      database: process.env.DATABASE_NAME
    } as PostgresConnectionOptions)
);

Migrations and seeders

Migrations are handled with the following scripts for generation, running, and reverting.

// package.json
{
  "scripts": {
    "migration:generate": "npm run typeorm -- migration:create",
    "migrate": "npm run typeorm -- migration:run -d src/common/config/ormconfig-migration.ts",
    "migrate:down": "npm run typeorm -- migration:revert -d src/common/config/ormconfig-migration.ts",
    "typeorm": "ts-node -r tsconfig-paths/register ./node_modules/typeorm/cli.js"
  }
}

A new migration is generated at the provided path with the following command. Its filename follows the format <TIMESTAMP>-<MIGRATION_NAME>.ts.

npm run migration:generate database/migrations/<MIGRATION_NAME>

Here is an example of a migration that creates a new table; the table is dropped when the migration is reverted.

// database/migrations/1669833880587-create-users.ts
import { MigrationInterface, QueryRunner, Table } from 'typeorm';

export class CreateUsers1669833880587 implements MigrationInterface {
  public async up(queryRunner: QueryRunner): Promise<void> {
    await queryRunner.createTable(
      new Table({
        name: 'users',
        columns: [
          {
            name: 'id',
            type: 'uuid',
            default: 'uuid_generate_v4()',
            generationStrategy: 'uuid',
            isGenerated: true,
            isPrimary: true
          },
          {
            name: 'first_name',
            type: 'varchar'
          }
        ]
      })
    );
  }

  public async down(queryRunner: QueryRunner): Promise<void> {
    await queryRunner.dropTable('users');
  }
}

Scripts for running and reverting the migrations require a separate DataSource configuration; the migrations table name is migrations in this case. Running a migration adds a new row with the migration name, while reverting removes it.

// config/ormconfig-migration.ts
import 'dotenv/config';
import * as path from 'path';
import { DataSource } from 'typeorm';

const config = new DataSource({
  type: 'postgres',
  host: process.env.DATABASE_HOSTNAME,
  port: Number(process.env.DATABASE_PORT),
  username: process.env.DATABASE_USERNAME,
  password: process.env.DATABASE_PASSWORD,
  database: process.env.DATABASE_NAME,
  entities: [path.resolve(`${__dirname}/../../**/**.entity{.ts,.js}`)],
  migrations: [
    path.resolve(`${__dirname}/../../../database/migrations/*{.ts,.js}`)
  ],
  migrationsTableName: 'migrations',
  logging: true,
  synchronize: false
});

export default config;

A seeder is a special type of migration; seeders are handled with the following scripts for generation, running, and reverting.

// package.json
{
  "scripts": {
    "seed:generate": "npm run typeorm -- migration:create",
    "seed": "npm run typeorm -- migration:run -d src/common/config/ormconfig-seeder.ts",
    "seed:down": "npm run typeorm -- migration:revert -d src/common/config/ormconfig-seeder.ts"
  }
}

A new seeder is generated at the provided path with the following command. Its filename follows the format <TIMESTAMP>-<SEEDER_NAME>.ts.

npm run seed:generate database/seeders/<SEEDER_NAME>

Here is an example of a seeder that inserts some data; the inserted data is removed when the seeder is reverted.

// database/seeders/1669834539569-add-users.ts
import { UsersEntity } from '../../src/modules/users/users.entity';
import { MigrationInterface, QueryRunner } from 'typeorm';

export class AddUsers1669834539569 implements MigrationInterface {
  public async up(queryRunner: QueryRunner): Promise<void> {
    await queryRunner.manager.insert(UsersEntity, [
      {
        firstName: 'tester'
      }
    ]);
  }

  public async down(queryRunner: QueryRunner): Promise<void> {
    await queryRunner.manager.clear(UsersEntity);
  }
}

Scripts for running and reverting the seeders require a separate DataSource configuration; the seeders table name is seeders in this case. Running a seeder adds a new row with the seeder name, while reverting removes it.

// config/ormconfig-seeder.ts
import 'dotenv/config';
import * as path from 'path';
import { DataSource } from 'typeorm';

const config = new DataSource({
  type: 'postgres',
  host: process.env.DATABASE_HOSTNAME,
  port: Number(process.env.DATABASE_PORT),
  username: process.env.DATABASE_USERNAME,
  password: process.env.DATABASE_PASSWORD,
  database: process.env.DATABASE_NAME,
  entities: [path.resolve(`${__dirname}/../../**/**.entity{.ts,.js}`)],
  migrations: [
    path.resolve(`${__dirname}/../../../database/seeders/*{.ts,.js}`)
  ],
  migrationsTableName: 'seeders',
  logging: true,
  synchronize: false
});

export default config;

Entities

Entities are specified with their columns and the Entity decorator.

// users.entity.ts
import { Column, Entity, PrimaryGeneratedColumn } from 'typeorm';

@Entity({ name: 'users' })
export class UsersEntity {
  @PrimaryGeneratedColumn('uuid')
  public id: string;

  @Column({ name: 'first_name' })
  public firstName: string;
}

Custom repositories

Custom repositories extend the base repository class and enrich it with several additional methods.

// users.repository.ts
import { Injectable } from '@nestjs/common';
import { DataSource, Repository } from 'typeorm';
import { UsersEntity } from './users.entity';

@Injectable()
export class UsersRepository extends Repository<UsersEntity> {
  constructor(private dataSource: DataSource) {
    super(UsersEntity, dataSource.createEntityManager());
  }

  async getById(id: string) {
    return this.findOne({ where: { id } });
  }
  // ...
}

Unit-testing custom repositories

More details are covered in the Testing custom repositories (NestJS/TypeORM) post.

Boilerplate

Here is the link to the boilerplate I use for development; it contains all of the examples mentioned above with more details.

Gatsby blog as PWA (Progressive Web App)

November 26, 2022

Starting with some of the benefits: installed PWAs can bring more user engagement and conversions, and on the user side they bring the possibility to read posts offline. More details about PWAs can be found in the Progressive Web App 101 post.

Prerequisites

  • bootstrapped Gatsby blog
  • installed manifest (gatsby-plugin-manifest) and offline (gatsby-plugin-offline) plugins

Setup

Add the plugins configuration to the Gatsby configuration file; the manifest plugin should be loaded before the offline plugin.

Prepare an app icon at 512x512 pixels; the manifest plugin will generate the icons in all of the necessary dimensions. PWA usage can be logged with the UTM link in the start_url property.

Runtime caching for static resources (JavaScript, CSS, and page data JSON files) is set to network-first caching, so the latest changes are retrieved before being shown to the user. In case of caching issues in a local environment, the offline plugin can be disabled.

// gatsby-config.js
const plugins = [
  // ...
  {
    resolve: `gatsby-plugin-manifest`,
    options: {
      name: `app name`,
      short_name: `app name`,
      start_url: `/?utm_source=pwa&utm_medium=pwa&utm_campaign=pwa`,
      background_color: `#FFF`,
      theme_color: `#2F3C7E`,
      display: `standalone`,
      icon: `src/assets/icon.png`
    }
  }
];

if (process.env.NODE_ENV !== 'development') {
  plugins.push({
    resolve: `gatsby-plugin-offline`,
    options: {
      workboxConfig: {
        runtimeCaching: [
          {
            urlPattern: /(\.js$|\.css$|static\/)/,
            handler: `NetworkFirst`
          },
          {
            urlPattern: /^https?:.*\/page-data\/.*\.json/,
            handler: `NetworkFirst`
          },
          {
            urlPattern: /^https?:.*\.(png|jpg|jpeg|webp|svg|gif|tiff|js|woff|woff2|json|css)$/,
            handler: `StaleWhileRevalidate`
          },
          {
            urlPattern: /^https?:\/\/fonts\.googleapis\.com\/css/,
            handler: `StaleWhileRevalidate`
          }
        ]
      }
    }
  });
}

module.exports = {
  // ...
  plugins
};

Service worker updates can also be detected, up to 10 minutes after the deployment. For a better user experience, the user should approve refreshing the page before it is updated to the latest version.

// gatsby-browser.js
exports.onServiceWorkerUpdateReady = () => {
  const shouldReload = window.confirm(
    'This website has been updated. Reload to display the latest version?'
  );
  if (shouldReload) {
    window.location.href = window.location.href.replace(/#.*$/, '');
  }
};

exports.onRouteUpdate = () => {
  navigator.serviceWorker
    .register('/sw.js')
    .then((registration) => registration.update())
    .catch(console.error);
};

Documenting JavaScript code with JSDoc

November 24, 2022

JSDoc enables adding types to a JavaScript codebase using conventions inside comments, so IDEs like Visual Studio Code can recognize the defined types, display them, and make coding easier with auto-completion. Definitions are put inside /** */ comments.

Examples

Custom types can be defined with the @typedef and @property tags. Every property has a type; if a property is optional, its name is put between square brackets, and a description can be included after the property name. Global types should be defined in *.jsdoc.js files so they can be used in multiple files without importing. * represents any type.

/**
 * @typedef {object} CollectionItem
 * @property {string} [collectionName] - collection name is optional string field
 * @property {boolean} isRevealed - reveal status
 * @property {number} floorPrice - floor price
 * @property {?string} username - username is a nullable field
 * @property {Array.<number>} prices - prices array
 * @property {Array.<string>} [buyers] - optional buyers array
 * @property {Array.<Object<string, *>>} data - some data
 */

Classes are automatically recognized, so the @class and @constructor tags can be omitted.

/**
 * Scraper for websites
 */
class Scraper {
  /**
   * Create scraper
   * @param {string} url - website's URL
   */
  constructor(url) {
    this.url = url;
  }
  // ...
}

Comments starting with the description can omit the @description tag. Function parameters and return types can be defined with the @param and @returns tags. Multiple return types can be handled with the | operator. Deprecated parts of the codebase can be annotated with the @deprecated tag.

/**
 * Gets prices list
 * @private
 * @param {Array.<number>} prices - prices array
 * @returns {string|undefined}
 */
const getPricesList = (prices) => {
  if (prices.length > 0) return prices.join(',');
};

/**
 * Get data from the API
 * @deprecated
 * @returns {Promise<CollectionItem>}
 */
const getData = async () => {
  // ...
};

Variable types can be documented with the @type tag, and constants can utilize the @const tag.

/**
 * Counter for the requests
 * @type {number}
 */
let counter;

/**
 * HTTP timeout in milliseconds
 * @const {number}
 */
const HTTP_TIMEOUT_MS = 3000;

Enums can be documented with @enum and @readonly tags.

/**
 * Some states
 * @readonly
 * @enum {string}
 */
const state = {
  STARTED: 'STARTED',
  IN_PROGRESS: 'IN_PROGRESS',
  FINISHED: 'FINISHED',
};

Docs validation

A linter can validate the docs. Add the following package and update the linter configuration file.

npm i -D eslint-plugin-jsdoc

// .eslintrc.js
module.exports = {
  extends: ['plugin:jsdoc/recommended'],
};

Run the linter and it will show warnings if something has to be improved.

Generating the docs overview

Run the following command to recursively generate the HTML files with the docs overview, including the README.md and package.json content. Symbols marked with @private tags will be skipped.

npx jsdoc src -r --destination docs --readme ./README.md --package ./package.json

This command can be included in the CI/CD pipeline, depending on the needs of the project.

Timeout with Fetch API

November 2, 2022

Setting up a timeout for HTTP requests can prevent the connection from hanging forever while waiting for the response. It can be set on the client side to improve the user experience, and on the server side to improve inter-service communication. The Fetch API is also fully available in Node starting from version 18.

AbortController can be utilized to set up timeouts. An instantiated abort controller has a signal property that references its associated AbortSignal object. The abort signal object is used as the signal parameter in a Fetch API request, so the HTTP request is aborted when the abort method is called.

const HTTP_TIMEOUT = 3000;
const URL = 'https://www.google.com:81';

(async () => {
  const controller = new AbortController();
  const timeoutId = setTimeout(() => controller.abort(), HTTP_TIMEOUT);
  try {
    const response = await fetch(URL, {
      signal: controller.signal
    }).then((res) => res.json());
    console.log(response);
  } catch (error) {
    console.error(error);
  } finally {
    clearTimeout(timeoutId);
  }
})();
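Newer runtimes (Node 17.3+ and modern browsers) also provide AbortSignal.timeout, which removes the manual setTimeout/clearTimeout bookkeeping. A minimal sketch, using a hypothetical slowOperation stand-in for the network call so it runs without one:

```javascript
// Sketch: AbortSignal.timeout() creates a signal that aborts automatically
// after the given number of milliseconds (Node 17.3+ / modern browsers).
// `slowOperation` is a hypothetical stand-in for a fetch call.
const slowOperation = (delayMs, signal) =>
  new Promise((resolve, reject) => {
    const timer = setTimeout(() => resolve('done'), delayMs);
    signal.addEventListener('abort', () => {
      clearTimeout(timer);
      reject(signal.reason); // a DOMException named 'TimeoutError'
    });
  });

(async () => {
  // Finishes before the timeout fires
  console.log(await slowOperation(10, AbortSignal.timeout(1000))); // 'done'
  try {
    // Timeout fires first, so the operation is aborted
    await slowOperation(1000, AbortSignal.timeout(10));
  } catch (error) {
    console.log(error.name); // 'TimeoutError'
  }
})();
```

With fetch, the signal is passed the same way as with a manual controller: fetch(URL, { signal: AbortSignal.timeout(HTTP_TIMEOUT) }).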

Splash screen with React Native

October 30, 2022

A splash screen is the first thing the user sees after opening the app; it usually shows an app logo with optional animations. This post covers the setup for Android devices; the iOS setup will be covered separately.

Prerequisites

  • installed Android Studio

  • bootstrapped app

  • react-native-splash-screen package installed

npx react-native init SplashScreenApp
cd SplashScreenApp
npm i react-native-splash-screen

JavaScript setup

Hide the splash screen when the app is ready

// App.js
// ...
useEffect(() => SplashScreen.hide(), []);
// ...

Native setup

Extending the MainActivity class with the onCreate method

// android/app/src/main/java/com/splashscreenapp/MainActivity.java
import org.devio.rn.splashscreen.SplashScreen;
import android.os.Bundle;
// ...

@Override
protected void onCreate(Bundle savedInstanceState) {
    SplashScreen.show(this, R.style.SplashTheme, true);
    super.onCreate(savedInstanceState);
}
// ...

Adding the splash screen activity with its theme to the Android manifest

file: android/app/src/main/AndroidManifest.xml

<manifest xmlns:android="http://schemas.android.com/apk/res/android"
  package="com.splashscreenapp">

  <uses-permission android:name="android.permission.INTERNET" />

  <application
    android:name=".MainApplication"
    android:label="@string/app_name"
    android:icon="@mipmap/ic_launcher"
    android:roundIcon="@mipmap/ic_launcher_round"
    android:allowBackup="false"
    android:theme="@style/AppTheme">
    <activity
      android:name=".SplashActivity"
      android:theme="@style/SplashTheme"
      android:label="@string/app_name"
      android:exported="true">
      <intent-filter>
        <action android:name="android.intent.action.MAIN" />
        <category android:name="android.intent.category.LAUNCHER" />
      </intent-filter>
    </activity>
    <activity
      android:name=".MainActivity"
      android:label="@string/app_name"
      android:configChanges="keyboard|keyboardHidden|orientation|screenSize"
      android:windowSoftInputMode="adjustResize"
      android:exported="true"/>
    <activity android:name="com.facebook.react.devsupport.DevSettingsActivity" />
  </application>
</manifest>

Implementing SplashActivity class

// android/app/src/main/java/SplashActivity.java
package com.splashscreenapp;

import android.content.Intent;
import android.os.Bundle;
import androidx.appcompat.app.AppCompatActivity;

public class SplashActivity extends AppCompatActivity {
    @Override
    protected void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        Intent intent = new Intent(this, MainActivity.class);
        startActivity(intent);
        finish();
    }
}

Themes setup

file: android/app/src/main/res/values/styles.xml

<resources>
  <style name="AppTheme" parent="Theme.AppCompat.Light.NoActionBar">
    <item name="android:statusBarColor">@color/theme_bg</item>
    <item name="android:windowBackground">@drawable/background_splash</item>
  </style>
  <style name="SplashTheme" parent="Theme.AppCompat.Light.NoActionBar">
    <item name="android:background">@drawable/background_splash</item>
    <item name="android:windowDisablePreview">true</item>
    <item name="colorPrimaryDark">@color/theme_bg</item>
  </style>
</resources>

Colors setup

file: android/app/src/main/res/values/colors.xml

<?xml version="1.0" encoding="utf-8"?>
<resources>
  <color name="theme_bg">#2F3C7E</color>
</resources>

Adding config for background splash drawable

file: android/app/src/main/res/drawable/background_splash.xml

<?xml version="1.0" encoding="utf-8"?>
<layer-list xmlns:android="http://schemas.android.com/apk/res/android" android:opacity="opaque">
  <item android:drawable="@color/theme_bg"/>
  <item
    android:width="300dp"
    android:height="300dp"
    android:drawable="@mipmap/splash_icon"
    android:gravity="center" />
</layer-list>

Adding layout

file: android/app/src/main/res/layout/launch_screen.xml

<?xml version="1.0" encoding="utf-8"?>
<LinearLayout xmlns:android="http://schemas.android.com/apk/res/android"
  android:layout_width="match_parent"
  android:layout_height="match_parent"
  android:background="@drawable/background_splash"
  android:orientation="vertical">
</LinearLayout>

Images for splash screen

App Icon Generator can be used for generating the images. Open the Image Sets tab, choose the 4X design base size, insert an image, and generate the corresponding images for the splash screen. The downloaded images should be stored in the mipmap folders (android/app/src/main/res/mipmap-*hdpi/splash_icon.png).

Demo

Run the following commands

npx react-native start
npx react-native run-android

Push notifications with Firebase

May 29, 2022

Push notifications are a great alternative to email notifications: there is no need for a verification step, the UX is improved, and user engagement with the app is increased.

Requirements for the push notifications

  • Created Firebase project
  • Project ID, which can be found on the Project settings → General tab
  • Server key for sending the push notifications (used on the back-end)
  • Public Vapid key, which can be found under Project settings → Cloud Messaging → Web Push certificates (used on the front-end)
  • Firebase configuration, which can be found under Project settings → General → Your apps
  • Firebase messaging service worker
  • HTTPS connection (localhost for local development)
  • firebase package installed

Helper functions

getToken

  • generates a unique token for the browser or gets the already generated token
  • requests permission for receiving push notifications
  • triggers the Firebase messaging service worker

If the user blocks push notifications, a FirebaseError with code messaging/permission-blocked is thrown. If the user's browser doesn't support the APIs required by the Firebase SDK, a FirebaseError with code messaging/unsupported-browser is thrown. The access token is invalidated when a user manually blocks the notifications in the browser settings.
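The error codes above can be mapped to app-level states when calling getToken. A minimal sketch of such a mapping (the handleTokenError helper and its return values are hypothetical, not part of the Firebase SDK; only the error codes come from Firebase):

```javascript
// Hypothetical mapping of FirebaseError codes thrown by getToken
// to app-level states; only the `error.code` values are Firebase's.
const handleTokenError = (error) => {
  switch (error.code) {
    case 'messaging/permission-blocked':
      // The user has blocked push notifications
      return 'PERMISSION_BLOCKED';
    case 'messaging/unsupported-browser':
      // The browser lacks the APIs required by the Firebase SDK
      return 'UNSUPPORTED_BROWSER';
    default:
      return 'UNKNOWN_ERROR';
  }
};
```

In the app, this would wrap the getToken call in a try/catch and surface the returned state in the UI.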

isSupported

  • checks if all required APIs for push notifications are supported
  • returns Promise<boolean>

It should be used in useEffect hooks.

import { isSupported } from 'firebase/messaging';
// ...
useEffect(() => {
  isSupported()
    .then((isAvailable) => {
      if (isAvailable) {
        // ...
      }
    })
    .catch(console.error);
}, []);
// ...

initializeApp

  • should be called before the app starts

import { initializeApp } from 'firebase/app';
import { getMessaging, getToken } from 'firebase/messaging';
import { firebaseConfig } from 'constants/config';

export const initializeFirebase = () => initializeApp(firebaseConfig);

export const getTokenForPushNotifications = async () => {
  const messaging = getMessaging();
  const token = await getToken(messaging, {
    vapidKey: process.env.NEXT_PUBLIC_VAPID_KEY,
  });
  return token;
};

Firebase messaging service worker

The following service worker should be registered to handle background notifications. A custom notificationclick handler should be implemented before importing the firebase libraries; the implementation below opens a new window with the defined URL if it is not already open. Firebase automatically checks for a service worker at /firebase-messaging-sw.js, so it should be publicly available.

// /firebase-messaging-sw.js
/* eslint-disable no-unused-vars */
self.addEventListener("notificationclick", (event) => {
  event.notification.close();
  const DEFAULT_URL = "<URL>";
  const url =
    event.notification?.data?.FCM_MSG?.notification?.click_action ||
    DEFAULT_URL;
  event.waitUntil(
    clients.matchAll({ type: "window" }).then((clientsArray) => {
      const hadWindowToFocus = clientsArray.some((windowClient) =>
        windowClient.url === url ? (windowClient.focus(), true) : false
      );
      if (!hadWindowToFocus)
        clients
          .openWindow(url)
          .then((windowClient) => (windowClient ? windowClient.focus() : null));
    })
  );
});

let messaging = null;
try {
  if (typeof importScripts === "function") {
    importScripts("https://www.gstatic.com/firebasejs/8.10.0/firebase-app.js");
    importScripts(
      "https://www.gstatic.com/firebasejs/8.10.0/firebase-messaging.js"
    );
    firebase.initializeApp({
      apiKey: "xxxxxx",
      authDomain: "xxxxxx",
      projectId: "xxxxxx",
      storageBucket: "xxxxxx",
      messagingSenderId: "xxxxxx",
      appId: "xxxxxx",
      measurementId: "xxxxxx",
    });
    messaging = firebase.messaging();
  }
} catch (error) {
  console.error(error);
}

Server keys

The server key for API v1 can be derived from the service account key JSON file; in that case, the JSON file should be encoded and stored in an environment variable to prevent exposing credentials in the repository codebase. The service account key JSON file can be downloaded by clicking Generate new private key on the Project settings → Service accounts tab. The server key for the legacy API can be found under Project settings → Cloud Messaging → Cloud Messaging API (Legacy), if it is enabled.

import * as serviceAccountKey from './serviceAccountKey.json';

const encodedServiceAccountKey = Buffer.from(
  JSON.stringify(serviceAccountKey),
).toString('base64');
process.env.SERVICE_ACCOUNT_KEY = encodedServiceAccountKey;

import 'dotenv/config';
import * as googleAuth from 'google-auth-library';

(async () => {
  const serviceAccountKeyEncoded = process.env.SERVICE_ACCOUNT_KEY;
  const serviceAccountKeyDecoded = JSON.parse(
    Buffer.from(serviceAccountKeyEncoded, 'base64').toString('ascii'),
  );
  const jwt = new googleAuth.JWT(
    serviceAccountKeyDecoded.client_email,
    null,
    serviceAccountKeyDecoded.private_key,
    ['https://www.googleapis.com/auth/firebase.messaging'],
    null,
  );
  const tokens = await jwt.authorize();
  const authorizationHeader = `Bearer ${tokens.access_token}`;
  console.log(authorizationHeader);
})();

Manually sending the push notification

The icon URL should be served over HTTPS so the icon can be properly shown in the notification.

  • legacy

curl --location --request POST 'https://fcm.googleapis.com/fcm/send' \
--header 'Authorization: key=<SERVER_KEY>' \
--header 'Content-Type: application/json' \
--data-raw '{
  "notification": {
    "title": "Push notifications with Firebase",
    "body": "Push notifications with Firebase body",
    "click_action": "http://localhost:3000",
    "icon": "https://picsum.photos/200"
  },
  "to": "<TOKEN>"
}'

The response contains a success key with value 1 when the push notification is successfully sent. When sending fails, the response contains a failure key with value 1; in this case, the results key is an array of error objects, with error names such as InvalidRegistration and NotRegistered.
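The described response shape can be checked programmatically after sending. A sketch (the parseLegacyResponse helper is hypothetical; the success, failure, and results keys come from the legacy API response):

```javascript
// Hypothetical helper summarizing a legacy FCM send response:
// the `success`/`failure` counters and `results` array come from the API.
const parseLegacyResponse = (response) => {
  if (response.success === 1) return { ok: true };
  const errors = (response.results || [])
    .filter((result) => result.error)
    .map((result) => result.error);
  return { ok: false, errors }; // e.g. ['NotRegistered']
};
```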

  • API v1

curl --location --request POST 'https://fcm.googleapis.com/v1/projects/<PROJECT_ID>/messages:send' \
--header 'Authorization: Bearer <TOKEN_DERIVED_FROM_SERVICE_ACCOUNT_KEY>' \
--header 'Content-Type: application/json' \
--data-raw '{
  "message": {
    "notification": {
      "title": "Push notifications with Firebase",
      "body": "Push notifications with Firebase body"
    },
    "webpush": {
      "fcmOptions": {
        "link": "http://localhost:3000"
      },
      "notification": {
        "icon": "https://picsum.photos/200"
      }
    },
    "token": "<TOKEN>"
  }
}'

A successful response returns JSON with a name key, which represents the notification ID in the format projects/{project_id}/messages/{message_id}. An error with code 400 is thrown when the request body is not valid, and an error with code 401 when the derived token has expired.

State management with Next.js and React

May 15, 2022

Global state can be useful when components share some common data. Some parts of the state can also be persisted (in local storage) and reused in the user's next session. React provides a native way to handle state management using context with hooks.

Usage

// ...
import { useAppContext } from "context";
import { UPDATE_FEATURE_ACTIVATION } from "context/constants";

export function CustomComponent() {
  const { state, dispatch } = useAppContext();
  // get value from the store
  console.log(state.isFeatureActivated);
  // dispatch action to change the state
  dispatch({
    type: UPDATE_FEATURE_ACTIVATION,
    payload: { isFeatureActivated: true },
  });
  // ...
}

Context setup

// context/index.jsx
import PropTypes from "prop-types";
import React, {
  createContext,
  useContext,
  useEffect,
  useMemo,
  useReducer,
} from "react";
import { getItem, setItem, STATE_KEY } from "utils/local-storage";
import { INITIALIZE_STORE } from "./constants";
import { appReducer, initialState } from "./reducer";

const appContext = createContext(initialState);

export function AppWrapper({ children }) {
  const [state, dispatch] = useReducer(appReducer, initialState);
  const contextValue = useMemo(() => {
    return { state, dispatch };
  }, [state, dispatch]);
  useEffect(() => {
    const stateItem = getItem(STATE_KEY);
    if (!stateItem) return;
    const parsedState = JSON.parse(stateItem);
    const updatedState = {
      ...initialState,
      // persistent state
      isFeatureActivated: parsedState.isFeatureActivated,
    };
    dispatch({
      type: INITIALIZE_STORE,
      payload: updatedState,
    });
  }, []);
  useEffect(() => {
    if (state !== initialState) {
      setItem(STATE_KEY, JSON.stringify(state));
    }
  }, [state]);
  return (
    <appContext.Provider value={contextValue}>{children}</appContext.Provider>
  );
}

AppWrapper.propTypes = {
  children: PropTypes.oneOfType([PropTypes.array, PropTypes.object]).isRequired,
};

export function useAppContext() {
  return useContext(appContext);
}

Reducer with actions

// context/reducer.js
import { INITIALIZE_STORE, UPDATE_FEATURE_ACTIVATION } from "./constants";

export const initialState = {
  isFeatureActivated: false,
};

export const appReducer = (state, action) => {
  switch (action.type) {
    case INITIALIZE_STORE: {
      return action.payload;
    }
    case UPDATE_FEATURE_ACTIVATION: {
      return {
        ...state,
        isFeatureActivated: action.payload.isFeatureActivated,
      };
    }
    default:
      return state;
  }
};
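Since the reducer is a pure function, its behavior is easy to verify in isolation. A self-contained sketch (the reducer and constants are re-declared locally instead of being imported from context/reducer.js):

```javascript
// Self-contained copy of the reducer for demonstration purposes
const INITIALIZE_STORE = "INITIALIZE_STORE";
const UPDATE_FEATURE_ACTIVATION = "UPDATE_FEATURE_ACTIVATION";
const initialState = { isFeatureActivated: false };

const appReducer = (state, action) => {
  switch (action.type) {
    case INITIALIZE_STORE:
      return action.payload;
    case UPDATE_FEATURE_ACTIVATION:
      return { ...state, isFeatureActivated: action.payload.isFeatureActivated };
    default:
      return state;
  }
};

// Dispatching UPDATE_FEATURE_ACTIVATION returns a new state object
const next = appReducer(initialState, {
  type: UPDATE_FEATURE_ACTIVATION,
  payload: { isFeatureActivated: true },
});
console.log(next.isFeatureActivated); // true
console.log(initialState.isFeatureActivated); // false, the original state is untouched
```

Unknown action types fall through to the default branch and return the existing state unchanged, which is what React expects from a reducer.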

Wrapper around the app

// _app.jsx
import { AppWrapper } from "context";

function App({ Component, pageProps }) {
  // ...
  return (
    <AppWrapper>
      <Component {...pageProps} />
    </AppWrapper>
  );
}

Constants

// context/constants.js
export const INITIALIZE_STORE = "INITIALIZE_STORE";
export const UPDATE_FEATURE_ACTIVATION = "UPDATE_FEATURE_ACTIVATION";

JSON logging bash scripts

March 18, 2022

Logs are usually streamed to the standard output in JSON format so logging aggregators (e.g., Graylog) can collect and properly parse them. The following example shows how bash script output can be formatted with a message, log level, and timestamp; error logs are streamed into a temporary file, formatted, and redirected to the standard output.

#!/usr/bin/env bash

declare -A log_levels=( [FATAL]=0 [ERROR]=3 [WARNING]=4 [INFO]=6 [DEBUG]=7 )

json_logger() {
  log_level=$1
  message=$2
  level=${log_levels[$log_level]}
  timestamp=$(date --iso-8601=seconds)
  jq --raw-input --compact-output \
    '{
      "level": '$level',
      "timestamp": "'$timestamp'",
      "message": .
    }'
}

{
  set -e
  echo $? 1>&2
  echo "Finished"
} 2>/tmp/stderr.log | json_logger "INFO"
cat /tmp/stderr.log | json_logger "ERROR"

Sending emails locally

March 8, 2022

For local testing, or testing in general, there is no need to send emails to real email addresses. The Mailtrap service helps preview the emails that would be sent. Credentials (username and password) can be found on an inbox page; the list of inboxes is available on the Projects page. For sending emails in a production environment, some other service can be used (e.g., Sendgrid). A Sendgrid API key can be created on the API keys page.

// ...
const emailConfiguration = {
  auth: {
    user: process.env.EMAIL_USERNAME, // 'apiKey' for Sendgrid
    pass: process.env.EMAIL_PASSWORD,
  },
  host: process.env.EMAIL_HOST, // 'smtp.mailtrap.io' for Mailtrap, 'smtp.sendgrid.net' for Sendgrid
  port: process.env.EMAIL_PORT, // 2525 for Mailtrap, 465 for Sendgrid
  secure: process.env.EMAIL_SECURE, // true for Sendgrid
};
const transport = nodemailer.createTransport(emailConfiguration);
const info = await transport.sendMail({
  from: '"Sender" <sender@example.com>',
  to: "recipient1@example.com, recipient2@example.com",
  subject: "Subject",
  text: "Text",
  html: "<b>Text</b>",
});
console.log("Message sent: %s", info.messageId);
// ...

Chronoblog (Gatsby) theme with RSS feed

March 7, 2022

Chronoblog is one of the Gatsby blogging themes. An RSS feed can be used to get the latest posts for GitHub Actions, dev.to, etc. It can be added with the following steps; a similar approach can be used for other Gatsby themes.

Setup

npx gatsby new chronoblog-site https://github.com/Chronoblog/gatsby-starter-chronoblog
cd chronoblog-site

Install required packages

npm i gatsby-plugin-feed@2 gatsby-transformer-remark@3 gatsby-source-filesystem

Populate GraphQL fields to avoid null errors

Add canonical_url: http://example.com to the frontmatter of content/links/frontmatter-placeholder/index.md

Add plugins configurations in gatsby-config.js

Draft content, links and placeholder post are skipped.

module.exports = {
  // ...
  plugins: [
    // ...
    {
      resolve: `gatsby-plugin-feed`,
      options: {
        query: `
          {
            site {
              siteMetadata {
                title
                description
                siteUrl
                site_url: siteUrl
              }
            }
          }
        `,
        feeds: [
          {
            serialize: ({ query: { site, allMarkdownRemark } }) => {
              return allMarkdownRemark.edges.map(edge => {
                return Object.assign({}, edge.node.frontmatter, {
                  description: edge.node.excerpt,
                  date: edge.node.frontmatter.date,
                  url: edge.node.frontmatter.canonical_url || edge.node.frontmatter.url,
                  guid: edge.node.frontmatter.canonical_url || edge.node.frontmatter.url,
                  custom_elements: [{ "content:encoded": edge.node.html }],
                })
              })
            },
            query: `
              {
                allMarkdownRemark(
                  filter: {
                    frontmatter: {
                      draft: { ne: true },
                      link: { eq: null },
                      title: { ne: "Ghost Post" }
                    },
                  },
                  sort: { order: DESC, fields: [frontmatter___date] },
                ) {
                  edges {
                    node {
                      excerpt
                      html
                      fields { slug }
                      frontmatter {
                        title
                        canonical_url
                        date
                      }
                    }
                  }
                }
              }
            `,
            output: "/rss.xml",
            title: "RSS Feed",
          },
        ],
      },
    },
    {
      resolve: `gatsby-transformer-remark`
    },
  ],
  // ...
};

Implement onCreateNode function in gatsby-node.js

const { createFilePath } = require(`gatsby-source-filesystem`);

exports.onCreateNode = ({ node, actions, getNode }) => {
  const { createNodeField } = actions;
  if (node.internal.type === `MarkdownRemark`) {
    const value = createFilePath({ node, getNode });
    createNodeField({
      name: `slug`,
      node,
      value,
    });
  }
};

Add serve script in package.json

{
  // ...
  "scripts": {
    // ...
    "serve": "gatsby serve"
  }
}

Demo

Run npm run build && npm run serve; the RSS feed should be available at http://localhost:9000/rss.xml.

Next.js app in production

March 6, 2022

The following things should be considered when a Next.js app is running in production.

  • Error tracking is crucial in the production environment; proactively fixing errors leads to a better user experience. Sentry is one of the error tracking services, and a Next.js app can be easily integrated with it.
npm i @sentry/nextjs
npx @sentry/wizard -i nextjs

Set the SENTRY_AUTH_TOKEN environment variable; the token can be found at Settings > Account > API > Auth Tokens.

User feedback can be collected via the report dialog.

// sentry.client.config.js
Sentry.init({
// ...
beforeSend(event, hint) {
// Check if it is an exception, and if so, show the report dialog
if (event.exception) {
Sentry.showReportDialog({
eventId: event.event_id,
// other fields can be overridden if they need to be localized
});
}
return event;
},
//...
});
  • Intl API is not supported in all browsers, so feature detection or a polyfill may be needed.

  • localStorage API is not available when cookies are blocked, so access to it should be guarded.
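The localStorage caveat can be handled with a small wrapper; a hypothetical sketch that falls back to an in-memory store when localStorage is unavailable:

```javascript
// Hypothetical helper: falls back to an in-memory store when
// window.localStorage is unavailable (e.g. cookies are blocked).
function createSafeStorage() {
  try {
    const storage = window.localStorage;
    // Accessing localStorage can throw; verify it actually works.
    storage.setItem("__test__", "1");
    storage.removeItem("__test__");
    return storage;
  } catch {
    const memory = new Map();
    return {
      getItem: (key) => (memory.has(key) ? memory.get(key) : null),
      setItem: (key, value) => memory.set(key, String(value)),
      removeItem: (key) => memory.delete(key),
    };
  }
}

const storage = createSafeStorage();
```

Callers then use storage instead of touching window.localStorage directly.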

Progressive Web Apps 101

March 5, 2022

Progressive Web Apps bring some advantages over native mobile apps

  • automatic updates can be implemented
  • the installed app takes less memory
  • installable on phones, tablets, desktops

Prerequisites for installation

  • web app is running over an HTTPS connection
  • service worker is registered
  • web app manifest (manifest.json) is included

Service worker

Read more about it on Caching with service worker and Workbox
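A minimal registration sketch, assuming the worker script is served from /sw.js (a hypothetical path):

```javascript
// Registers a service worker when the API is available; the /sw.js
// path is a hypothetical example.
function registerServiceWorker(nav = typeof navigator !== "undefined" ? navigator : undefined) {
  if (!nav || !("serviceWorker" in nav)) {
    // No support (or no browser environment): resolve to null.
    return Promise.resolve(null);
  }
  return nav.serviceWorker.register("/sw.js").catch((error) => {
    console.error("Service worker registration failed:", error);
    return null;
  });
}
```

The navigator parameter only exists to keep the sketch testable; in the browser it defaults to the global navigator.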

Manifest

Following fields can be included

  • name is a full name used when the app is installed
  • short_name is a shorter version of the name that is shown when there is insufficient space to display the full name
  • background_color is used on a splash screen
  • description is shown on an installation pop-up
  • display customizes which browser UI is shown when the app is launched (standalone, fullscreen, minimal-ui, browser)
  • icons is a list of icons for the browser used in different places (home screen, app launcher, etc.)
  • scope specifies the navigation scope of the PWA, and start_url should be within it. If the user navigates outside the scope, the page is opened with regular browser UI instead of the standalone app
  • screenshots is a list of screenshots shown on the installation pop-up
  • start_url is a relative URL of the app which is loaded when the installed app is launched. PWA usage can be tracked by adding UTM parameters within the URL.
  • theme_color sets the color of the toolbar, it should match the meta theme color specified in the document head

Description and screenshots are shown only on mobile phones.

{
"name": "App name",
"short_name": "App short name",
"background_color": "#ffffff",
"description": "App description",
"display": "standalone",
"icons": [
{
"src": "icons/icon-128x128.png",
"sizes": "128x128",
"type": "image/png"
},
{
"src": "icons/icon-144x144.png",
"sizes": "144x144",
"type": "image/png"
},
{
"src": "icons/icon-152x152.png",
"sizes": "152x152",
"type": "image/png"
},
{
"src": "icons/icon-192x192.png",
"sizes": "192x192",
"type": "image/png"
},
{
"src": "icons/icon-512x512.png",
"sizes": "512x512",
"type": "image/png"
}
],
"scope": "/app",
"screenshots": [{
"src": "screenshots/main.jpg",
"sizes": "1080x2400",
"type": "image/jpg"
}],
"start_url": "/app?utm_source=pwa&utm_medium=pwa&utm_campaign=pwa",
"theme_color": "#3366cc"
}

Manifest file should be included via link tag

<link rel="manifest" href="/manifest.json">

In-app installation experience

It can be implemented in browsers that fire the beforeinstallprompt event, such as Google Chrome and Edge.

  • listen for the beforeinstallprompt event
  • save beforeinstallprompt event so it can be used to trigger the installation
  • provide a button to start the in-app installation flow
let deferredPrompt;
let installable = false;
window.addEventListener("beforeinstallprompt", (event) => {
event.preventDefault();
deferredPrompt = event;
installable = true;
document.getElementById("installable-btn").innerHTML = "Install";
});
window.addEventListener("appinstalled", () => {
installable = false;
});
document.getElementById("installable-btn").addEventListener("click", () => {
if (installable) {
deferredPrompt.prompt();
deferredPrompt.userChoice.then((choiceResult) => {
if (choiceResult.outcome === "accepted") {
document.getElementById("installable-btn").innerHTML = "click!";
}
});
} else {
alert("clicked!");
}
});

Notes

chrome://webapks page on mobile phones shows the list of installed PWAs with their details. Last Update Check Time is useful for checking when the manifest file was last fetched. The app is updated at most once a day when the manifest changes.

Example

A working example is available at https://github.com/zsevic/pwa-starter

Integrating Next.js app with Google analytics 4

February 6, 2022

Google analytics helps get more insights into app usage.

Prerequisites

  • Google Analytics product is already set up; the tracking ID is needed
  • Next.js app should be bootstrapped
  • react-ga4 package is installed

Analytics initialization

Analytics should be initialized inside pages/_app.js file.

import ReactGA from "react-ga4";
// ...
if (isEnvironment("production")) {
ReactGA.initialize(ANALYTICS_TRACKING_ID);
}
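The isEnvironment helper used above is not defined in the post; a minimal sketch, assuming the environment name comes from NODE_ENV, could be:

```javascript
// Hypothetical helper; assumes the environment name is exposed via NODE_ENV.
function isEnvironment(name) {
  return process.env.NODE_ENV === name;
}
```

The check then guards ReactGA.initialize so analytics only runs in production.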

Tracking events (e.g. clicks)

// ...
export function trackEvent(category, label, action = "click") {
if (isEnvironment("production")) {
ReactGA.event({
action,
category,
label,
});
}
}
// ...
<Button onClick={() => trackEvent("category", "label")}>
2021

Cursor-based iteration with Mongoose

November 17, 2021

Cursor-based iteration is useful when there is a need to iterate over all of the documents from a collection without loading them into memory at once, especially when every document is a big object. Every document can be projected to return only specific fields.

const cursor = Model.find().select('/* several fields */').cursor();
for (let document = await cursor.next(); document != null; document = await cursor.next()) {
// ...
}
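Mongoose query cursors are also async iterable, so the same loop can be written with for await...of. The sketch below uses an async generator as a stand-in for a real cursor, to stay self-contained:

```javascript
// Stand-in for a Mongoose cursor; with Mongoose the loop would be:
// for await (const document of Model.find().cursor()) { ... }
async function* fakeCursor() {
  yield { name: "first" };
  yield { name: "second" };
}

async function collectNames(cursor) {
  const names = [];
  for await (const document of cursor) {
    names.push(document.name);
  }
  return names;
}
```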

Redis as custom storage for NestJS rate limiter

September 14, 2021

Rate limiting is a common technique to protect against brute-force attacks. NestJS provides a module for it; the default storage is in-memory. Custom storage should be injected via the ThrottlerModule configuration.

// src/modules/app.module.ts
ThrottlerModule.forRootAsync({
imports: [ThrottlerStorageModule],
useFactory: (throttlerStorage: ThrottlerStorageService) => ({
ttl,
limit,
storage: throttlerStorage,
}),
inject: [ThrottlerStorageService],
}),

Custom storage needs to implement the ThrottlerStorage interface.

// src/modules/throttler-storage/throttler-storage.service.ts
@Injectable()
export class ThrottlerStorageService implements ThrottlerStorage {
constructor(@InjectRedis() private readonly throttlerStorageService: Redis) {}
async getRecord(key: string): Promise<number[]> {
// ...
}
async addRecord(key: string, ttl: number): Promise<void> {
// ...
}
}
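One way the elided methods could be filled in is sketched below. It assumes an ioredis-style client and that, as in the default in-memory storage, a record is the expiration timestamp of a single hit and ttl is given in seconds; this is an illustration, not the library's reference implementation.

```javascript
// Sketch only: each hit becomes its own Redis key carrying its
// expiration timestamp, so Redis expires stale records on its own.
class RedisThrottlerStorage {
  constructor(redis) {
    this.redis = redis;
  }

  async addRecord(key, ttl) {
    const expiresAt = Date.now() + ttl * 1000;
    // PX sets the key expiry in milliseconds.
    await this.redis.set(`${key}:${expiresAt}`, "1", "PX", ttl * 1000);
  }

  async getRecord(key) {
    // KEYS is acceptable for a sketch; SCAN or a sorted set would be
    // preferable in production.
    const keys = await this.redis.keys(`${key}:*`);
    return keys.map((k) => Number(k.split(":").pop())).sort((a, b) => a - b);
  }
}
```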

Redis client should be configured inside RedisModule configuration.

// src/modules/throttler-storage/throttler-storage.module.ts
@Module({
imports: [
RedisModule.forRootAsync({
useFactory: (configService: ConfigService) => {
const redisUrl = configService.get('REDIS_URL');
return {
config: {
url: redisUrl
}
};
},
inject: [ConfigService]
})
],
providers: [ThrottlerStorageService],
exports: [ThrottlerStorageService]
})
export class ThrottlerStorageModule {}

Redis connection should be closed during the graceful shutdown.

// src/app/app.module.ts
@Injectable()
export class AppModule implements OnApplicationShutdown {
constructor(
// ...
@InjectRedis() private readonly redisConnection: Redis
) {}
async closeRedisConnection(): Promise<void> {
await this.redisConnection.quit();
this.logger.log('Redis connection is closed');
}
async onApplicationShutdown(signal: string): Promise<void> {
// ...
await Promise.all([
// ...
this.closeRedisConnection()
]).catch((error) => this.logger.error(error.message));
}
}

@liaoliaots/nestjs-redis library is used in the examples.

Postgres and Redis container services for e2e tests in Github actions

September 8, 2021

E2e tests need a database connection; provisioning a container service for the Postgres database can be automated using Github actions. The environment variable with the connection string for the newly created database can be set in the step that runs the e2e tests. The same goes for the Redis instance.

# ...
jobs:
build:
# Container must run in Linux-based operating systems
runs-on: ubuntu-latest
# Image from Docker hub
container: node:16.3.0-alpine3.13
# ...
strategy:
matrix:
# ...
database-name:
- e2e-testing-db
database-user:
- username
database-password:
- password
database-host:
- postgres
database-port:
- 5432
redis-host:
- redis
redis-port:
- 6379
services:
postgres:
image: postgres:latest
env:
POSTGRES_DB: ${{ matrix.database-name }}
POSTGRES_USER: ${{ matrix.database-user }}
POSTGRES_PASSWORD: ${{ matrix.database-password }}
ports:
- ${{ matrix.database-port }}:${{ matrix.database-port }}
# Set health checks to wait until postgres has started
options:
--health-cmd pg_isready
--health-interval 10s
--health-timeout 5s
--health-retries 5
redis:
image: redis
# Set health checks to wait until redis has started
options: >-
--health-cmd "redis-cli ping"
--health-interval 10s
--health-timeout 5s
--health-retries 5
steps:
# ...
- run: npm run test:e2e
env:
DATABASE_URL: postgres://${{ matrix.database-user }}:${{ matrix.database-password }}@${{ matrix.database-host }}:${{ matrix.database-port }}/${{ matrix.database-name }}
REDIS_URL: redis://${{ matrix.redis-host }}:${{ matrix.redis-port }}
# ...

Testing custom repositories (NestJS/TypeORM)

September 5, 2021

Custom repositories extend base repository class and enrich it with several additional methods.

// user.repository.ts
@Injectable()
export class UserRepository extends Repository<UserEntity> {
constructor(private dataSource: DataSource) {
super(UserEntity, dataSource.createEntityManager());
}
async getById(id: string) {
return this.findOne({ where: { id } });
}
// ...
}

Custom repository can be injected into the service.

// user.service.ts
export class UserService {
constructor(private readonly userRepository: UserRepository) {}
async getById(id: string): Promise<User> {
return this.userRepository.getById(id);
}
// ...
}

Instantiation responsibility can be delegated to Nest by passing entity class to the forFeature method.

// user.module.ts
@Module({
imports: [
TypeOrmModule.forFeature([UserEntity]),
// ...
],
providers: [UserService, UserRepository],
// ...
})
export class UserModule {}

In order to properly test the custom repository, some of the methods have to be mocked.

// user.repository.spec.ts
describe('UserRepository', () => {
let userRepository: UserRepository;
const dataSource = {
createEntityManager: jest.fn()
};
beforeEach(async () => {
const module: TestingModule = await Test.createTestingModule({
providers: [
UserRepository,
{
provide: DataSource,
useValue: dataSource
}
]
}).compile();
userRepository = module.get<UserRepository>(UserRepository);
});
describe('getById', () => {
it('should return found user', async () => {
const id = 'id';
const user = {
id
};
const findOneSpy = jest
.spyOn(userRepository, 'findOne')
.mockResolvedValue(user as UserEntity);
const foundUser = await userRepository.getById(id);
expect(foundUser).toEqual(user);
expect(findOneSpy).toHaveBeenCalledWith({ where: { id } });
});
});
});

Spies and mocking with Jest

August 19, 2021

Unit testing, in addition to output testing, includes the usage of spies and mocking. Spies are functions that let you spy on the behavior of functions that are called indirectly by some other code. Spy can be created by using jest.fn(). Mocking is injecting test values into the code during the tests. Some of the use cases will be presented below.

  • Async function and its resolved value can be mocked using mockResolvedValue. Another way to mock it is by using mockImplementation and providing a function as an argument.
const calculationService = {
calculate: jest.fn(),
};
jest.spyOn(calculationService, "calculate").mockResolvedValue(value);
jest.spyOn(calculationService, "calculate").mockImplementation(async (a) => Promise.resolve(a));
  • Rejected async function can be mocked using mockRejectedValue and mockImplementation.
jest.spyOn(calculationService, "calculate").mockRejectedValue(value);
jest.spyOn(calculationService, "calculate")
.mockImplementation(async () => Promise.reject(new Error(errorMessage)));
await expect(calculateSomething(calculationService)).rejects.toThrowError(Error);
  • Sync function and its return value can be mocked using mockReturnValue and mockImplementation.
jest.spyOn(calculationService, "calculate").mockReturnValue(value);
jest.spyOn(calculationService, "calculate").mockImplementation((a) => a);
  • Chained methods can be mocked using mockReturnThis.
// calculationService.get().calculate();
jest.spyOn(calculationService, "get").mockReturnThis();
  • Async and sync functions that are called multiple times can be mocked with different values using mockResolvedValueOnce and mockReturnValueOnce respectively, or with mockImplementationOnce.
jest.spyOn(calculationService, "calculate").mockResolvedValueOnce(value)
.mockResolvedValueOnce(otherValue);
jest.spyOn(calculationService, "calculate").mockReturnValueOnce(value)
.mockReturnValueOnce(otherValue);
jest.spyOn(calculationService, "calculate").mockImplementationOnce((a) => a + 3)
.mockImplementationOnce((a) => a + 5);
  • External modules can be mocked similarly to spies. For the following example, let's suppose the axios package is already used in one function. The following example represents a test file where axios is mocked using jest.mock().
import axios from 'axios';
jest.mock('axios');
// within test case
axios.get.mockResolvedValue(data);
  • Manual mocks are resolved by writing corresponding modules in the __mocks__ directory, e.g. the fs/promises mock is stored in the __mocks__/fs/promises.js file. The manual mock is then activated using jest.mock() in the test file.
jest.mock('fs/promises');
  • To assert called arguments for a mocked function, an assertion can be done using toHaveBeenCalledWith matcher.
expect(calculationService.calculate).toHaveBeenCalledWith(firstArgument, secondArgument);
  • To assert skipped call for a mocked function, an assertion can be done using not.toHaveBeenCalled matcher.
expect(calculationService.calculate).not.toHaveBeenCalled();
  • To assert called arguments for the exact call when a mocked function is called multiple times, an assertion can be done using toHaveBeenNthCalledWith matcher.
argumentsList.forEach((argumentList, index) => {
expect(calculationService.calculate).toHaveBeenNthCalledWith(
index + 1,
argumentList,
);
});
  • Mocks should be reset to their initial implementation before each test case.
beforeEach(() => {
jest.resetAllMocks();
});

Server-Sent Events 101

August 18, 2021

Server-Sent Events (SSE) is a unidirectional communication channel between client and server: once the client establishes the connection, the server sends events to it in text/event-stream format. The client initiates the connection using the EventSource API, which can also be used for listening to events from the server, listening for errors, and closing the connection.

const eventSource = new EventSource(url);
eventSource.onmessage = ({ data }) => {
const eventData = JSON.parse(data);
// handling the data from the server
};
eventSource.onerror = () => {
// error handling
};
eventSource.close();
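On the wire, each event is a block of field: value lines terminated by a blank line. A hypothetical formatter illustrates the text/event-stream framing:

```javascript
// Hypothetical helper showing the text/event-stream framing:
// each field on its own line, and a blank line terminating the event.
function formatSseMessage({ id, event, data }) {
  const lines = [];
  if (id !== undefined) lines.push(`id: ${id}`);
  if (event !== undefined) lines.push(`event: ${event}`);
  lines.push(`data: ${JSON.stringify(data)}`);
  return lines.join("\n") + "\n\n";
}
```

A server writes each formatted message to a response whose Content-Type header is text/event-stream.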

A server can filter clients by query parameter and send them only the appropriate events. In the following example server sends the events only to a specific client distinguished by its e-mail address.

@Sse('sse')
sse(@Query() sseQuery: SseQueryDto): Observable<MessageEvent> {
const subject$ = new Subject();
this.eventService.on(FILTER_VERIFIED, data => {
if (sseQuery.email !== data.email) return;
subject$.next({ isVerifiedFilter: true });
});
return subject$.pipe(
map((data: MessageEventData): MessageEvent => ({ data })),
);
}
2020

 
