Debugging Node.js apps with Chrome DevTools debugger

April 12, 2024

Debugging with a debugger and breakpoints is recommended over console logs. Chrome provides a built-in debugger for JavaScript-based apps.

This post covers configuring and running a debugger for various Node.js apps in Chrome DevTools.

Prerequisites

  • Google Chrome installed

  • Node.js app bootstrapped

Setup

Open chrome://inspect, click Open dedicated DevTools for Node and open the Connection tab. You should see a list of network endpoints for debugging.

Run node --inspect-brk app.js to start the app with the debugger attached; it logs the debugger endpoint. Choose that endpoint in the Connection tab to open the debugger in Chrome DevTools.
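
For example, a minimal script to step through (a hypothetical app.js, just for trying out the debugger):

// app.js: a tiny script to practice stepping through
const add = (a, b) => a + b;
const sum = add(1, 2);
console.log('sum', sum);

With --inspect-brk, execution pauses before the first statement until you resume it from DevTools.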

Debugging basics

The variables tab shows local and global variables during debugging.

The step over next function call option goes to the following statement in the codebase, while the step into next function call option goes deeper into the current statement.

Logpoints add logs to the debug console without pausing execution: select the specific part of the codebase, and the message is logged whenever that code executes.

Boilerplate

Here is the link to the boilerplate I use for development.

Sending e-mails with SendGrid

March 30, 2024

To send e-mails in a production environment, use a service like SendGrid.

Verify the e-mail address on the Sender Management page and create the SendGrid API key on the API keys page.

import nodemailer from 'nodemailer';

(async () => {
  const emailConfiguration = {
    auth: {
      user: process.env.EMAIL_USERNAME, // 'apikey'
      pass: process.env.EMAIL_PASSWORD
    },
    host: process.env.EMAIL_HOST, // 'smtp.sendgrid.net'
    port: process.env.EMAIL_PORT, // 465
    secure: process.env.EMAIL_SECURE, // true
  };
  const transport = nodemailer.createTransport(emailConfiguration);
  const info = await transport.sendMail({
    from: '"Sender" <sender@example.com>',
    to: 'recipient1@example.com, recipient2@example.com',
    subject: 'Subject',
    text: 'Text',
    html: '<b>Text</b>'
  });
  console.log('Message sent: %s', info.messageId);
})();

Demo

The demo with the mentioned example is available here.

Web scraping with cheerio

January 19, 2024

Web scraping means extracting data from websites. This post covers extracting data from the page's HTML tags.

Prerequisites

  • cheerio package is installed

  • HTML page is retrieved via an HTTP client

Usage

  • create a scraper object with the load method by passing HTML content as an argument

    • set decodeEntities option to false to preserve encoded characters (like &) in their original form
    const $ = load('<div><!-- HTML content --></div>', { decodeEntities: false });
  • find DOM elements by using CSS-like selectors

    const items = $('.item');
  • iterate through found elements using the each method

    items.each((index, element) => {
      // ...
    });
  • access element content using specific methods

    • text - $(element).text()

    • HTML - $(element).html()

    • attributes

      • all - $(element).attr()
      • specific one - $(element).attr('href')
    • child elements

      • all - $(element).children()
      • first - $(element).children().first()
      • last - $(element).children().last()
      • specific one - $(element).find('a')
    • siblings

      • previous - $(element).prev()
      • next - $(element).next()
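
Putting the steps together, here is a minimal sketch (the HTML snippet and selectors are hypothetical):

import { load } from 'cheerio';

const html = '<ul><li class="item"><a href="/a">A</a></li><li class="item"><a href="/b">B</a></li></ul>';
const $ = load(html, { decodeEntities: false });
$('.item').each((index, element) => {
  // read the text and the link of each item
  console.log($(element).text(), $(element).find('a').attr('href'));
});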

Disclaimer

Please check the website's terms of service before scraping it. Some websites may have terms of service that prohibit such activity.

Demo

The demo with the mentioned examples is available here.

2023

Integration with GitHub GraphQL API

December 22, 2023

GitHub provides a GraphQL API to create integrations, retrieve data, and automate workflows.

Prerequisites

  • GitHub token (Settings → Developer settings → Personal access tokens)

Integration

Below is an example of retrieving sponsorable users by location.

export async function getUsersBy(location) {
  return fetch('https://api.github.com/graphql', {
    method: 'POST',
    body: JSON.stringify({
      query: `query {
        search(type: USER, query: "location:${location} is:sponsorable", first: 100) {
          edges {
            node {
              ... on User {
                bio
                login
                viewerCanSponsor
              }
            }
          }
          userCount
        }
      }`
    }),
    headers: {
      'Content-Type': 'application/json',
      Authorization: `Bearer ${process.env.GITHUB_TOKEN}`
    }
  })
    .then((response) => response.json())
    .then((response) => response.data?.search?.edges || []);
}
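
The function can then be called with a location string (hypothetical values below):

getUsersBy('berlin').then((edges) =>
  edges.forEach(({ node }) => console.log(node.login, node.bio))
);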

Demo

The demo with the mentioned example is available here.

Web scraping with jsdom

December 14, 2023

Web scraping means extracting data from websites. This post covers extracting data from the page's HTML when the data is stored in a JavaScript variable or as stringified JSON.

The scraping prerequisite is retrieving an HTML page via an HTTP client.

Examples

The example below moves the data into a global variable, executes the page scripts, and accesses the data from that global variable.

import jsdom from 'jsdom';

fetch(URL)
  .then((res) => res.text())
  .then((response) => {
    const dataVariable = 'someVariable.someField';
    const html = response.replace(dataVariable, `var data=${dataVariable}`);
    const dom = new jsdom.JSDOM(html, {
      runScripts: 'dangerously',
      virtualConsole: new jsdom.VirtualConsole()
    });
    console.log('data', dom?.window?.data);
  });

The example below runs the page scripts and accesses stringified JSON data.

import jsdom from 'jsdom';

fetch(URL)
  .then((res) => res.text())
  .then((response) => {
    const dom = new jsdom.JSDOM(response, {
      runScripts: 'dangerously',
      virtualConsole: new jsdom.VirtualConsole()
    });
    const data = dom?.window?.document?.getElementById('someId')?.value;
    console.log('data', JSON.parse(data));
  });

Disclaimer

Please check the website's terms of service before scraping it. Some websites may have terms of service that prohibit such activity.

License key verification with Gumroad API

November 16, 2023

Gumroad allows verifying license keys via API calls to limit the usage of the keys. It can be helpful to prevent the redistribution of products like desktop apps.

Enable generating a unique license key per sale in the product settings; the product ID is shown there. Below is the code snippet for verification.

try {
  const requestBody = new URLSearchParams();
  requestBody.append('product_id', process.env.PRODUCT_ID);
  requestBody.append('license_key', process.env.LICENSE_KEY);
  requestBody.append('increment_uses_count', true);
  const response = await fetch('https://api.gumroad.com/v2/licenses/verify', {
    method: 'POST',
    body: requestBody
  });
  // fetch doesn't throw on HTTP errors, so check the status on the response
  if (response.status === 404) {
    console.log("License key doesn't exist");
    return;
  }
  const data = await response.json();
  if (data.purchase?.test) {
    console.log('Skipping verification for test purchase');
    return;
  }
  const verificationLimit = Number(process.env.VERIFICATION_LIMIT);
  if (data.uses >= verificationLimit + 1) {
    throw new Error('Verification limit exceeded');
  }
  if (!data.success) {
    throw new Error(data.message);
  }
} catch (error) {
  console.log('Verifying license key failed', error);
}

Demo

The demo with the mentioned example is available here.

PDF generation with Gotenberg

November 4, 2023

Gotenberg is a Docker-based stateless API for PDF generation from HTML and Markdown files.

To get started, configure Docker Compose and run the docker-compose up command.

version: '3.8'
services:
  gotenberg:
    image: gotenberg/gotenberg:7
    ports:
      - 3000:3000

The API is available at http://localhost:3000.

Run the following commands to generate PDFs:

  • from the given URL

    curl \
    --request POST 'http://localhost:3000/forms/chromium/convert/url' \
    --form 'url="https://sparksuite.github.io/simple-html-invoice-template/"' \
    --form 'pdfFormat="PDF/A-1a"' \
    -o curl-url-response.pdf
  • from the given HTML file

    curl \
    --request POST 'http://localhost:3000/forms/chromium/convert/html' \
    --form 'files=@"./index.html"' \
    --form 'pdfFormat="PDF/A-1a"' \
    -o curl-html-response.pdf

PDF/A-1a format is used for the long-term preservation of electronic documents, ensuring that documents can be accessed and read even as technology changes.
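
The same conversion can be done programmatically; below is a minimal sketch with the built-in Fetch API (Node.js 18+, ESM), mirroring the URL conversion above:

import { writeFile } from 'fs/promises';

const form = new FormData();
form.append('url', 'https://sparksuite.github.io/simple-html-invoice-template/');
form.append('pdfFormat', 'PDF/A-1a');
const response = await fetch('http://localhost:3000/forms/chromium/convert/url', {
  method: 'POST',
  body: form
});
// the endpoint responds with the PDF binary
await writeFile('fetch-url-response.pdf', Buffer.from(await response.arrayBuffer()));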

Demo

The demo with the mentioned examples is available here. It also contains examples using Fetch API and Axios package.

Identifying missing variables in Handlebars templates

November 3, 2023

Handlebars is a template engine that can create server-side views, e-mail templates, and invoice templates by injecting JSON data into HTML.

Resolving all variables in a Handlebars template is essential to maintain the accuracy of the displayed information and to prevent incomplete content or layout problems.

The following snippet checks for missing variables by overriding the default nameLookup function. It logs a warning for unresolved variables and sets a default value, an empty string in this case.

// ...
const originalNameLookup = Handlebars.JavaScriptCompiler.prototype.nameLookup;
Handlebars.JavaScriptCompiler.prototype.nameLookup = function (
  parent,
  name,
  type
) {
  if (type === 'context') {
    const messageLog = JSON.stringify({
      message: `Variable is not resolved in the template: ${name}`,
      level: WARNING_LEVEL
      // ...
    });
    return `${parent} && ${parent}.${name} ? ${parent}.${name} : (console.log(${messageLog}), '')`;
  }
  return originalNameLookup.call(this, parent, name, type);
};
// ...
const result = Handlebars.compile(template)(data);
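
As a usage sketch (hypothetical template and data), compiling a template that references an unresolved variable logs the warning and renders the default value:

const template = 'Hello {{user.name}}!';
const data = { user: {} };
const result = Handlebars.compile(template)(data);
// logs the warning for the unresolved name variable
// result === 'Hello !'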

Extending outdated TypeScript package declarations

November 2, 2023

Extending package declarations locally is one of the options for outdated package typings.

Create a declaration file .d.ts (e.g., handlebars.d.ts), and put it inside the src directory.

Find the exact name of the package namespace inside the node_modules types file (e.g. handlebars/types/index.d.ts).

Extend the found namespace with your needed properties, like classes, functions, etc.

// handlebars.d.ts
declare namespace Handlebars {
  export class JavaScriptCompiler {
    public nameLookup(parent: string, name: string, type: string): string | string[];
  }
  export function doSomething(name: string): void;
  // ...
}
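
Make sure the declaration file is picked up by the TypeScript compiler; assuming a default setup, the src directory should be covered by the include field (a minimal sketch):

// tsconfig.json
{
  "include": ["src/**/*"]
}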

Integration with Notion API

September 9, 2023

Notion is a versatile workspace tool combining note-taking, task management, databases, and collaboration features into a single platform.

It also supports integration with Notion content, facilitating tasks such as creating pages, retrieving a block, and filtering database entries via API.

Prerequisites

  • Notion account
  • Generated Integration token (Settings & Members → Connections → Develop or manage integrations → New integration)
  • Notion database ID (open the database as a full page, extract the database ID from the URL (https://notion.so/<USERNAME>/<DATABASE_ID>?v=<VIEW_ID>))
  • Added Notion connection (three dots (...) menu → Add connections → choose the created integration)
  • @notionhq/client package installed

Integration

Below is an example of interacting with Notion API to create the page (within the chosen database) with icon, cover, properties, and child blocks.

const { Client } = require('@notionhq/client');

const notion = new Client({ auth: process.env.NOTION_INTEGRATION_TOKEN });
const response = await notion.pages.create({
  parent: {
    type: 'database_id',
    database_id: process.env.NOTION_DATABASE_ID
  },
  icon: {
    type: 'emoji',
    emoji: '🆗'
  },
  cover: {
    type: 'external',
    external: {
      url: 'https://cover.com'
    }
  },
  properties: {
    Name: {
      title: [
        {
          type: 'text',
          text: {
            content: 'Some name'
          }
        }
      ]
    },
    Score: {
      number: 42
    },
    Tags: {
      multi_select: [
        {
          name: 'A'
        },
        {
          name: 'B'
        }
      ]
    },
    Generation: {
      select: {
        name: 'I'
      }
    }
    // other properties
  },
  children: [
    {
      object: 'block',
      type: 'bookmark',
      bookmark: {
        url: 'https://bookmark.com'
      }
    }
  ]
});
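
Filtering database entries works similarly; a minimal sketch, assuming the Score number property from the example above:

const filteredPages = await notion.databases.query({
  database_id: process.env.NOTION_DATABASE_ID,
  filter: {
    property: 'Score',
    number: {
      greater_than: 40
    }
  }
});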

Check out Sparxno for more Notion-related content.

Browser automation with Puppeteer

August 26, 2023

Puppeteer is a Node.js library for automating browser tasks in headless Chrome. Here's a list of some of its features:

  • Turn off headless mode

    const browser = await puppeteer.launch({
      headless: false
      // ...
    });
  • Resize the viewport to the window size

    const browser = await puppeteer.launch({
      // ...
      defaultViewport: null
    });
  • Emulate how the screen is shown to the user via the emulateMediaType method

    await page.emulateMediaType('screen');
  • Save the page as a PDF file with a specified path, format, scale factor, and page range

    await page.pdf({
      path: 'path.pdf',
      format: 'A3',
      scale: 1,
      pageRanges: '1-2',
      printBackground: true
    });
  • Use a preexisting user's credentials to skip logging in to some websites. The user data directory is a parent of the Profile Path value from the chrome://version page.

    const browser = await puppeteer.launch({
      userDataDir: 'C:\\Users\\<USERNAME>\\AppData\\Local\\Google\\Chrome\\User Data',
      args: [],
    });
  • Use a Chrome instance instead of Chromium by utilizing the Executable Path from the chrome://version URL. Close the Chrome browser before running the script

    const browser = await puppeteer.launch({
      executablePath: puppeteer.executablePath('chrome'),
      // ...
    });
  • Get a value based on evaluation in the browser page

    const shouldPaginate = await page.evaluate(() => {
      // ...
    });
  • Get HTML content from a specific element

    const html = await page.evaluate(
      () => document.querySelector('.field--text').outerHTML,
    );
  • Wait for a specific selector to be loaded. You can also provide a timeout in milliseconds

    await page.waitForSelector('.success', { timeout: 5000 });
  • Manipulate a specific element and click on some of its child elements

    await page.$eval('#header', async (headerElement) => {
      // ...
      headerElement
        .querySelectorAll('svg')
        .item(13)
        .parentNode.click();
    });
  • Extend the execution timeout of the $eval method

    const browser = await puppeteer.launch({
      // ...
      protocolTimeout: 0,
    });
  • Manipulate multiple elements

    await page.$$eval('.some-class', async (elements) => {
      // ...
    });
  • Wait for navigation (e.g., form submitting) to be done

    await page.waitForNavigation({ waitUntil: 'networkidle0', timeout: 0 });
  • Trigger a hover event on some of the elements

    await page.$eval('#header', async (headerElement) => {
      const hoverEvent = new MouseEvent('mouseover', {
        view: window,
        bubbles: true,
        cancelable: true
      });
      headerElement.dispatchEvent(hoverEvent);
    });
  • Expose a function in the browser and use it in $eval and $$eval callbacks (e.g., simulate typing using the window.type function)

    await page.exposeFunction('type', async (selector, text, options) => {
      await page.type(selector, text, options);
    });
    await page.$$eval('.some-class', async (elements) => {
      // ...
      window.type(selector, text, { delay: 0 });
    });
  • Press the Enter key after typing the input field value

    await page.type(selector, `${text}${String.fromCharCode(13)}`, options);
  • Remove the value from the input field before typing the new one

    await page.click(selector, { clickCount: 3 });
    await page.type(selector, text, options);
  • Expose a variable in the browser by passing it as the third argument of the $eval and $$eval methods and use it in their callbacks

    await page.$eval(
      '#element',
      async (element, customVariable) => {
        // ...
      },
      customVariable
    );
  • Mock the response for a specific request

    await page.setRequestInterception(true);
    page.on('request', async function (request) {
      const url = request.url();
      if (url !== REDIRECTION_URL) {
        return request.continue();
      }
      await request.respond({
        contentType: 'text/html',
        status: 304,
        body: '<body></body>',
      });
    });
  • Intercept page redirections (via the interceptor) and open them in new tabs rather than following them in the same tab

    await page.setRequestInterception(true);
    page.on('request', async function (request) {
      const url = request.url();
      if (url !== REDIRECTION_URL) {
        return request.continue();
      }
      await request.respond({
        contentType: 'text/html',
        status: 304,
        body: '<body></body>',
      });
      const newPage = await browser.newPage();
      await newPage.goto(url, { waitUntil: 'domcontentloaded', timeout: 0 });
      // ...
      await newPage.close();
    });
  • Intercept the page response

    page.on('response', async (response) => {
      if (response.url() === RESPONSE_URL) {
        if (response.status() === 200) {
          // ...
        }
        // ...
      }
    });

Boilerplate

Here is the link to the boilerplate I use for development.

AI bulk image upscaler with Node.js

August 4, 2023

Image upscaling can be done using Real-ESRGAN, a super-resolution algorithm. Super-resolution is the process of increasing the resolution of the image.

Real-ESRGAN provides Linux, Windows and MacOS executable files and models for Intel/AMD/Nvidia GPUs.

The snippet below demonstrates bulk image upscaling with a scale factor of 4, using the realesrgan-x4plus-anime model.

(async () => {
  const inputDirectory = path.resolve(path.join(__dirname, 'pictures'));
  const outputDirectory = path.resolve(
    path.join(__dirname, 'pictures_upscaled')
  );
  const modelsPath = path.resolve(path.join(__dirname, 'resources', 'models'));
  const execName = 'realesrgan-ncnn-vulkan';
  const execPath = path.resolve(
    path.join(__dirname, 'resources', getPlatform(), 'bin', execName)
  );
  const scaleFactor = 4;
  const modelName = 'realesrgan-x4plus-anime';
  if (!fs.existsSync(outputDirectory)) {
    await fs.promises.mkdir(outputDirectory, { recursive: true });
  }
  const commands = [
    '-i',
    inputDirectory,
    '-o',
    outputDirectory,
    '-s',
    scaleFactor,
    '-m',
    modelsPath,
    '-n',
    modelName
  ];
  const upscaler = spawn(execPath, commands, {
    cwd: undefined,
    detached: false
  });
  upscaler.stderr.on('data', (data) => {
    console.log(data.toString());
  });
  await timers.setTimeout(600 * 1000);
})();
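
The getPlatform helper above is not shown; a minimal sketch mapping process.platform to platform directories (the directory names are assumptions):

const getPlatform = () => {
  switch (process.platform) {
    case 'darwin':
      return 'macos'; // assumed directory name
    case 'win32':
      return 'windows'; // assumed directory name
    default:
      return 'linux'; // assumed directory name
  }
};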

Demo

The demo with the mentioned example is available here.

Boilerplate

Here is the link to the boilerplate I use for development.

App

Here is the link to the app I use for image upscaling.

Publishing Electron apps to GitHub with Electron Forge

July 19, 2023

Releasing Electron desktop apps can be automated with Electron Forge and GitHub Actions. This post covers the main steps for automation.

Prerequisites

  • bootstrapped Electron app
  • GitHub personal access token (with repo and write:packages permissions) as a GitHub Action secret (GH_TOKEN)

Setup

Run the following commands to configure Electron Forge for the app release.

npm i @electron-forge/cli @electron-forge/publisher-github -D
npm i electron-squirrel-startup
npx electron-forge import

The last command should install the necessary dependencies and add a configuration file.

Update the forge.config.js file with the bin field containing the app name and ensure the GitHub publisher points to the right repository.

Put the Windows and macOS icon paths in the packagerConfig.icon field. Windows supports .ico files with 256x256 resolution, and macOS supports .icns icons with 512x512 resolution (1024x1024 for Retina displays). Linux supports .png icons with 512x512 resolution; also include its path in the BrowserWindow constructor config within the icon field.

// forge.config.js
const path = require('path');

module.exports = {
  packagerConfig: {
    asar: true,
    icon: path.join(process.cwd(), 'main', 'build', 'icon'),
  },
  rebuildConfig: {},
  makers: [
    {
      name: '@electron-forge/maker-squirrel',
      config: {
        bin: 'Electron Starter'
      }
    },
    {
      name: '@electron-forge/maker-dmg',
      config: {
        bin: 'Electron Starter'
      },
    },
    {
      name: '@electron-forge/maker-deb',
      config: {
        bin: 'Electron Starter',
        options: {
          icon: path.join(process.cwd(), 'main', 'build', 'icon.png'),
        },
      }
    },
    {
      name: '@electron-forge/maker-rpm',
      config: {
        bin: 'Electron Starter',
        icon: path.join(process.cwd(), 'main', 'build', 'icon.png'),
      }
    }
  ],
  plugins: [
    {
      name: '@electron-forge/plugin-auto-unpack-natives',
      config: {}
    }
  ],
  publishers: [
    {
      name: '@electron-forge/publisher-github',
      config: {
        repository: {
          owner: 'delimitertech',
          name: 'electron-starter'
        },
        prerelease: true
      }
    }
  ]
};

Upgrade the package version before releasing the app. The npm script for publishing should use the publish command. Set the productName field to the app name.

// package.json
{
  // ...
  "version": "1.0.1",
  "scripts": {
    // ...
    "publish": "electron-forge publish"
  },
  "productName": "Electron Starter"
}

The GitHub Actions workflow for manually releasing the app for Linux, Windows, and macOS should contain the configuration below.

# .github/workflows/release.yml
name: Release app
on:
  workflow_dispatch:
jobs:
  build:
    strategy:
      matrix:
        os:
          [
            { name: 'linux', image: 'ubuntu-latest' },
            { name: 'windows', image: 'windows-latest' },
            { name: 'macos', image: 'macos-latest' },
          ]
    runs-on: ${{ matrix.os.image }}
    steps:
      - name: Github checkout
        uses: actions/checkout@v4
      - name: Use Node.js
        uses: actions/setup-node@v4
        with:
          node-version: 20
      - run: npm ci
      - name: Publish app
        env:
          GITHUB_TOKEN: ${{ secrets.GH_TOKEN }}
        run: npm run publish

Windows startup events

Add the following code to the main process to prevent Squirrel.Windows from launching your app multiple times during installation/updating/uninstallation.

// main/index.js
if (require('electron-squirrel-startup') === true) app.quit();

Boilerplate

Here is the link to the boilerplate I use for development. It contains the examples mentioned above with more details.

Formatting Node.js codebase with Prettier

July 3, 2023

Formatting helps to keep the code style consistent throughout the whole codebase. Include the format script in pre-hooks (pre-commit or pre-push). This post covers the Prettier setup for JavaScript and TypeScript code.

Start by installing the prettier package as a dev dependency.

npm i prettier -D

Specify rules inside the .prettierrc config file.

{
  "singleQuote": true,
  "trailingComma": "all"
}

Add format script in the package.json file.

{
  "scripts": {
    // ...
    "format": "prettier --write \"{src,test}/**/*.{js,ts}\""
  }
}
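
To run the format script in a pre-commit hook, as mentioned above, one option is the husky package; a minimal sketch, assuming husky v8:

npm i husky -D
npx husky install
npx husky add .husky/pre-commit "npm run format"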

Notes

If you use ESLint, install the eslint-config-prettier package as a dev dependency and update the ESLint configuration to use the Prettier config.

{
  // ...
  "extends": [
    // ...
    "prettier"
  ]
}

If you use Visual Studio Code, you can install the prettier-vscode extension and activate formatting on save.

Boilerplate

Here is the link to the boilerplate I use for development.

Tracing Node.js Microservices with OpenTelemetry

June 30, 2023

Regarding microservices observability, tracing is important for catching service bottlenecks like slow requests and database queries.

OpenTelemetry is a set of monitoring tools that support integration with distributed tracing platforms like Jaeger, Zipkin, and NewRelic, to name a few. This post covers Jaeger's tracing setup for Node.js projects.

Start by setting up Jaeger with Docker Compose and the docker-compose up command. Jaeger UI will be available at http://localhost:16686.

version: '3.8'
services:
  jaeger:
    image: jaegertracing/all-in-one:1.46
    environment:
      - COLLECTOR_ZIPKIN_HTTP_PORT=:9411
      - COLLECTOR_OTLP_ENABLED=true
    ports:
      - 6831:6831/udp
      - 6832:6832/udp
      - 5778:5778
      - 16685:16685
      - 16686:16686
      - 14268:14268
      - 14269:14269
      - 14250:14250
      - 9411:9411
      - 4317:4317
      - 4318:4318

The code below shows setting up tracing with Jaeger. Jaeger doesn't require a separate exporter package since OpenTelemetry supports it natively; other platforms need an exporter package. Filter traces within the Jaeger UI by service name or by the trace ID stored within the logs.

Use resources and semantic resource attributes to set new fields for the trace, like service name or service version. Auto instrumentation identifies frameworks like Express, protocols like HTTP, databases like Postgres, and loggers like Winston used within the project.

Process spans (units of work in distributed systems) in batches to optimize tracing performance. Also, terminate the tracing during the graceful shutdown.

import { getNodeAutoInstrumentations } from '@opentelemetry/auto-instrumentations-node';
import { OTLPTraceExporter } from '@opentelemetry/exporter-trace-otlp-http';
import { Resource } from '@opentelemetry/resources';
import { BatchSpanProcessor } from '@opentelemetry/sdk-trace-base';
import { NodeSDK } from '@opentelemetry/sdk-node';
import { SemanticResourceAttributes } from '@opentelemetry/semantic-conventions';

const traceExporter = new OTLPTraceExporter({
  url: 'http://localhost:4318/v1/traces',
});
const sdk = new NodeSDK({
  resource: new Resource({
    [SemanticResourceAttributes.SERVICE_NAME]: `<service-name>-${process.env.NODE_ENV}`,
    [SemanticResourceAttributes.SERVICE_VERSION]: process.env.npm_package_version ?? '0.0.0',
    env: process.env.NODE_ENV || '',
  }),
  instrumentations: [getNodeAutoInstrumentations()],
  spanProcessor: new BatchSpanProcessor(traceExporter),
});
sdk.start();
process.on('SIGTERM', () => {
  sdk.shutdown()
    .then(() => console.log('Tracing terminated'))
    .catch((error) => console.error('Error terminating tracing', error))
    .finally(() => process.exit(0));
});

Import the tracing config as the first thing inside the entry file.

import './tracing';
// ...
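
Besides auto instrumentation, custom spans can be created manually via the @opentelemetry/api package; a minimal sketch (the tracer and span names are hypothetical):

import { trace } from '@opentelemetry/api';

const tracer = trace.getTracer('custom-tracer');
await tracer.startActiveSpan('process-order', async (span) => {
  // ... the work to be traced
  span.end();
});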

The Service menu in the Jaeger UI search panel should show the service name. Happy tracing!

Demo

The demo with the mentioned examples is available here.

Boilerplate

Here is the link to the boilerplate I use for development.

Streaming binary and base64 files

June 25, 2023

Streaming is useful when dealing with big files in web apps. Instead of loading the entire file into memory before sending it to the client, streaming allows you to send it in small chunks, improving memory efficiency and reducing response time.

The code snippet below shows streaming the binary CSV and base64-encoded PDF files with NestJS. Use the same approach for other types of files, like JSON files.

Set the content type and filename headers so files are streamed and downloaded correctly. The base64 file is converted to a buffer and streamed afterward. Files can be read from the file system or retrieved via API calls.

import { Controller, Get, Param, Res } from '@nestjs/common';
import { Response } from 'express';
import { createReadStream } from 'fs';
import { readFile } from 'fs/promises';
import { join } from 'path';
import { Readable } from 'stream';

@Controller('templates')
export class TemplatesController {
  @Get('csv')
  getCsvTemplate(@Res() res: Response): void {
    const file = createReadStream(join(process.cwd(), 'template.csv'));
    res.set({
      'Content-Type': 'text/csv',
      'Content-Disposition': 'attachment; filename="template.csv"'
    });
    file.pipe(res);
  }

  @Get('pdf/:id')
  async getPdfTemplate(
    @Param('id') id: string,
    @Res() res: Response
  ): Promise<void> {
    const fileBase64 = await readFile(
      join(process.cwd(), 'template.pdf'),
      'base64'
    );
    // const fileBase64 = await apiCall();
    const fileBuffer = Buffer.from(fileBase64, 'base64');
    const fileStream = Readable.from(fileBuffer);
    res.set({
      'Content-Type': 'application/pdf',
      'Content-Disposition': `attachment; filename="template_${id}.pdf"`
    });
    fileStream.pipe(res);
  }
}

Boilerplate

Here is the link to the boilerplate I use for development.

Spies and mocking with Node test runner (node:test)

June 24, 2023

Node.js version 20 brings a stable test runner, so you can run tests inside *.test.js files with the node --test command. This post covers its primary usage regarding spies and mocking for unit tests.

Spies are functions that let you spy on the behavior of functions called indirectly by some other code, while mocking injects test values into the code during the tests.

mock.method can create spies and mock async, rejected async, sync, chained methods, and external and built-in modules.

  • Async function
import assert from 'node:assert/strict';
import { describe, it, mock } from 'node:test';

const calculationService = {
  calculate: () => {
    // implementation
  }
};

describe('mocking resolved value', () => {
  it('should resolve mocked value', async () => {
    const value = 2;
    mock.method(calculationService, 'calculate', async () => value);
    const result = await calculationService.calculate();
    assert.equal(result, value);
  });
});
  • Rejected async function
const error = new Error('some error message');
mock.method(calculationService, 'calculate', async () => Promise.reject(error));
await assert.rejects(async () => calculateSomething(calculationService), error);
  • Sync function
mock.method(calculationService, 'calculate', () => value);
  • Chained methods
mock.method(calculationService, 'get', () => calculationService);
mock.method(calculationService, 'calculate', async () => value);
const result = await calculationService.get().calculate();
  • External modules
import axios from 'axios';
mock.method(axios, 'get', async () => ({ data: value }));
  • Built-in modules
import fs from 'fs/promises';
mock.method(fs, 'readFile', async () => fileContent);
  • Async and sync functions called multiple times can be mocked with different values using context.mock.fn and mockedFunction.mock.mockImplementationOnce.
describe('mocking same method multiple times with different values', () => {
  it('should resolve mocked values', async (context) => {
    const firstValue = 2;
    const secondValue = 3;
    const calculateMock = context.mock.fn(calculationService.calculate);
    calculateMock.mock.mockImplementationOnce(async () => firstValue, 0);
    calculateMock.mock.mockImplementationOnce(async () => secondValue, 1);
    const firstResult = await calculateMock();
    const secondResult = await calculateMock();
    assert.equal(firstResult, firstValue);
    assert.equal(secondResult, secondValue);
  });
});
  • To assert called arguments for a spy, use mockedFunction.mock.calls[0] value.
mock.method(calculationService, 'calculate');
await calculateSomething(calculationService, firstValue, secondValue);
const call = calculationService.calculate.mock.calls[0];
assert.deepEqual(call.arguments, [firstValue, secondValue]);
  • To assert skipped call for a spy, use mockedFunction.mock.calls.length value.
mock.method(calculationService, 'calculate');
assert.equal(calculationService.calculate.mock.calls.length, 0);
  • To assert how many times mocked function is called, use mockedFunction.mock.calls.length value.
mock.method(calculationService, 'calculate');
calculationService.calculate(3);
calculationService.calculate(2);
assert.equal(calculationService.calculate.mock.calls.length, 2);
  • To assert called arguments for the exact call when a mocked function is called multiple times, an assertion can be done using mockedFunction.mock.calls[index] and call.arguments values.
const calculateMock = context.mock.fn(calculationService.calculate);
calculateMock.mock.mockImplementationOnce((a) => a + 2, 0);
calculateMock.mock.mockImplementationOnce((a) => a + 3, 1);
calculateMock(firstValue);
calculateMock(secondValue);
[firstValue, secondValue].forEach((argument, index) => {
  const call = calculateMock.mock.calls[index];
  assert.deepEqual(call.arguments, [argument]);
});

Running TypeScript tests

Add a new test script

{
  "type": "module",
  "scripts": {
    "test": "node --test",
    "test:ts": "glob -c \"node --loader tsx --no-warnings --test\" \"./src/**/*.{spec,test}.ts\""
  },
  "devDependencies": {
    // ...
    "glob": "^10.3.1",
    "tsx": "^3.12.7"
  }
}

Demo

The demo with the mentioned examples is available here.

Boilerplate

Here is the link to the boilerplate I use for development.

Async API documentation 101

May 21, 2023

Async API documentation is used for documenting events in event-driven systems, like Kafka events. All of the event DTOs are stored in one place. It supports YAML and JSON formats.

It contains information about channels and components. Channels and components are defined with their messages and DTO schemas, respectively.

{
  "asyncapi": "2.6.0",
  "info": {
    "title": "Events docs",
    "version": "1.0.0"
  },
  "channels": {
    "topic_name": {
      "publish": {
        "message": {
          "schemaFormat": "application/vnd.oai.openapi;version=3.0.0",
          "payload": {
            "type": "object",
            "properties": {
              "counter": {
                "type": "number"
              }
            },
            "required": ["counter"]
          }
        }
      }
    }
  },
  "components": {
    "schemas": {
      "EventDto": {
        "type": "object",
        "properties": {
          "counter": {
            "type": "number"
          }
        },
        "required": ["counter"]
      }
    }
  }
}

Autogeneration

Async API docs can be autogenerated by following multiple steps:

  • define DTOs and their required and optional fields with the ApiProperty and ApiPropertyOptional decorators (from the @nestjs/swagger package), respectively (see the sketch after this list)
  • generate OpenAPI docs from the defined DTOs
  • parse and reuse component schemas from generated OpenAPI documentation to build channel messages and component schemas for Async API docs
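
For the first step, a DTO might look like the following sketch (it mirrors the EventDto schema above; the optional source field is hypothetical):

// event.dto.ts
import { ApiProperty, ApiPropertyOptional } from '@nestjs/swagger';

export class EventDto {
  @ApiProperty()
  counter: number;

  @ApiPropertyOptional()
  source?: string; // hypothetical optional field
}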

Validation

Use AsyncAPI Studio to validate the written specification.

Preview

There are multiple options:

  • AsyncAPI Studio

  • VSCode extension asyncapi-preview: open the command palette and run the Preview AsyncAPI command.

UI generation

  • Install @asyncapi/cli and the corresponding template package (e.g., @asyncapi/html-template, @asyncapi/markdown-template)
  • Update package.json with scripts
{
  "scripts": {
    // ...
    "generate-docs:html": "asyncapi generate fromTemplate ./asyncapi/asyncapi.json @asyncapi/html-template --output ./docs/html",
    "generate-docs:markdown": "asyncapi generate fromTemplate ./asyncapi/asyncapi.json @asyncapi/markdown-template --output ./docs/markdown"
  }
}

Linting a JavaScript codebase with ESLint

April 5, 2023

Linting is static code analysis based on specified rules. Include it in the CI pipeline.

Setup

Run the following commands to generate the linter configuration using the eslint package.

npm init -y
npm init @eslint/config

Below is an example of the configuration. Some rules can be ignored or suppressed as warnings.

// .eslintrc.js
module.exports = {
  env: {
    commonjs: true,
    es2021: true,
    node: true,
    jest: true,
  },
  extends: 'airbnb-base',
  overrides: [],
  parserOptions: {
    ecmaVersion: 'latest',
  },
  rules: {
    'import/no-extraneous-dependencies': 'warn',
    'import/prefer-default-export': 'off',
  },
};

Ignore files via the .eslintignore file.

dist

Linting

Configure and run the script with the npm run lint command. Some errors can be fixed automatically with the --fix option.

// package.json
{
  "scripts": {
    // ...
    "lint": "eslint src",
    "lint:fix": "npm run lint -- --fix"
  }
}

Boilerplate

Here is the link to the boilerplate I use for development.

Migrating Node.js app from Heroku to Fly.io

April 1, 2023

I recently migrated a Node.js app from Heroku to Fly.io, mainly to reduce costs.

This blog post will cover the necessary steps in the migration process.

Prerequisites

  • Heroku app running

  • Use the exact versions for dependencies and dev dependencies in package.json so installation and build steps can pass successfully

  • Use the same Node.js version in Dockerfile, package.json, and GitHub Actions workflow

  • Use API gateway or custom domain for the service so web apps and mobile apps don't get affected by changing the URL of the service

Migration steps

  • Migrate environment variables and secrets

  • Migrate the Postgres database with the following commands (the ssl field in database configuration options is not needed)

fly secrets set HEROKU_DATABASE_URL=$(heroku config:get DATABASE_URL)
fly ssh console
apt update && apt install postgresql-client
pg_dump -Fc --no-acl --no-owner -d $HEROKU_DATABASE_URL | pg_restore --verbose --clean --no-acl --no-owner -d $DATABASE_URL
exit
fly secrets unset HEROKU_DATABASE_URL
  • Migrate the Redis database if it's used

  • Include the deployment step in the GitHub Actions workflow

Integration with ChatGPT API

March 19, 2023

ChatGPT is a large language model (LLM) that understands and processes human prompts to produce helpful responses. OpenAI provides an API to interact with the ChatGPT model (gpt-3.5-turbo).

Prerequisites

  • OpenAI account
  • Generated API key
  • Enabled billing

Integration

Below is an example of interacting with ChatGPT API based on a given prompt.

const handlePrompt = async (prompt) => {
  const response = await axios.post(
    'https://api.openai.com/v1/chat/completions',
    {
      model: 'gpt-3.5-turbo',
      messages: [
        {
          role: 'user',
          content: prompt
        }
      ]
    },
    {
      headers: {
        Authorization: `Bearer ${process.env.OPENAI_API_KEY}`
      }
    }
  );
  return response?.data?.choices?.[0]?.message?.content;
};

Node.js built-in module functions as Promises

February 28, 2023

Node.js provides Promise-based asynchronous methods for the fs, dns, stream, and timers modules.

const {
  createWriteStream,
  promises: { readFile }
} = require('fs');
const dns = require('dns/promises');
const stream = require('stream/promises');
const timers = require('timers/promises');

const sleep = timers.setTimeout;
const SLEEP_TIMEOUT_MS = 2000;

(async () => {
  const fileName = 'test-file';
  const writeStream = createWriteStream(fileName, {
    autoClose: true,
    flags: 'w'
  });
  await stream.pipeline('some text', writeStream);
  await sleep(SLEEP_TIMEOUT_MS);
  const readFileResult = await readFile(fileName);
  console.log(readFileResult.toString());
  const lookupResult = await dns.lookup('google.com');
  console.log(lookupResult);
})();

Use the promisify function to convert other callback-based functions to Promise-based.

const crypto = require('crypto');
const { promisify } = require('util');

const randomBytes = promisify(crypto.randomBytes);
const RANDOM_BYTES_LENGTH = 20;

(async () => {
  const randomBytesResult = await randomBytes(RANDOM_BYTES_LENGTH);
  console.log(randomBytesResult);
})();

Error tracking with Sentry

February 14, 2023

Error tracking and alerting are crucial in the production environment; proactively fixing errors leads to a better user experience. Sentry is one of the error tracking services, and it provides alerting for unhandled exceptions. You should receive an email when something goes wrong.

Sentry issues show the error stack trace, device, operating system, and browser information. The project dashboard shows an unhandled exception once it's thrown. This post covers the integration of several technologies with Sentry.

Node.js

  • Create a Node.js project on Sentry

  • Install the package

npm i @sentry/node
  • Run the following script
const Sentry = require('@sentry/node');

Sentry.init({
  dsn: SENTRY_DSN
});
test(); // calling an undefined function triggers an unhandled exception

Next.js

  • Create a Next.js project on Sentry (version 13 is not yet supported)

  • Run the following commands for the setup

npm i @sentry/nextjs
npx @sentry/wizard -i nextjs

Gatsby

  • Create a Gatsby project on Sentry

  • Install the package

npm i @sentry/gatsby
  • Add the plugin to the Gatsby config
module.exports = {
  plugins: [
    // ...
    {
      resolve: '@sentry/gatsby',
      options: {
        dsn: SENTRY_DSN
      }
    }
  ]
};

React Native

  • Create a React Native project on Sentry

  • Run the following commands for the setup

npm i @sentry/react-native
npx @sentry/wizard -i reactNative -p android

Logging practices

February 7, 2023

This post covers some logging practices for the back-end (Node.js) apps.

  • Avoid putting unique identifiers (e.g., user id) within the message. A unique id will produce a lot of different messages with the same context. Use it as a message parameter.

  • Use the appropriate log level for the message. There are multiple log levels

    • info - app behavior, don't log every single step
    • error - app processing failure, something that needs to be fixed
    • debug - additional logs needed for troubleshooting
    • warning - something unexpected happened (e.g., third-party API fails)
    • fatal - app crash, needs to be fixed as soon as possible

Don't use debug logs in production. Set the log level as an environment variable.

  • Stream logs to the standard output in JSON format so logging aggregators (e.g., Graylog) can collect and adequately parse them

  • Avoid logging any credentials, like passwords, auth tokens, etc.

  • Put correlation ID as a message parameter for tracing related logs.

  • Use a configurable logger like pino

const pino = require('pino');

const logger = pino({
  level: process.env.LOG_LEVEL || 'info',
  redact: {
    paths: ['token'],
    remove: true,
  },
});
logger.info({ someId: 'id' }, 'Started the app...');
const correlationId = request.headers['correlation-id'] || uuid.v4();
logger.debug({ data: 'some data useful for debugging', correlationId }, 'Sending the request...');

Boilerplate

Here is the link to the boilerplate I use for development.

Integration testing Node.js apps

January 25, 2023

Integration testing means testing a component with multiple sub-components and how they interact. Some sub-components can be external services, databases, and message queues.

External services are running, but their business logic is mocked based on received parameters (request headers, query parameters, etc.). Databases and message queues are spun up using test containers.

This post covers testing service as a component and its API endpoints. This approach can be used with any framework and language. NestJS and Express are used in the examples below.

API endpoints

Below is the controller for two endpoints. The first communicates with an external service and retrieves some data based on the sent parameter. The second one retrieves the data from the database.

// users.controller.ts
@Controller('users')
export class UsersController {
  constructor(private userService: UsersService) {}

  @Get()
  async getAll(@Query('type') type: string) {
    return this.userService.findAll(type);
  }

  @Get(':id')
  async getById(@Param('id', new ParseUUIDPipe()) id: string) {
    return this.userService.findById(id);
  }
}

External dependencies

The external service is mocked to send data based on the received parameter.

export const createDummyUserServiceServer = async (): Promise<DummyServer> => {
  return createDummyServer((app) => {
    app.get('/users', (req, res) => {
      if (req.query.type !== 'user') {
        return res.status(403).send('User type is not valid');
      }
      res.json(usersResponse);
    });
  });
};
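
The createDummyServer helper used above is not shown; a minimal sketch with Express might look like this (the DummyServer shape is an assumption):

import express, { Express } from 'express';
import { Server } from 'http';

export interface DummyServer {
  url: string;
  close: () => void;
}

export const createDummyServer = async (
  setupRoutes: (app: Express) => void
): Promise<DummyServer> => {
  const app = express();
  setupRoutes(app);
  // listen on a random free port and wait until the server is ready
  const server = await new Promise<Server>((resolve) => {
    const s = app.listen(0, () => resolve(s));
  });
  const { port } = server.address() as { port: number };
  return {
    url: `http://localhost:${port}`,
    close: () => server.close()
  };
};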

Tests setup

Tests for endpoints can be split into two parts. The first is related to the external dependencies setup.

The example below creates a mocked service and spins up the database using test containers. The environment variables are set for the aforementioned dependencies, and the main service starts running.

The database is cleaned before every test run. External dependencies (mocked service and database) are closed after tests finish.

// test/users.spec.ts
describe('UsersController (integration)', () => {
  let app: INestApplication;
  let dummyUserServiceServerClose: () => void;
  let postgresContainer: StartedTestContainer;
  let usersRepository: Repository<UsersEntity>;
  const databaseConfig = {
    databaseName: 'nestjs-starter-db',
    databaseUsername: 'user',
    databasePassword: 'some-r4ndom-pasS',
    databasePort: 5432,
  };

  beforeAll(async () => {
    const dummyUserServiceServer = await createDummyUserServiceServer();
    dummyUserServiceServerClose = dummyUserServiceServer.close;
    postgresContainer = await new GenericContainer('postgres:15-alpine')
      .withEnvironment({
        POSTGRES_USER: databaseConfig.databaseUsername,
        POSTGRES_PASSWORD: databaseConfig.databasePassword,
        POSTGRES_DB: databaseConfig.databaseName,
      })
      .withExposedPorts(databaseConfig.databasePort)
      .start();
    const moduleFixture: TestingModule = await Test.createTestingModule({
      imports: [AppModule],
    })
      .overrideProvider(ConfigService)
      .useValue({
        get: (key: string): string => {
          const map: Record<string, string | undefined> = process.env;
          map.USER_SERVICE_URL = dummyUserServiceServer.url;
          map.DATABASE_HOSTNAME = postgresContainer.getHost();
          map.DATABASE_PORT = `${postgresContainer.getMappedPort(databaseConfig.databasePort)}`;
          map.DATABASE_NAME = databaseConfig.databaseName;
          map.DATABASE_USERNAME = databaseConfig.databaseUsername;
          map.DATABASE_PASSWORD = databaseConfig.databasePassword;
          return map[key] || '';
        },
      })
      .compile();
    app = moduleFixture.createNestApplication();
    usersRepository = app.get(getRepositoryToken(UsersEntity));
    await app.init();
  });

  beforeEach(async () => {
    await usersRepository.delete({});
  });

  afterAll(async () => {
    await app.close();
    dummyUserServiceServerClose();
    await postgresContainer.stop();
  });
  // ...
});

Tests

The second part covers tests for the implemented endpoints. The first test suite asserts retrieving data from the external service based on the sent type as a query parameter.

// test/users.spec.ts
describe('/users (GET)', () => {
  it('should return list of users', async () => {
    return request(app.getHttpServer())
      .get('/users?type=user')
      .expect(HttpStatus.OK)
      .then((response) => {
        expect(response.body).toEqual(usersResponse);
      });
  });

  it('should throw an error when type is forbidden', async () => {
    return request(app.getHttpServer())
      .get('/users?type=admin')
      .expect(HttpStatus.FORBIDDEN);
  });
});

The second test suite asserts retrieving the data from the database.

// test/users.spec.ts
describe('/users/:id (GET)', () => {
  it('should return found user', async () => {
    const userId = 'b618445a-0089-43d5-b9ca-e6f2fc29a11d';
    const userDetails = {
      id: userId,
      firstName: 'tester',
    };
    const newUser = await usersRepository.create(userDetails);
    await usersRepository.save(newUser);
    return request(app.getHttpServer())
      .get(`/users/${userId}`)
      .expect(HttpStatus.OK)
      .then((response) => {
        expect(response.body).toEqual(userDetails);
      });
  });

  it('should return 404 error when user is not found', async () => {
    const userId = 'b618445a-0089-43d5-b9ca-e6f2fc29a11d';
    return request(app.getHttpServer())
      .get(`/users/${userId}`)
      .expect(HttpStatus.NOT_FOUND);
  });
});

Boilerplate

Here is the link to the boilerplate I use for development. It contains the examples mentioned above with more details.

2022

Debugging Node.js apps with Visual Studio Code debugger

December 28, 2022

Rather than debugging with console logs, debugging with a debugger and breakpoints is recommended. VSCode provides a built-in debugger for JavaScript-based apps.

This post covers configuring and running a debugger for various Node.js apps in VSCode.

Configuration basics

VSCode configurations can use runtime executables like npm and ts-node. The executables mentioned above should be installed globally before running the configurations.

A configuration can use the program field to point to the binary executable package inside node_modules directory to avoid installing packages globally.

Runtime executables and programs can have arguments defined in runtimeArgs and args fields, respectively.

A configuration can have different requests:

  • attach - the debugger is attached to the running process
  • launch - the debugger launches a new process and wraps it

There are multiple configuration types:

  • node - runs the program from program field, and logs are shown in debug console
  • node-terminal - runs the command from command field and shows the logs in the terminal

The configuration file is .vscode/launch.json. The selected configuration on the Run and Debug tab is used as the default one.

Launch configs

Node

Below are examples of configurations for running Node processes with the debugger.

{
  "version": "0.2.0",
  "configurations": [
    // ...
    {
      "name": "Launch script in debug console",
      "program": "index.js", // update entry point
      "request": "launch",
      "type": "node",
      "skipFiles": [
        "<node_internals>/**"
      ]
    },
    {
      "name": "Launch script in the terminal",
      "command": "node index.js", // update entry point
      "request": "launch",
      "type": "node-terminal",
      "skipFiles": [
        "<node_internals>/**"
      ]
    }
  ]
}

Running npm scripts in debug mode

In both configurations, the debugger launches the given npm script, dev in this case.

{
  "version": "0.2.0",
  "configurations": [
    // ...
    {
      "name": "Launch dev script in debug console",
      "runtimeExecutable": "npm",
      "runtimeArgs": [
        "run",
        "dev"
      ],
      "request": "launch",
      "type": "node",
      "skipFiles": [
        "<node_internals>/**"
      ]
    },
    {
      "name": "Launch dev script in the terminal",
      "command": "npm run dev",
      "request": "launch",
      "type": "node-terminal"
    }
  ]
}

ts-node

The following configurations will wrap the debugger around ts-node entry point.

{
  "version": "0.2.0",
  "configurations": [
    // ...
    {
      "name": "Launch ts-node script in debug console",
      "program": "node_modules/.bin/ts-node",
      "args": ["index.ts"], // update entry point
      "request": "launch",
      "type": "node",
      "skipFiles": [
        "<node_internals>/**"
      ]
    },
    {
      "name": "Launch ts-node script in the terminal",
      "command": "ts-node index.ts", // update entry point
      "request": "launch",
      "type": "node-terminal",
      "skipFiles": [
        "<node_internals>/**"
      ]
    }
  ]
}

@babel/node

The following configuration will wrap the debugger around babel-node entry point.

{
  "version": "0.2.0",
  "configurations": [
    // ...
    {
      "name": "Launch babel-node script in debug console",
      "program": "node_modules/.bin/babel-node",
      "args": ["src"], // update entry point
      "request": "launch",
      "type": "node",
      "skipFiles": [
        "<node_internals>/**"
      ]
    }
  ]
}

Nodemon

Add a new configuration via the Run → Add configuration option and select Node.js: Nodemon Setup.

Update the program field to point to the nodemon executable package, and add arguments via the args field to point to the entry point.

{
  "version": "0.2.0",
  "configurations": [
    // ...
    {
      "name": "Launch nodemon script in debug console",
      "program": "node_modules/.bin/nodemon",
      "args": ["-r", "dotenv/config", "--exec", "babel-node", "src/index.js"], // update entry point
      "request": "launch",
      "type": "node",
      "console": "integratedTerminal",
      "internalConsoleOptions": "neverOpen",
      "restart": true,
      "skipFiles": [
        "<node_internals>/**"
      ]
    }
  ]
}

Mocha

Add a new configuration and choose the Node.js: Mocha Tests configuration.

Replace tdd with bdd as the -u parameter, and point the program field to the mocha executable package.

{
  "version": "0.2.0",
  "configurations": [
    // ...
    {
      "name": "Launch mocha tests",
      "program": "node_modules/.bin/mocha",
      "args": [
        "-u",
        "bdd",
        "--timeout",
        "999999",
        "--colors",
        "test"
      ],
      "request": "launch",
      "type": "node",
      "internalConsoleOptions": "openOnSessionStart",
      "skipFiles": [
        "<node_internals>/**"
      ]
    }
  ]
}

Attach configs

Auto Attach should be activated in settings with the With Flag value. In that case, auto-attaching happens when the --inspect flag is given.

The debugger should be attached when some of the following scripts are executed.

{
  // ...
  "scripts": {
    // ...
    "start:debug": "node --inspect index.js" // update entry point
  }
}
Jest
{
  // ...
  "scripts": {
    // ...
    "test:debug": "node --inspect -r tsconfig-paths/register -r ts-node/register node_modules/.bin/jest --runInBand"
  }
}
NestJS
{
  // ...
  "scripts": {
    // ...
    "start:debug": "nest start --debug --watch"
  }
}

Debugging basics

During debugging, the variables tab shows local variables. The step over option goes to the following statement in the codebase, while the step into option goes deeper into the current statement.

Logpoints can add logs to the debug console when a certain part of the codebase executes, without pausing the process.

Boilerplate

Here is the link to the boilerplate I use for development.

Timeout with Fetch API

November 2, 2022

Setting up a timeout for HTTP requests can prevent the connection from hanging forever while waiting for the response. It can be set on the client side to improve user experience and on the server side to improve inter-service communication. The Fetch API is fully available in Node.js from version 18.

AbortController can be utilized to set up timeouts. An instantiated abort controller has a signal property, which is a reference to its associated AbortSignal object. The abort signal object is used as the signal parameter in the request with the Fetch API, so the HTTP request is aborted when the abort method is called.

const HTTP_TIMEOUT = 3000;
const URL = 'https://www.google.com:81';

(async () => {
  const controller = new AbortController();
  const timeoutId = setTimeout(() => controller.abort(), HTTP_TIMEOUT);
  try {
    const response = await fetch(URL, {
      signal: controller.signal
    }).then((res) => res.json());
    console.log(response);
  } catch (error) {
    console.error(error);
  } finally {
    clearTimeout(timeoutId);
  }
})();

Use this snippet also to simulate aborted requests.
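
On newer runtimes (Node.js 17.3+ and modern browsers), AbortSignal.timeout offers a shorthand for the same behavior, without a manual controller and setTimeout:

const response = await fetch(URL, {
  signal: AbortSignal.timeout(HTTP_TIMEOUT)
}).then((res) => res.json());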

Boilerplate

Here is the link to the template I use for development.

Push notifications with Firebase

May 29, 2022

Push notifications are a great alternative to email notifications. There is no need for a verification step, UX is improved, and user engagement with the app is increased. This post covers both front-end and back-end setups.

Requirements for the push notifications

  • firebase package installed
  • Created Firebase project
  • Firebase configuration can be found on Project Overview → Project settings → General → Your apps
  • Public Vapid key can be found on Project Overview → Project settings → Cloud Messaging → Web Push certificates (used on the front end)
  • Firebase messaging service worker
  • Project ID can be found on the Project Overview → Project settings → General tab
  • Server key for sending the push notifications (used on the back end)
  • HTTPS connection (localhost for local development)

Front-end setup

Firebase messaging service worker

The following service worker should be registered to handle background notifications. A custom notificationclick handler should be implemented before importing the firebase libraries. The implementation below opens a new window with the defined URL if it is not already open. Firebase automatically checks for a service worker at /firebase-messaging-sw.js, so it should be publicly available.

// /firebase-messaging-sw.js
/* eslint-disable no-unused-vars */
self.addEventListener('notificationclick', (event) => {
  event.notification.close();
  const DEFAULT_URL = '<URL>';
  const url =
    event.notification?.data?.FCM_MSG?.notification?.click_action ||
    DEFAULT_URL;
  event.waitUntil(
    clients.matchAll({ type: 'window' }).then((clientsArray) => {
      const hadWindowToFocus = clientsArray.some((windowClient) =>
        windowClient.url === url ? (windowClient.focus(), true) : false
      );
      if (!hadWindowToFocus)
        clients
          .openWindow(url)
          .then((windowClient) => (windowClient ? windowClient.focus() : null));
    })
  );
});
importScripts('https://www.gstatic.com/firebasejs/9.19.1/firebase-app-compat.js');
importScripts('https://www.gstatic.com/firebasejs/9.19.1/firebase-messaging-compat.js');
// the compat builds expose a global firebase namespace
const firebaseApp = firebase.initializeApp({
  apiKey: 'xxxxxx',
  authDomain: 'xxxxxx',
  projectId: 'xxxxxx',
  storageBucket: 'xxxxxx',
  messagingSenderId: 'xxxxxx',
  appId: 'xxxxxx',
  measurementId: 'xxxxxx'
});
const messaging = firebase.messaging();

Helper functions

getToken
  • generates a unique registration token for the browser or gets an already generated token
  • requests permission to receive push notifications
  • triggers the Firebase messaging service worker

There are multiple types of errors in the response:

  • code messaging/permission-blocked - user blocks the notifications
  • code messaging/unsupported-browser - user's browser doesn't support the APIs required to use the Firebase SDK
  • code messaging/failed-service-worker-registration - there's an issue with the Firebase messaging service worker

The registration token is invalidated when a user manually blocks the notifications in the browser settings.

isSupported
  • checks if all required APIs for push notifications are supported
  • returns Promise<boolean>

It should be used in useEffect hooks.

import { isSupported } from 'firebase/messaging';
// ...
useEffect(() => {
  isSupported()
    .then((isAvailable) => {
      if (isAvailable) {
        // ...
      }
    })
    .catch(console.error);
}, []);
// ...
initializeApp
  • should be called before the app starts
import { initializeApp } from 'firebase/app';
import { getMessaging, getToken } from 'firebase/messaging';
import { firebaseConfig } from 'constants/config';

export const initializeFirebase = () => initializeApp(firebaseConfig);

export const getTokenForPushNotifications = async () => {
  const messaging = getMessaging();
  const token = await getToken(messaging, {
    vapidKey: process.env.NEXT_PUBLIC_VAPID_KEY
  });
  return token;
};

Back-end setup

Server keys

The server key for API v1 can be derived from the service account key JSON file. In that case, the JSON file should be encoded and stored in an environment variable to prevent exposing credentials in the repository codebase. The service account key JSON file can be downloaded by clicking Generate new private key on the Project Overview → Project settings → Service accounts tab.

import * as serviceAccountKey from './service-account-key.json';

const encodedServiceAccountKey = Buffer.from(
  JSON.stringify(serviceAccountKey)
).toString('base64');
process.env.SERVICE_ACCOUNT_KEY = encodedServiceAccountKey;

import 'dotenv/config';
import * as googleAuth from 'google-auth-library';

(async () => {
  const serviceAccountKeyEncoded = process.env.SERVICE_ACCOUNT_KEY;
  const serviceAccountKeyDecoded = JSON.parse(
    Buffer.from(serviceAccountKeyEncoded, 'base64').toString('ascii')
  );
  const jwt = new googleAuth.JWT(
    serviceAccountKeyDecoded.client_email,
    null,
    serviceAccountKeyDecoded.private_key,
    ['https://www.googleapis.com/auth/firebase.messaging'],
    null
  );
  const tokens = await jwt.authorize();
  const authorizationHeader = `Bearer ${tokens.access_token}`;
  console.log(authorizationHeader);
})();

Manually sending the push notification

The icon URL should be served over HTTPS so the notification displays it correctly.

  • API v1
// ...
try {
  const response = await axios.post(
    `https://fcm.googleapis.com/v1/projects/${projectId}/messages:send`,
    {
      message: {
        notification: {
          title: 'Push notifications with Firebase',
          body: 'Push notifications with Firebase body'
        },
        webpush: {
          fcmOptions: {
            link: 'http://localhost:3000'
          },
          notification: {
            icon: 'https://picsum.photos/200'
          }
        },
        token: registrationToken
      }
    },
    {
      headers: {
        Authorization: authorizationHeader
      }
    }
  );
  console.log(response.data);
} catch (error) {
  console.error(error?.response?.data?.error);
}
// ...

A successful response returns an object with a name key, which contains the notification ID in the format projects/{project_id}/messages/{message_id}.

There are multiple types of errors in the response:

  • code 400 - request body is not valid
  • code 401 - the derived token is expired
  • code 404 - registration token was not found
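These errors follow the standard Google API error shape. A response for an unknown registration token, for example, looks roughly like this:

{
  "error": {
    "code": 404,
    "message": "Requested entity was not found.",
    "status": "NOT_FOUND"
  }
}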

Demo

The demo with the mentioned examples is available here.

Boilerplate

Here is the link to the template I use for the development.

Sending e-mails with Mailtrap

March 8, 2022

For local testing, or testing in general, there is no need to send e-mails to real e-mail addresses. The Mailtrap service catches the outgoing e-mails and previews them instead.

The inbox page shows the SMTP credentials (username and password). A list of inboxes is available on the Projects page.

import nodemailer from 'nodemailer';

(async () => {
  const emailConfiguration = {
    auth: {
      user: process.env.EMAIL_USERNAME,
      pass: process.env.EMAIL_PASSWORD
    },
    host: process.env.EMAIL_HOST, // 'smtp.mailtrap.io'
    port: process.env.EMAIL_PORT, // 2525
    // environment variables are strings, so parse the boolean explicitly
    secure: process.env.EMAIL_SECURE === 'true'
  };
  const transport = nodemailer.createTransport(emailConfiguration);
  const info = await transport.sendMail({
    from: '"Sender" <sender@example.com>',
    to: 'recipient1@example.com, recipient2@example.com',
    subject: 'Subject',
    text: 'Text',
    html: '<b>Text</b>'
  });
  console.log('Message sent: %s', info.messageId);
})();

Demo

The demo with the mentioned example is available here.

2021

Spies and mocking with Jest

August 19, 2021

Besides asserting the output of a function call, unit testing includes the usage of spies and mocking. Spies are functions that let you track the behavior of functions called indirectly by some other code; a spy can be created using jest.fn(). Mocking injects test values into the code during the tests. Some of the use cases are presented below.
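As a quick illustration first, a minimal sketch of a standalone spy created with jest.fn() (the calculate name is only an example):

const calculate = jest.fn((a, b) => a + b);

calculate(1, 2);

// the spy records its calls and return values
expect(calculate).toHaveBeenCalledWith(1, 2);
expect(calculate).toHaveReturnedWith(3);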

  • An async function and its resolved value can be mocked using mockResolvedValue. Another way to mock it is by using mockImplementation and providing a function as an argument.
const calculationService = {
  calculate: jest.fn()
};
jest.spyOn(calculationService, 'calculate').mockResolvedValue(value);
jest
  .spyOn(calculationService, 'calculate')
  .mockImplementation(async (a) => Promise.resolve(a));
  • A rejected async function can be mocked using mockRejectedValue or mockImplementation.
jest
  .spyOn(calculationService, 'calculate')
  .mockRejectedValue(new Error(errorMessage));
jest
  .spyOn(calculationService, 'calculate')
  .mockImplementation(async () => Promise.reject(new Error(errorMessage)));
await expect(calculateSomething(calculationService)).rejects.toThrowError(
  Error
);
  • A sync function and its return value can be mocked using mockReturnValue or mockImplementation.
jest.spyOn(calculationService, 'calculate').mockReturnValue(value);
jest.spyOn(calculationService, 'calculate').mockImplementation((a) => a);
  • Chained methods can be mocked using mockReturnThis.
// calculationService.get().calculate();
jest.spyOn(calculationService, 'get').mockReturnThis();
  • Async and sync functions called multiple times can be mocked with a different value for each call using mockResolvedValueOnce and mockReturnValueOnce, respectively, or mockImplementationOnce.
jest
  .spyOn(calculationService, 'calculate')
  .mockResolvedValueOnce(value)
  .mockResolvedValueOnce(otherValue);
jest
  .spyOn(calculationService, 'calculate')
  .mockReturnValueOnce(value)
  .mockReturnValueOnce(otherValue);
jest
  .spyOn(calculationService, 'calculate')
  .mockImplementationOnce((a) => a + 3)
  .mockImplementationOnce((a) => a + 5);
  • External modules can be mocked similarly to spies. Let's suppose the axios package is already used in one function; the following example represents a test file where axios is mocked using jest.mock().
import axios from 'axios';
jest.mock('axios');
// within test case
axios.get.mockResolvedValue(data);
  • Manual mocks are created by writing the corresponding modules in the __mocks__ directory, e.g., the fs/promises mock is stored in the __mocks__/fs/promises.js file (a sketch of such a file is shown after this list). The mock is then resolved using jest.mock() in the test file.
jest.mock('fs/promises');
  • To assert the called arguments for a mocked function, an assertion can be done using the toHaveBeenCalledWith matcher.
const spy = jest.spyOn(calculationService, 'calculate');
expect(spy).toHaveBeenCalledWith(firstArgument, secondArgument);
  • To assert a skipped call for a mocked function, an assertion can be done using the not.toHaveBeenCalled matcher.
const spy = jest.spyOn(calculationService, 'calculate');
expect(spy).not.toHaveBeenCalled();
  • To assert how many times a mocked function is called, an assertion can be done using the toHaveBeenCalledTimes matcher.
const spy = jest.spyOn(calculationService, 'calculate');
calculationService.calculate(3);
calculationService.calculate(2);
expect(spy).toHaveBeenCalledTimes(2);
  • To assert the called arguments for the exact call when a mocked function is called multiple times, an assertion can be done using the toHaveBeenNthCalledWith matcher.
const argumentsList = [0, 1];
argumentsList.forEach((argument, index) => {
  expect(calculationService.calculate).toHaveBeenNthCalledWith(
    index + 1,
    argument
  );
});
  • Methods should be restored to their initial implementation before each test case.
// package.json
"jest": {
  // ...
  "restoreMocks": true
}
// ...
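As referenced in the manual mocks item above, a minimal sketch of the __mocks__/fs/promises.js file (the mocked return values are only examples):

// __mocks__/fs/promises.js
export const readFile = jest.fn().mockResolvedValue('mocked file content');
export const writeFile = jest.fn().mockResolvedValue(undefined);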

Demo

The demo with the mentioned examples is available here.

Boilerplate

Here is the link to the template I use for the development.

Server-Sent Events 101

August 18, 2021

Server-Sent Events (SSE) is a unidirectional communication channel from the server to the client. The client initiates the connection with the server using the EventSource API.

The same API can also listen to the events from the server, listen for errors, and close the connection.

const eventSource = new EventSource(url);

eventSource.onmessage = ({ data }) => {
  const eventData = JSON.parse(data);
  // handling the data from the server
};

eventSource.onerror = () => {
  // error handling
};

eventSource.close();
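The EventSource API can also subscribe to named events when the server sets a type field on the message event (the update name is only an example):

eventSource.addEventListener('update', ({ data }) => {
  const eventData = JSON.parse(data);
  // handling the named event data
});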

A server can send the events in the text/event-stream format to the client once the client establishes the connection. A server can also filter clients by a query parameter and send them only the appropriate events.

In the following example, the NestJS server sends the events only to a specific client distinguished by its e-mail address.

import { Controller, Query, Sse } from '@nestjs/common';
import { EventEmitter2 } from '@nestjs/event-emitter';
import { Observable, Subject } from 'rxjs';
import { map } from 'rxjs/operators';
import { MessageEvent, MessageEventData } from './message-event.interface';
import { SseQueryDto } from './sse-query.dto';

@Controller()
export class AppController {
  constructor(private readonly eventService: EventEmitter2) {}

  @Sse('sse')
  sse(@Query() sseQuery: SseQueryDto): Observable<MessageEvent> {
    const subject$ = new Subject();
    this.eventService.on(FILTER_VERIFIED, (data) => {
      if (sseQuery.email !== data.email) return;
      subject$.next({ isVerifiedFilter: data.isVerified });
    });
    return subject$.pipe(
      map((data: MessageEventData): MessageEvent => ({ data })),
    );
  }
  // ...
}

Emitting the event mentioned above is done in the following way.

const filterVerifiedEvent = new FilterVerifiedEvent();
filterVerifiedEvent.email = user.email;
filterVerifiedEvent.isVerified = true;
this.eventService.emit(FILTER_VERIFIED, filterVerifiedEvent);
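On the client side, the e-mail used for the filtering is passed as a query parameter when initiating the connection (the URL is only an example):

const eventSource = new EventSource(
  'http://localhost:3000/sse?email=user@example.com'
);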

Boilerplate

Here is the link to the template I use for the development.

2020