
HTTP timeout with Axios

September 26, 2024

Setting up a timeout for HTTP requests can prevent the connection from hanging forever, waiting for the response. It can be set on the client side to improve user experience, and on the server side to improve inter-service communication.

The axios package provides a timeout parameter for this purpose.

import axios from 'axios';

const HTTP_TIMEOUT = 3000;
const URL = 'https://www.google.com:81';

(async () => {
  try {
    await axios(URL, {
      timeout: HTTP_TIMEOUT
    });
  } catch (error) {
    console.error('Request timed out', error.cause);
  }
})();

This snippet can also be used to simulate aborted requests.
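
For comparison, here is a minimal sketch of the same timeout behavior using the built-in fetch API with AbortSignal.timeout (available in modern Node.js and browsers):

const HTTP_TIMEOUT = 3000;
const URL = 'https://www.google.com:81';

(async () => {
  try {
    // AbortSignal.timeout aborts the request after the given number of milliseconds
    await fetch(URL, { signal: AbortSignal.timeout(HTTP_TIMEOUT) });
  } catch (error) {
    // The request rejects with a TimeoutError when the signal fires
    console.error('Request timed out', error.name);
  }
})();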


RabbitMQ container with Docker Compose

September 3, 2024

Docker Compose facilitates spinning up a container for the RabbitMQ broker without installing it locally.

Prerequisites

  • Docker Compose installed

Configuration

The following configuration spins up the RabbitMQ container with the management UI tool.

The connection string for the RabbitMQ broker with local virtual host is amqp://localhost:5672/local.

The RabbitMQ management UI is available at http://localhost:15672. Default credentials are guest as username and guest as password.

# docker-compose.yml
version: '3.8'
services:
  rabbitmq:
    image: rabbitmq:3-management
    ports:
      - 5672:5672
      - 15672:15672
    environment:
      - RABBITMQ_DEFAULT_VHOST=local
    volumes:
      - 'rabbitmq_data:/var/lib/rabbitmq'
volumes:
  rabbitmq_data:

Run the following command to spin up the container.

docker-compose up
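
To smoke-test the broker, here is a minimal sketch using the amqplib package (assumed installed via npm i amqplib; the queue name is arbitrary):

import amqp from 'amqplib';

// Connect using the local virtual host from the configuration above
const connection = await amqp.connect('amqp://localhost:5672/local');
const channel = await connection.createChannel();

// Declare a queue and publish a test message
await channel.assertQueue('test-queue');
channel.sendToQueue('test-queue', Buffer.from('Hello RabbitMQ'));

await channel.close();
await connection.close();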


Simulating keyboard typing with JavaScript

August 21, 2024

Simulating keyboard typing in JavaScript can be useful for automating tasks or testing applications. The KeyboardEvent API allows developers to trigger keyboard events programmatically.

Examples

  • The snippet below simulates pressing the Ctrl + Enter command. The bubbles flag ensures the event moves up through the DOM, so any elements higher up in the document can also detect and respond to it.
const event = new KeyboardEvent('keydown', {
  key: 'Enter',
  ctrlKey: true,
  bubbles: true
});
document.dispatchEvent(event);
  • The snippet below simulates pressing the Shift + Enter command on a specific input field.
const event = new KeyboardEvent('keydown', {
  key: 'Enter',
  shiftKey: true,
  bubbles: true
});
document.querySelector('input').dispatchEvent(event);


Profiling Node.js apps with Chrome DevTools profiler

July 5, 2024

Profiling refers to analyzing and measuring an application's performance characteristics.

Profiling helps identify performance bottlenecks in a Node.js app, such as CPU-intensive tasks like cryptographic operations, image processing, or complex calculations.

This post covers running a profiler for various Node.js apps in Chrome DevTools.

Prerequisites

  • Google Chrome installed

  • Node.js app bootstrapped

Setup

  • Run node --inspect app.js to start the debugger.

  • Open chrome://inspect, click Open dedicated DevTools for Node and then navigate to the Performance tab. Start recording.

  • Run load testing via the autocannon package using the following command format: npx autocannon <URL> (see the example after this list).

  • Stop recording in Chrome DevTools.
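
For example, assuming the app listens locally on port 3000, a run with 100 concurrent connections for 30 seconds looks like this:

npx autocannon -c 100 -d 30 http://localhost:3000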

Profiling

On the Performance tab in Chrome DevTools, open the Bottom-Up subtab to identify which functions consume the most time.

Look for potential performance bottlenecks, such as synchronous functions for hashing (pbkdf2Sync) or file system operations (readFileSync).


Load and stress testing with k6

May 10, 2024

k6 is a performance testing tool. This post explains types of performance testing and dives into k6 usage, from configuration to running tests.

Load and stress testing

Load and stress testing are two types of performance testing used to evaluate how well a system performs under various conditions.

Load testing determines how the system performs under expected user loads. The purpose is to identify performance bottlenecks.

Stress testing assesses how the system performs when loads are heavier than usual. The purpose is to find the limit at which the system fails and to observe how it recovers from such failures.

Prerequisites

  • k6 installed

  • Script (JavaScript) file with configuration and execution function

Configuration

Configuration is stored inside the options variable, which allows you to set up different testing scenarios:

  • constant user load, the number of virtual users (vus) remains constant throughout the test period
export const options = {
  vus: 30,
  duration: '10m'
};
  • variable user load, the number of users increases and decreases over time
export const options = {
  stages: [
    {
      duration: '1m',
      target: 30
    },
    {
      duration: '10m',
      target: 30
    },
    {
      duration: '5m',
      target: 0
    }
  ]
};

Environment variables can be passed through the command line and are accessible within the script via the __ENV object.

k6 run -e TOKEN=token script.js

Execution function

This function defines what virtual users do during the test. It is called for each virtual user and typically includes steps that simulate user actions on the app.

import http from 'k6/http';

export default function () {
  http.get(URL);
  // Add more actions as required
}

Test report

k6 generates a report that provides detailed insights into various benchmarks, such as the number of virtual users, requests per second, request durations and error rates.

Example

This example utilizes k6 to conduct a load test using a variable user load approach:

  • User simulation: The script ramps up to 1,000 users, maintains that level to simulate sustained traffic, and gradually reduces to zero.

  • Request handling: During the test, each virtual user sends a POST request to an API, with pauses between requests to mimic real user behavior.

  • Performance insights: After the test, k6 provides a report that shows key information, such as how fast the app responds and how many requests fail.

Run it via the k6 run -e TOKEN=1234 script.js command.

// script.js
import { check, sleep } from 'k6';
import { scenario } from 'k6/execution';
import http from 'k6/http';

export const options = {
  stages: [
    // Ramp up to 1000 users over 10 minutes
    {
      duration: '10m',
      target: 1000
    },
    // Hold 1000 users for 30 minutes
    {
      duration: '30m',
      target: 1000
    },
    // Ramp down to 0 users over 5 minutes
    {
      duration: '5m',
      target: 0
    }
  ]
};

export default () => {
  const response = http.post(
    URL,
    JSON.stringify({
      iteration: scenario.iterationInTest
    }),
    {
      headers: {
        Authorization: __ENV.TOKEN,
        'Content-Type': 'application/json'
      }
    }
  );
  check(response, {
    'response status was 200': (res) => res.status === 200
  });
  sleep(1);
};


Node Version Manager (nvm) overview

May 9, 2024

nvm facilitates switching between different Node versions across projects. This post covers its overview from installation to version management.

Installation

To install nvm, execute the following commands in your terminal. This example uses zsh, but the process is similar for other shells like bash.

curl -o- https://raw.githubusercontent.com/nvm-sh/nvm/v0.39.7/install.sh | bash
source ~/.zshrc

Version management

  • Install the specific version. Including the v prefix is optional.

    nvm install v21.7.2
  • Install the latest version

    nvm install node
  • Install the latest one for the specified major version

    nvm install 22
  • Switch to a specific installed version

    nvm use 21
  • Add a .nvmrc file inside the project directory and run the nvm use command to use the specified installed version.

    v21.7.2
  • Get the list of locally installed versions

    nvm ls
  • Get the list of available versions for installation

    nvm ls-remote


Debugging Node.js apps with Chrome DevTools debugger

April 12, 2024

Debugging with a debugger and breakpoints is recommended rather than using console logs. Chrome provides a built-in debugger for JavaScript-based apps.

This post covers configuring and running a debugger for various Node.js apps in Chrome DevTools.

Prerequisites

  • Google Chrome installed

  • Node.js app bootstrapped

Setup

Open chrome://inspect, click Open dedicated DevTools for Node and open the Connection tab. You should see a list of network endpoints for debugging.

Run node --inspect-brk app.js to start the debugger; it logs the debugger endpoint. Choose that endpoint in the Connection tab to open the debugger in Chrome DevTools.

Debugging basics

The Variables tab shows local and global variables during debugging.

The step over next function call option moves to the next statement in the codebase, while the step into next function call option goes deeper into the current statement.

Add logs to the debug console via logpoints: select a specific part of the code, and it logs each time the selected code executes.


Sending e-mails with SendGrid

March 30, 2024

To send e-mails in a production environment, use services like SendGrid.

Verify the e-mail address on the Sender Management page and create the SendGrid API key on the API keys page.

import nodemailer from 'nodemailer';

(async () => {
  const emailConfiguration = {
    auth: {
      user: process.env.EMAIL_USERNAME, // 'apikey'
      pass: process.env.EMAIL_PASSWORD
    },
    host: process.env.EMAIL_HOST, // 'smtp.sendgrid.net'
    port: process.env.EMAIL_PORT, // 465
    secure: process.env.EMAIL_SECURE // true
  };
  const transport = nodemailer.createTransport(emailConfiguration);
  const info = await transport.sendMail({
    from: '"Sender" <sender@example.com>',
    to: 'recipient1@example.com, recipient2@example.com',
    subject: 'Subject',
    text: 'Text',
    html: '<b>Text</b>'
  });
  console.log('Message sent: %s', info.messageId);
})();


MongoDB containers with Docker Compose

February 2, 2024

Docker Compose facilitates spinning up a container for the MongoDB database without installing MongoDB locally.

Prerequisites

  • Docker Compose installed

Configuration

The following configuration spins up the MongoDB container with the UI tool (Mongo Express).

The connection string for the MongoDB database is mongodb://localhost:27018.

Mongo Express is available at http://localhost:8082. Use the Basic auth credentials below to log in to Mongo Express.

# docker-compose.yml
version: '3.8'
services:
  mongo:
    image: 'mongo:7.0.5'
    ports:
      - 27018:27017
    volumes:
      - my-data:/data/db
  mongo-express:
    image: 'mongo-express:1.0.2'
    ports:
      - 8082:8081
    environment:
      ME_CONFIG_BASICAUTH_USERNAME: username
      ME_CONFIG_BASICAUTH_PASSWORD: password
volumes:
  my-data:

Run the following command to spin up the containers.

docker-compose up
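
To confirm connectivity, here is a minimal sketch using the official mongodb driver (assumed installed via npm i mongodb; the database and collection names are arbitrary):

import { MongoClient } from 'mongodb';

const client = new MongoClient('mongodb://localhost:27018');

// Connect, insert a test document, and read it back
await client.connect();
const collection = client.db('test').collection('items');
await collection.insertOne({ name: 'first' });
console.log(await collection.findOne({ name: 'first' }));
await client.close();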


Web scraping with cheerio

January 19, 2024

Web scraping means extracting data from websites. This post covers extracting data from the page's HTML tags.

Prerequisites

  • cheerio package is installed

  • HTML page is retrieved via an HTTP client

Usage

  • create a scraper object with the load method by passing HTML content as an argument (see the end-to-end sketch after this list)

    • set decodeEntities option to false to preserve encoded characters (like &) in their original form
    const $ = load('<div><!-- HTML content --></div>', { decodeEntities: false });
  • find DOM elements by using CSS-like selectors

    const items = $('.item');
  • iterate through found elements using each method

    items.each((index, element) => {
      // ...
    });
  • access element content using specific methods

    • text - $(element).text()

    • HTML - $(element).html()

    • attributes

      • all - $(element).attr()
      • specific one - $(element).attr('href')
    • child elements

      • first - $(element).first()
      • last - $(element).last()
      • all - $(element).children()
      • specific one - $(element).find('a')
    • siblings

      • previous - $(element).prev()
      • next - $(element).next()
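
Putting these steps together, a minimal end-to-end sketch (the URL and the .item selector are placeholders):

import { load } from 'cheerio';

// Retrieve the HTML page via an HTTP client
const response = await fetch('https://example.com');
const html = await response.text();

// Create a scraper object and extract data with CSS-like selectors
const $ = load(html, { decodeEntities: false });
$('.item').each((index, element) => {
  console.log($(element).text(), $(element).attr('href'));
});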

Disclaimer

Please check the website's terms of service before scraping it. Some websites may have terms of service that prohibit such activity.


2023

Integration with GitHub GraphQL API

December 22, 2023

GitHub provides GraphQL API to create integrations, retrieve data, and automate workflows.

Prerequisites

  • GitHub token (Settings → Developer Settings → Personal access tokens)

Integration

Below is an example of retrieving sponsorable users by location.

export async function getUsersBy(location) {
  return fetch('https://api.github.com/graphql', {
    method: 'POST',
    body: JSON.stringify({
      query: `query {
        search(type: USER, query: "location:${location} is:sponsorable", first: 100) {
          edges {
            node {
              ... on User {
                bio
                login
                viewerCanSponsor
              }
            }
          }
          userCount
        }
      }`
    }),
    headers: {
      'Content-Type': 'application/json',
      Authorization: `Bearer ${process.env.GITHUB_TOKEN}`
    }
  })
    .then((response) => response.json())
    .then((response) => response.data?.search?.edges || []);
}


Web scraping with jsdom

December 14, 2023

Web scraping means extracting data from websites. This post covers extracting data from the page's HTML when data is stored in a JavaScript variable or as stringified JSON.

The scraping prerequisite is retrieving an HTML page via an HTTP client.

Examples

The example below moves data into a global variable, executes the page scripts and accesses the data from the global variable.

import jsdom from 'jsdom';

fetch(URL)
  .then((res) => res.text())
  .then((response) => {
    const dataVariable = 'someVariable.someField';
    const html = response.replace(dataVariable, `var data=${dataVariable}`);
    const dom = new jsdom.JSDOM(html, {
      runScripts: 'dangerously',
      virtualConsole: new jsdom.VirtualConsole()
    });
    console.log('data', dom?.window?.data);
  });

The example below runs the page scripts and accesses stringified JSON data.

import jsdom from 'jsdom';

fetch(URL)
  .then((res) => res.text())
  .then((response) => {
    const dom = new jsdom.JSDOM(response, {
      runScripts: 'dangerously',
      virtualConsole: new jsdom.VirtualConsole()
    });
    const data = dom?.window?.document?.getElementById('someId')?.value;
    console.log('data', JSON.parse(data));
  });

Disclaimer

Please check the website's terms of service before scraping it. Some websites may have terms of service that prohibit such activity.


License key verification with Gumroad API

November 16, 2023

Gumroad allows verifying license keys via API calls to limit the usage of the keys. It can help prevent the redistribution of products like desktop apps.

Enable generating a unique license key per sale in the product settings, where the product ID is also shown. Below is the code snippet for verification.

try {
  const requestBody = new URLSearchParams();
  requestBody.append('product_id', process.env.PRODUCT_ID);
  requestBody.append('license_key', process.env.LICENSE_KEY);
  requestBody.append('increment_uses_count', true);
  const response = await fetch('https://api.gumroad.com/v2/licenses/verify', {
    method: 'POST',
    body: requestBody
  });
  // fetch doesn't throw on HTTP error statuses, so check the status explicitly
  if (response.status === 404) {
    console.log("License key doesn't exist");
    return;
  }
  const data = await response.json();
  if (data.purchase?.test) {
    console.log('Skipping verification for test purchase');
    return;
  }
  const verificationLimit = Number(process.env.VERIFICATION_LIMIT);
  if (data.uses >= verificationLimit + 1) {
    throw new Error('Verification limit exceeded');
  }
  if (!data.success) {
    throw new Error(data.message);
  }
} catch (error) {
  console.log('Verifying license key failed', error);
}


Creating a custom GPT version of ChatGPT

November 11, 2023

Creating a custom GPT agent is available to ChatGPT Plus users. This post covers the main steps from creation to publishing.

Creation

Open the Explore GPTs tab and choose the Create option.

Write a description of what agent you would like to create.

GPT builder will also propose a GPT name and generate a profile picture.

Refine the GPT context with the builder. Choose interaction style and personalization for the agent.

Knowledge base

Upload files with knowledge data in the Configure tab.

Use files in formats like JSON, PDF, and CSV.

Using external API

Create a new action in the Configure tab by entering OpenAPI docs in the Schema field.

Enter the schema in JSON or YAML format, or import it from a URL, and ensure the servers field contains the API's URL.

Set Authentication for the provided API and test the created action via the Test button.

Security

Add a rule not to expose internal instructions so other users can't copy your configuration; if a user asks for them, answer with "Sorry, it's not possible."

Publishing

To make your GPT publicly available in the GPT Store, you need to verify the website domain.

Open Settings & Beta → Builder profile and verify the new domain for the website. You'll get a TXT value, which you need to configure on your domain service (like Namecheap), using @ as the host value.

Once you have verified the website, click the Save → Public → Confirm buttons to publish your new GPT.



PDF generation with Gotenberg

November 4, 2023

Gotenberg is a Docker-based stateless API for PDF generation from HTML and Markdown files.

To get started, configure Docker Compose and run the docker-compose up command.

version: '3.8'
services:
  gotenberg:
    image: gotenberg/gotenberg:7
    ports:
      - 3000:3000

The API is available at http://localhost:3000.

Run the following commands to generate PDFs

  • from the given URL

    curl \
    --request POST 'http://localhost:3000/forms/chromium/convert/url' \
    --form 'url="https://sparksuite.github.io/simple-html-invoice-template/"' \
    --form 'pdfFormat="PDF/A-1a"' \
    -o curl-url-response.pdf
  • from the given HTML file

    curl \
    --request POST 'http://localhost:3000/forms/chromium/convert/html' \
    --form 'files=@"./index.html"' \
    --form 'pdfFormat="PDF/A-1a"' \
    -o curl-html-response.pdf

PDF/A-1a format is used for the long-term preservation of electronic documents, ensuring that documents can be accessed and read even as technology changes.
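
The same conversion can be scripted. Below is a minimal sketch using Node's built-in fetch and FormData (the ./index.html path is a placeholder; Gotenberg expects the entry page to be named index.html):

import { readFile, writeFile } from 'fs/promises';

const form = new FormData();
// Gotenberg accepts the page to convert as a multipart 'files' field
form.append('files', new Blob([await readFile('./index.html')]), 'index.html');

const response = await fetch(
  'http://localhost:3000/forms/chromium/convert/html',
  { method: 'POST', body: form }
);
await writeFile('./fetch-html-response.pdf', Buffer.from(await response.arrayBuffer()));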


Identifying missing variables in Handlebars templates

November 3, 2023

Handlebars is a template engine that can create server-side views, e-mail templates, and invoice templates by injecting JSON data into HTML.

Resolving all variables in a Handlebars template is essential to maintain the accuracy of the displayed information and to prevent incomplete content or layout problems.

The following snippet checks for missing variables by overriding the default nameLookup function. It logs a warning for unresolved variables and sets a default value (an empty string in this case).

// ...
const originalNameLookup = Handlebars.JavaScriptCompiler.prototype.nameLookup;

Handlebars.JavaScriptCompiler.prototype.nameLookup = function (
  parent,
  name,
  type
) {
  if (type === 'context') {
    const messageLog = JSON.stringify({
      message: `Variable is not resolved in the template: ${name}`,
      level: WARNING_LEVEL
      // ...
    });
    return `${parent} && ${parent}.${name} ? ${parent}.${name} : (console.log(${messageLog}), '')`;
  }
  return originalNameLookup.call(this, parent, name, type);
};
// ...
const result = Handlebars.compile(template)(data);


Extending outdated TypeScript package declarations

November 2, 2023

Extending package declarations locally is one of the options for outdated package typings.

Create a declaration file .d.ts (e.g., handlebars.d.ts) and put it inside the src directory.

Find the exact name of the package namespace inside the node_modules types file (e.g. handlebars/types/index.d.ts).

Extend the found namespace with your needed properties, like classes, functions, etc.

// handlebars.d.ts
declare namespace Handlebars {
  export class JavaScriptCompiler {
    public nameLookup(
      parent: string,
      name: string,
      type: string
    ): string | string[];
  }
  export function doSomething(name: string): void;
  // ...
}


Bun overview

September 11, 2023

Bun is a JavaScript runtime environment that extends the JavaScriptCore engine built for Safari. Bun is designed for speed and developer experience (DX) and includes many features out of the box.

Some of the features include

  • Built-in bundler
  • Built-in test runner
  • Node.js-compatible package manager, compatible with existing npm packages
  • Compatibility with Node.js native modules like fs, path, etc.
  • TypeScript support, run TypeScript files with no extra configuration
  • Built-in watch mode
  • Support for both ES modules and CommonJS modules, both can be used in the same file
  • Native SQLite driver

Installation

Let's start by installing it with the following command

curl -fsSL https://bun.sh/install | bash

Update to the latest version with the bun upgrade command and check the current version with the bun --version command.

Run bun --help to see what CLI options are available.

Initialize an empty project via the bun init command.

The init command will bootstrap the "hello world" example with configured package.json, binary lock file (bun.lockb), and tsconfig.

Bundler

The bundler can be used via the CLI command (bun build) or the Bun.build() API.

await Bun.build({
  entrypoints: ['./index.ts'],
  outdir: './build'
});

Below is an example of CLI usage. Run bun build --help to see all of the available options.

bun build --target=bun ./index.ts --outdir=./build

Build a single executable using the compile flag.

bun build ./index.js --compile

Package manager

Install packages from package.json via the bun install command.

Install additional npm packages via the bun add command (e.g., bun add zod). To install dev dependencies, run bun add with --dev option (e.g., bun add zod --dev)

Remove dependencies via the bun remove command (e.g., bun remove zod)

Running scripts

  • Run specific script via the bun <SCRIPT PATH>.ts command
  • Auto-install and run packages locally via the bunx command (e.g., bunx cowsay "Hello world")
  • Run a custom npm script from package.json via the bun run <SCRIPT NAME> command

Watch mode

  • hot reloading mode via bun --hot index.ts command without restarting the process
  • watch mode via bun --watch index.ts command with restarting the process

File system

Write into a file using the Bun.write method.

await Bun.write('./output.txt', 'Lorem ipsum');

Environment variables

  • Access environment variables via Bun.env or process.env objects
  • Store variables in .env files, like .env, .env.production, .env.local
  • Print all current environment variables via bun run env command
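
A minimal sketch of reading a variable (PORT is just an example name):

// Bun.env and process.env expose the same variables, including those from .env files
console.log(Bun.env.PORT);
console.log(process.env.PORT);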

HTTP server

Create a server with the following code

const server = Bun.serve({
  port: Bun.env.PORT,
  fetch(request) {
    return new Response('Welcome to Bun!');
  }
});

console.log(`Listening to port ${server.port}`);

Frameworks

Elysia (Bun framework)

Install packages via the bun add elysia @elysiajs/swagger command, write the initial server, and run it via the bun server.ts command.

// server.ts
import { Elysia } from 'elysia';
import swagger from '@elysiajs/swagger';

const port = Bun.env.PORT || 8081;

new Elysia()
  .use(
    swagger({
      path: '/api-docs'
    })
  )
  .get('/posts/:id', ({ params: { id } }) => id)
  .listen(port);

Express

Install the express package via the bun add express command, write the initial server, and run it via the bun server.ts command

// server.ts
import express from 'express';

const app = express();
const port = Bun.env.PORT || 3001;

app.get('/', (req, res) => {
  res.send('Hello world');
});

app.listen(port, () => {
  console.log(`Listening on port ${port}`);
});

Debugging

Install the Bun for Visual Studio Code extension by Oven and run the Bun: Debug file command from the command palette. Execution will pause at the breakpoint.

Testing

Bun supports basic mocking and assertion functions. Run existing tests via the bun run <TEST SCRIPT NAME> (e.g., bun run test:unit) command.

Below is an example of a basic test assertion and mocking using bun:test module.

import { describe, expect, it, mock } from 'bun:test';
import { add } from './addition-service';
import { calculate } from './calculation-service';

describe('Calculation service', () => {
  it('should return calculated value', async () => {
    const result = calculate();
    expect(result).toEqual(5);
  });

  it('should return mocked value', async () => {
    mock.module('./addition-service', () => {
      return {
        add: () => 3
      };
    });
    const result = add();
    expect(result).toEqual(3);
  });
});

Run unit tests via the bun test command. Re-run tests when files change via the bun test --watch command.

SQLite database

Below is a basic example of SQLite driver usage.

import { Database } from 'bun:sqlite';
const database = new Database('database.sqlite');
const query = database.query("SELECT 'hello world' as message;");
console.log(query.get());
database.close();


Integration with Notion API

September 9, 2023

Notion is a versatile workspace tool combining note-taking, task management, databases, and collaboration features into a single platform.

It also supports integration with Notion content, facilitating tasks such as creating pages, retrieving a block, and filtering database entries via API.

Prerequisites

  • Notion account
  • Generated Integration token (Settings & Members → Connections → Develop or manage integrations → New integration)
  • Notion database ID (open the database as a full page and extract the database ID from the URL (https://notion.so/<USERNAME>/<DATABASE_ID>?v=v))
  • Added Notion connection (three dots (...) menu → Add Connections → choose the created integration)
  • @notionhq/client package installed

Integration

Below is an example of interacting with Notion API to create the page (within the chosen database) with icon, cover, properties, and child blocks.

const { Client } = require('@notionhq/client');

const notion = new Client({ auth: process.env.NOTION_INTEGRATION_TOKEN });

const response = await notion.pages.create({
  parent: {
    type: 'database_id',
    database_id: process.env.NOTION_DATABASE_ID
  },
  icon: {
    type: 'emoji',
    emoji: '🆗'
  },
  cover: {
    type: 'external',
    external: {
      url: 'https://cover.com'
    }
  },
  properties: {
    Name: {
      title: [
        {
          type: 'text',
          text: {
            content: 'Some name'
          }
        }
      ]
    },
    Score: {
      number: 42
    },
    Tags: {
      multi_select: [
        {
          name: 'A'
        },
        {
          name: 'B'
        }
      ]
    },
    Generation: {
      select: {
        name: 'I'
      }
    }
    // other properties
  },
  children: [
    {
      object: 'block',
      type: 'bookmark',
      bookmark: {
        url: 'https://bookmark.com'
      }
    }
  ]
});


Upgrading React Native app to Android 13+

August 28, 2023

There is a requirement to upgrade Android apps (hosted on Google Play Store) to target Android 13. This post covers steps to upgrade to the React Native 0.72+ version.

Automatic approach

Use the npx react-native upgrade command to make code updates to the latest version.

Manual approach

Use Upgrade helper to manually change the code. It shows the differences between boilerplates for the selected versions (e.g., 0.71.13 and 0.72.4).

Fill out the package name (e.g., com.someapp). The app name can stay empty; ignore it (rndiffapp) in the code changes.

Advertising ID

Enable it on the Google Play Console app page by selecting Policy and programs → App content. Add it as a permission in the AndroidManifest.xml file.

<uses-permission android:name="com.google.android.gms.permission.AD_ID"/>


Browser automation with Puppeteer

August 26, 2023

Puppeteer is a Node.js library for automating browser tasks in a headless browser. Here's a list of some of its features:

  • Turn off headless mode

    const browser = await puppeteer.launch({
      headless: false
      // ...
    });
  • Resize the viewport to the window size

    const browser = await puppeteer.launch({
      // ...
      defaultViewport: null
    });
  • Emulate the screen as it's shown to the user via the emulateMediaType method

    await page.emulateMediaType('screen');
  • Save the page as a PDF file with a specified path, format, scale factor, and page range

    await page.pdf({
      path: 'path.pdf',
      format: 'A3',
      scale: 1,
      pageRanges: '1-2',
      printBackground: true
    });
  • Use preexisting user's credentials to skip logging in to some websites. The user data directory is a parent of the Profile Path value from the chrome://version page.

    const browser = await puppeteer.launch({
      userDataDir:
        'C:\\Users\\<USERNAME>\\AppData\\Local\\Google\\Chrome\\User Data',
      args: []
    });
  • Use a Chrome instance instead of Chromium by utilizing the Executable Path from the chrome://version URL. Close the Chrome browser before running the script

    const browser = await puppeteer.launch({
      executablePath: puppeteer.executablePath('chrome')
      // ...
    });
  • Get value based on evaluation in the browser page

    const shouldPaginate = await page.evaluate(
      (param1, param2) => {
        // ...
      },
      param1,
      param2
    );
  • Get HTML content from the specific element

    const html = await page.evaluate(
      () => document.querySelector('.field--text').outerHTML
    );
  • Wait for a specific selector to be loaded. You can also provide a timeout in milliseconds

    await page.waitForSelector('.success', { timeout: 5000 });
  • Manipulate a specific element and click one of the elements inside it

    await page.$eval('#header', async (headerElement) => {
      // ...
      headerElement
        .querySelectorAll('svg')
        .item(13)
        .parentNode.click();
    });
  • Extend the execution timeout of the $eval method

    const browser = await puppeteer.launch({
      // ...
      protocolTimeout: 0
    });
  • Manipulate multiple elements

    await page.$$eval('.some-class', async (elements) => {
      // ...
    });
  • Wait for navigation (e.g., form submitting) to be done

    await page.waitForNavigation({ waitUntil: 'networkidle0', timeout: 0 });
  • Trigger hover event on some of the elements

    await page.$eval('#header', async (headerElement) => {
      const hoverEvent = new MouseEvent('mouseover', {
        view: window,
        bubbles: true,
        cancelable: true
      });
      headerElement.dispatchEvent(hoverEvent);
    });
  • Expose a function in the browser and use it in $eval and $$eval callbacks (e.g., simulate typing using the window.type function)

    await page.exposeFunction('type', async (selector, text, options) => {
      await page.type(selector, text, options);
    });
    await page.$$eval('.some-class', async (elements) => {
      // ...
      window.type(selector, text, { delay: 0 });
    });
  • Press the Enter button after typing the input field value

    await page.type(selector, `${text}${String.fromCharCode(13)}`, options);
  • Remove the value from the input field before typing the new one

    await page.click(selector, { clickCount: 3 });
    await page.type(selector, text, options);
  • Expose a variable in the browser by passing it as the third argument for $eval and $$eval methods and use it in $eval and $$eval callbacks

    await page.$eval(
      '#element',
      async (element, customVariable) => {
        // ...
      },
      customVariable
    );
  • Mock response for the specific request

    await page.setRequestInterception(true);
    page.on('request', async function (request) {
      const url = request.url();
      if (url !== REDIRECTION_URL) {
        return request.continue();
      }
      await request.respond({
        contentType: 'text/html',
        status: 304,
        body: '<body></body>'
      });
    });
  • Intercept page redirections (via interceptor) and open them in new tabs rather than following them in the same tab

    await page.setRequestInterception(true);
    page.on('request', async function (request) {
      const url = request.url();
      if (url !== REDIRECTION_URL) {
        return request.continue();
      }
      await request.respond({
        contentType: 'text/html',
        status: 304,
        body: '<body></body>'
      });
      const newPage = await browser.newPage();
      await newPage.goto(url, { waitUntil: 'domcontentloaded', timeout: 0 });
      // ...
      await newPage.close();
    });
  • Intercept page response

    page.on('response', async (response) => {
      if (response.url() === RESPONSE_URL) {
        if (response.status() === 200) {
          // ...
        }
        // ...
      }
    });


cURL basics

August 11, 2023

cURL is a command-line tool for interacting with servers; it can be used in bash scripts to automate workflows. This post covers primary usage with examples.

  • Send an HTTP GET request to the server
curl ipv4.icanhazip.com
  • Get only the response headers
curl -I ipv4.icanhazip.com
  • Send POST requests with the request body and headers
curl -X POST https://api.gumroad.com/v2/licenses/verify \
-d "product_id=product-id" \
-d "license_key=license-key"
curl https://api.openai.com/v1/chat/completions \
-H "Authorization: Bearer <token>" \
-H "Content-Type: application/json" \
-d '{
"model": "gpt-3.5-turbo",
"messages": [{"role": "user", "content": "What is cURL?"}]
}'
  • Use the i option to include the headers in the response
curl -i -X POST https://api.gumroad.com/v2/licenses/verify \
-d "product_id=product-id" \
-d "license_key=license-key"
  • Use the s option to hide all the logs during the request
curl -s -X POST https://api.gumroad.com/v2/licenses/verify \
-d "product_id=product-id" \
-d "license_key=license-key"
  • Use the v option for verbose logs during the request
curl -v -X POST https://api.gumroad.com/v2/licenses/verify \
-d "product_id=product-id" \
-d "license_key=license-key"
  • Retrieve bash script and run it locally
curl https://raw.githubusercontent.com/creationix/nvm/master/install.sh | bash
  • Trigger specific endpoint inside Kubernetes cronjob
# ...
containers:
  - name: cleanup
    # ...
    command:
      - /bin/sh
      - -ec
      - 'curl "https://some-service.com/cleanup"'


AI bulk image upscaler with Node.js

August 4, 2023

Image upscaling can be done using Real-ESRGAN, a super-resolution algorithm. Super-resolution is the process of increasing the resolution of the image.

Real-ESRGAN provides Linux, Windows and MacOS executable files and models for Intel/AMD/Nvidia GPUs.

The snippet below demonstrates bulk image upscaling with scale factor 4 and using the realesrgan-x4plus-anime model.

const fs = require('fs');
const path = require('path');
const { spawn } = require('child_process');
const timers = require('timers/promises');

(async () => {
  const inputDirectory = path.resolve(path.join(__dirname, 'pictures'));
  const outputDirectory = path.resolve(
    path.join(__dirname, 'pictures_upscaled')
  );
  const modelsPath = path.resolve(path.join(__dirname, 'resources', 'models'));
  const execName = 'realesrgan-ncnn-vulkan';
  const execPath = path.resolve(
    path.join(__dirname, 'resources', getPlatform(), 'bin', execName)
  );
  const scaleFactor = 4;
  const modelName = 'realesrgan-x4plus-anime';

  if (!fs.existsSync(outputDirectory)) {
    await fs.promises.mkdir(outputDirectory, { recursive: true });
  }

  const commands = [
    '-i',
    inputDirectory,
    '-o',
    outputDirectory,
    '-s',
    scaleFactor,
    '-m',
    modelsPath,
    '-n',
    modelName
  ];
  const upscaler = spawn(execPath, commands, {
    cwd: undefined,
    detached: false
  });
  upscaler.stderr.on('data', (data) => {
    console.log(data.toString());
  });
  await timers.setTimeout(600 * 1000);
})();


Publishing Electron apps to GitHub with Electron Forge

July 19, 2023

Releasing Electron desktop apps can be automated with Electron Forge and GitHub Actions. This post covers the main steps for automation.

Prerequisites

  • bootstrapped Electron app
  • GitHub personal access token (with repo and write:packages permissions) as a GitHub Action secret (GH_TOKEN)

Setup

Run the following commands to configure Electron Forge for the app release.

npm i @electron-forge/cli @electron-forge/publisher-github -D
npm i electron-squirrel-startup
npx electron-forge import

The last command should install the necessary dependencies and add a configuration file.

Update the forge.config.js file with the bin field containing the app name and ensure the GitHub publisher points to the right repository.

Put the Windows and MacOS icon paths in the packagerConfig.icon field. Windows supports ico files with 256x256 resolution, and MacOS supports icns icons with 512x512 resolution (1024x1024 for Retina displays). Linux supports png icons with 512x512 resolution; also include its path in the BrowserWindow constructor config within the icon field.

// forge.config.js
const path = require('path');

module.exports = {
  packagerConfig: {
    asar: true,
    icon: path.join(process.cwd(), 'main', 'build', 'icon')
  },
  rebuildConfig: {},
  makers: [
    {
      name: '@electron-forge/maker-squirrel',
      config: {
        bin: 'Electron Starter'
      }
    },
    {
      name: '@electron-forge/maker-dmg',
      config: {
        bin: 'Electron Starter'
      }
    },
    {
      name: '@electron-forge/maker-deb',
      config: {
        bin: 'Electron Starter',
        options: {
          icon: path.join(process.cwd(), 'main', 'build', 'icon.png')
        }
      }
    },
    {
      name: '@electron-forge/maker-rpm',
      config: {
        bin: 'Electron Starter',
        icon: path.join(process.cwd(), 'main', 'build', 'icon.png')
      }
    }
  ],
  plugins: [
    {
      name: '@electron-forge/plugin-auto-unpack-natives',
      config: {}
    }
  ],
  publishers: [
    {
      name: '@electron-forge/publisher-github',
      config: {
        repository: {
          owner: 'delimitertech',
          name: 'electron-starter'
        },
        prerelease: true
      }
    }
  ]
};

Upgrade the package version before releasing the app. The npm script for publishing should use the publish command. Set the productName field to the app name.

// package.json
{
  // ...
  "version": "1.0.1",
  "scripts": {
    // ...
    "publish": "electron-forge publish"
  },
  "productName": "Electron Starter"
}

The GitHub Actions workflow for manually releasing the app for Linux, Windows, and MacOS should contain the below configuration.

# .github/workflows/release.yml
name: Release app
on:
  workflow_dispatch:
jobs:
  build:
    strategy:
      matrix:
        os:
          [
            { name: 'linux', image: 'ubuntu-latest' },
            { name: 'windows', image: 'windows-latest' },
            { name: 'macos', image: 'macos-latest' },
          ]
    runs-on: ${{ matrix.os.image }}
    steps:
      - name: Github checkout
        uses: actions/checkout@v4
      - name: Use Node.js
        uses: actions/setup-node@v4
        with:
          node-version: 20
      - run: npm ci
      - name: Publish app
        env:
          GITHUB_TOKEN: ${{ secrets.GH_TOKEN }}
        run: npm run publish

Windows startup events

Add the following code in the main process to prevent Squirrel.Windows from launching your app multiple times during installation/updating/uninstallation.

// main/index.js
if (require('electron-squirrel-startup') === true) app.quit();


Kafka containers with Docker Compose

July 18, 2023

Docker Compose facilitates spinning up containers for the Kafka broker and Zookeeper without installing them locally. Zookeeper is used to track cluster state, membership, and leadership.

Prerequisites

  • Docker Compose installed

Configuration

The following configuration spins up Kafka and Zookeeper containers with the Kafka UI tool.

The Kafka broker address is localhost:29092, and Kafka UI is available at http://localhost:8085.

# docker-compose.yml
version: '3.8'
services:
  kafka:
    image: confluentinc/cp-kafka:6.0.14
    depends_on:
      - zookeeper
    ports:
      - '29092:29092'
    environment:
      KAFKA_BROKER_ID: 1
      KAFKA_ZOOKEEPER_CONNECT: 'zookeeper:2181'
      KAFKA_ADVERTISED_LISTENERS: LISTENER_DOCKER_INTERNAL://kafka:9092,LISTENER_DOCKER_EXTERNAL://${DOCKER_HOST_IP:-127.0.0.1}:29092
      KAFKA_LISTENER_SECURITY_PROTOCOL_MAP: LISTENER_DOCKER_INTERNAL:PLAINTEXT,LISTENER_DOCKER_EXTERNAL:PLAINTEXT
      KAFKA_INTER_BROKER_LISTENER_NAME: LISTENER_DOCKER_INTERNAL
      KAFKA_OFFSETS_TOPIC_REPLICATION_FACTOR: 1
  kafka-ui:
    image: provectuslabs/kafka-ui:latest
    ports:
      - 8085:8080
    environment:
      KAFKA_CLUSTERS_0_NAME: local
      KAFKA_CLUSTERS_0_BOOTSTRAPSERVERS: kafka:9092
      DYNAMIC_CONFIG_ENABLED: 'true'
  zookeeper:
    image: confluentinc/cp-zookeeper:6.0.14
    ports:
      - '22181:2181'
    environment:
      ZOOKEEPER_CLIENT_PORT: 2181
      ZOOKEEPER_TICK_TIME: 2000

Run the following command to spin up the containers.

docker-compose up
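
To confirm the broker accepts connections, here is a minimal sketch using the kafkajs package (assumed installed via npm i kafkajs; the topic name is arbitrary):

import { Kafka } from 'kafkajs';

const kafka = new Kafka({ brokers: ['localhost:29092'] });
const producer = kafka.producer();

// Connect and publish a test message
await producer.connect();
await producer.send({
  topic: 'test-topic',
  messages: [{ value: 'Hello Kafka' }]
});
await producer.disconnect();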


Formatting Node.js codebase with Prettier

July 3, 2023

Formatting helps to stay consistent with code style throughout the whole codebase. Include the format script in Git pre-hooks (pre-commit or pre-push). This post covers Prettier setup with JavaScript and TypeScript code.

Start by installing the prettier package as a dev dependency.

npm i prettier -D

Specify rules inside the .prettierrc config file.

{
  "singleQuote": true,
  "trailingComma": "all"
}

Add a format script in the package.json file.

{
  "scripts": {
    // ...
    "format": "prettier --write \"{src,test}/**/*.{js,ts}\""
  }
}

Notes

If you use Eslint, install the eslint-config-prettier package as a dev dependency and update the Eslint configuration to use the Prettier config.

// eslint.config.js
// ...
import eslintConfigPrettier from 'eslint-config-prettier';

export default [
  // ...
  eslintConfigPrettier
];

In Visual Studio Code, you can install the prettier-vscode extension and activate formatting on save.


Tracing Node.js Microservices with OpenTelemetry

June 30, 2023

Regarding microservices observability, tracing is important for catching service bottlenecks like slow requests and database queries.

OpenTelemetry is a set of monitoring tools that support integration with distributed tracing platforms like Jaeger, Zipkin, and NewRelic, to name a few. This post covers Jaeger's tracing setup for Node.js projects.

Start by spinning up Jaeger with Docker Compose via the docker-compose up command. Jaeger UI will be available at http://localhost:16686.

version: '3.8'
services:
  jaeger:
    image: jaegertracing/all-in-one:1.46
    environment:
      - COLLECTOR_ZIPKIN_HTTP_PORT=:9411
      - COLLECTOR_OTLP_ENABLED=true
    ports:
      - 6831:6831/udp
      - 6832:6832/udp
      - 5778:5778
      - 16685:16685
      - 16686:16686
      - 14268:14268
      - 14269:14269
      - 14250:14250
      - 9411:9411
      - 4317:4317
      - 4318:4318

The code below shows setting up tracing via Jaeger. Jaeger doesn't require a separate exporter package since OpenTelemetry supports it natively; other platforms need their own exporter packages. Filter traces within Jaeger UI by service name or by the trace ID stored within the logs.

Use resources and semantic resource attributes to set new fields for the trace, like service name or service version. Auto instrumentation identifies frameworks like Express, protocols like HTTP, databases like Postgres, and loggers like Winston used within the project.

Process spans (units of work in distributed systems) in batches to optimize tracing performance. Also, terminate the tracing during the graceful shutdown.

import { getNodeAutoInstrumentations } from '@opentelemetry/auto-instrumentations-node';
import { OTLPTraceExporter } from '@opentelemetry/exporter-trace-otlp-http';
import { Resource } from '@opentelemetry/resources';
import { BatchSpanProcessor } from '@opentelemetry/sdk-trace-base';
import { NodeSDK } from '@opentelemetry/sdk-node';
import { SemanticResourceAttributes } from '@opentelemetry/semantic-conventions';

const traceExporter = new OTLPTraceExporter({
  url: 'http://localhost:4318/v1/traces'
});

const sdk = new NodeSDK({
  resource: new Resource({
    [SemanticResourceAttributes.SERVICE_NAME]: `<service-name>-${process.env.NODE_ENV}`,
    [SemanticResourceAttributes.SERVICE_VERSION]:
      process.env.npm_package_version ?? '0.0.0',
    env: process.env.NODE_ENV || ''
  }),
  instrumentations: [getNodeAutoInstrumentations()],
  spanProcessor: new BatchSpanProcessor(traceExporter)
});

sdk.start();

process.on('SIGTERM', () => {
  sdk
    .shutdown()
    .then(() => console.log('Tracing terminated'))
    .catch((error) => console.error('Error terminating tracing', error))
    .finally(() => process.exit(0));
});

Import tracing config as the first thing inside the entry file.

import './tracing';
// ...

The Search → Service menu should show the service name in Jaeger UI. Happy tracing!


Streaming binary and base64 files

June 25, 2023

Streaming is useful when dealing with big files in web apps. Instead of loading the entire file into memory before sending it to the client, streaming allows you to send it in small chunks, improving memory efficiency and reducing response time.

The code snippet below shows streaming the binary CSV and base64-encoded PDF files with NestJS. Use the same approach for other types of files, like JSON files.

Set the content type and filename headers so files are streamed and downloaded correctly. A base64 file is converted to a buffer and streamed afterward. Files can be read from the file system or retrieved via API calls.

import { Controller, Get, Param, Res } from '@nestjs/common';
import { Response } from 'express';
import { createReadStream } from 'fs';
import { readFile } from 'fs/promises';
import { join } from 'path';
import { Readable } from 'stream';

@Controller('templates')
export class TemplatesController {
  @Get('csv')
  getCsvTemplate(@Res() res: Response): void {
    const file = createReadStream(join(process.cwd(), 'template.csv'));
    res.set({
      'Content-Type': 'text/csv',
      'Content-Disposition': 'attachment; filename="template.csv"'
    });
    file.pipe(res);
  }

  @Get('pdf/:id')
  async getPdfTemplate(
    @Param('id') id: string,
    @Res() res: Response
  ): Promise<void> {
    const fileBase64 = await readFile(
      join(process.cwd(), 'template.pdf'),
      'base64'
    );
    // const fileBase64 = await apiCall();
    const fileBuffer = Buffer.from(fileBase64, 'base64');
    const fileStream = Readable.from(fileBuffer);
    res.set({
      'Content-Type': 'application/pdf',
      'Content-Disposition': `attachment; filename="template_${id}.pdf"`
    });
    fileStream.pipe(res);
  }
}


Spies and mocking with Node test runner (node:test)

June 24, 2023

Node.js version 20 brings a stable test runner, so you can run tests inside *.test.js files with the node --test command. This post covers its primary usage regarding spies and mocking for unit tests.

Spies are functions that let you spy on the behavior of functions called indirectly by some other code, while mocking injects test values into the code during tests.

mock.method can create spies and mock async, rejected async, sync, chained methods, and external and built-in modules.

  • Async function
import assert from 'node:assert/strict';
import { describe, it, mock } from 'node:test';

const calculationService = {
  calculate: () => {
    // implementation
  }
};

describe('mocking resolved value', () => {
  it('should resolve mocked value', async () => {
    const value = 2;
    mock.method(calculationService, 'calculate', async () => value);
    const result = await calculationService.calculate();
    assert.equal(result, value);
  });
});
  • Rejected async function
const error = new Error('some error message');
mock.method(calculationService, 'calculate', async () => Promise.reject(error));
await assert.rejects(async () => calculateSomething(calculationService), error);
  • Sync function
mock.method(calculationService, 'calculate', () => value);
  • Chained methods
mock.method(calculationService, 'get', () => calculationService);
mock.method(calculationService, 'calculate', async () => value);
const result = await calculationService.get().calculate();
  • External modules
import axios from 'axios';
mock.method(axios, 'get', async () => ({ data: value }));
  • Built-in modules
import fs from 'fs/promises';
mock.method(fs, 'readFile', async () => fileContent);
  • Async and sync functions called multiple times can be mocked with different values using context.mock.fn and mockedFunction.mock.mockImplementationOnce.
describe('mocking same method multiple times with different values', () => {
  it('should resolve mocked values', async (context) => {
    const firstValue = 2;
    const secondValue = 3;
    const calculateMock = context.mock.fn(calculationService.calculate);
    calculateMock.mock.mockImplementationOnce(async () => firstValue, 0);
    calculateMock.mock.mockImplementationOnce(async () => secondValue, 1);
    const firstResult = await calculateMock();
    const secondResult = await calculateMock();
    assert.equal(firstResult, firstValue);
    assert.equal(secondResult, secondValue);
  });
});
  • To assert called arguments for a spy, use mockedFunction.mock.calls[0] value.
mock.method(calculationService, 'calculate');
await calculateSomething(calculationService, firstValue, secondValue);
const call = calculationService.calculate.mock.calls[0];
assert.deepEqual(call.arguments, [firstValue, secondValue]);
  • To assert skipped call for a spy, use mockedFunction.mock.calls.length value.
mock.method(calculationService, 'calculate');
assert.equal(calculationService.calculate.mock.calls.length, 0);
  • To assert how many times mocked function is called, use mockedFunction.mock.calls.length value.
mock.method(calculationService, 'calculate');
calculationService.calculate(3);
calculationService.calculate(2);
assert.equal(calculationService.calculate.mock.calls.length, 2);
  • To assert called arguments for the exact call when a mocked function is called multiple times, an assertion can be done using mockedFunction.mock.calls[index] and call.arguments values.
const calculateMock = context.mock.fn(calculationService.calculate);
calculateMock.mock.mockImplementationOnce((a) => a + 2, 0);
calculateMock.mock.mockImplementationOnce((a) => a + 3, 1);
calculateMock(firstValue);
calculateMock(secondValue);
[firstValue, secondValue].forEach((argument, index) => {
  const call = calculateMock.mock.calls[index];
  assert.deepEqual(call.arguments, [argument]);
});

Running TypeScript tests

Add a new test script; the --experimental-transform-types flag requires Node version >= 22.10.0.

{
  "type": "module",
  "scripts": {
    "test": "node --test",
    "test:ts": "NODE_OPTIONS='--experimental-transform-types --disable-warning=ExperimentalWarning' node --test ./src/**/*.{spec,test}.ts"
  }
}


Async API documentation 101

May 21, 2023

Async API documentation is used for documenting events in event-driven systems, like Kafka events. All of the event DTOs are stored in one place. It supports YAML and JSON formats.

It contains information about channels and components. Channels and components are defined with their messages and DTO schemas, respectively.

{
  "asyncapi": "2.6.0",
  "info": {
    "title": "Events docs",
    "version": "1.0.0"
  },
  "channels": {
    "topic_name": {
      "publish": {
        "message": {
          "schemaFormat": "application/vnd.oai.openapi;version=3.0.0",
          "payload": {
            "type": "object",
            "properties": {
              "counter": {
                "type": "number"
              }
            },
            "required": ["counter"]
          }
        }
      }
    }
  },
  "components": {
    "schemas": {
      "EventDto": {
        "type": "object",
        "properties": {
          "counter": {
            "type": "number"
          }
        },
        "required": ["counter"]
      }
    }
  }
}

Autogeneration

Async API docs can be autogenerated by following multiple steps:

  • define DTOs and their required and optional fields with ApiProperty and ApiPropertyOptional decorators (from the @nestjs/swagger package), respectively (see the sketch after this list)
  • generate OpenAPI docs from the defined DTOs
  • parse and reuse component schemas from generated OpenAPI documentation to build channel messages and component schemas for Async API docs
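
For the first step, a DTO might look like the following sketch (the field names are illustrative):

import { ApiProperty, ApiPropertyOptional } from '@nestjs/swagger';

export class EventDto {
  @ApiProperty({ description: 'event counter' })
  counter: number;

  @ApiPropertyOptional({ description: 'optional note' })
  note?: string;
}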

Validation

Use AsyncAPI Studio to validate the written specification.

Preview

There are multiple options

  • AsyncAPI Studio

  • VSCode extension asyncapi-preview, open the command palette, and run the Preview AsyncAPI command.

UI generation

  • Install @asyncapi/cli and corresponding template package (e.g., @asyncapi/html-template, @asyncapi/markdown-template)
  • Update package.json with scripts
{
  "scripts": {
    // ...
    "generate-docs:html": "asyncapi generate fromTemplate ./asyncapi/asyncapi.json @asyncapi/html-template --output ./docs/html",
    "generate-docs:markdown": "asyncapi generate fromTemplate ./asyncapi/asyncapi.json @asyncapi/markdown-template --output ./docs/markdown"
  }
}


Health checks with Terminus

April 14, 2023

Monitoring tools use health checks to check if a service and its external dependencies (like a database) are up and running, and take some action (like sending alerts) in an unhealthy state.

Terminus provides a set of health indicators.

Liveness probe

An HTTP endpoint checks if the service is up and running.

// health.controller.ts
import { Controller, Get } from '@nestjs/common';
import { ApiTags } from '@nestjs/swagger';
import {
  HealthCheck,
  HealthCheckResult,
  HealthCheckService,
  HealthIndicatorResult,
  TypeOrmHealthIndicator
} from '@nestjs/terminus';
import { CustomConfigService } from 'common/config/custom-config.service';

@ApiTags('health')
@Controller('health')
export class HealthController {
  constructor(
    private readonly healthCheckService: HealthCheckService,
    private readonly configService: CustomConfigService,
    private readonly database: TypeOrmHealthIndicator
  ) {}

  @Get('liveness')
  @HealthCheck()
  async check(): Promise<HealthCheckResult> {
    return this.healthCheckService.check([
      async (): Promise<HealthIndicatorResult> => ({
        [this.configService.SERVICE_NAME]: { status: 'up' }
      })
    ]);
  }
  // ...
}

A successful response is like the one below.

{
  "status": "ok",
  "info": {
    "nestjs-starter": {
      "status": "up"
    }
  },
  "error": {},
  "details": {
    "nestjs-starter": {
      "status": "up"
    }
  }
}

Readiness probe

An HTTP endpoint checks if the service is ready to receive the traffic and if all external dependencies are running.

// health.controller.ts
import { Controller, Get } from '@nestjs/common';
import { ApiTags } from '@nestjs/swagger';
import {
  HealthCheck,
  HealthCheckResult,
  HealthCheckService,
  HealthIndicatorResult,
  TypeOrmHealthIndicator
} from '@nestjs/terminus';
import { CustomConfigService } from 'common/config/custom-config.service';

@ApiTags('health')
@Controller('health')
export class HealthController {
  constructor(
    private readonly healthCheckService: HealthCheckService,
    private readonly configService: CustomConfigService,
    private readonly database: TypeOrmHealthIndicator
  ) {}
  // ...

  @Get('readiness')
  @HealthCheck()
  async checkReadiness(): Promise<HealthCheckResult> {
    return this.healthCheckService.check([
      async (): Promise<HealthIndicatorResult> =>
        this.database.pingCheck('postgres')
    ]);
  }
}

Responses

  • Successful response
{
  "status": "ok",
  "info": {
    "postgres": {
      "status": "up"
    }
  },
  "error": {},
  "details": {
    "postgres": {
      "status": "up"
    }
  }
}
  • Response when the database is down
{
  "status": "error",
  "info": {},
  "error": {
    "postgres": {
      "status": "down"
    }
  },
  "details": {
    "postgres": {
      "status": "down"
    }
  }
}


Linting JavaScript codebase with Eslint

April 5, 2023

Linting is static code analysis based on specified rules. Include it in the CI pipeline.

Setup

Run the following commands to generate the linter configuration using the eslint package.

npm init -y
npm init @eslint/config

Below is an example of the configuration. Some rules can be turned off or suppressed as warnings; ignore files using the ignores field.

// eslint.config.js
import globals from 'globals';
import pluginJs from '@eslint/js';

export default [
  { languageOptions: { globals: globals.node } },
  pluginJs.configs.recommended,
  {
    ignores: ['dist/**/*.js']
  },
  {
    rules: {
      'no-console': ['off'],
      'no-unused-vars': ['warn']
    }
  }
];

Linting

Configure and run the script with the npm run lint command. Some errors can be fixed automatically with the --fix option.

// package.json
{
  "scripts": {
    // ...
    "lint": "eslint .",
    "lint:fix": "npm run lint -- --fix"
  }
}


Migrating Node.js app from Heroku to Fly.io

April 1, 2023

I recently migrated a Node.js app from Heroku to Fly.io, mainly to reduce costs.

This blog post will cover the necessary steps in the migration process.

Prerequisites

  • Heroku app running

  • Use the exact versions for dependencies and dev dependencies in package.json so installation and build steps can pass successfully

  • Use the same Node.js version in Dockerfile, package.json, and GitHub Actions workflow

  • Use API gateway or custom domain for the service so web apps and mobile apps don't get affected by changing the URL of the service

Migration steps

  • Migrate environment variables and secrets

  • Migrate the Postgres database with the following commands (the ssl field in database configuration options is not needed)

fly secrets set HEROKU_DATABASE_URL=$(heroku config:get DATABASE_URL)
fly ssh console
apt update && apt install postgresql-client
pg_dump -Fc --no-acl --no-owner -d $HEROKU_DATABASE_URL | pg_restore --verbose --clean --no-acl --no-owner -d $DATABASE_URL
exit
fly secrets unset HEROKU_DATABASE_URL
  • Migrate the Redis database if it's used

  • Include the deployment step in the GitHub Actions workflow
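The deployment step could look like the following sketch, assuming a FLY_API_TOKEN repository secret holding a Fly.io deploy token.

# .github/workflows/fly-deploy.yml
name: Fly Deploy
on:
  push:
    branches:
      - main
jobs:
  deploy:
    name: Deploy to Fly.io
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: superfly/flyctl-actions/setup-flyctl@master
      - run: flyctl deploy --remote-only
        env:
          FLY_API_TOKEN: ${{ secrets.FLY_API_TOKEN }}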


Integration with ChatGPT API

March 19, 2023

ChatGPT is a large language model (LLM) that understands and processes human prompts to produce helpful responses. OpenAI provides an API to interact with the ChatGPT model (gpt-3.5-turbo).

Prerequisites

  • OpenAI account
  • Generated API key
  • Enabled billing

Integration

Below is an example of interacting with the ChatGPT API based on a given prompt.

const axios = require('axios');
const handlePrompt = async (prompt) => {
const response = await axios.post(
'https://api.openai.com/v1/chat/completions',
{
model: 'gpt-3.5-turbo',
messages: [
{
role: 'user',
content: prompt
}
]
},
{
headers: {
Authorization: `Bearer ${process.env.OPENAI_API_KEY}`
}
}
);
return response?.data?.choices?.[0]?.message?.content;
};
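A quick usage sketch, assuming the OPENAI_API_KEY environment variable is set:

(async () => {
  const answer = await handlePrompt('Tell me a joke');
  console.log(answer);
})();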


Documenting REST APIs with OpenAPI specs (NestJS/Swagger)

March 16, 2023

OpenAPI is a language-agnostic specification for declaring API documentation for REST APIs. It contains the following information:

  • API information like title, description, version
  • endpoints definitions with request and response parameters
  • DTOs and security schemas
openapi: 3.0.0
paths:
/users:
post:
operationId: UsersController_createUser
summary: Create user
description: Create a new user
parameters: []
requestBody:
required: true
content:
application/json:
schema:
$ref: '#/components/schemas/CreateUserDto'
responses:
'201':
description: 'User is created'
info:
title: nestjs-starter
description: Minimal NestJS boilerplate
version: 0.1.0
contact: {}
tags: []
servers: []
components:
securitySchemes:
token:
type: apiKey
scheme: api_key
in: header
name: auth-token
schemas:
CreateUserDto:
type: object
properties:
firstName:
type: string
example: tester
description: first name of the user
required:
- firstName

NestJS provides a Swagger plugin for generating the API docs.

Setup

Configure the API documentation at a specified endpoint, like /api-docs, where the generated docs are shown.

const SWAGGER_API_ENDPOINT = '/api-docs';
// ...
export const setupApiDocs = (app: INestApplication): void => {
const options = new DocumentBuilder()
.setTitle(SWAGGER_API_TITLE)
.setDescription(SWAGGER_API_DESCRIPTION)
.setVersion(SWAGGER_API_VERSION)
.addSecurity('token', {
type: 'apiKey',
scheme: 'api_key',
in: 'header',
name: 'auth-token'
})
.addBearerAuth()
.build();
const document = SwaggerModule.createDocument(app, options);
SwaggerModule.setup(SWAGGER_API_ENDPOINT, app, document);
};

Configure the plugin in the NestJS CLI config file.

// nest-cli.json
{
"compilerOptions": {
"plugins": ["@nestjs/swagger"]
}
}

JSON and YAML formats are generated at /api-docs-json and /api-docs-yaml endpoints, respectively.
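For example, the JSON spec can be downloaded with curl (assuming the app runs on port 8081, as in the Postman section below):

curl http://localhost:8081/api-docs-json -o openapi.json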

Decorators

  • ApiTags groups endpoints
@ApiTags('users')
@Controller('users')
export class UsersController {
// ...
}
  • ApiOperation provides more details like a summary and description of the endpoint
@ApiOperation({
summary: 'Get user',
description: 'Get user by id',
})
@Get(':id')
async getById(
@Param('id', new ParseUUIDPipe()) id: string,
): Promise<UserDto> {
// ...
}
  • ApiOperation can be used to mark an endpoint as deprecated
@ApiOperation({ deprecated: true })
  • ApiProperty and ApiPropertyOptional should be used for request and response DTO fields. Example and description values will be shown in the generated documentation.
export class CreateUserDto {
@ApiProperty({ example: 'John', description: 'first name of the user' })
// ...
public firstName: string;
@ApiPropertyOptional({ example: 'Doe', description: 'last name of the user' })
// ...
public lastName?: string;
}
  • ApiHeader documents endpoint headers
@ApiHeader({
name: 'correlation-id',
required: false,
description: 'unique id for correlated logs',
example: '7ea2c7f7-8b46-475d-86f8-7aaaa9e4a35b',
})
@Get()
getHello(): string {
// ...
}
  • ApiResponse specifies which responses are expected, like error responses. NestJS' Swagger package provides decorators for specific status codes like ApiBadRequestResponse.
// ...
@ApiResponse({ type: NotFoundException, status: HttpStatus.NOT_FOUND })
@ApiBadRequestResponse({ type: BadRequestException })
@Get(':id')
async getById(
@Param('id', new ParseUUIDPipe()) id: string,
): Promise<UserDto> {
return this.userService.findById(id);
}
// ...
  • ApiSecurity('token') uses a custom-defined security strategy, token in this case. Other options are to use already defined strategies like ApiBearerAuth.
@ApiSecurity('token')
@Controller()
export class AppController {
// ...
}
// ...
@ApiBearerAuth()
@Controller()
export class AppController {
// ...
}
  • ApiExcludeEndpoint and ApiExcludeController exclude one endpoint and the whole controller, respectively.
export class AppController {
@ApiExcludeEndpoint()
@Get()
getHello(): string {
// ...
}
}
// ...
@ApiExcludeController()
@Controller()
export class AppController {
// ...
}
  • ApiBody with ApiExtraModels add an example for the request body
const CreateUserDtoExample = {
firstName: 'Tester',
};
@ApiExtraModels(CreateUserDto)
@ApiBody({
schema: {
oneOf: refs(CreateUserDto),
example: CreateUserDtoExample,
},
})
@Post()
async createUser(@Body() newUser: CreateUserDto): Promise<UserDto> {
// ...
}

Importing API to Postman

Import the JSON version of the API docs as a Postman API with the Import → Link option (e.g., URL http://localhost:8081/api-docs-json). The imported API collection will be available in the APIs tab.


Node.js built-in module functions as Promises

February 28, 2023

Node.js provides Promise-based versions of methods from the fs, dns, stream, and timers modules.

const {
createWriteStream,
promises: { readFile }
} = require('fs');
const dns = require('dns/promises');
const stream = require('stream/promises');
const timers = require('timers/promises');
const sleep = timers.setTimeout;
const SLEEP_TIMEOUT_MS = 2000;
(async () => {
const fileName = 'test-file';
const writeStream = createWriteStream(fileName, {
autoClose: true,
flags: 'w'
});
await stream.pipeline('some text', writeStream);
await sleep(SLEEP_TIMEOUT_MS);
const readFileResult = await readFile(fileName);
console.log(readFileResult.toString());
const lookupResult = await dns.lookup('google.com');
console.log(lookupResult);
})();

Use the promisify function to convert other callback-based functions to Promise-based.

const crypto = require('crypto');
const { promisify } = require('util');
const randomBytes = promisify(crypto.randomBytes);
const RANDOM_BYTES_LENGTH = 20;
(async () => {
const randomBytesResult = await randomBytes(RANDOM_BYTES_LENGTH);
console.log(randomBytesResult);
})();


Postgres and Redis containers with Docker Compose

February 26, 2023

Docker Compose facilitates spinning up containers for databases without installing the databases locally. This post covers the setup for Postgres and Redis images.

Prerequisites

  • Docker Compose installed

Configuration

The following configuration spins up Postgres and Redis containers with UI tools (Pgweb and Redis Commander).

Connection strings for Postgres and Redis are postgres://username:password@localhost:5435/database-name and redis://localhost:6379, respectively.

Pgweb and Redis Commander are available at http://localhost:8085 and http://localhost:8081, respectively.

# docker-compose.yml
version: '3.8'
services:
postgres:
image: postgres:alpine
environment:
POSTGRES_DB: database-name
POSTGRES_PASSWORD: password
POSTGRES_USER: username
ports:
- 5435:5432
restart: on-failure:3
pgweb:
image: sosedoff/pgweb
depends_on:
- postgres
environment:
PGWEB_DATABASE_URL: postgres://username:password@postgres:5432/database-name?sslmode=disable
ports:
- 8085:8081
restart: on-failure:3
redis:
image: redis:latest
command: redis-server
volumes:
- redis:/var/lib/redis
- redis-config:/usr/local/etc/redis/redis.conf
ports:
- 6379:6379
networks:
- redis-network
redis-commander:
image: rediscommander/redis-commander:latest
environment:
- REDIS_HOSTS=local:redis:6379
- HTTP_USER=root
- HTTP_PASSWORD=qwerty
ports:
- 8081:8081
networks:
- redis-network
depends_on:
- redis
volumes:
redis:
redis-config:
networks:
redis-network:
driver: bridge

Run the following command to spin up the containers.

docker-compose up
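Once the containers are up, the connections can be verified from the host with the CLI clients (assuming psql and redis-cli are installed):

psql postgres://username:password@localhost:5435/database-name
redis-cli -u redis://localhost:6379 ping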


GitHub Actions 101

February 19, 2023

GitHub Actions is a CI/CD tool integrated into GitHub repositories that can run different kinds of jobs (building, testing, deployment). Store workflow files in .github/workflows inside the repository; they will be triggered based on specified conditions.

This post covers GitHub Actions basics, from specifying the workflow name to configuring different jobs.

Name

Specify the name of the workflow with the name field.

# .github/workflows/config.yml
name: CI/CD pipeline

Running

The on field specifies when the workflow runs.

Automatically

The following configuration runs on every push to a specific branch.

# .github/workflows/config.yml
on:
push:
branches:
- main

The following configuration runs on every push to every branch.

# .github/workflows/config.yml
on:
push:
branches:
- '*'

Cron jobs

The following configuration runs at a specified interval (e.g., every hour).

# .github/workflows/config.yml
on:
schedule:
- cron: '0 * * * *'

Manual triggers

The following configuration enables manual triggering. Trigger it on the Actions tab by selecting the workflow and clicking the Run workflow button.

Use a manual trigger, for example, to upload apps to the Google Play Console or to update the GitHub profile README.

# .github/workflows/config.yml
on:
workflow_dispatch:

Environment variables

Specify environment variables with the env field. Set repository secrets on the Settings → Secrets and variables → Actions page.

# .github/workflows/config.yml
env:
API_KEY: ${{ secrets.API_KEY }}

Jobs

Specify the job name with the name field. Otherwise, the workflow will use the job's key under jobs as its name.

Every job should have either a runs-on field specifying the machine it runs on (e.g., ubuntu-latest) or a container field with a Docker image set (e.g., node:20.9.0-alpine3.17).

Every job can have a separate working directory. If the repository contains multiple subdirectories and different jobs should run in different ones, specify the directory within the defaults field.

jobs:
job-name:
defaults:
run:
working-directory: directory-name

You can specify multiple steps inside one job. Every step can have the following fields

  • name - step name
  • uses - GitHub action path from GitHub Marketplace
  • with - parameters for the specified GitHub action
  • run - bash commands
  • env - environment variables
# .github/workflows/config.yml
jobs:
build:
name: Custom build job
runs-on: ubuntu-latest
# container: node:20.9.0-alpine3.17
steps:
- name: Checkout
uses: actions/checkout@v4
- name: Configure Node.js
uses: actions/setup-node@v4
with:
node-version: 20
- name: Install and build
run: |
npm ci
npm run build

Use the needs field for running jobs sequentially. It specifies a job that has to be finished before starting the next one. Otherwise, jobs will run in parallel.

# .github/workflows/config.yml
jobs:
build:
# ...
deploy:
name: Custom deploy job
runs-on: ubuntu-latest
needs: build
steps:
- name: Deploy
run: |
npm run deploy

Every job can run multiple times with different versions using a matrix strategy (e.g., Node versions 18 and 20 or multiple OS versions inside an array of objects).

# .github/workflows/config.yml
jobs:
build:
name: Custom build job
strategy:
matrix:
node-version: [18, 20]
os:
[
{ name: 'linux', image: 'ubuntu-latest' },
{ name: 'windows', image: 'windows-latest' },
{ name: 'macos', image: 'macos-latest' },
]
runs-on: ${{ matrix.os.image }}
steps:
- name: Checkout
uses: actions/checkout@v4
- name: Configure Node.js ${{ matrix.node-version }}
uses: actions/setup-node@v4
with:
node-version: ${{ matrix.node-version }}
- name: Install and build
run: |
npm ci
npm run build

Every job can provision databases for e2e tests with the services field, like Postgres in the following example.

# .github/workflows/config.yml
jobs:
build:
name: Custom build job
runs-on: ubuntu-latest
strategy:
matrix:
database-name:
- test-db
database-user:
- username
database-password:
- password
database-host:
- postgres
database-port:
- 5432
services:
postgres:
image: postgres:latest
env:
POSTGRES_DB: ${{ matrix.database-name }}
POSTGRES_USER: ${{ matrix.database-user }}
POSTGRES_PASSWORD: ${{ matrix.database-password }}
ports:
- ${{ matrix.database-port }}:${{ matrix.database-port }}
# Set health checks to wait until postgres has started
options: --health-cmd pg_isready
--health-interval 10s
--health-timeout 5s
--health-retries 5
steps:
# ...
- run: npm run test:e2e
env:
DATABASE_URL: postgres://${{ matrix.database-user }}:${{ matrix.database-password }}@${{ matrix.database-host }}:${{ matrix.database-port }}/${{ matrix.database-name }}

Pass artifacts between jobs with the upload (actions/upload-artifact) and download (actions/download-artifact) actions.

# .github/workflows/config.yml
jobs:
build:
runs-on: ubuntu-latest
steps:
- name: Checkout
uses: actions/checkout@v4
- name: Configure Node.js
uses: actions/setup-node@v4
with:
node-version: 20
- name: Install and build
run: |
npm ci
npm run build
- name: Upload artifact
uses: actions/upload-artifact@v3
with:
name: artifact
path: public
deploy:
needs: build
runs-on: ubuntu-latest
steps:
- name: Checkout
uses: actions/checkout@v4
- name: Download artifact
uses: actions/download-artifact@v3
with:
name: artifact
# ...

Running locally

You can use act to run GitHub Actions workflows locally.

Install it and run it with the following commands.

curl -s https://raw.githubusercontent.com/nektos/act/master/install.sh | sudo bash
sudo cp ./bin/act /usr/local/bin/act
act
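act runs the push event by default. List the available jobs and run a specific one (assuming a job named build) with the following commands.

act -l
act -j build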


Error tracking with Sentry

February 14, 2023

Error tracking and alerting are crucial in the production environment; proactively fixing errors leads to a better user experience. Sentry is one of the error tracking services, and it provides alerting for unhandled exceptions. You should receive an email when something goes wrong.

Sentry issues show the error stack trace, device, operating system, and browser information. The project dashboard shows an unhandled exception once it's thrown. This post covers the integration of several technologies with Sentry.

Node.js

  • Create a Node.js project on Sentry

  • Install the package

npm i @sentry/node
  • Run the following script
const Sentry = require('@sentry/node');
Sentry.init({
dsn: SENTRY_DSN
});
// calling an undefined function throws an unhandled exception
// that should appear as a Sentry issue
test();

Next.js

  • Create a Next.js project on Sentry (version 13 is not yet supported)

  • Run the following commands for the setup

npm i @sentry/nextjs
npx @sentry/wizard -i nextjs

Gatsby

  • Create a Gatsby project on Sentry

  • Install the package

npm i @sentry/gatsby
  • Add plugin in Gatsby config
module.exports = {
plugins: [
// ...
{
resolve: '@sentry/gatsby',
options: {
dsn: SENTRY_DSN
}
}
]
};

React Native

  • Create a React Native project on Sentry

  • Run the following commands for the setup

npm i @sentry/react-native
npx @sentry/wizard -i reactNative -p android


Logging practices

February 7, 2023

This post covers some logging practices for back-end (Node.js) apps.

  • Avoid putting unique identifiers (e.g., user id) within the message. A unique id will produce a lot of different messages with the same context. Use it as a message parameter.

  • Use the appropriate log level for the message. There are multiple log levels

    • info - app behavior, don't log every single step
    • error - app processing failure, something that needs to be fixed
    • debug - additional logs needed for troubleshooting
    • warning - something unexpected happened (e.g., third-party API fails)
    • fatal - app crash, needs to be fixed as soon as possible

Don't use debug logs in production. Set the log level as an environment variable.

  • Stream logs to the standard output in JSON format so logging aggregators (e.g., Graylog) can collect and adequately parse them

  • Avoid logging any credentials, like passwords, auth tokens, etc.

  • Put correlation ID as a message parameter for tracing related logs.

  • Use a configurable logger like pino

const pino = require('pino');
const uuid = require('uuid');
const logger = pino({
level: process.env.LOG_LEVEL || 'info',
redact: {
paths: ['token'],
remove: true
}
});
logger.info({ someId: 'id' }, 'Started the app...');
// `request` comes from an incoming HTTP request handler
const correlationId = request.headers['correlation-id'] || uuid.v4();
logger.debug(
{ data: 'some data useful for debugging', correlationId },
'Sending the request...'
);


Vim overview

February 5, 2023

Vim is a text editor known for editing files without using a mouse. It's also useful when you SSH into a remote server and have to edit files there. This post covers the main notes from installation to usage (shortcuts, commands, configuration).

Installation

Vim is usually already installed on most *nix operating systems. If not, you can install it via the package manager and open it with the following commands.

sudo apt-get update
sudo apt-get install -y vim
vim

Modes

Vim has four modes

  • Normal - this is the default mode. It enables scrolling through the file content
  • Visual - type v, and you can select text content for deleting and copying with scrolling shortcuts
  • Insert - type i, and you can start editing the file content
  • Command-line - type : and some command plus Enter to run the command

Usage

Shortcuts

  • Esc - go back to Normal mode
  • h to scroll left
  • j to scroll down
  • k to scroll up
  • l to scroll right
  • Shift + g - scroll to the end of the file
  • g + g - scroll to the beginning of the file
  • line number + Shift + g (e.g., 5 + Shift + g) - jump to the specific line number, the 5th in this case
  • ^ - jump to the start of the current line
  • $ - jump to the end of the current line
  • w - move to the next word
  • b - move back to the previous word

Commands

  • :edit script.js - create a new file or open the existing one
  • :w - save the changes
  • :q - exit the file
  • :wq - save the changes and exit the file
  • :q! - exit without the changes
  • :%s/<text>/<new text>/g - find and replace the occurrences within the whole file (e.g., :%s/Vim/Emacs/g)
  • : + ↑ (up arrow) - find the previous command

Miscellaneous

  • Copy pasting
    • enter the visual mode, scroll through the text you want to copy, type y, then scroll to the place you want to paste it and type p
    • type y + number of lines + y to copy specified lines and paste it with p
  • Deleting
    • enter the visual mode, scroll through the text you want to delete, and type d
    • type d + number of lines + d to delete specified lines
    • type x to remove the letter
    • type dw to remove the word (and the space after it)
  • Type u to undo the previous change
  • Type CTRL + r to redo the previous undo
  • Find specific text with /<text> like /vim and press Enter. Type n to go to the next occurrence and N to the previous one

Configuration

Vim configuration is stored in the ~/.vimrc file. You can specify plugins and other settings, like the theme, tab spacing, etc.

To use plugins, install the vim-plug plugin manager with the following command

curl -fLo ~/.vim/autoload/plug.vim --create-dirs \
https://raw.githubusercontent.com/junegunn/vim-plug/master/plug.vim

and run the :PlugInstall command.

Check the status of the plugins with the :PlugStatus command.

Below is the configuration I use.

"------PLUGINS SETTINGS---------
set nocompatible " be iMproved, required
filetype off " required
call plug#begin('~/.vim/plugged')
Plug 'vim-airline/vim-airline'
Plug 'vim-airline/vim-airline-themes'
Plug 'Raimondi/delimitMate'
Plug 'flazz/vim-colorschemes'
Plug 'prettier/vim-prettier', { 'do': 'npm install' }
Plug 'tpope/vim-commentary'
" All of your Plugins must be added before the following line
call plug#end() " required
filetype plugin indent on " required
"---------AIRLINE SETTINGS------
let g:airline_powerline_fonts = 1
let g:airline_theme='solarized'
"-----COMMENTARY SETTINGS-------
noremap <leader>/ :Commentary<cr>
"-----PRETTIER SETTINGS---------
let g:prettier#autoformat = 1
let g:prettier#autoformat_require_pragma = 0
"------------TABS---------------
set expandtab
set tabstop=2
set shiftwidth=2
set softtabstop=2
" makefile tabs
autocmd FileType make setlocal noexpandtab
" tab completion
set wildmenu
" line numbers
set number
set relativenumber
syntax on
" theme
colorscheme molokai
let g:solarized_termcolors=256
set background=dark
" indention
set autoindent
" highlight found words
set hlsearch
" press left/right and move to the previous/next line after reaching the first/last character in the line
set whichwrap+=<,>,h,l,[,]
" long lines
nnoremap k gk
nnoremap j gj
" disable arrow keys in normal mode
map <Left> <Nop>
map <Right> <Nop>
map <Up> <Nop>
map <Down> <Nop>
" toggling paste mode
set pastetoggle=<F2>
" last command
set showcmd
" disable swap files and backups
set noswapfile
set nobackup
set nowritebackup
" mouse click navigation
set mouse=a

Further learning

Try vimtutor with the following command to dive deep into Vim features.

vimtutor


Integration testing Node.js apps

January 25, 2023

Integration testing means testing a component with multiple sub-components and how they interact. Some sub-components can be external services, databases, and message queues.

External services are running, but their business logic is mocked based on received parameters (request headers, query parameters, etc.). Databases and message queues are spun up using test containers.

This post covers testing service as a component and its API endpoints. This approach can be used with any framework and language. NestJS and Express are used in the examples below.

API endpoints

Below is the controller for two endpoints. The first communicates with an external service and retrieves some data based on the sent parameter. The second one retrieves the data from the database.

// users.controller.ts
@Controller('users')
export class UsersController {
constructor(private userService: UsersService) {}
@Get()
async getAll(@Query('type') type: string) {
return this.userService.findAll(type);
}
@Get(':id')
async getById(@Param('id', new ParseUUIDPipe()) id: string) {
return this.userService.findById(id);
}
}

External dependencies

The external service is mocked to send data based on the received parameter.

export const createDummyUserServiceServer = async (): Promise<DummyServer> => {
return createDummyServer((app) => {
app.get('/users', (req, res) => {
if (req.query.type !== 'user') {
return res.status(403).send('User type is not valid');
}
res.json(usersResponse);
});
});
};
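The createDummyServer helper isn't shown above; a minimal sketch, assuming Express and a DummyServer shape with url and close fields, could look like this.

// test/utils/dummy-server.ts
import express, { Express } from 'express';

export interface DummyServer {
  url: string;
  close: () => void;
}

export const createDummyServer = (
  setup: (app: Express) => void
): Promise<DummyServer> =>
  new Promise((resolve) => {
    const app = express();
    setup(app);
    // listen on a random free port
    const server = app.listen(0, () => {
      const address = server.address() as { port: number };
      resolve({
        url: `http://localhost:${address.port}`,
        close: () => server.close()
      });
    });
  });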

Tests setup

Tests for endpoints can be split into two parts. The first is related to the external dependencies setup.

The example below creates a mocked service and spins up the database using test containers. The environment variables are set for the aforementioned dependencies, and the main service starts running.

The database is cleaned before every test run. External dependencies (mocked service and database) are closed after tests finish.

// test/users.spec.ts
describe('UsersController (integration)', () => {
let app: INestApplication;
let dummyUserServiceServerClose: () => void;
let postgresContainer: StartedTestContainer;
let usersRepository: Repository<UsersEntity>;
const databaseConfig = {
databaseName: 'nestjs-starter-db',
databaseUsername: 'user',
databasePassword: 'some-r4ndom-pasS',
databasePort: 5432
};
beforeAll(async () => {
const dummyUserServiceServer = await createDummyUserServiceServer();
dummyUserServiceServerClose = dummyUserServiceServer.close;
postgresContainer = await new GenericContainer('postgres:15-alpine')
.withEnvironment({
POSTGRES_USER: databaseConfig.databaseUsername,
POSTGRES_PASSWORD: databaseConfig.databasePassword,
POSTGRES_DB: databaseConfig.databaseName
})
.withExposedPorts(databaseConfig.databasePort)
.start();
const moduleFixture: TestingModule = await Test.createTestingModule({
imports: [AppModule]
})
.overrideProvider(ConfigService)
.useValue({
get: (key: string): string => {
const map: Record<string, string | undefined> = process.env;
map.USER_SERVICE_URL = dummyUserServiceServer.url;
map.DATABASE_HOSTNAME = postgresContainer.getHost();
map.DATABASE_PORT = `${postgresContainer.getMappedPort(
databaseConfig.databasePort
)}`;
map.DATABASE_NAME = databaseConfig.databaseName;
map.DATABASE_USERNAME = databaseConfig.databaseUsername;
map.DATABASE_PASSWORD = databaseConfig.databasePassword;
return map[key] || '';
}
})
.compile();
app = moduleFixture.createNestApplication();
usersRepository = app.get(getRepositoryToken(UsersEntity));
await app.init();
});
beforeEach(async () => {
await usersRepository.delete({});
});
afterAll(async () => {
await app.close();
dummyUserServiceServerClose();
await postgresContainer.stop();
});
// ...
});

Tests

The second part covers tests for the implemented endpoints. The first test suite asserts retrieving data from the external service based on the sent type as a query parameter.

// test/users.spec.ts
describe('/users (GET)', () => {
it('should return list of users', async () => {
return request(app.getHttpServer())
.get('/users?type=user')
.expect(HttpStatus.OK)
.then((response) => {
expect(response.body).toEqual(usersResponse);
});
});
it('should throw an error when type is forbidden', async () => {
return request(app.getHttpServer())
.get('/users?type=admin')
.expect(HttpStatus.FORBIDDEN);
});
});

The second test suite asserts retrieving the data from the database.

// test/users.spec.ts
describe('/users/:id (GET)', () => {
it('should return found user', async () => {
const userId = 'b618445a-0089-43d5-b9ca-e6f2fc29a11d';
const userDetails = {
id: userId,
firstName: 'tester'
};
const newUser = usersRepository.create(userDetails);
await usersRepository.save(newUser);
return request(app.getHttpServer())
.get(`/users/${userId}`)
.expect(HttpStatus.OK)
.then((response) => {
expect(response.body).toEqual(userDetails);
});
});
it('should return 404 error when user is not found', async () => {
const userId = 'b618445a-0089-43d5-b9ca-e6f2fc29a11d';
return request(app.getHttpServer())
.get(`/users/${userId}`)
.expect(HttpStatus.NOT_FOUND);
});
});


Internal testing React Native Android apps

January 21, 2023

Internal testing on Google Play Console is used for testing new app versions before releasing them to the end users. This post covers the main notes from setting up the app (on the Google Play Console) to automatic uploads.

Prerequisites

  • bootstrapped app
  • Android Studio installed
  • verified developer account on Google Play Console
  • paid one-time fee ($25)

Google Play Console setup

Create an app with the essential details, such as app name, default language, and app type, and choose if it is paid or free.

Testers

Add email addresses to the email list.

Release

Signing config

Generate a private signing key with a password using keytool

sudo keytool -genkey -v -keystore my-upload-key.keystore -alias my-key-alias -keyalg RSA -keysize 2048 -validity 10000

Move the generated file to the android/app directory.

Edit ~/.gradle/gradle.properties to add the following keys and replace the alias and password with the correct values.

MYAPP_UPLOAD_STORE_FILE=my-upload-key.keystore
MYAPP_UPLOAD_KEY_ALIAS=my-key-alias
MYAPP_UPLOAD_STORE_PASSWORD=*****
MYAPP_UPLOAD_KEY_PASSWORD=*****

Edit android/app/build.gradle to add the release signing config, which uses the generated key.

android {
...
defaultConfig { ... }
signingConfigs {
...
release {
if (project.hasProperty('MYAPP_UPLOAD_STORE_FILE')) {
storeFile file(MYAPP_UPLOAD_STORE_FILE)
storePassword MYAPP_UPLOAD_STORE_PASSWORD
keyAlias MYAPP_UPLOAD_KEY_ALIAS
keyPassword MYAPP_UPLOAD_KEY_PASSWORD
}
}
}
buildTypes {
...
release {
...
signingConfig signingConfigs.release
}
}
}

Versioning

Update the versionCode and versionName fields in the android/app/build.gradle file before generating the bundle. The version code should be incremented by 1.

android {
...
defaultConfig {
...
versionCode 2
versionName "1.1.0"
...
}
...

Android App Bundle (aab file)

Generate the Android app bundle with the following command.

cd android
./gradlew bundleRelease

android/app/build/outputs/bundle/release/app-release.aab is the path for the generated file.

Manual uploads

Upload the aab file and write the release name and notes. The link for downloading the app should be available on the Testers tab.

Automatic uploads

This section configures the necessary steps for the pipeline with GitHub Actions.

Versioning

Version the app with the np and react-native-version packages by running the np script.

// package.json
{
"scripts": {
"np": "np --no-publish",
"postversion": "react-native-version -t android"
},
"repository": {
"type": "git",
"url": "<REPOSITORY_URL>"
}
}
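Run the release flow with the following command; np bumps the version and pushes the git tag, and the postversion hook updates the Android versionCode and versionName.

npm run np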

Signing config

Remove the release signing config from the android/app/build.gradle file. The app will be bundled first and signed after that.

Credentials

Get the base64-encoded signing key with the following command and set it as the ANDROID_SIGNING_KEY GitHub Actions secret on the Settings → Secrets → Actions page.

cd android/app
openssl base64 < my-upload-key.keystore | tr -d '\n' | tee my-upload-key.keystore.base64.txt

Reuse the credentials from the ~/.gradle/gradle.properties file and set them as GitHub secrets, mapped as follows.

  • MYAPP_UPLOAD_STORE_PASSWORD → ANDROID_KEY_STORE_PASSWORD
  • MYAPP_UPLOAD_KEY_ALIAS → ANDROID_ALIAS
  • MYAPP_UPLOAD_KEY_PASSWORD → ANDROID_KEY_PASSWORD

For automatic uploads, create a service account by following the next steps.

  • go to Google Play Console → Setup → API access → Google Cloud project, create a Google Cloud project, and link it
  • on the same page, go to the Credentials → Service accounts heading, click Learn how to create service accounts, and follow the mentioned steps
    • create a service account with the Service Account User role
    • click the Actions → Manage keys button and create a JSON key, which will be downloaded
  • set the downloaded JSON file's content as the ANDROID_SERVICE_ACCOUNT_JSON_TEXT GitHub secret

Pipeline

The following pipeline sets up the necessary tools, runs CI checks (linting, testing, audit), generates the app bundle, signs it, and uploads it to the Google Play Console.

name: Android Build
on:
push:
branches:
- release
jobs:
android-build:
name: Android Build
runs-on: ubuntu-latest
steps:
- name: Check out Git repository
uses: actions/checkout@v4
- name: Set up JDK
uses: actions/setup-java@v3
with:
java-version: 18
distribution: temurin
- name: Set up Android SDK
uses: android-actions/setup-android@v3
- name: Use Node.js
uses: actions/setup-node@v4
with:
node-version: 20
- run: npm ci
- run: npm run lint
- run: npm test
- run: npm audit
- name: Make Gradlew Executable
run: cd android && chmod +x ./gradlew
- name: Generate App Bundle
run: |
cd android && ./gradlew clean && \
./gradlew bundleRelease --no-daemon
- name: Sign App Bundle
id: sign_aab
uses: r0adkll/sign-android-release@v1
with:
releaseDirectory: android/app/build/outputs/bundle/release
signingKeyBase64: ${{ secrets.ANDROID_SIGNING_KEY }}
alias: ${{ secrets.ANDROID_ALIAS }}
keyStorePassword: ${{ secrets.ANDROID_KEY_STORE_PASSWORD }}
keyPassword: ${{ secrets.ANDROID_KEY_PASSWORD }}
- name: Upload App Bundle to Google Play
uses: r0adkll/upload-google-play@v1
with:
serviceAccountJsonPlainText: ${{ secrets.ANDROID_SERVICE_ACCOUNT_JSON_TEXT }}
packageName: com.flatmeapp
releaseFiles: android/app/build/outputs/bundle/release/*.aab
track: internal
status: draft
inAppUpdatePriority: 2

Edit the draft release and roll it out.

Troubleshooting

  • In case of a problem with signatures not matching the previously installed version, uninstall the app with the following commands.

    adb devices
    # adb -s <DEVICE_KEY> uninstall <PACKAGE_NAME>
    adb -s emulator-5554 uninstall "com.yourapp"
  • If the link for downloading the app installs some old version, clear the cache and data of the Google Play Store app on your device


Markdown overview

January 15, 2023

Markdown is a markup language mainly used for writing documentation. Its extension is .md, and most IDEs provide a previewer for the written documents. This post will cover basic syntax, some use cases with examples, and different flavors.

Basics

  • Headers

    • # some text for h1 header
    • ## some text for h2 header
    • ### some text for h3 header
    • #### some text for h4 header
    • ##### some text for h5 header
    • ###### some text for h6 header
  • Blockquotes

    • > some text
  • Links

    • External links
      • [link text](link URL)
    • Internal links (e.g., the link to Gitlab flavored header)
      • [Gitlab flavored](#gitlab-flavored)
  • Images

    • ![image description](image URL)
  • Space between paragraphs

    • blank line
  • Lists

    • unordered
      - list item
      - sublist item
      - list item
      - subitem item
    • ordered
      1. post
      1. post
      1. post
  • Text modifications

    • Highlighted

      • `text`
    • Bold

      • **text**
    • Italic

      • *text*
    • Underlined

      • <u>underlined</u>
    • Strike-through

      • ~~some text~~
  • Tables

    • | Hackathon | Location | Date |
      | --------- | -------- | ---- |
      | [hackathon](URL) | place | 1-2 June 2023 |
  • Code snippets with syntax highlighting

    • ```js
      console.log('Hello world');
      ```
  • HTML tags and entities

    • <p>paragraph with some text &lt;3</p>
  • Comments

    • <!-- this is a comment text -->
  • Escape characters

    • \> not a blockquote text
  • Show markdown content without rendering

  • ````markdown
    ```js
    console.log('Hello world');
    ```
    ````
  • console.log('Hello world');

Usage

Documentation

Every repository should contain a Readme file with (at least) a project description and instructions on how to set up the project, run it, run the tests, and which technologies are used. Here is the link to the template example.
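A minimal skeleton, for illustration:

# project-name

Short project description.

## Setup

npm ci

## Running the app

npm start

## Testing

npm test

## Technologies

- Node.js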

Blog posts

This post is written in Markdown and converted to HTML.

Diagrams

Different diagrams can be written in PlantUML. Check the rendered output in the PlantUML editor.

  • Sequence diagrams
@startuml
User -> PaymentService: POST /payments
PaymentService -> PaymentService: handlePayment()
PaymentService -> User: response
@enduml
  • Architecture diagrams
@startuml
title Calculation
package calculationService {
database postgres {
collections scores
collections users
}
interface POST_calculations
interface GET_calculated_scores
component calculator
component scoreService
interface userEventConsumer
}
package worker {
component scheduler
interface userEventProducer
}
package gateway {
interface GET_scores
}
file USER_EVENT
actor User
userEventProducer --> USER_EVENT: Message event flow
USER_EVENT --> userEventConsumer: Message event flow
userEventConsumer --> users: keep users updated
scheduler --> POST_calculations: trigger calculation
POST_calculations --> calculator: calculate scores
calculator --> scores: store scores
User -> GET_scores: get scores
GET_scores --> GET_calculated_scores: get scores
GET_calculated_scores --> scoreService: get scores
scoreService --> scores: get scores
@enduml

Flavors

  • GitHub flavored

    • Task lists
      • - [x] completed
        - [ ] in progress
    • Emojis
      • :tada:
  • Gitlab flavored

    • Task lists

      • - [x] completed
        - [~] inapplicable
        - [ ] in progress
    • Emojis

      • :tada:
    • Table of content

      • [[_TOC_]]

Miscellaneous

Front matter

Front matter is metadata placed at the beginning of the file, before the content. This data can be used by static site generators like Gatsby or blogging platforms like dev.to.

---
title: Markdown overview
published: true
tags: ['markdown']
cover_image: https://picsum.photos/200/300
canonical_url: https://sevic.dev/notes/markdown-overview/
---

MDX

MDX allows using JSX in Markdown documents.

import { Dashboard } from './dashboard.js';
<Dashboard year={2023} />


Git cheatsheet

January 6, 2023

Git is a version control system and a prerequisite for most development jobs. This post covers most of the Git commands I use.

Configuration

  • Set user configuration for every project if you use multiple accounts
    git config user.name "<USERNAME>"
    git config user.email "<EMAIL_ADDRESS>"
  • Use the current branch for push commands
    git config --global push.default current

SSH keys setup

  • Generate separate SSH keys for GitHub and Bitbucket with the following command; type the file path and passphrase when prompted (see the config sketch after this list).

    ssh-keygen
  • Add the generated public keys to GitHub and Bitbucket

  • Run the following commands to activate SSH keys

    eval `ssh-agent -s`
    ssh-add ~/.ssh/id_rsa_github
    ssh-add ~/.ssh/id_rsa_bitbucket
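With multiple accounts, an ~/.ssh/config entry can also pin each key to its host so the right key is picked automatically (a sketch assuming the filenames above).

# ~/.ssh/config
Host github.com
  IdentityFile ~/.ssh/id_rsa_github
Host bitbucket.org
  IdentityFile ~/.ssh/id_rsa_bitbucket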

Basic commands

  • The repository setup
    • Initialize a git repository
      git init
    • Clone an existing repository
      # git clone <REPOSITORY_URL>
      git clone git@github.com:zsevic/pwa-starter.git
    • Add the remote repository
      # git remote add <REMOTE_NAME> <REPOSITORY_URL>
      git remote add origin git@github.com:zsevic/pwa-starter.git
      git remote add upstream git@github.com:zsevic/pwa-starter.git
    • Update the URL for the remote repository
      # git remote set-url <REMOTE_NAME> <REPOSITORY_URL>
      git remote set-url origin git@github.com:zsevic/pwa-starter.git
    • Get a list of configured remote connections
      git remote -v
  • Branches
    • Get a list of the branches
      git branch
    • Create and switch to the new branch
      git checkout -b new-branch
    • Check out a specific commit and create a new branch out of it
      git log # find a hash from a specific commit
      git checkout <COMMIT_HASH>
      git switch -c <NEW_BRANCH_NAME>
    • Switch to another branch
      git checkout existing-branch
    • Rename the current branch
      git branch -m <NEW_BRANCH_NAME>
    • Delete branch
      git branch -D other-existing-branch
    • Fetch all the remote branches
      git fetch --all
    • Get a list of remote branches without cloning the repo, or verify that the user has "read" access
      git ls-remote <REPOSITORY_URL>
  • Get the status of the local changes
    git status
  • Add new changes
    git add some-file.js
    git add .
  • Commits
    • Commit the changes
      git commit -m "Commit message"
    • Empty commit without any files
      git commit --allow-empty -m "Trigger CI pipeline"
    • Commit the changes and skip running git hooks
      git commit -m "Commit message" --no-verify
    • Update the latest commit message and add new changes to the latest commit
      git commit -m "Commit message" --amend
  • Push the changes to the remote repository
    • Push the changes to the current branch when the current branch is configured as the default one
      git push
    • Push the changes to the remote branch
      # git push <REMOTE_NAME> <BRANCH_NAME>
      git push origin master
    • Force push the changes to the feature branch
      # git push <REMOTE_NAME> <FEATURE_BRANCH_NAME>
      git push origin feature-branch -f
  • Fetch and merge remote changes to the local branch
    # git pull <REMOTE_NAME> <BRANCH_NAME>
    git pull origin master
  • Remove (unstage) the changes from the local stage
    git reset some-file.js
    git reset
  • Differences between commits
    • Get a difference compared to the latest commit
      git diff some-file.js
      git diff
    • Get a difference between the last two commits
      git diff HEAD^ HEAD
      # or
      git diff HEAD HEAD~1
  • Revert the file changes
    git checkout -- some-file.js
  • Merge the specified branch into the current one
    git merge <BRANCH_NAME>
  • Revert a specific commit. The following command creates a new commit
    git revert <COMMIT_HASH>

Miscellaneous

  • Resets

    • Soft reset (commits are removed, but changes from the removed commits are staged)
      # git reset --soft HEAD~{NUMBER_OF_COMMITS_TO_SOFT_REMOVE}
      git reset --soft HEAD~2
    • Hard reset (both commits and changes are removed)
      # git reset --hard HEAD~{NUMBER_OF_COMMITS_TO_HARD_REMOVE}
      git reset --hard HEAD~1 # same as git reset --hard HEAD^
    • Get the latest remote changes when pulling doesn't work
      git reset --hard origin/<BRANCH_NAME>
  • Stashing

    git add .
    git stash save <STASH_NAME>
    git stash list
    git stash apply --index 0
  • Tags

    • Remove the following tag locally
      git tag -d v0.13.29
  • Find removed commits

    git reflog
    git checkout <COMMIT_HASH>
  • Remove the initial commit

    git update-ref -d HEAD
  • Patching

    • Create a patch from the latest commits
      # git format-patch -{NUMBER_OF_COMMITS}
      git format-patch -1
    • Apply the patches
      git apply 0001-latest-commit.patch
  • Git submodules

    • Add git submodule
      # git submodule add -- <REPOSITORY_URL> <DIRECTORY_PATH>
      git submodule add -- git@github.com:zsevic/pwa-starter.git template
    • Retrieve the latest changes for the git submodule
      # git submodule update --remote <DIRECTORY_PATH>
      git submodule update --remote template
    • Resolve conflict in the submodule
      # git reset HEAD <DIRECTORY_PATH>
      git reset HEAD template
      git commit
