Load and stress testing with Artillery
April 25, 2026

Artillery is a performance testing tool. This post explains the main types of performance testing and dives into Artillery usage, from configuration to running tests and reading the results.
Load and stress testing
Load and stress testing are two types of performance testing used to evaluate how well a system performs under various conditions.
Load testing determines how the system performs under expected user loads. The purpose is to identify performance bottlenecks.
Stress testing assesses how the system performs when loads are heavier than usual. The purpose is to find the limit at which the system fails and to observe how it recovers from such failures.
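The difference between the two shows up directly in how you shape Artillery's load phases. As a sketch (the rates and durations below are illustrative, not recommendations): a load test holds arrivals near the expected production rate, while a stress test ramps well past it until errors appear.

```yaml
# Load test: hold arrivals at the expected rate.
phases:
  - duration: 120
    arrivalRate: 50      # roughly production-like traffic
    name: Expected load

# Stress test: ramp far beyond normal traffic to find the breaking point.
# phases:
#   - duration: 300
#     arrivalRate: 50
#     rampTo: 500        # well past expected load
#     name: Find the limit
```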
Prerequisites
- Artillery available via npx (or installed in project dev dependencies)
- A YAML config file that defines the target, phases and scenarios
- An optional CSV payload file for dynamic test data
Configuration
Artillery test configuration is defined in a YAML file (for example, .artillery/main.yml). It typically contains:
- target: the base URL of the app (http://localhost:3000)
- phases: the load shape over time
- payload: a CSV file for dynamic values
- scenarios: user flows and their weights
For realistic load tests, configure these parts intentionally:
- Define phases for warm-up, ramp-up, and sustained load.
- Use a CSV payload when requests require dynamic values (IDs, emails, tokens, and similar).
- Use weighted scenarios to simulate realistic traffic distribution across endpoints.
```yaml
config:
  target: 'http://localhost:3000'
  phases:
    - duration: 30
      arrivalRate: 5
      name: Warm up
    - duration: 30
      arrivalRate: 5
      rampTo: 50
      name: Ramp up load
    - duration: 30
      arrivalRate: 50
      name: Sustained load
  payload:
    path: 'dynamic_data.csv'
    fields:
      - 'CustomerId'
scenarios:
  - name: "Get customer's tracks"
    flow:
      - get:
          url: '/customers/{{ CustomerId }}/tracks'
    weight: 4
  - name: 'Get customers pdf'
    flow:
      - get:
          url: '/customers/pdf'
    weight: 1
```
Scenario flow and dynamic data
Scenarios define user actions during the test.
In the example:
- 80% of users run Get customer's tracks (weight: 4)
- 20% of users run Get customers pdf (weight: 1)
Dynamic data is loaded from .artillery/dynamic_data.csv, and {{ CustomerId }} is injected into request URLs.
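The payload file itself is just one value per row; the IDs below are made up for illustration. The fields list in the config names the columns in order, so no header row is needed here (if the file does have a header, Artillery's skipHeader option in the payload config can skip it). Each virtual user gets a row's value substituted into {{ CustomerId }}.

```csv
1
7
42
103
```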
Running tests
Load tests can be executed through an npm script:
"scripts": {"test:load": "artillery run ./.artillery/main.yml"}
Run the test with:
```shell
npm run test:load
```
You can also run Artillery directly with npx:
```shell
npx artillery run ./.artillery/main.yml
```
Test report
By default, Artillery prints a terminal summary with key metrics such as:
- failed virtual users and HTTP error/status-code distribution
- response-time percentiles (especially p95 and p99)
- median response time (p50)
- requests per second and total request volume
To save raw results and generate an HTML report:
```shell
npx artillery run --output report.json ./.artillery/main.yml
npx artillery report report.json --output report.html
```
This generates a shareable HTML report with latency percentiles, throughput and error trends.
When reading Artillery reports, focus on these metrics first:
- Error rate (vusers.failed, non-2xx/3xx codes): the most important health signal. Even small error percentages can indicate instability under load.
- Tail latency (p95, p99): shows worst-case user experience. Systems can have a good median but still feel slow for a significant group of users.
- Median latency (p50/median): useful for baseline responsiveness, but should always be evaluated together with p95/p99.
- Throughput (http.requests, requests/sec): confirms how much traffic the system handled during the test.
- Trend over time (intermediate snapshots): helps identify whether performance degrades during ramp-up or sustained load.
In short: start with reliability (errors), then check latency percentiles (especially p95/p99), and finally validate throughput and time-based stability.
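These priorities can also be encoded in the config so a run fails automatically when thresholds are breached, which is handy in CI. Recent Artillery versions ship an ensure plugin for this; the threshold values below are illustrative, and the exact syntax varies between Artillery versions, so check the docs for the version you run.

```yaml
config:
  target: 'http://localhost:3000'
  plugins:
    ensure: {}
  ensure:
    thresholds:
      # Fail the run if tail latency exceeds these values (ms).
      - http.response_time.p95: 250
      - http.response_time.p99: 500
    conditions:
      # Fail the run if any virtual user failed.
      - expression: vusers.failed == 0
```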