This repository benchmarks different JS frameworks and runtimes (a.k.a. "transports") across different use cases by scaffolding a web server dynamically at runtime based on CLI arguments (a minimal sketch of this runtime selection follows the transport list below).
These transports are:
- Node.js (v20)
- Bun (v1.1.26)
- Express.js (v4.19.2)
- Fastify (v4.28.1)
- uWebSockets (v20.48.0)
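As a rough illustration of how the transport can be selected at runtime from the CLI, here is a minimal TypeScript sketch. It is purely illustrative: the factory layout and the `startServer` helper are assumptions, not the repository's actual code.

```typescript
// Illustrative only (not the repository's actual code).
// A registry of "transport" factories, keyed by the value of the -t CLI flag.
import http from 'node:http';
import Fastify from 'fastify';

type StartFn = (port: number) => Promise<void> | void;

const transports: Record<string, StartFn> = {
  // Plain Node.js http server
  node: (port) => {
    http
      .createServer((_req, res) => {
        res.end('ok'); // the handler for the selected use case would be dispatched here
      })
      .listen(port);
  },
  // Fastify server
  fastify: async (port) => {
    const app = Fastify();
    app.get('/empty', async () => 'ok');
    await app.listen({ port });
  },
  // ...bun, express and uWebSockets would register factories of their own
};

export async function startServer(transport: string, port = 3001): Promise<void> {
  const start = transports[transport];
  if (!start) throw new Error(`Unknown transport: ${transport}`);
  await start(port);
}
```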
These use cases are:
- Empty request
- Heavy non-blocking request (setTimeout)
- Heavy blocking request (heavy CPU-bound; see the sketch after this list)
- Pg-pool create user request
- Pg-pool get user request
- Redis create user request
- Redis get user request
- Dynamic scaffolding of a web server based on CLI (transport + use case)
- Dynamic scaffolding of a benchmark test based on CLI
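To make the difference between the "heavy" use cases concrete, here is an illustrative sketch of the three simplest handlers. The function names, delay, and iteration count are assumptions, not the repository's actual code.

```typescript
// Illustrative handlers only; names and magnitudes are assumptions.

// Empty request: respond immediately.
export async function empty(): Promise<string> {
  return 'ok';
}

// Heavy non-blocking request: simulates slow I/O with setTimeout,
// so the event loop stays free to serve other requests meanwhile.
export function heavyNonBlocking(delayMs = 100): Promise<string> {
  return new Promise((resolve) => setTimeout(() => resolve('done'), delayMs));
}

// Heavy blocking request: a CPU-bound loop that occupies the event loop
// for the whole computation, stalling every other request.
export function heavyBlocking(iterations = 5_000_000): string {
  let acc = 0;
  for (let i = 0; i < iterations; i++) {
    acc += Math.sqrt(i);
  }
  return acc.toFixed(0);
}
```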
In manual mode, you run the web server and the benchmark separately. For example:
- Run the web server:

  `npm run server -- -t node -u empty`

  Supported flags for running the web server:
  - `u` — use case (*)
  - `t` — transport (*)
- Run the benchmark:

  `npm run benchmark:manual -- -u empty -c 100 -p 1 -w 3 -d 60`

  Supported flags for manual benchmark running:
  - `u` — use case (*)
  - `c` — connections
  - `p` — pipelining factor
  - `w` — workers
  - `d` — duration
  or run autocannon directly:

  `autocannon http://localhost:3001/empty -d 30 -c 100 -w 3`

This mode prints the benchmark result only in the terminal, for the specific use case.
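The manual benchmark flags above presumably map onto autocannon's programmatic options. Below is a sketch under that assumption; the `runManualBenchmark` helper and its option names are hypothetical, not the repository's actual runner.

```typescript
// Hypothetical mapping of the manual CLI flags onto autocannon's options.
import autocannon from 'autocannon';

export async function runManualBenchmark(opts: {
  usecase: string;     // -u: endpoint to hit, e.g. "empty"
  connections: number; // -c
  pipelining: number;  // -p
  workers: number;     // -w
  duration: number;    // -d (seconds, as in autocannon)
}) {
  // With no callback, autocannon() returns a promise resolving to the aggregated result.
  const result = await autocannon({
    url: `http://localhost:3001/${opts.usecase}`,
    connections: opts.connections,
    pipelining: opts.pipelining,
    workers: opts.workers,
    duration: opts.duration,
  });
  console.log(`${result.requests.average} req/s on average`);
  return result;
}
```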
In automate mode, you run a single script which, under the hood, tests every use case on every transport (you can change the configuration in `src/benchmark/automate-config.ts`; a hypothetical sketch of its shape is given at the end of this section):
Run the automate script:

`npm run benchmark:automate`

This mode writes the benchmark result to a new file, `/benchmarks-data/benchmark-${last-snapshot}.json`, and creates or updates the `benchmark-summary.md` file, which contains a comparison table based on the latest snapshot JSON file.
- To inspect the "raw" data, check `/benchmarks-data/benchmark-${last-snapshot}.json` (each benchmark run generates a new JSON file with the results).
- To inspect the summary of the last benchmark (the comparison table), check the `benchmark-summary.md` file.
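The exact shape of `src/benchmark/automate-config.ts` is not shown here; the following is only a plausible, hypothetical shape, consistent with the transports, use cases, and flag values used above.

```typescript
// Hypothetical shape of src/benchmark/automate-config.ts; field and value names are guesses.
export interface AutomateConfig {
  transports: string[]; // every transport to start, one after another
  usecases: string[];   // every use-case endpoint to hit on each transport
  connections: number;  // concurrent connections per run
  pipelining: number;   // pipelining factor per connection
  workers: number;      // benchmark worker threads
  duration: number;     // seconds per use case
}

export const automateConfig: AutomateConfig = {
  transports: ['node', 'bun', 'express', 'fastify', 'uws'],
  usecases: ['empty', 'heavy-non-blocking', 'heavy-blocking'],
  connections: 100,
  pipelining: 1,
  workers: 3,
  duration: 60,
};
```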