Benchmarking the Next.js server vs nginx at serving a static site
Update: I did a follow-up with a more realistic setup, showing what the performance looks like when this is running on a remote server.
Next.js is a React framework that allows you to build web applications. It offers a bunch of neat features to make web development easier.
While Next.js leans heavily into server-side features like SSR and API routes, it can also be used to build static websites. Three ways to deploy these static sites are:¹
1. Export the files and serve them with nginx or similar (see the config sketch below)
2. Use the default Next.js production server (`next build && next start`)
3. Use the Next.js development server (`next dev`), which is not recommended²
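For reference, option 1 relies on Next.js's static export mode, which is the same setting removed later to test option 2. A minimal `next.config.js` might look something like this (a sketch; this blog's actual config may differ):

```js
// next.config.js: a minimal sketch for option 1 (static export).
// With output: 'export', `next build` emits plain HTML/CSS/JS files
// that any web server, such as nginx, can serve directly.
/** @type {import('next').NextConfig} */
module.exports = {
  output: 'export',
};
```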
My suspicion is that 1 is likely the fastest. However, 2 might often be simpler: you don't need to configure a second web server, and your deployment configuration can be the same across static and dynamic Next.js apps.
Knowing how much slower options 2 and 3 are lets us properly evaluate that penalty, and make evidence-based decisions about the trade-offs we might be making.
So, how fast is it? I took this blog (yes, the very one you’re reading) and tested out how long it takes to load.
Specifically, I took commit `34df60`, ran `npm install` with Node v20.12.2, and then set up each of the three servers as follows.
For nginx (tested with nginx/1.25.5, installed via Homebrew): create `nginx.conf`:
```nginx
events {}

http {
  server {
    listen 8080;
    # Serve the statically exported files
    root ./dist;
    index index.html;
    error_page 404 /404.html;

    location / {
      # Try the exact path, then as a directory, then with .html appended
      # (matching Next.js's clean URLs), before returning a 404
      try_files $uri $uri/ $uri.html =404;
    }
  }
}
```
Then run `npm run build && nginx -c $PWD/nginx.conf -p $PWD` (the `-p $PWD` sets nginx's prefix path, so the relative `./dist` root resolves against the project directory).
For the Next.js production server: edit `next.config.js` to remove `output: 'export'` (`next start` won't serve a static-export build), then run `npm run build && PORT=8080 npx next start`.
For the Next.js development server: run `npx next dev --port 8080`.
Then run benchmark-next-vs-nginx against each server, with `URL_TO_TEST=http://localhost:8080/blog/dont-use-contact-forms/ npm start`.
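The heavy-load side of the benchmark uses k6. A k6 script for this kind of test looks roughly like the sketch below; the virtual user count and duration are assumptions, not the actual benchmark-next-vs-nginx settings:

```js
// load-test.js: a rough sketch of a k6 script for this kind of benchmark
// (illustrative only, not the actual benchmark-next-vs-nginx code).
import http from 'k6/http';
import { check } from 'k6';

export const options = {
  vus: 50,          // concurrent virtual users (assumed)
  duration: '60s',  // run length (assumed)
};

export default function () {
  // Fetch the page under test; k6 records the duration of every request
  const res = http.get(__ENV.URL_TO_TEST);
  check(res, { 'status is 200': (r) => r.status === 200 });
}
```

k6's built-in `http_req_duration` metric is where numbers like the `req_duration_*` rows below would come from.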
The table below combines both measurements. The `load_*` rows are page load times with no additional request load (i.e. the time for the browser to consider the page fully loaded, measured with Puppeteer); the `req_*` rows are HTTP request durations under high request load (i.e. for the HTML response, measured with k6). All times are in milliseconds.
| Metric | nginx | next start | next dev |
| --- | --- | --- | --- |
| load_passes | 500 | 500 | 500 |
| load_avg | 41.39144283 | 61.76591118 | 244.250632 |
| load_med | 38.690521 | 58.83 | 241.6322715 |
| load_p90 | 48.1870875 | 71.9440042 | 266.7428548 |
| load_p95 | 54.7479899 | 77.74398715 | 278.5116128 |
| req_passes | 285245 | 224018 | 25081 |
| req_duration_avg | 5.348423601 | 245.8335255 | 1782.082563 |
| req_duration_median | 0.052 | 253.466 | 1688.108 |
| req_duration_p90 | 0.064 | 276.589 | 2267.1042 |
| req_duration_p95 | 0.093 | 280.7 | 2539.82495 |
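For reference, the Puppeteer side of the measurement presumably looks something like this sketch (again illustrative, not the actual benchmark code):

```js
// measure-load.js: a rough sketch of a single Puppeteer page-load measurement
// (illustrative only; the real benchmark script may differ).
const puppeteer = require('puppeteer');

(async () => {
  const browser = await puppeteer.launch();
  const page = await browser.newPage();

  const start = Date.now();
  // waitUntil: 'load' resolves once the browser fires the load event,
  // i.e. when it considers the page fully loaded
  await page.goto(process.env.URL_TO_TEST, { waitUntil: 'load' });
  console.log(`page load took ${Date.now() - start}ms`);

  await browser.close();
})();
```

Repeating a measurement like this 500 times per server would correspond to the `load_passes` row above.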
Takeaways
Wow, nginx is fast: 4874x faster than Next.js under heavy load for median response time (0.052ms vs 253.466ms). I had some sense of this before, but hadn't realised quite how much it dominates on request duration. I think this is largely because Next.js slows down quite a bit under heavy request load, given that its median time when processing only a single request was much lower.
That said, the Next.js server is plenty fast for real-world use cases. It was just 20ms slower (a fifth of a blink) to actually load the page under low load, and even under heavy load (3733 requests/second) it still had a median request duration of 253ms.
The real-world penalty will also likely be lower than this: network latency will start to dominate. And if you've got dynamic requests to APIs, those will almost certainly be the bottleneck rather than the static file serving.
Overall, with this small a penalty, in most cases I'd pick running Next.js, given it's significantly easier to set up (provided you're building a Next.js app already) and means your deployment can be standardised across static and dynamic applications. It probably only becomes worth looking at nginx when you're getting hundreds or thousands of requests per second, or if you've already got a really smooth nginx pipeline.
Footnotes
1. These are ways to do it if you're hosting on bare metal. There are of course also services that will effectively do one of these for you, like Vercel's cloud hosting or GitHub Pages.
2. I think the HMR functionality opens up additional attack surface area, and I expect this to be a lot slower. I'm testing this anyway as I could imagine people doing this by mistake.