Benchmarking Node.js (Express) Webserver vs RocketMap’s Flask/werkzeug

One of RocketMap’s biggest limiting factors has always been its primitive development webserver (Flask/werkzeug). Because of its synchronous design (even the official docs specify it’s a development webserver, not intended for production systems), it can’t be expected to properly handle the load of a public, online service.

As RocketMap’s community grew and it started being used for projects and events, yet the webserver couldn’t even handle the load created by just family and friends, it became clear that the webserver needed to be replaced.

So I wrote a Node.js webserver with Express.

To keep it short, I:

  • separated the data API from the front-end’s static components: the new webserver only handles dynamic data requests (a minimal sketch follows this list),
  • added docs to the repository on using nginx (Linux) or apache2 (Windows) to serve static files (which nginx does amazingly well),
  • optimized SQL queries and added query limits,
  • added sorting to the queries to first send the items closest to the center of the requested viewport (see the query sketch after this list),
  • removed whitespace from the JSON response (RocketMap originally sent “beautified” responses, the default of Flask’s jsonify),
  • added gzip compression,
  • added a load limiter that gracefully rejects new requests if the system is overloaded,
  • changed the webserver to use a persistent database connection pool (default 1 to 5 connections) instead of the hundreds of database connections Flask/werkzeug opens (during our benchmark, we had to raise the connection limit to 1000 for Flask/werkzeug); a pool sketch follows the list,
  • added a process manager using Node.js’ cluster module to leverage multiprocessing on each CPU core (configurable), which can easily be disabled in favor of a more robust process manager if preferred; a cluster sketch follows the list.
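
As a rough illustration of the first, fifth, and sixth points, here is a minimal sketch of the Express setup, assuming the compression middleware package; the route path and the queryViewport helper are made up for the example and aren’t RocketMap’s actual names:

    const express = require('express');
    const compression = require('compression');

    const app = express();

    app.set('json spaces', 0); // compact JSON, no "beautified" whitespace
    app.use(compression());    // gzip-compress responses

    // The Node.js process only answers dynamic data requests;
    // nginx (or apache2) serves the front-end's static files.
    app.get('/raw_data', (req, res, next) => {
      queryViewport(req.query, (err, items) => { // hypothetical data-access helper
        if (err) return next(err);
        res.json(items);
      });
    });

    app.listen(process.env.PORT || 3000);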
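
For the query limits and center-first sorting (third and fourth points), the idea is roughly the following; the table and column names are invented for the example and don’t match RocketMap’s actual schema, and pool refers to the connection pool shown in the next sketch:

    // Return at most `limit` rows inside the viewport, closest to its center first.
    // swLat/neLat/swLng/neLng, centerLat/centerLng and limit come from the request's
    // viewport parameters.
    const sql = `
      SELECT id, latitude, longitude, expire_timestamp
      FROM map_items
      WHERE latitude BETWEEN ? AND ?
        AND longitude BETWEEN ? AND ?
      ORDER BY POW(latitude - ?, 2) + POW(longitude - ?, 2)
      LIMIT ?`;

    pool.query(
      sql,
      [swLat, neLat, swLng, neLng, centerLat, centerLng, limit],
      (err, rows) => {
        // rows arrive pre-sorted by squared distance from the viewport center
      }
    );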
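
The connection pool itself (eighth point) only takes a few lines with the mysql module; the credentials below are placeholders, and the pool creates connections lazily up to its limit rather than keeping a fixed number open:

    const mysql = require('mysql');

    // One persistent pool per process instead of a new connection per request.
    const pool = mysql.createPool({
      host: 'localhost',
      user: 'rocketmap',    // placeholder credentials
      password: 'secret',
      database: 'rocketmap',
      connectionLimit: 5    // upper bound; connections are created as needed
    });

    module.exports = pool;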
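
And a sketch of the process manager (ninth point) with Node.js’ built-in cluster module: one worker per CPU core by default, with the worker count configurable (the environment variable name here is made up):

    const cluster = require('cluster');
    const os = require('os');

    const workers = parseInt(process.env.WEB_WORKERS, 10) || os.cpus().length;

    if (cluster.isMaster) {
      for (let i = 0; i < workers; i += 1) {
        cluster.fork();
      }
      // Replace a worker if it dies, so the service stays up.
      cluster.on('exit', () => cluster.fork());
    } else {
      require('./server'); // each worker runs its own Express instance
    }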

I shared most of this information on Discord while I was building it, but the benchmarks and results are new to everyone.

About the load limiter.

The load limiter changes how the webserver behaves under impossible load: load above the limit of requests the webserver can handle. Understanding how a webserver handles impossible load is important, because it determines how your webserver will behave when things unavoidably (or unexpectedly) go wrong.

Instead of continuing to queue up requests and trying to handle all of them (while never being able to keep up), we intentionally fail gracefully by rejecting new requests instantly (HTTP 503 – Service Unavailable). This article on Mozilla’s blog goes into more detail: Building A Node.JS Server That Won’t Melt – A Node.JS Holiday Season, part 5.
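
The Mozilla article builds this on the toobusy module, which measures event loop lag; a minimal Express middleware in the same spirit could look like this (the lag threshold is illustrative, not necessarily the value we use):

    const toobusy = require('toobusy-js');

    toobusy.maxLag(70); // ms of event loop lag before we consider ourselves overloaded

    app.use((req, res, next) => {
      if (toobusy()) {
        // Fail fast and gracefully instead of queueing requests we can't keep up with.
        res.status(503).send('Server is too busy, please try again shortly.');
      } else {
        next();
      }
    });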

We care mostly about the results, so here is the webserver’s behavior when the load goes above the maximum and we’re not using a load limiter:

Webserver w/o load limiter

and this is how it behaves when we use a load limiter:

Webserver w/ load limiter

Since a load limiter will fail gracefully, not all responses are HTTP 200 OK, but the goal is to keep our servers alive during impossible load, not to respond to all requests (which is impossible, as the requests are intentionally above the limit of what the webserver can handle).

The benchmarks.

Note: These benchmarks aren’t meant to test each and every individual component that was used in building the new webserver. The comparison between Flask/werkzeug and Node.js/Express is not a fair one because they serve entirely different purposes (and werkzeug shouldn’t be used in any production environment), but these benchmarks show practical and direct results for our RocketMap users, giving an example of the real change in performance if they switch from Flask/werkzeug to the new Node.js webserver.

ApacheBench was used for benchmarking, with 1000 requests at a concurrency of 100. The database (MariaDB) was manually crafted so that both platforms returned the exact same dataset (1000 active items in the viewport).
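
In ApacheBench terms, that boils down to 1000 total requests (-n) with 100 concurrent connections (-c); the URL below is a placeholder for the data endpoint that was benchmarked:

    ab -n 1000 -c 100 "http://127.0.0.1:3000/raw_data?..."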

First, the total time our benchmark took:

Flask/werkzeug: 172.135s
devkat w/o load limiter: 19.609s
devkat w/ load limiter: 15.331s

The difference with the load limiter is that only 900 of the 1000 requests were fully handled, while the other 100 failed gracefully because the server was busy. This also reduces the mean response time of the handled requests, since the event loop is less congested.

Mean requests per second:

Flask/werkzeug: 5.81 RPS
devkat: 54 RPS

Transfer rate (w/ concurrency):

Flask/werkzeug: 4586.3 KiB/s
devkat: 31916.41 KiB/s

The mean time (in ms) it took to serve requests (load limiter is insignificant here, as it only comes into play when we’re overloaded):

Requests served

Flask/werkzeug’s behavior worsens over time, while our new webserver remains more consistent. The fastest response time (request fully completed) for our new webserver (without load limiter) was 392ms and the slowest was 2492ms. For Flask/werkzeug, the fastest was 10845ms and the slowest was 23364ms.

 

That’s a success. 👍
