Twenty-five million, five hundred thousand visits in only 45 days. Let that sink in for a minute.
If the entire population of Belgium visited my website once per day, it would take them three days to reach that number of visits.
Every single person in Belgium, their fathers and mothers, their grandmothers and grandfathers, all of the babies and everyone’s friends – and three days.
Wait, what project?
I recently launched a project about Pokémon Go (RareSpawns.be). The concept is simple: in the Pokémon games, there are some Pokémon that are considered rare; they are hard to get or rarely appear in the game at all.
In Pokémon Go, a rare Pokémon is even rarer than in the regular games because we can’t control the spawns: we have to wait for Niantic to create one, and we’ve all seen what happens when one appears in a densely populated area.
A rare Pokémon can move the world.
The website is a real-time, crowdsourced platform that shows where some of the rare Pokémon have spawned in the world – it includes their coordinates, their potential (called their IV, displayed as a percentage) and the in-game moves the Pokémon can use.
The secret to getting so many visitors?
There is no secret.
The project did what it had to do, and it supports a huge number of concurrent visitors without delay. During the peak of activity, I remodeled and rewrote the entire application’s structure to support the load and the inevitable 24/7 DDoS attacks. In the end, it took about two weeks of full-time development to make it what it is today (the tech stack deserves a separate article).
I launched, went to bed with ~250 concurrent users, and I woke up to a few thousand.
I worked, went to bed again, and woke up with hundreds of thousands of visits.
What did we learn? The more I go to bed, the more visits I get.
People loved it so much, they started to attack it.
Once the site reached a certain level of popularity, the attacks started: 24/7 DDoS attacks that just never stopped.
After reworking the structure, I had separated all of the work and responsibilities across multiple instances. There was no single point of failure, and the most important components (i.e. the data entry points and the API) no longer ran on the same instance as the one serving the real-time feed to the front-end.
In simpler terms, attackers could throw all the traffic they wanted at the site, even as legitimate-looking web traffic, and it still wouldn’t increase the load on the most important instances. The front-end feed was also served separately from the webserver hosting the static files. Every server/instance does exactly what it has to, nothing more.
And the number of active database connections I was using during all of this? One. A single connection. The site is built almost entirely on Node.js, and I needed only one instance with a single database connection to handle all of it.
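A single connection works because of Node’s event loop: requests waiting on database I/O don’t block each other, so one connection, fed one statement at a time, can serve enormous concurrency. The sketch below is an assumption-laden illustration of that pattern, with a fake connection class standing in for a real driver such as node-postgres or mysql2 (the class and query names are hypothetical, not from the original project):

```javascript
// Sketch: one "physical" connection shared by all requests.
// Queries are serialized through a promise chain, mimicking a
// driver that runs one statement at a time on one connection.
class SingleConnection {
  constructor() {
    this.chain = Promise.resolve(); // tail of the query queue
  }
  query(sql) {
    // Each query waits for the previous one to finish.
    const result = this.chain.then(
      () => new Promise((resolve) => setImmediate(() => resolve(`ran: ${sql}`)))
    );
    this.chain = result.catch(() => {}); // keep the chain alive on errors
    return result;
  }
}

const db = new SingleConnection();

// Hundreds of concurrent requests can all share the same connection;
// the event loop keeps them interleaved, never blocked.
async function handleRequest(id) {
  return db.query(`SELECT * FROM spawns WHERE id = ${id}`);
}
```

In a real deployment you would use parameterized queries rather than string interpolation; the point here is only that one serialized connection can sit behind any number of in-flight requests.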
And I’ve got to hand it to my host: OVH has an amazing infrastructure.
It’s been amazing. I’ve learned a lot and I couldn’t have hoped for it to get as popular as it did.
I’ve met a lot of interesting people along the way (the PokemonGo-Map team, pogoapi and all of our regulars), and I want to add a special thank you to the people who have always helped me stay on track and work on my goals every single day.
Up next is a completely different project. Another day, another challenge. 🙂
And what about you?
Do you have any projects you’d like to talk about, or any challenges in your near future?
Let me know in the comments, I want to know all about it!
Psst, in the meantime, enjoy this song from twenty one pilots. Time to relax.