Scalability Recommendations

Hey guys! I’m looking for some feedback on concerns I have about the scalability of Adonis in a particular application I’m writing, and I would appreciate some recommendations or reassurance.

I am using Adonis as a centralized API for a multi-purpose business application that will service anywhere from 10-20, up to potentially 500+, active users at a given time. This will replace a legacy system that I wrote in a now-defunct application platform, and I’m trying to plan for a more modular, scalable, and fault-tolerant foundation.

My concern is that I’m putting too much on a single API: database handling, websockets, scheduled tasks, events, mail, SMS, etc., all in one single-threaded Node application. I’ve considered splitting the functional tasks across two separate Adonis instances: one would handle incoming requests, database interaction, and websockets, while the other would handle scheduled tasks and worker tasks such as sending email and SMS, background processing, etc., with Redis communicating between the two. I don’t believe hardware is a concern, as the front-end UI, Adonis API, and database(s) are each on their own servers with good hardware (64 GB RAM, Xeon Gold 6126).
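The split described above is essentially a producer/consumer queue. Here is a minimal sketch of the idea — a plain array stands in for the Redis list (in practice you’d use something like Bull or ioredis), and the job names and handlers are hypothetical:

```javascript
// Sketch: the API instance pushes serialized jobs; the worker
// instance pops and executes them. An array stands in for Redis here.
const queue = []

// Producer side (the request-handling Adonis instance)
function dispatchJob(name, payload) {
  queue.push(JSON.stringify({ name, payload, queuedAt: Date.now() }))
}

// Consumer side (the worker Adonis instance)
const handlers = {
  // Hypothetical handlers; real ones would call your mail/SMS providers
  'send-email': (payload) => `emailed ${payload.to}`,
  'send-sms': (payload) => `texted ${payload.to}`,
}

function processNextJob() {
  const raw = queue.shift()
  if (!raw) return null
  const job = JSON.parse(raw)
  return handlers[job.name](job.payload)
}

dispatchJob('send-email', { to: 'user@example.com' })
console.log(processNextJob()) // → "emailed user@example.com"
```

Because the jobs are serialized to plain JSON, the producer and consumer can be completely separate Node processes, which is exactly the two-instance layout being considered.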

Are there any recommendations for the best way to handle this? I like having everything centralized but I don’t want the performance bottlenecks that I had with the previous system. Should I be concerned or can Adonis handle this sort of load just fine as is?

Hi @tehshortbus - good question.

This might not be a straightforward answer, or the answer you want or need.

First, run Adonis with pm2. This will make your life way easier.
Next, in order to test, I’d suggest looking at performance-testing frameworks such as JMeter or Siege, or you can search for others; there are plenty of them.
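To expand on the pm2 suggestion, here is a minimal `ecosystem.config.js` sketch (the app name is made up, and `server.js` is assumed to be your Adonis entry point — adjust to your project):

```javascript
// ecosystem.config.js — a minimal pm2 config sketch.
module.exports = {
  apps: [
    {
      name: 'adonis-api',     // hypothetical app name
      script: 'server.js',    // Adonis entry point (adjust if different)
      exec_mode: 'cluster',   // run multiple processes behind one port
      instances: 'max',       // one per CPU core, or set a fixed number
      env: {
        NODE_ENV: 'production',
      },
    },
  ],
}
```

Then `pm2 start ecosystem.config.js` brings it up, and pm2 handles restarts on crash for you.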

They can be hard to set up, but the effort is crucial for the test. It won’t reflect 100% of what can and will happen once live, but it will give you a clearer picture of how the system will behave and will show whether there are bottlenecks.

I’ve helped build an internal app for a company in Laravel which has around 100 concurrent users every day and works as-is, out of the box. I believe Adonis will do the same.

For larger enterprise projects (tens of thousands of concurrent users and more) I cannot say, but if it can’t handle the load, then microservices are needed, or perhaps even a different framework or vanilla Node.js + Express.


Try benchmarking first. I find the Apache utilities (ab) useful, but there are plenty of options. Adonis should be fast enough.

I’m not a fan of your comment “I like having everything centralized”. Even monolithic applications can be broken down into mini-monoliths, but they should not share the same datastore.

Good luck.


Thanks for the responses.

@Tsume I currently use pm2 to run Adonis. I was also looking into pm2’s cluster mode to help with scalability, but I also use adonis-scheduler, and I think that having multiple instances running would mean the scheduler runs in each one, thereby potentially duplicating those tasks.
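One common workaround for the duplicated-scheduler problem: pm2’s cluster mode sets a `NODE_APP_INSTANCE` environment variable (`'0'`, `'1'`, …) on each process, so you can gate the scheduler to start on instance 0 only. A sketch, with a hypothetical boot hook:

```javascript
// pm2 cluster mode exposes NODE_APP_INSTANCE on each worker process.
// Gate the scheduler so only one instance in the cluster runs it.
function shouldRunScheduler(env = process.env) {
  // A plain single process (no instance id) also hosts the scheduler.
  return env.NODE_APP_INSTANCE === undefined || env.NODE_APP_INSTANCE === '0'
}

// Hypothetical boot hook — wire this wherever adonis-scheduler is started
if (shouldRunScheduler()) {
  // new Scheduler().run() // start scheduled tasks on this instance only
  console.log('starting scheduler on this instance')
}
```

This keeps everything in one codebase while still avoiding double-fired tasks, though splitting the scheduler into its own process (as you proposed) is the more robust option.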

I believe the original idea — running the “run right now” functions in the primary Adonis application (with clustering) and breaking the “run whenever” functions, such as scheduled tasks, sending email/SMS, and other processing, off into a secondary Adonis application — would likely fix both of these issues.

@jacksoncharles - I agree with multiple mini-monoliths, and it’s something I’m trying to feel out, since I come from a history of everything being in one place.

I will try benchmarking and performance testing frameworks for sure and I appreciate those recommendations.

Hi @tehshortbus!

You already have quite a few answers in here, but I’ll give my 2 cents.

I have several Adonis projects running: some in a Kubernetes cluster, some (one, actually) in a small PM2 cluster, and some as single applications.

In k8s we have an application that served 30k req/sec at peak times. It never autoscaled beyond 3 GCloud small VMs (3 is the minimum for HA). The application handles SMS sending, hashing, encryption, email sending, and database management, but no websocket connections. It is one monolith, and there are no performance issues there.

One small project that runs on a PM2 cluster is mostly WS-only; about 1k WS connections are up all the time, but data movement is quite small. It runs on the same physical server as a lot of other stuff, including a Minecraft server (the MC server eats resources for breakfast). The challenges there were mostly related to syncing WS between processes; Redis was key.
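For context on the “syncing WS” point: with several processes, a message received on one process’s socket has to be fanned out to clients connected to the others, typically via Redis pub/sub. A toy sketch of the fan-out pattern — an in-memory bus stands in for Redis here, and the client/message shapes are made up:

```javascript
// Toy fan-out: each process subscribes to a shared channel and
// re-broadcasts incoming messages to its own local WS clients.
// A plain object stands in for Redis pub/sub.
const bus = { subscribers: [] }
function publish(msg) { bus.subscribers.forEach((fn) => fn(msg)) }
function subscribe(fn) { bus.subscribers.push(fn) }

function makeProcess() {
  const localClients = [] // stand-ins for this process's WS connections
  subscribe((msg) => localClients.forEach((c) => c.received.push(msg)))
  return {
    addClient() {
      const client = { received: [] }
      localClients.push(client)
      return client
    },
    // When a client on this process sends a message, publish it so
    // every process (including this one) delivers it to its clients.
    handleIncoming(msg) { publish(msg) },
  }
}

const procA = makeProcess()
const procB = makeProcess()
const clientA = procA.addClient()
const clientB = procB.addClient()
procA.handleIncoming('hello')
console.log(clientB.received) // → [ 'hello' ]
```

The important property is that no process talks to another process directly; they all just publish to and subscribe from the shared channel, which is why Redis slots in so naturally.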

Looking at:

“I am using Adonis as a centralized API for a multi-purpose business application that will service anywhere from 10-20, up to potentially 500+, active users at a given time.”

and

“I wrote in a now-defunct application platform”

I think you currently get more business value from getting it up and running faster, and since you are a solo developer on it, handling a monolith is a lot easier than a bunch of microservices. Your hardware is quite beefy too, so having 500+ connections should not be any problem, unless you are dealing with some heavy cryptography, since SMS, mail, etc. are mostly async calls to other APIs.

If you run into performance issues in the future, you can set up Redis + a PM2 cluster, or Adonis in cluster mode, quite easily.


Thank you for sharing your experience, @McSneaky, it was extremely helpful.

I’m not doing any intensive processing like cryptography outside of generating passwords, so clustering is looking promising instead of splitting things out. I still feel like I should probably decouple the task scheduling into its own process to prevent the same task from being fired more than once depending on how many instances are running.

If anyone else has some real world experience with this issue I would appreciate the input as well.