I created a Node.js (Meteor) application and am looking at scaling strategies for the future. I built the application as a set of microservices, and I am now considering how to deploy it to production. Since each microservice uses only a small amount of resources, I would like several microservices to run on the same server instance to maximize resource utilization. I know containers are well suited to this, but I'm curious whether there is a way to create a dynamically scaled set of containers where I can:
- Define rules such as "provision another container for this application on this server if the containers running it reach 80% CPU (or some other limiting metric)",
- Provision and prepare additional servers when more containers are needed,
- Load-balance connections across these containers (and does this interact with load balancing across servers, e.g. sending fewer connections to servers running fewer containers?)
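To make the last point concrete, here is roughly the kind of balancing I have in mind, as a hypothetical nginx sketch (the addresses, ports, and upstream name are made up). With `least_conn`, nginx sends each new connection to the backend with the fewest active connections, so a server running fewer containers would naturally receive less traffic:

```nginx
# Hypothetical pool: server A runs two app containers, server B runs one.
upstream app_backend {
    least_conn;               # prefer the backend with the fewest active connections
    server 10.0.0.1:3001;     # container 1 on server A
    server 10.0.0.1:3002;     # container 2 on server A
    server 10.0.0.2:3001;     # container 1 on server B
}

server {
    listen 80;
    location / {
        proxy_pass http://app_backend;
        # Meteor uses WebSockets (DDP), so the Upgrade headers are needed
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
    }
}
```

Is per-connection balancing like this the right layer for what I'm describing, or does it belong in the container platform itself?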
I have looked at AWS EC2, Docker Compose, and nginx, but I'm not sure whether I'm heading in the right direction.
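For reference, this is as far as I've gotten with Docker Compose: I can cap per-container resources so several microservices pack onto one host, but as far as I can tell Compose will not watch CPU usage or add servers on its own. A minimal sketch, assuming a service named `app` (the image name and limits are placeholders):

```yaml
# Hypothetical docker-compose.yml fragment: several small services
# sharing one host, each with a resource cap so they pack together.
services:
  app:
    image: my-meteor-app        # assumed image name
    ports:
      - "3001:3000"
    deploy:
      resources:
        limits:
          cpus: "0.50"          # cap each container at half a CPU
          memory: 256M
```

Manual scaling (e.g. `docker compose up -d --scale app=3`) works, but the threshold-driven scaling and server provisioning I described seem to need something more, like an orchestrator.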