Scaling microservices using Docker

I have built a Node.js (Meteor) application and am looking at strategies for handling scale down the road. I designed the application as a set of microservices, and I am now considering how to deploy it to production.

However, I would like many of the microservices to run on the same server instance to maximize resource usage, since each of them consumes only a small amount of resources. I know containers are useful for this, but I'm curious whether there is a way to create a dynamically scaling set of containers where I can:

  • Write rules such as "provision another container for this application on this server if the containers running it reach 80% CPU (or some other limiting metric)",
  • Provision and prepare additional servers when they are needed for more containers,
  • Load-balance connections across these containers (and does this interact with server-level load balancing, e.g. sending fewer connections to servers running fewer containers?)
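As a rough illustration of the last point, here is what balancing across containers could look like in nginx. This is a sketch only; the upstream name, addresses, and ports are invented for the example, not taken from the question:

```nginx
# Hypothetical nginx upstream for one microservice; addresses
# and ports are illustrative assumptions.
upstream app_service {
    # least_conn sends each request to the backend with the fewest
    # active connections, so hosts running fewer containers
    # naturally receive less traffic.
    least_conn;
    server 10.0.1.10:3000;  # host A, container 1
    server 10.0.1.10:3001;  # host A, container 2
    server 10.0.1.11:3000;  # host B, container 1
}

server {
    listen 80;
    location / {
        proxy_pass http://app_service;
    }
}
```

The catch with a static file like this is exactly what the question is about: every time a container is added or removed, the upstream list has to be regenerated, which is what orchestration and service-discovery tools automate.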

I have looked at AWS EC2, Docker Compose, and nginx, but I am not sure whether I am heading in the right direction.

3 answers

Explore Kubernetes and/or Mesos and you will never look back. They are designed for exactly what you want to do. The two concepts you should focus on are:

  • Service Discovery: this lets interdependent services (microservice "A" calls microservice "B") "find" each other. It is typically done with DNS, but with registration features on top of it that handle what happens as instances scale up and down.
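A minimal sketch of DNS-based discovery in Kubernetes: a Service object gives microservice "B" a stable name that callers resolve via cluster DNS. The name, labels, and ports here are illustrative assumptions:

```yaml
# Hypothetical Service for microservice "B". Pods in microservice
# "A" can then call it at http://b:4000 through cluster DNS,
# regardless of how many "B" containers exist or where they run.
apiVersion: v1
kind: Service
metadata:
  name: b
spec:
  selector:
    app: b              # routes to any pod labeled app=b
  ports:
    - port: 4000        # stable virtual port callers use
      targetPort: 3000  # port the container actually listens on
```

As "B" scales, pods matching the selector are added to or removed from the Service's endpoints automatically; callers never change.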

  • Scheduling: in Docker land, scheduling has nothing to do with cron jobs. It refers to how containers are placed and "packed" onto servers in various ways so as to make the most of the available resources.
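The "packing" above is driven by declared resource requests. A sketch of how that looks in Kubernetes, with invented names and numbers; the scheduler bin-packs replicas onto whichever nodes have spare capacity for the requested CPU and memory:

```yaml
# Hypothetical Deployment: the requests below tell the scheduler
# how much capacity each replica needs, so several small
# microservices can share one server. All values are assumptions.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: a
spec:
  replicas: 3
  selector:
    matchLabels:
      app: a
  template:
    metadata:
      labels:
        app: a
    spec:
      containers:
        - name: a
          image: registry.example.com/a:1.0
          resources:
            requests:
              cpu: "100m"      # scheduler packs pods by these requests
              memory: "128Mi"
            limits:
              cpu: "250m"      # hard ceiling per container
              memory: "256Mi"
```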

There are actually dozens of options here: Docker Swarm, Rancher, and others are competing alternatives. Many cloud vendors, such as Amazon, also offer dedicated services with these features (for example, ECS). But Kubernetes and Mesos are emerging as the standards, so you will be in good company if you start with either of them.
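For what it's worth, the "add another container at 80% CPU" rule from the question maps directly onto a Kubernetes autoscaler. A hedged sketch, with the target name and replica bounds invented for the example:

```yaml
# Hypothetical HorizontalPodAutoscaler: adds replicas of the "a"
# Deployment whenever average CPU utilization exceeds 80%, and
# removes them again when load drops. Names/bounds are assumptions.
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: a
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: a
  minReplicas: 2
  maxReplicas: 10
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 80  # scale out above 80% average CPU
```

Provisioning additional servers when containers no longer fit is handled by a separate layer (e.g. a cluster autoscaler on a cloud provider), not by this object.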


Metrics can be gathered through the Docker API (there is a good blog post on this), and it is often used for exactly this purpose. Combining the Docker API with the Docker tooling (Compose / Swarm / Machine) gives you plenty of ways to scale a microservice architecture efficiently.
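For example, `GET /containers/<id>/stats` on the Docker API returns raw CPU counters, from which the usual utilization formula can be computed. A small sketch assuming the JSON shape of that endpoint; the sample values at the bottom are synthetic:

```python
def cpu_percent(stats):
    """Compute a container's CPU usage in percent from one sample of
    the Docker stats API (GET /containers/<id>/stats).

    Uses the commonly documented formula: the container's CPU-time
    delta divided by the host's CPU-time delta, scaled by the number
    of online CPUs.
    """
    cpu = stats["cpu_stats"]
    precpu = stats["precpu_stats"]
    cpu_delta = cpu["cpu_usage"]["total_usage"] - precpu["cpu_usage"]["total_usage"]
    system_delta = cpu["system_cpu_usage"] - precpu["system_cpu_usage"]
    if system_delta <= 0 or cpu_delta < 0:
        return 0.0
    online_cpus = cpu.get("online_cpus", 1)
    return (cpu_delta / system_delta) * online_cpus * 100.0

# Synthetic sample (made-up numbers) to show the calculation:
sample = {
    "cpu_stats": {
        "cpu_usage": {"total_usage": 400_000_000},
        "system_cpu_usage": 2_000_000_000,
        "online_cpus": 2,
    },
    "precpu_stats": {
        "cpu_usage": {"total_usage": 200_000_000},
        "system_cpu_usage": 1_000_000_000,
    },
}
print(cpu_percent(sample))  # 40.0
```

A monitoring loop can poll this per container and trigger a scale-out (e.g. via Compose or Swarm) when the value crosses the 80% threshold from the question.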

I would recommend Consul for managing service discovery in such a resource-oriented system.
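In Consul, each container registers itself with the local agent via a service definition, and healthy instances become resolvable through Consul DNS. A sketch, with the service name, port, and health endpoint invented for the example:

```json
{
  "service": {
    "name": "users-api",
    "port": 3000,
    "check": {
      "http": "http://localhost:3000/health",
      "interval": "10s"
    }
  }
}
```

With this registered on each host, other services can look up `users-api.service.consul` and only receive instances whose health check is passing.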


We use AWS to host our microservices application, and we use ECS (AWS's Docker container service) to run the various APIs in containers.

In that setup, we use AWS Auto Scaling to control the scaling. It is worth checking out.
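For an ECS service, this kind of rule is typically a target-tracking policy applied with `aws application-autoscaling put-scaling-policy`. A hedged sketch of the policy configuration JSON, with the target value and cooldowns chosen for illustration:

```json
{
  "TargetValue": 80.0,
  "PredefinedMetricSpecification": {
    "PredefinedMetricType": "ECSServiceAverageCPUUtilization"
  },
  "ScaleOutCooldown": 60,
  "ScaleInCooldown": 120
}
```

This keeps the service's average CPU around 80% by adjusting the ECS desired task count, which lines up with the first rule in the question.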

Hope it helps.


Source: https://habr.com/ru/post/1245521/
