Microservices in practice

I have been studying the concept of microservices for a while, and I understand what they are and why they are needed.

A quick recap

In short, a monolithic application is broken down into independently deployable units, each of which typically exposes its own web API and has its own database. Each service has a single responsibility and does it well. These services communicate over synchronous web protocols, such as REST or SOAP, or via asynchronous messaging, such as JMS, cooperating to fulfill a given request. Our monolithic application has thus become a distributed system. Typically, all of these fine-grained APIs are exposed through an API gateway or proxy server, which acts as a facade with a single entry point and handles cross-cutting concerns such as security and monitoring.

The main reasons for adopting microservices are high availability, zero-downtime updates, and high performance achieved through horizontal scaling of individual services, as well as looser coupling in the system, which makes maintenance easier. In addition, working in the IDE and building and deploying become much faster, and it is easier to change a service's structure or even its implementation language.

Microservices go hand in hand with clustering and containerization technologies such as Docker. Each microservice can be packaged as a Docker container and run on any platform. The basic concerns of clustering are discovery, replication, load balancing, and fault tolerance. Docker Swarm is a clustering tool that orchestrates these containerized services, glues them together, and handles all of these tasks under the hood declaratively, maintaining the desired state of the cluster.

It all sounds easy and simple in theory, but I still don't understand how to put it into practice, even though I know Docker Swarm is pretty good at this. Let's consider a specific example.

Here is my question

I am building a simple Java application with Spring Boot, backed by a MySQL database. I want a system where a user receives a web page from service A and submits a form. Service A does some data manipulation and sends the result to service B, which processes the data further, writes to the database, and returns something; in the end, some kind of response is sent back to the user.

Now the problem is that service A does not know where to find service B, and service B does not know where to find the database (since they can be deployed to any node in the cluster), so I do not know how to configure the Spring Boot application. The first thing that comes to mind is DNS, but I cannot find tutorials on setting up such a system in Docker Swarm. What is the right way to configure connection settings in Spring for a distributed, cloud-style deployment? I have looked into the Spring Cloud project, but I don't understand whether it is the key to this dilemma.

I am also confused about how databases should be deployed. Should they live in the cluster, deployed alongside the service (possibly using a Docker volume for persistence), or is it better to manage them in the traditional way, with fixed IP addresses?

My last question concerns load balancing. Should there be a separate load balancer for each service, or just one main load balancer? Should the load balancer have a static IP address mapped to a domain name, with all user requests targeting it? But if that load balancer fails, doesn't it make all the effort of scaling the services pointless? Do I even need to set up a load balancer with Docker Swarm, given that it has its own routing mesh? Which node should the end user target?

1 answer

If you use Docker Swarm, you don't have to worry about DNS configuration, since it is already handled by the overlay network. Let's say you have three services:

A, B, and C

A is your database, B could be the first service that collects the data, and C the one that receives the data and updates the database (A):

```shell
# Create an overlay network that all three services will share
docker network create \
  --driver overlay \
  --subnet 10.9.9.0/24 \
  youroverlaynetwork

# Create each service on that network (the image arguments were
# missing from the original commands; substitute your own images)
docker service create --network youroverlaynetwork --name A <image-for-A>
docker service create --network youroverlaynetwork --name B <image-for-B>
docker service create --network youroverlaynetwork --name C <image-for-C>
```

After all the services are created, they can refer to each other directly by name.

These requests are load-balanced across all container replicas on that overlay network. That way, A can always get the IP of B by requesting http://b, or simply by using the hostname B.
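For the Spring Boot side of the question, this means the datasource URL can simply use the Swarm service name as its hostname. A minimal sketch, assuming the database service is named A and the schema, user, and password shown here are placeholders you would replace:

```properties
# application.properties -- the hostname "A" resolves via Swarm's built-in DNS
spring.datasource.url=jdbc:mysql://A:3306/mydb
spring.datasource.username=app
spring.datasource.password=secret
```

The same pattern applies to service-to-service calls: service A can call service B at http://B:8080 (or whatever port B listens on) without knowing which node B runs on.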

When it comes to load balancing in Docker, the swarm routing mesh already balances the load internally. Once you define a service that listens on port 8018, every swarm host will listen on port 8018 and the routing mesh will route each request to one of the service's containers in round-robin fashion.
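As a sketch, it is publishing a port at service-creation time that activates the routing mesh; the service name, image, and port here are placeholders:

```shell
# Publish port 8018 on every swarm node; a request hitting any node
# on 8018 is routed to one of the replicas of service B
docker service create \
  --network youroverlaynetwork \
  --name B \
  --publish published=8018,target=8018 \
  --replicas 3 \
  <image-for-B>

# Any node's address now works as an entry point:
curl http://<any-swarm-node>:8018/
```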

However, it is still best to put an application load balancer in front of the hosts, to cover the case where a host fails.


Source: https://habr.com/ru/post/1012842/

