Performance and reliability: multiple Docker containers vs. the standard Node cluster module

Hi, I have a question about the performance, reliability, and growth potential of two setups I came across. I am far from an expert in Docker or clustering, so any recommendations or tips would be really appreciated.

Application

A typical MEAN-stack web application running on Node v6.9.4. Nothing unusual, a standard setup.

The problem, and the possible solutions I found

a) Standard Linux server with NGINX (reverse proxy) and a single NodeJS process

b) Standard Linux server with NGINX (reverse proxy) and NodeJS running as a cluster, using the built-in Node cluster module

c) The "Dockerized" NodeJS application cloned 3 times (3 containers) behind an NGINX load balancer. Credit for the idea goes to Anand Shankar:

```nginx
# Example nginx load-balancing config
upstream node_app {
    server app1:8000 weight=10 max_fails=3 fail_timeout=30s;
    server app2:8000 weight=10 max_fails=3 fail_timeout=30s;
    server app3:8000 weight=10 max_fails=3 fail_timeout=30s;
}
```

```yaml
# Example docker-compose.yml
version: '2'
services:
  nginx:
    build: docker/definitions/nginx
    links:
      - app1:app1
      - app2:app2
      - app3:app3
    ports:
      - "80:80"
  app1:
    build: app/.
  app2:
    build: app/.
  app3:
    build: app/.
```

d) All of the above combined. A cluster-enabled NodeJS application across multiple containers: clustering configured inside each container, and an NGINX load balancer in front of the three containers.

If I understand correctly, having 3 NodeJS containers running the application, where each replica also uses NodeJS clustering internally, should give very good performance.

3 containers × 4 workers would mean 12 worker processes handling all requests/responses. If that is correct, the only drawback would be the need for more powerful hardware to support it.
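Option (d) would then look something like this per container. This is a sketch under the assumption that a hypothetical `cluster.js` wrapper (named here for illustration) forks the workers around the normal entry point; the compose file above stays the same, each `appN` just runs the cluster entry point instead of a single process.

```dockerfile
# Hypothetical Dockerfile for option (d): each container runs a
# cluster entry point, which forks one worker per available core.
FROM node:6.9.4
WORKDIR /usr/src/app
COPY package.json .
RUN npm install --production
COPY . .
# cluster.js (hypothetical) wraps the app with cluster.fork()
CMD ["node", "cluster.js"]
```

One caveat worth checking: `os.cpus().length` inside a container reports the host's cores, so without an explicit worker count you can end up with far more than 4 workers per container.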

In any case, my reasoning may be completely wrong, so I'm looking for any comments or feedback on this!

Goal

My goal is to prepare stable, ready-to-deploy environments. We are not talking about thousands of simultaneous connections. Still, a scalable and flexible infrastructure would be a big plus.


Hope the question makes sense. Sorry for the long post, but I wanted it to be clear.

Thanks!

1 answer

In my experience, options (c) and (d) are the easiest to maintain, and, assuming the server has the resources available, (d) is the most performant.

However, have you looked into Kubernetes? I found it has a bit of a learning curve, but it is a great tool: it lets you scale dynamically, load balance, and offers much smoother deployment options than Docker Compose. The biggest drawback is that hosting a Kubernetes cluster is more expensive than a single server.
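To make that concrete, here is a minimal sketch of what the three-container setup from option (c) might look like in Kubernetes. The image name `my-registry/node-app:latest` is a placeholder assumption; the Deployment replaces the three hand-wired `appN` services and the Service replaces the NGINX upstream block.

```yaml
# Hypothetical Kubernetes equivalent of the 3-container compose setup.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: node-app
spec:
  replicas: 3                 # same role as app1/app2/app3 above
  selector:
    matchLabels:
      app: node-app
  template:
    metadata:
      labels:
        app: node-app
    spec:
      containers:
      - name: node-app
        image: my-registry/node-app:latest   # placeholder image name
        ports:
        - containerPort: 8000
---
apiVersion: v1
kind: Service
metadata:
  name: node-app
spec:
  selector:
    app: node-app
  ports:
  - port: 80
    targetPort: 8000          # traffic is spread across the 3 replicas
```

Scaling then becomes a one-liner (`kubectl scale deployment node-app --replicas=5`) instead of editing the compose file and the nginx upstream by hand.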


Source: https://habr.com/ru/post/1272994/

