Dockerizing nginx and Flask

I am creating a Flask/uwsgi web server, and I'm also interested in a microservice architecture:

Should I put both nginx and Flask with uwsgi in one container, or should I put them in two different containers and link them?

I intend to run these services in a Kubernetes cluster.

thanks

+5

3 answers

Short answer:

I would run nginx and the uwsgi/Flask application as separate containers. This gives you a more flexible architecture that lets you link additional microservice containers to the nginx instance as your demand for more services grows.

Explanation:

With Docker, the usual strategy is to split the nginx service and the uwsgi/Flask service into two separate containers. You can then connect the two using Docker links. This is the general architectural philosophy in the Docker world. Tools such as docker-compose simplify managing the launch of multiple containers and forming the links between them. The following sample docker-compose configuration file shows an example of this:

    version: '2'
    services:
      app:
        image: flask_app:latest
        volumes:
          - /etc/app.cfg:/etc/app.cfg:ro
        expose:
          - "8090"
      http_proxy:
        image: "nginx:stable"
        expose:
          - "80"
        ports:
          - "80:8090"
        volumes:
          - /etc/app_nginx/conf.d/:/etc/nginx/conf.d/:ro
        links:
          - app:app
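For context, the compose file mounts a host directory into the nginx container's /etc/nginx/conf.d/. A minimal sketch of what such a proxy configuration could look like is below; the hostname app and port 8090 come from the compose file, while the file name and the assumption that uwsgi serves plain HTTP on that port are illustrative, not something the compose file fixes:

    # Hypothetical /etc/app_nginx/conf.d/app.conf, mounted into the nginx container.
    # "app" resolves to the linked application container; 8090 matches its "expose" entry.
    server {
        listen 80;

        location / {
            # Assumes uwsgi serves plain HTTP on 8090. If it speaks the uwsgi
            # protocol instead, use "include uwsgi_params;" and "uwsgi_pass app:8090;".
            proxy_pass http://app:8090;
            proxy_set_header Host $host;
            proxy_set_header X-Real-IP $remote_addr;
        }
    }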

This means that if you later want to add more application containers, you can easily connect them to the single nginx proxy by linking them to it. Also, if you want to upgrade one part of your infrastructure, say upgrade nginx, or switch from apache to nginx, you only rebuild the relevant container and leave everything else in place.

If you were to put both services in one container (for example, by starting a supervisor process from the Dockerfile ENTRYPOINT), it would be easier to have nginx and the uwsgi process talk over a Unix socket file rather than over IP, but I don't think that by itself is a strong enough reason to place both in the same container.
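As a rough illustration of that single-container variant, a supervisor configuration could look something like the sketch below; the module name app:app and the socket path are assumptions for the example, not something prescribed above:

    ; Hypothetical supervisord.conf run from the Dockerfile ENTRYPOINT,
    ; keeping both processes in the foreground inside one container.
    [supervisord]
    nodaemon=true

    [program:uwsgi]
    ; Serve the Flask app over a Unix socket instead of an IP port.
    command=uwsgi --socket /tmp/app.sock --module app:app --chmod-socket=666

    [program:nginx]
    command=nginx -g "daemon off;"

The nginx side would then point at the socket with "include uwsgi_params;" and "uwsgi_pass unix:/tmp/app.sock;" instead of proxying to an IP address.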

Also consider what happens if you end up with twenty microservices and each of them runs its own instance of nginx: you now have twenty sets of nginx logs (access.log/error.log) to track across twenty containers.

If you adopt a "microservices" architecture, it means that over time you will be adding more and more containers. In such an ecosystem, running nginx as a separate Docker process and linking the microservices to it makes it easier to grow to meet your expanding requirements.

Service Discovery Note

If the containers run on the same host, then linking all the containers is easy. If the containers run across multiple hosts, using Kubernetes or Docker swarm, then things get a little more complicated, because you (or your cluster infrastructure) need to take care of DNS so that your nginx instance and the other containers can "find" each other; this adds a bit of conceptual overhead. Kubernetes helps you achieve this by grouping containers into pods, defining Services, and so on.
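To make the Kubernetes part concrete, a minimal sketch of a Service in front of the uwsgi/Flask pods might look like this; the name flask-app, the label selector, and port 8090 are assumptions chosen to line up with the compose example above:

    # Hypothetical Service: the nginx proxy (or anything else in the cluster)
    # can then reach the Flask pods at http://flask-app:8090 via cluster DNS.
    apiVersion: v1
    kind: Service
    metadata:
      name: flask-app
    spec:
      selector:
        app: flask-app     # must match the labels on the uwsgi/Flask pods
      ports:
        - port: 8090
          targetPort: 8090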

+4

Docker's philosophy is to run microservices in containers. The term "Microservice Architecture" has emerged over the past few years to describe a particular way of designing software applications as suites of independently deployable services.

With that in mind, you can deploy uwsgi in a separate container and follow a microservices architecture.

Some advantages of a microservices architecture:

  • Improved fault isolation
  • Eliminates long-term commitment to a single technology stack
  • Easy for a new developer to understand the functionality of the service
  • Easy update management
+1

If you use Nginx in front of your Flask/uwsgi server, you are using Nginx as a proxy: it takes care of forwarding traffic to the server, possibly handling TLS termination, perhaps authentication, and so on.

Part of the point of using a proxy such as Nginx is to be able to balance traffic across your server(s): the Nginx proxy receives requests and distributes the load among several upstream servers.

This means you would want one Nginx instance and one or more Flask/uwsgi server instances as its upstream servers.

The only way to do this is to use separate containers.
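To illustrate that load-balancing setup, a sketch of an Nginx configuration with several Flask/uwsgi containers as upstreams might look like this; the hostnames app1 and app2 and port 8090 are assumptions for the example:

    # Hypothetical nginx config: one proxy balancing requests over
    # several uwsgi/Flask containers (app1 and app2 are assumed hostnames).
    upstream flask_backend {
        server app1:8090;
        server app2:8090;
    }

    server {
        listen 80;

        location / {
            proxy_pass http://flask_backend;
        }
    }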

Please note: if you are on a cloud provider such as AWS or GKE that provides load balancers to bring external traffic into your Kubernetes cluster, and if you use Nginx only to forward traffic (i.e. you don't use it for TLS or auth), then you probably don't even need an Nginx proxy: a Kubernetes Service can do the proxying for you. Adding Nginx just gives you more control.
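A minimal sketch of that Nginx-free variant, with the usual caveat that the names and ports are assumptions rather than anything from this answer:

    # Hypothetical Service exposing the Flask pods directly through the
    # cloud provider's load balancer, with no Nginx container in between.
    apiVersion: v1
    kind: Service
    metadata:
      name: flask-app
    spec:
      type: LoadBalancer
      selector:
        app: flask-app
      ports:
        - port: 80          # external port on the load balancer
          targetPort: 8090  # port the uwsgi/Flask container listens on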

+1

Source: https://habr.com/ru/post/1260211/

