Short answer:
I would run nginx and the uwsgi/Flask application as separate containers. This gives you a more flexible architecture that lets you attach additional microservice containers to a single nginx instance as your demand for services grows.
Explanation:
With Docker, the usual strategy is to split the nginx service and the uwsgi/Flask service into two separate containers and then connect them using links. This is a general architectural philosophy in the Docker world. Tools such as docker-compose simplify managing multiple containers and forming the links between them. The following sample docker-compose file shows an example of this:
```yaml
version: '2'
services:
  app:
    image: flask_app:latest
    volumes:
      - /etc/app.cfg:/etc/app.cfg:ro
    expose:
      - "8090"

  http_proxy:
    image: "nginx:stable"
    expose:
      - "80"
    ports:
      - "8090:80"   # host port 8090 -> nginx's port 80
    volumes:
      - /etc/app_nginx/conf.d/:/etc/nginx/conf.d/:ro
    links:
      - app:app
```
This means that if you later want to add more application containers, you can easily attach them to the single nginx proxy by linking them. In addition, if you want to upgrade one part of your infrastructure, say, upgrade nginx, or switch from apache to nginx, you only rebuild the relevant container and leave everything else in place.
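To illustrate, a second microservice could be attached to the same proxy by adding another service and another link. This is only a sketch; the `api` service name, its image, and its port are hypothetical, not part of the original setup:

```yaml
# Hypothetical extension of the compose file above: a second
# microservice placed behind the same nginx proxy container.
version: '2'
services:
  api:
    image: api_app:latest   # hypothetical image name
    expose:
      - "8091"

  http_proxy:
    image: "nginx:stable"
    links:
      - app:app
      - api:api   # nginx can now reach this container by the hostname "api"
```

Each linked name becomes a hostname that nginx's configuration in `conf.d/` can use as a `proxy_pass` or `upstream` target.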
If you were to put both services in one container (for example, by starting a supervisor process from the Dockerfile ENTRYPOINT), it would be easier to connect the nginx and uwsgi processes through a Unix socket file rather than over IP, but I don't think that is, in itself, a strong enough reason to place both in the same container.
Also, consider that if you end up with twenty microservices and each of them starts its own instance of nginx, you now have twenty sets of nginx logs (access.log/error.log) to track across twenty containers.
If you follow a "microservices" architecture, you will add more and more containers over time. In such an ecosystem, running nginx as a separate Docker process and linking the microservices to it makes it easier to grow to meet your expanding requirements.
Service Discovery Note
If the containers run on the same host, then linking all the containers is easy. If the containers run across multiple hosts using Kubernetes or Docker Swarm, the situation becomes a little more complicated, since you (or your cluster infrastructure) need to resolve a DNS address to your nginx instance, and the containers need to be able to "find" each other; this adds some conceptual overhead. Kubernetes helps you achieve this by grouping containers into pods, defining services, and so on.
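As a rough sketch of how Kubernetes handles this, a Service gives a stable, cluster-wide DNS name to a set of pods selected by label. The names and ports here are hypothetical, chosen to match the Flask app above:

```yaml
# Hypothetical Kubernetes Service: pods carrying the "app: flask-app"
# label become reachable cluster-wide at the DNS name "flask-app".
apiVersion: v1
kind: Service
metadata:
  name: flask-app
spec:
  selector:
    app: flask-app
  ports:
    - port: 8090        # port the Service exposes
      targetPort: 8090  # port the pod's container listens on
```

An nginx pod's configuration can then refer to `flask-app:8090` regardless of which host the application pods land on.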