Docker Containers: Services and Complete Applications

I have been going back and forth on how best to think about and use Docker containers.

From the literature and examples, the idea seems to be that a container should provide a single service, or one part of the stack. For example, a container can run MySQL, or Apache, or Redis, or something else. I can see why this is good and clean, and it makes sense.

In our scenario, we want to host several completely separate web applications (e-commerce stores, WordPress sites, static websites, Node.js applications) on the same server, and we want to use Docker. It therefore seems more reasonable to me for each container to be completely self-contained and run the entire stack itself; for example, each of my possibly several running WordPress containers would have its own LAMP installation.

Applying the one-container-one-service model to this scenario looks very difficult: each application would depend on other containers in the system, which in turn have their own dependencies. And what if you need several versions of a particular service?

While the self-contained approach seems workable, couldn't it be very inefficient? I am no expert on how LXC works, but even though everything is containerized, in the end all of those apache2 and mysqld processes are still running on the same system, with all their associated overhead. Will there be performance problems?

Does anyone have any thoughts?

+6
3 answers

I would prefer to use one container per service. If you put each service into its own image/container, you get some advantages:

  • You can easily compose new stacks, e.g. use Apache instead of Nginx.
  • You can reuse components, e.g. I deploy the same Logstash image alongside every application for logging.
  • You can use predefined services from the Docker index (now called the Docker Hub). If you need a Memcached service, you can just pull the image.
  • You can manage each service individually, e.g. stop or update it. If you want to update an application, you only need to rebuild one image and push or pull only that one image.

Since LXC and Docker seem to be very efficient, I would not worry about running multiple containers; that is what Docker was designed for. And I think you will end up with a reasonable number of them, say around 100 containers, so this should not be a problem.
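To illustrate, here is a rough sketch of the per-service model for one application, using container links (the names, port, and password are just placeholders, and the official mysql and wordpress images from the Docker Hub are assumed):

    # database container for one application, pulled from the Docker Hub
    docker run -d --name app1-db -e MYSQL_ROOT_PASSWORD=secret mysql

    # application container, linked to its database under the alias "mysql"
    docker run -d --name app1-web --link app1-db:mysql -p 8080:80 wordpress

A second application would simply get its own pair of containers, so two apps can even run different MySQL versions side by side.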

+2

I agree with @Thomasleveil, and in addition I want to mention FLOSS Weekly episode 330, where Docker's original author and current CTO makes the same point: Docker is just a building block. Learn about it and use it if it suits your needs. Many people use Docker both ways, one process per container and one application per container, and both approaches have their pros and cons.

But I also want to caution against using Supervisor as the PID 1 process to control multiple processes in a container. If you open supervisord.org, one of the first things you will see is:

Unlike some of these programs, it [Supervisor] is not meant to be run as a substitute for init as "process id 1". Instead, it is meant to be used to control processes related to a project or a customer, and is meant to start like any other program at boot time.

This means that with Supervisor you will have the zombie process problem described by Phusion and by the minit author. In addition, Supervisor manages only foreground processes, because it spawns them as its own children and does not control children of children. So forget about /etc/init.d/mysql start and think about how to start everything in the foreground.
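If you do still go with Supervisor, every managed program has to be declared in foreground mode. A minimal supervisord.conf sketch (the program set and paths are illustrative, not from my setup):

    [supervisord]
    ; keep supervisord itself in the foreground as the container's main process
    nodaemon=true

    [program:mysql]
    ; mysqld_safe stays in the foreground, unlike "/etc/init.d/mysql start"
    command=/usr/bin/mysqld_safe

    [program:nginx]
    ; "daemon off" tells nginx not to fork into the background
    command=/usr/sbin/nginx -g "daemon off;"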

I managed to solve this problem with the aforementioned minit and Monit. minit is necessary because Monit also cannot play the role of PID 1 (though that is planned for 2015, see #176). Monit is good because it lets you express dependencies between supervised services (say, do not start the application until the database is running), can handle daemons as they are, can monitor memory and CPU usage, and has a web interface to see what is going on. Here is the relevant part of the Dockerfile with which I used this approach on Debian Wheezy:

    # installing the rest of dependencies
    RUN apt-get install --no-install-recommends -qy monit
    WORKDIR /etc/monit/conf.d
    ADD webapp.conf ./
    RUN echo "set httpd port 2812 and allow localhost" >> /etc/monit/monitrc
    ADD minit /usr/bin/minit
    RUN mkdir /etc/minit
    RUN echo '#!/bin/bash\n /etc/init.d/monit start; monit start all' \
        > /etc/minit/startup
    RUN echo '#!/bin/bash\n \
        monit stop all; while monit status | grep -q Running; do sleep 1; done; \
        /etc/init.d/monit stop' > /etc/minit/shutdown
    RUN chmod u+x /etc/minit/*
    ENTRYPOINT ["/usr/bin/minit"]
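To try the resulting image (the tag and port mapping are just examples):

    docker build -t webapp .
    docker run -d -p 8080:8080 webapp

Note that because of the allow localhost line above, the Monit web interface on port 2812 is reachable only from inside the container.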

And here is the Monit webapp.conf:

    check process webapp with pidfile /var/run/webapp/webappd.pid
      start program = "/etc/init.d/webapp start"
      stop program = "/etc/init.d/webapp stop"
      if failed host 127.0.0.1 port 8080 for 2 cycles then restart
      if totalmem > 64 MB for 10 cycles then restart
      depends on mysql, nginx
      group server

    check process mysql with pidfile /var/run/mysqld/mysqld.pid
      start program = "/etc/init.d/mysql start"
      stop program = "/etc/init.d/mysql stop"
      group database

    check process nginx with pidfile /var/run/nginx.pid
      start program = "/etc/init.d/nginx start"
      stop program = "/etc/init.d/nginx stop"
      group server
+1

Docker is just a tool; use it in whatever way best suits your needs.

Nothing prevents you from running multiple processes inside a single Docker container. One way to do this is to start the processes with Supervisor, as described in this Docker article.

You can also take a look at Phusion's approach to this use case. They highlight what can go wrong when you run multiple processes in a Docker container, and they provide a Docker image (phusion/baseimage) that helps set things up correctly.
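For instance, here is a minimal sketch on top of phusion/baseimage, which uses my_init as PID 1 and runit to supervise services (the nginx service and the image tag are only an example; check the baseimage documentation for the exact layout):

    FROM phusion/baseimage:0.9.16
    RUN apt-get update && apt-get install -qy nginx

    # register nginx as a runit-supervised service: runit executes /etc/service/<name>/run
    RUN mkdir -p /etc/service/nginx
    RUN printf '#!/bin/sh\nexec /usr/sbin/nginx -g "daemon off;"\n' > /etc/service/nginx/run \
        && chmod +x /etc/service/nginx/run

    # my_init is baseimage's PID 1: it reaps zombies and launches runit
    CMD ["/sbin/my_init"]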

0
