How should I deploy a full (network) application using Docker, if each process should be a container?

I am trying to understand how Docker should be used.

It is not clear whether I should put everything I need into a single Dockerfile. I have read people saying that, at the moment, the best practice is one container per process; for example, a web server, a database, and a language interpreter would make three containers.

But how do I tie all these containers together? Should that responsibility belong to Docker itself, or do I need to use something else? For starters, I could write a simple bash script that sets up and starts all the containers I need. Is that the way to do it?

Another question (maybe I should open a separate thread for this): what is the most common practice — use the default registry for `docker push`, or host your own?

1 answer

First, your second question. A good reason to use a private repository is that your images are, well... private. The most common practice, I think, is that people who do not have a private repository use the public index simply because it is easy. If you want to open-source something, by all means use the public index. But if you have a private project, it's time to set up a private index.
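As a sketch, hosting your own index can look like this. It assumes the official `registry` image; the image name `myapp` and the port are hypothetical and should be adjusted to your project:

```shell
#!/bin/sh
# Where the private registry will listen; localhost is trusted by default.
REGISTRY="localhost:5000"
IMAGE="myapp"   # hypothetical image name

push_private() {
    # Start a local registry using the official "registry" image.
    docker run -d -p 5000:5000 --name registry registry
    # Give the registry a moment to come up before pushing.
    sleep 2
    # Retag the image so the registry host is part of its name, then push.
    docker tag "$IMAGE" "$REGISTRY/$IMAGE"
    docker push "$REGISTRY/$IMAGE"
}

# Only attempt this when Docker is actually available on the machine.
if command -v docker >/dev/null 2>&1; then
    push_private
fi
```

Pulling from that registry later is the mirror image: `docker pull localhost:5000/myapp`.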

Now your first question. One process per container is the recommended approach: each container stays small and focused, and you can upgrade or replace each piece independently. As for tying them together, Docker itself does not (at the moment) bundle multiple containers into a single deployable unit, so the wiring is up to you: start the containers in the right order and connect them over the network (for example, the web container talks to the database container's exposed port). Me: I use a bash script that starts everything, and it works fine.
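The bash-script approach mentioned above can be sketched like this. All image and container names are hypothetical placeholders; `--link` makes the database reachable from the web container under the hostname `db`:

```shell
#!/bin/sh
# Hypothetical image and container names -- adjust to your project.
DB_IMAGE="postgres"
DB_NAME="app_db"
WEB_IMAGE="myorg/webapp"   # hypothetical application image
WEB_NAME="app_web"

stop_all() {
    # Remove any leftovers from a previous run (ignore errors).
    docker rm -f "$WEB_NAME" "$DB_NAME" 2>/dev/null
}

start_all() {
    # Start the database first, so the web container can link to it.
    docker run -d --name "$DB_NAME" "$DB_IMAGE"
    # Link the db container into the web container as hostname "db",
    # and publish the web server's port 80 on the host.
    docker run -d --name "$WEB_NAME" --link "$DB_NAME":db -p 80:80 "$WEB_IMAGE"
}

# Only run when Docker is actually available on the machine.
if command -v docker >/dev/null 2>&1; then
    stop_all
    start_all
fi
```

Tearing the application down is the same script in reverse: `stop_all` removes both containers in one go.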



Source: https://habr.com/ru/post/1531873/
