Before I begin, let me clear up a few misconceptions and define some terminology for both new and old users. First, Docker images are, more or less, snapshots of container configurations. Everything from the file system to the network configuration is contained in the image and can be used to quickly create new instances (containers) of that image.
Containers are running instances of a particular image, and that is where all the magic happens. Docker containers can be thought of as tiny virtual machines, but unlike virtual machines, system resources are shared with the host, and they have several other capabilities that VMs do not. You can find more information about this in another article.
Creating an image is done either by saving a container ( docker commit *container* *repoTag* ), or by building from a Dockerfile , which is an automated build script that applies changes as if you were making them to the container yourself. It also provides the end user with all the commands necessary to run your application.
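As a rough sketch of the two routes (the names mycontainer , myrepo/myapp , python:3.12-slim and app.py are placeholders I picked for illustration, not anything from a real project):

```
# Route 1: snapshot an existing container as a new image
docker commit mycontainer myrepo/myapp:v1

# Route 2: describe the same result in a Dockerfile and build it
cat > Dockerfile <<'EOF'
FROM python:3.12-slim
WORKDIR /app
COPY app.py .
CMD ["python", "app.py"]
EOF
docker build -t myrepo/myapp:v1 .
```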
Reduce build time ... of my Docker images
Correct me if I'm wrong, but it looks like you are trying to create a new image for each new container. Docker images are needed only to spin up containers. Yes, it takes some time to build them, especially with Docker, but once they are built it takes a trivial amount of time to deploy a container with the application you actually need. Again, a Docker image is a saved state of a previous container configuration, and loading a saved state does not, and should not, take much time, so you really should not worry about Dockerfile build time.
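To make that concrete, the build is the slow, one-time step, while starting containers from the resulting image is nearly instant (the image and container names here are placeholders for the example):

```
# Build once (the slow part: layers, dependencies, etc.)
docker build -t myrepo/myapp:v1 .

# Start as many containers from that image as you like (the fast part)
docker run -d --name app1 myrepo/myapp:v1
docker run -d --name app2 myrepo/myapp:v1
```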
~~~~~~~~~~~~~~~~~~~~~~~~~~~~
That said, reducing Dockerfile build time and the size of the final image remains a relevant concern, and resorting to automatic dependency resolution is a common approach. In fact, I asked a similar question almost 2 years ago, so it may contain some information that can help with this. But...
To reduce the build time and deployment time of my Docker images, I need to minimize the size of the build context sent for these images.
To which Taco, the person who answered my previous question, would reply:
Docker is not going to offer you painless builds. Docker does not know what you want.
Yes, of course, it would be less trouble if Docker knew what you wanted from the get-go, but the fact remains: you need to say exactly what you want if you are aiming for the best size and the best build time. However, there is more than one way to get the best build time and/or build size.
- One frankly obvious approach, as mentioned by Andreas Wederbrand in that same post, is to use the application logs from a previous run to check what it does or does not need. Suppose you built one of your project applications by throwing every possible dependency into it.
You can systematically strip out all dependencies, run the application,
check for failures in your logs, add a dependency back, check the output
difference. If the output is the same, remove that dependency;
otherwise, keep the dependency.
If I were to write this procedure into the Dockerfile, it might look something like this, assuming the container is built from a Linux base image:
```
#ASSUMING LINUX CONTAINER!
...
WORKDIR path/to/place/project
RUN mkdir dependencyTemp
COPY path/to/project/and/dependencies/ .
#Next part is written in pseudo code for the time being
RUN move all dependencies to dependencyTemp \
    && run app and store state and logs \
    && while [ "$appState" != "running" ]; do \
           add a dependency back to the folder && run app and store state and logs; \
           if [ "$logsOriginal" == "$logsNew" ]; then \
               remove that dependency from the folder; \
           else \
               keep the dependency && logsOriginal=$logsNew; \
           fi; \
       done
```
However, this is terribly inefficient, since you are repeatedly starting and stopping your application inside the build just to discover which dependencies it needs, resulting in an awfully long build time. True, it somewhat hides the problem of finding the dependencies yourself and does reduce the size a bit, but it will not work 100% of the time, and it would probably take less time to work out the required dependencies yourself while developing the code than to rely on this workaround.
- Another solution/alternative, although more complex, is to link containers over a network . Networking containers has remained a challenge for me, but its simplicity may be just what you want here. Say you deploy 3 containers: 2 project containers and one dependency container. Over the network, each project container can reach the dependency container and pull in all the dependencies it needs, much like your current setup. Unlike your setup, however, the dependencies are not baked into the application images, which means your other applications can be built with minimal size and time.
However, if the dependency container goes down, the other applications go down with it, which may not make for a stable system in the long run. In addition, you will have to stop and restart each container every time you need to add a new dependency or project (see the rough sketch after this list).
- Finally, if your containers will be run locally, you can take a look at volumes . Volumes are a great way to mount file systems into running containers, so applications in containers can reference files that were not explicitly copied in. This makes for a more elegant setup, since all dependencies can legitimately be "shared" without being explicitly included.
Since the mount is live, adding dependencies and files updates them for all of your applications at the same time, as an added bonus. However, volumes do not work very well when you plan to scale your projects beyond your local system, and they are subject to local interference.
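To make the last two ideas a bit more concrete, here is a rough sketch using plain Docker CLI commands; the network, volume, image, and path names are placeholders of my own, not anything from the setup in question:

```
# Networked containers (sketch): a shared network so app containers can reach a dependency container
docker network create appnet
docker run -d --name deps --network appnet myrepo/dependency-service:v1
docker run -d --name web1 --network appnet myrepo/web1:v1   # can reach the dependency container at hostname "deps"
docker run -d --name web2 --network appnet myrepo/web2:v1

# Volumes (sketch): a shared named volume mounted into each container instead of copying dependencies in
docker volume create shared-deps
docker run -d --name app1 -v shared-deps:/opt/deps myrepo/app1:v1
docker run -d --name app2 -v shared-deps:/opt/deps myrepo/app2:v1
```

This is only an outline: the dependency container in the first half would still need to actually serve the dependencies somehow (a package mirror, a shared service, etc.), and how it does so is left unspecified here.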
~~~~~~~~~~~~~~~~~~~
The bottom line is that Docker cannot automatically resolve dependencies for you, and the workarounds are complicated and/or time-consuming enough that it would be much faster to figure out and specify the dependencies yourself. But if you want to go out and develop such a strategy yourself, go right ahead.