I want to use caching and image layer splitting in Docker to save bandwidth, disk space, and time.
Say:
- I have a Docker image for a web application, deployed to multiple Docker hosts.
- The Docker image contains the source code of my web application.
- I've worked on the code and now have a new version of it.
How can I automate the creation of a new Docker layer on top of the last image, containing only the fix?
My goal is that Docker hosts which have already pulled the previous image only need to download a small bugfix layer to get the new one.
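To make the idea concrete, here is a minimal sketch of the layering I have in mind (the image names `mybase` and `myapp` and the paths are just placeholders): a base image holds the runtime environment and changes rarely, while a child Dockerfile adds only the application code, so each code update produces one small layer:

```
# base/Dockerfile -- built once, changes rarely
FROM debian:wheezy
RUN apt-get update && apt-get install -y python
# runtime dependencies only; no git or other build tooling here

# app/Dockerfile -- rebuilt on every code change
FROM mybase
ADD src /srv/app
CMD ["python", "/srv/app/server.py"]
```

Built with something like `docker build -t mybase base/` and then `docker build -t myapp app/`, hosts that already have `mybase` and the unchanged layers of `myapp` should only need to download the layer created by the `ADD` line.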
Here are my current thoughts on this:
- Most likely I will end up using `docker commit` to save the update as a new layer on top of the image (a sketch follows after this list). But then how do I access the contents of the image in order to apply my changes?
- And even then, how would I import my changes without cluttering the original image with tools (git, shell scripts) that have nothing to do with serving a web application?
- I looked into using volumes to share the code with another container that would take care of the upgrade. But volumes are not included when you commit a container.
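For completeness, here is roughly what the `docker commit` route mentioned above would look like as a shell sketch (the image name `myapp` and the inline `sed` command are placeholders for a real update step):

```
# run a short-lived container that applies the fix inside the image's filesystem
docker run --name upgrade myapp sh -c 'sed -i "s/bug/fix/" /srv/app/server.py'
# snapshot the filesystem changes of that container as a new image layer
docker commit upgrade myapp
docker rm upgrade
# only the new layer should need to be transferred on the next push/pull
docker push myapp
```

This avoids shipping git inside the image, but the update command itself still has to carry or fetch the new code somehow, which is exactly the part I am unsure about.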
Thanks for any insight into how to do this!
EDIT: Using multiple Dockerfiles seems like another way of doing this; thanks to http://jpetazzo.imtqy.com/2013/12/01/docker-python-pip-requirements/ for addressing a similar problem. It seems I will need to generate my Dockerfiles on the fly.
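As a sketch of what generating a Dockerfile on the fly could look like (the script name, image names, and paths are hypothetical):

```
#!/bin/sh
# build-update.sh -- layer the current source tree on top of the
# previous image by generating a one-off Dockerfile, then build and push.
set -e

PREV=myapp:v1          # image the hosts already have
NEXT=myapp:v2          # image adding only the code fix on top

BUILD_DIR=$(mktemp -d)
cp -r src "$BUILD_DIR/src"

# generate the Dockerfile for this particular update
cat > "$BUILD_DIR/Dockerfile" <<EOF
FROM $PREV
ADD src /srv/app
EOF

docker build -t "$NEXT" "$BUILD_DIR"
docker push "$NEXT"
rm -rf "$BUILD_DIR"
```

One thing I notice with this approach: since every update builds FROM the previous image, the layer chain keeps growing, so I would probably have to rebuild from the base image from time to time.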