How do I edit code in a Docker container during development?

I have code for all my sites under /srv in my containers.

My Dockerfile fetches the code with git and bakes it into the image, which simplifies deployment in production.

But how do I change the code in development? I thought volumes were the solution, for example: -v /docker/mycontainer/srv:/srv . But that mounts over the directory in the container. The first time I run it, the host directory is empty, so the container's directory appears empty too, and everything the Dockerfile put there is hidden.

There are also directories and files inside /srv/myapp that I want to share across different versions of my application, for example /srv/myapp/user-uploads . This is common practice in professional web development.

So what can I do to be able to do all this?

  • change the code in /srv during development
  • share /srv/myapp/user-uploads between different versions
  • let the Dockerfile load the code. Doing git clone or git pull outside of Docker would, in my opinion, defeat the purpose of Docker. Besides, there are things I cannot run on the host, such as database migrations or other application-specific scripts.

Is there any way to do a reverse volume? I mean, the container writing to the host instead of the other way around.

I think one solution could be to copy /srv to /srv.deployment-copy before running the container daemon. Then, when the daemon runs, check whether /srv.deployment-copy exists and, if so, copy everything back to /srv. That way I can use /srv as the volume and still deploy the code through the Dockerfile. I already use aliases for all my docker commands, so automating this would not be a problem. What do you think?
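A minimal sketch of that copy-back idea, assuming a startup script does the restore. All paths below are throwaway demo paths, not the real /srv, and the function name is made up for illustration:

```shell
#!/bin/sh
# deploy_copy <deploy-copy-dir> <live-dir>
# If the deployment copy exists, replace the live directory's
# contents with it, then remove the copy.
deploy_copy() {
    src="$1"; dest="$2"
    if [ -d "$src" ]; then
        mkdir -p "$dest"
        # clear the live dir, including hidden files (errors for empty
        # globs are harmless and suppressed)
        rm -rf "$dest"/* "$dest"/.[!.]* 2>/dev/null
        cp -R "$src/." "$dest/"
        rm -rf "$src"
    fi
}

# Demo with throwaway directories standing in for /srv:
mkdir -p /tmp/demo.deployment-copy
echo "hello" > /tmp/demo.deployment-copy/index.html
deploy_copy /tmp/demo.deployment-copy /tmp/demo
cat /tmp/demo/index.html   # prints: hello
```

In a real setup this would run as (or from) the container's startup command, with the Dockerfile doing the initial copy to the deployment-copy directory.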

+46
docker deployment web-deployment
Apr 03 '14 at 15:56
5 answers

I found that the best way to edit code in development is to set everything up as usual (including cloning your application repository in the Dockerfile), but move all the code in the image to, say, /srv/myapp.deploy.dev . Then run the container with a read-write volume at /srv/myapp and an init.d script that clears the volume and copies the deployed code into it, like this:

 rm -r /srv/myapp/*
 rm -r /srv/myapp/.[!.]*
 cp -r /srv/myapp.deploy.dev/. /srv/myapp
 rm -r /srv/myapp.deploy.dev
+9
May 30 '14 at 12:32

Another approach is to start the container with volumes taken from another container:

Take a look at https://docs.docker.com/userguide/dockervolumes/
Creating and mounting a data volume container

If you have persistent data that you want to share between containers, or want to use from non-persistent containers, it is best to create a named data volume container and then mount the data from it.

Create a new named container with a volume to share:

 $ sudo docker run -d -v /dbdata --name dbdata training/postgres echo Data-only container for postgres 

Then you can use the --volumes-from flag to mount the /dbdata volume in another container:

 $ sudo docker run -d --volumes-from dbdata --name db1 training/postgres 

And another:

 $ sudo docker run -d --volumes-from dbdata --name db2 training/postgres 

Another useful thing we can do with volumes is use them for backup, restore, or migration. We do this by using the --volumes-from flag to create a new container that mounts the volume, like so:

 $ sudo docker run --volumes-from dbdata -v $(pwd):/backup ubuntu tar cvf /backup/backup.tar /dbdata 
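The restore half of that pattern is the matching untar into a fresh container. Since the docker side is just a wrapper, here is a sketch of the underlying tar round-trip using local throwaway directories (the docker command in the comment is illustrative, not verbatim from the docs):

```shell
# Restore side, roughly: run a new container with --volumes-from and
# untar the backup into the volume, along the lines of:
#   docker run --volumes-from dbdata2 -v $(pwd):/backup ubuntu \
#       bash -c "cd /dbdata && tar xf /backup/backup.tar --strip-components 1"
#
# The underlying tar round-trip, demonstrated with local directories:
mkdir -p /tmp/dbdata /tmp/restore
echo "row1" > /tmp/dbdata/table.db

# back up the "volume" directory into a tarball
tar cf /tmp/backup.tar -C /tmp dbdata

# restore its contents into a fresh location, dropping the leading dir
tar xf /tmp/backup.tar -C /tmp/restore --strip-components 1

cat /tmp/restore/table.db   # prints: row1
```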

==============

I think you should not mount a host directory into the container at all. Instead, use volumes with all their capabilities: you can edit the files in a volume from other containers that carry your preferred editors and tools, while the container running your application stays clean, with no development overhead.

Structure:

  • a container for the application data:
     docker run -d -v /data --name data <your-data-image>
  • a container for the application binaries:
     docker run -d --volumes-from data --name app1 <your-app-image>
  • a container for editors and development tools:
     docker run -d --volumes-from data --name editor <your-tools-image>

+10
Nov 13 '14 at 18:31

Note: you cannot mount a container directory out onto a host directory with -v ; it only works the other way around.

I do not think you need to juggle /srv and /srv.deployment-copy. Instead:

  • Use a volume for persistent/shared data: -v /hostdir/user-uploads:/srv/myapp/user-uploads , or use a data volume container . You can think of the latter as a database whose file-system backing lives in a dedicated data-only container, which your other containers are allowed to use.

  • You are right about production deployment: build an image that contains the source code ( git clone at build time), one image per version. There should be no need to edit source code in production.

  • For a development environment, build an image without the source code, or shadow the source directory with a volume if you use the same image for deployment and development. Then git clone the source code locally and share it with the container via -v /hostdir/project/src:/srv/project . Ideally the source is mounted read-only ( :ro at the end), and any temporary or intermediate files are stored elsewhere in the container. I have setup scripts (data migration, rebuilding some index/cache files, etc.) that run when the container starts, before the service starts. So whenever I feel I need a fresh re-init, I just kill the dev container and start it again. Or I leave the old container running and simply start another one.
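A sketch of the kind of entrypoint that last bullet describes, assuming the setup steps run before the service is handed control. Every command here is a placeholder, not this answer's actual scripts:

```shell
#!/bin/sh
# Hypothetical container entrypoint: run one-off init steps,
# then hand off to the real service process.

run_init() {
    # placeholders for e.g. database migrations, cache/index rebuilds
    echo "running migrations"
    echo "rebuilding caches"
}

run_init

# In a real entrypoint you would now replace the shell with the
# service so it becomes PID 1 and receives signals directly:
#   exec "$@"
```

Killing and restarting the dev container then re-runs run_init against the freshly mounted source, giving the "fresh re-init" the answer mentions.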

+5
Apr 04 '14 at 8:28

I found a good way to do this using only git:

 CONTAINER=my_container
 SYNC_REPO=/tmp/my.git
 CODE=/var/www

 # create a bare repo in the container
 docker exec $CONTAINER git init --bare $SYNC_REPO

 # add an executable sync hook that checks out into the code dir in the container
 printf "#!/bin/sh\nGIT_WORK_TREE=$CODE git checkout -f\n" | \
     docker exec -i $CONTAINER bash -c "tee $SYNC_REPO/hooks/post-receive;chmod +x \$_"

 # use a git remote helper so git runs docker exec instead of ssh
 git remote add docker "ext::docker exec -i $CONTAINER sh -c %S% $SYNC_REPO"

 # push updated local code into the container
 git push docker master

This assumes you have a local git repository with your code, and that git is installed in the container. Alternatively, you could use docker run and a data container with a shared volume, with git installed there.
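Here is a docker-free sketch of the same hook mechanism, so you can see what the docker exec calls above are doing. All paths are throwaway demo paths, and the bare repo stands in for the one inside the container:

```shell
set -e
rm -rf /tmp/sync.git /tmp/worktree /tmp/src
mkdir -p /tmp/worktree

# bare "server-side" repo with a post-receive hook that checks the
# pushed code out into a separate work tree
git init -q --bare /tmp/sync.git
git -C /tmp/sync.git symbolic-ref HEAD refs/heads/master
printf '#!/bin/sh\nGIT_WORK_TREE=/tmp/worktree git checkout -f\n' \
    > /tmp/sync.git/hooks/post-receive
chmod +x /tmp/sync.git/hooks/post-receive

# local repo standing in for your development checkout
git init -q /tmp/src
cd /tmp/src
git config user.email dev@example.com
git config user.name dev
echo "v1" > app.txt
git add app.txt
git commit -qm "v1"

# pushing triggers the hook, which "deploys" into /tmp/worktree
git push -q /tmp/sync.git HEAD:master
cat /tmp/worktree/app.txt
```

The ext:: remote in the answer merely swaps the local path for a docker exec pipe; the bare repo and post-receive hook behave exactly as in this local version.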

+5
Apr 21 '17 at 20:17

Assuming git is not the container's entry point: if git is installed in your container, you can ssh into the container and run git clone / git pull there. Because the volume is shared with the host, changes made to the files from inside the container also appear on the host (they are, in fact, the same files).

Here is an explanation of how to quickly ssh into a container.

+1
Apr 03 '14 at


