Docker application with Elastic Beanstalk not updating after deployment

I have a Dockerfile / Elastic Beanstalk setup in a git repository that pulls the tarball of the current version of the application from S3 and runs it. This works great on first deployment: the Docker container is created, and the application starts correctly. The problem occurs after I make changes to the application, re-upload the tarball to S3, and run eb deploy .

    $ eb deploy
    INFO: Environment update is starting.
    INFO: Deploying new version to instance(s).
    INFO: Successfully built aws_beanstalk/staging-app
    INFO: Successfully pulled yadayada/blahblah:latest
    INFO: Docker container 06608fa37b2c is running aws_beanstalk/current-app.
    INFO: New application version was deployed to running EC2 instances.
    INFO: Environment update completed successfully.

But the application is not updated on *.elasticbeanstalk.com . I assume that since the Dockerfile hasn't changed, Docker does not rebuild the container (and so never pulls the latest application archive). I would like to be able to force a rebuild, but the eb tool does not seem to have such an option. I can force a rebuild from the web console, but obviously that is not suitable for automation. I commit every change to git first, and I was hoping eb would use that to know a rebuild is needed, but it does not seem to make any difference. Am I using Docker / Beanstalk incorrectly? Ideally, I want to commit to git and have Beanstalk automatically redeploy the application.

4 answers

The problem with using Docker for CI like this is that the image is not rebuilt if the Dockerfile does not change. So anything that must be refreshed on every deployment has to happen in a startup wrapper script rather than in the Dockerfile . I moved the part that downloads the application tarball into a script that the Dockerfile installs in the container. When the container starts, the tarball is downloaded and unpacked, and only then does the real application run. This works, and redeployment now behaves as expected. It does complicate the process a bit, though, and leads me to believe that using Docker with EB for CI is something of a hack.
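A minimal sketch of that wrapper approach (the bucket name, tarball key, and paths are placeholders, not from the original setup): the Dockerfile copies this script into the image and uses it as the ENTRYPOINT, so the tarball is fetched at container start rather than at image build time, where Docker's layer cache would skip it.

```shell
# Write the wrapper script that the Dockerfile will COPY into the image.
cat > run.sh <<'EOF'
#!/bin/sh
set -e
# Download and unpack the latest application tarball on every container start.
# Bucket and key are placeholders -- substitute your own.
aws s3 cp s3://my-bucket/app-latest.tar.gz /tmp/app.tar.gz
mkdir -p /app
tar -xzf /tmp/app.tar.gz -C /app
exec /app/start.sh   # hand off to the real application process
EOF
chmod +x run.sh
```

The Dockerfile then only needs `COPY run.sh /run.sh` and `ENTRYPOINT ["/run.sh"]`, and the image itself never has to change between deployments.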


I wonder if you can try using user data when defining your instances in Beanstalk? Something like this could fire at the end of boot:

    #!/bin/bash
    cd /app/dir/home
    sudo docker pull username/container
    # ... other things you may need to do ...

You can read more about user data scripts and executables here: http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/user-data.html


You should probably read this first to better understand how a local site should be set up in a local git repository and how to push it to Elastic Beanstalk to make the site live.

Locate or create a folder in the root of your site called .elasticbeanstalk.
Inside this folder, we'll create two files:

config

    [global]
    ApplicationName=YourApplicationNameFromAWSConsole
    AwsCredentialFile=.elasticbeanstalk/aws_credentials
    DevToolsEndpoint=git.elasticbeanstalk.us-east-1.amazonaws.com
    EnvironmentName=EnvironmentNameFromAWSConsole
    Region=us-east-1

aws_credentials

    [global]
    AWSAccessKeyId=AKIAxxxxxxxxxxxxxxxxxxxxx
    AWSSecretKey=xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx

Instead of eb deploy , use git aws.push to commit and push everything to Elastic Beanstalk:

    git add *.*
    git commit -m "Adding AWS Configs"
    git aws.push

TL;DR: You may be using ContainerDirectory without a HostDirectory, or you may need to update 03build.sh to build with the --no-cache=true flag.

After several hours, I finally fixed this for my use case. I am using CodePipeline with CodeCommit, CodeBuild, and Elastic Beanstalk to create a continuous integration / continuous delivery solution in AWS with Docker. The problem I ran into was that CodeBuild successfully built and pushed new Docker images to AWS ECR (EC2 Container Registry), and Elastic Beanstalk correctly pulled the new image, but the Docker image was never updated on the server.

After digging into the whole process of how Elastic Beanstalk builds the Docker image (there is a really good article, part 1 here and part 2 here , which gives an overview), I found the problem.

To add to the article, Elastic Beanstalk uses a 3-stage process on the deployed EC2 instances to deploy Docker images:

  • Pre
  • Enact
  • Post

Each stage is a sequence of bash scripts, located in /opt/elasticbeanstalk/hooks/appdeploy/ .

The Pre stage contains the following shell scripts:

  • 00clean_dir.sh - Cleans the directory where the source will be downloaded, and deletes old Docker containers and images (i.e. cleanup).
  • 01unzip.sh - Downloads the source from S3 and unzips it.
  • 02loopback-check.sh - Verifies that you do not have a loopback setting.
  • 03build.sh - The magic happens here: the EC2 instance builds your Docker image from your Dockerfile or Dockerrun.aws.json. After much testing, I realized this build script did create my updated image, but I modified it anyway to pass the --no-cache=true flag to docker build.
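For illustration, here is one way to apply that edit to the build hook with a one-line sed (the real file lives at /opt/elasticbeanstalk/hooks/appdeploy/pre/03build.sh on the instance; the stand-in file and the exact build command below are assumptions for demonstration, not the hook's actual contents):

```shell
# Create a local stand-in for the hook so the edit can be demonstrated safely.
echo 'docker build -t aws_beanstalk/staging-app .' > 03build.sh

# Force a full rebuild by inserting --no-cache=true into the build invocation.
sed -i 's/docker build/docker build --no-cache=true/' 03build.sh

cat 03build.sh
```

An .ebextensions container_command could run the same sed against the real hook path on each deploy, so the change survives instance replacement.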

The Enact stage is where the caching issue actually occurred. It consists of:

  • 00run.sh - Runs docker run against the image generated in the Pre stage, based on the environment variables and settings in your Dockerrun.aws.json. This is what caused the caching problem for me.
  • 01flip.sh - Flips from aws-staging to the current app, among other things.

When I launched docker run manually against the image generated by 03build.sh in the Pre stage, I would see the updated changes. However, when the 00run.sh shell script executed, the old version would appear. Inspecting the docker run command, it ran:

    Docker command: docker run -d -v null:/usr/share/nginx/html/ -v /var/log/eb-docker/containers/eb-current-app:/var/log/nginx ca491178d076

The -v null:/usr/share/nginx/html/ mount is what broke it and blocked the updates. This was because my Dockerrun.aws.json file had

 "Volumes": [ { "ContainerDirectory": "/usr/share/nginx/html/" } ], 

with no HostDirectory specified. As a result, null was mounted over the container directory and any changes I deployed were never picked up.
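If you do need the volume, the alternative fix is to give the mount an explicit HostDirectory so Elastic Beanstalk no longer substitutes null. A sketch of the corrected fragment (the host path here is illustrative, not from my setup):

```
"Volumes": [
  {
    "HostDirectory": "/var/app/current/html",
    "ContainerDirectory": "/usr/share/nginx/html/"
  }
],
```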

For my solution, I just deleted the "Volumes" array, since all my files are contained in the Docker image that I push to ECR. Note: you may also need to add --no-cache=true to the 03build.sh file.


Source: https://habr.com/ru/post/978489/

