Deploy content to multiple servers (EC2)

I'm working on a cloud-based web application on AWS EC2, and I'm struggling with one problem when it comes to working with multiple servers (all behind an AWS load balancer). With a single server, when I upload the latest files, the changes take effect instantly across the application. But that's not the case with several servers: you have to upload the files to each of them, every time you make a change. That may be fine if you don't update very often, or if you only have one or two servers. But what if you update the system several times a week across ten servers?

What I'm looking for is a way to push changes from our development or test server out to all of our production servers immediately. Ideally, the update would apply to one server at a time (even if it only takes a second or two per server), and the ELB would not send traffic to a server while its files are changing, so no production traffic flowing through the ELB is disrupted. What is the best way to do this? One thought was to use SVN on the dev server, but SVN doesn't really push out to servers; I'm looking for a process where committing an update takes only a few seconds and then it starts applying to the servers on its own. Also, for those of you familiar with AWS: what's the best way to keep an AMI current with the latest updates so that the auto-scaler always launches new instances with the latest software?
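For what it's worth, the one-server-at-a-time idea can be sketched with the AWS CLI's classic ELB commands. This is only a sketch: the instance IDs, the load-balancer name, and the deploy step are placeholders, and the script dry-runs by default (it echoes the CLI calls instead of issuing them).

```shell
#!/bin/sh
# Rolling-update sketch: pull each server out of the ELB, update it, then put
# it back, so the load balancer never routes traffic to a half-updated box.
# AWS defaults to a dry run (echo); set AWS=aws to issue real CLI calls.
AWS="${AWS:-echo aws}"
ELB="${ELB:-my-load-balancer}"   # load-balancer name is a placeholder

update_one() {
  id="$1"
  $AWS elb deregister-instances-from-load-balancer --load-balancer-name "$ELB" --instances "$id"
  echo "updating $id"   # real deploy step would go here, e.g. ssh + git pull / rsync
  $AWS elb register-instances-with-load-balancer --load-balancer-name "$ELB" --instances "$id"
}

for id in i-11111111 i-22222222; do   # placeholder instance IDs
  update_one "$id"
done
```

On the AMI question, a common pattern is to keep the AMI minimal and have each instance pull the latest code at boot time, rather than re-baking the AMI for every release.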

There must be good ways to do this... I can't really picture sites such as Facebook, Google, Apple, Amazon, Twitter, etc. logging in and updating hundreds or thousands of servers manually, one after another, every time they make a change.

Thanks in advance for your help. I hope we can find a solution to this problem. My business partner and I must have done at least 100 Google searches in the last day, mostly without success.

Alex

+6
2 answers

We use scalr.net to manage our web server and load-balancer instances. So far it has worked very well. We have a server farm for each of our environments (2 production farms, staging, sandbox). We have preconfigured roles for the web servers, so it's very easy to spin up new instances and scale when necessary. The web servers pull the code from GitHub when they boot.

We haven't made all the deployment changes we want to yet, but here is basically how we deploy new versions to our production environment:

  • We use phing to update the source code and database and deploy to every web server. We created a task that performs a git pull and runs the database migrations (the dbdeploy phing task). http://www.phing.info/trac/
  • We wrote a shell script that runs phing, and we added it to Scalr as a script. Scalr has a nice scripting interface.

    #!/bin/sh
    cd /var/www
    phing -f /var/www/build.xml -Denvironment=production deploy
  • Scalr can run a script on all instances in a particular farm, so for each release we just push to the master branch on GitHub and execute the script through Scalr.
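For reference, the build.xml that phing reads in a setup like this might look roughly as follows. This is a minimal sketch, not our actual file: the target name, paths, and dbdeploy connection details are all assumptions.

```xml
<!-- Minimal build.xml sketch; paths, DSN, and credentials are placeholders -->
<project name="app" default="deploy" basedir="/var/www">
  <target name="deploy">
    <!-- update the working copy to the latest master -->
    <exec command="git pull origin master" dir="/var/www" checkreturn="true" />
    <!-- generate the outstanding schema changes with the dbdeploy task -->
    <dbdeploy url="mysql:host=localhost;dbname=app"
              userid="deploy" password="secret"
              dir="/var/www/db/deltas"
              outputfile="deploy.sql" />
    <!-- apply the generated delta script -->
    <exec command="mysql app &lt; deploy.sql" dir="/var/www" checkreturn="true" />
  </target>
</project>
```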

We want to create a GitHub hook that deploys automatically when you push to the master branch. Scalr has an API that can execute scripts, so this should be possible.
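The hook idea can be wired up with a server-side git post-receive hook. A minimal sketch follows; the deploy trigger itself is left as a placeholder, since the exact Scalr API call depends on your account and script setup.

```shell
#!/bin/sh
# Sketch of a git post-receive hook: fire the deploy only when master is pushed.
# The actual deploy trigger (Scalr API call / ssh) is a placeholder.
on_push() {
  refname="$1"
  if [ "$refname" = "refs/heads/master" ]; then
    echo "master updated: triggering deploy"
    # placeholder: call Scalr's script-execution API here, or ssh to each
    # web server and run the phing deploy script shown above
  else
    echo "ignoring $refname"
  fi
}

# In a real post-receive hook, git feeds "oldrev newrev refname" lines on stdin:
#   while read old new ref; do on_push "$ref"; done
on_push "refs/heads/master"   # demo invocation
```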

+1

Take a look at KwateeSDCM. It lets you deploy files and software to any number of servers and, if necessary, configure server-specific parameters along the way. There is a post about deploying a web application to multiple Tomcat instances, but the tool is technology-agnostic and will work for PHP as well, as long as you have SSH enabled on your AWS servers.

0

Source: https://habr.com/ru/post/898909/
