Finding a Safe Way to Deploy PHP Code

How we do things now

We have a file server (using NFS) that backs several web servers, which mount it as the web root. When we deploy our code base, we build a tar.gz archive, copy it to the NFS server, and extract it directly into the web directory.

Problem

During the deployment process, we see some I/O errors, mainly when a requested file cannot be read: Smarty error: unable to read resource: "header.tpl". These errors go away once the deployment is complete, so we assume they happen because extracting directly into a live web directory is not safe. I guess we need something atomic.

My question

How can we safely copy new files into an existing directory (the web server's root directory)?

EDIT

The files we are uncompressing into the web directory are not the only files in that directory. We are adding files to a directory that already contains files, so swapping out the whole directory or using a symbolic link is not an option (that I know of).

+6
5 answers

Here's what I do.

DocumentRoot e.g. /var/www/sites/www.example.com/public_html/:

 cd /var/www/sites/www.example.com/
 svn export http://svn/path/to/tags/1.2.3 1.2.3
 ln -snf 1.2.3 public_html

You can easily adapt this to unpack your .tar.gz into the release directory instead of exporting from svn, and then switch the symbolic link. The important part is that the switch is atomic because it is done via a symbolic link.
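A minimal sketch of the tar.gz variant of this approach (the paths and release name are illustrative; the echo stands in for extracting your archive). Note that GNU `ln -snf` actually unlinks and recreates the link in two steps; for a strictly atomic switch, create a temporary link and rename it over the old one:

```shell
set -e
SITE=$(mktemp -d)          # stand-in for /var/www/sites/www.example.com
RELEASE=1.2.3

# Unpack the new release into its own versioned directory, e.g.:
#   tar -xzf /tmp/site-$RELEASE.tar.gz -C "$SITE/releases/$RELEASE"
mkdir -p "$SITE/releases/$RELEASE"
echo "<?php // release $RELEASE" > "$SITE/releases/$RELEASE/index.php"

# -s: symbolic, -n: don't follow an existing link, -f: replace it.
ln -snf "releases/$RELEASE" "$SITE/public_html"
readlink "$SITE/public_html"
```

Apache keeps serving the old tree until the link flips, so readers never see a half-extracted directory.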

+1

I think rsync is a better choice than scp, since only modified files are transferred. But deploying with a hand-rolled script is awkward for team deployments, and the error reporting is not very human-friendly.

You could consider Capistrano, Magallanes, or Deployer, but those are also scripts. I can recommend trying walle-web, a deployment tool written in PHP with yii2 out of the box. I have been using it at our company for several months; it works smoothly for deploying to test, staging, and production environments.

It depends on bash, rsync, git, and ln under the hood, but the web UI makes it pleasant to work with. Give it a try :)

+1

Why not just keep two directories holding two different versions of the site? When you have finished deploying to site_2, you simply switch your web server configuration (Apache, for example) to point at site_2. For the next release you deploy into site_1 and switch back from site_2 to site_1 the same way.
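A sketch of this blue/green-style switch, with hypothetical directory names; in real Apache the switch step would be editing DocumentRoot in the vhost config and doing a graceful reload, shown here only as comments:

```shell
set -e
ROOT=$(mktemp -d)          # stand-in for /var/www
mkdir -p "$ROOT/site_1" "$ROOT/site_2"
echo "version 1" > "$ROOT/site_1/index.php"   # currently live

# 1. Deploy the new release fully into the inactive directory.
echo "version 2" > "$ROOT/site_2/index.php"

# 2. Switch the web server to the freshly deployed directory, e.g.:
#      sed -i 's|/var/www/site_1|/var/www/site_2|' /etc/apache2/sites-enabled/site.conf
#      apachectl graceful
ACTIVE="site_2"
cat "$ROOT/$ACTIVE/index.php"
```

Requests in flight keep reading the old directory until the reload, so nothing is served half-deployed.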

0

RSync was born to run... er... to do exactly that.

RSync works over local file systems and ssh, is very reliable and fast, and sends/copies only modified files.

It can be configured to delete any files that have been deleted from (or are simply missing in) the source, or it can be configured to leave them alone. You can set up exclusion lists to skip certain files/directories during synchronization.

Here is a link to a tutorial.

Re: atomic - link to another question in SO

0
source

I like the NFS idea. We deploy our code to an NFS server mounted on our front-end servers. We run a shell script whenever we want to release a new version. It uses a symlink named current that points to the latest release, for example:

 /fasmounts/website/current -> /fasmounts/website/releases/2013120301/ 

And the root of the apache document:

 /fasmounts/website/current/public 

(actually the Apache document root is /var/www, which is a symbolic link to /fasmounts/website/current/public)

The shell script updates the current symlink to the new version only AFTER everything has been uploaded correctly.
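A minimal sketch of such a release script, under the directory layout above (paths and the release id are examples; `mv -T` is the GNU coreutils way to get a truly atomic rename of the link):

```shell
set -e
BASE=$(mktemp -d)          # stand-in for /fasmounts/website
RELEASE=2013120301

# Upload/unpack the full release first; only then do we switch.
mkdir -p "$BASE/releases/$RELEASE/public"
echo "<?php // release $RELEASE" > "$BASE/releases/$RELEASE/public/index.php"

# Build the new link under a temporary name, then rename it into place.
# rename(2) is atomic, so "current" always points at a complete release.
ln -s "releases/$RELEASE" "$BASE/current.tmp"
mv -T "$BASE/current.tmp" "$BASE/current"
readlink "$BASE/current"
```

Keeping old release directories around also makes rollback a one-line symlink switch.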

0

Source: https://habr.com/ru/post/903942/
