To deploy across multiple hot instances behind a load balancer such as nginx, I like to script the deployment with a tool like Fabric:
- Fabric connects you to server 1
- Shut down the web server
- Deploy the changes, either via VCS or by pushing a tarball of the new application
- Start the web server
- GOTO 1 and connect to the next server.
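The loop above can be sketched in Python. In practice a tool like Fabric would execute each command over SSH; here the service name, deploy path, and the `run` callback are hypothetical placeholders so the rolling structure is visible on its own.

```python
def deploy_commands(host):
    """Return the ordered shell commands the deploy runs on one host.

    Service name and deploy path are placeholders, not real values.
    """
    return [
        "systemctl stop myapp",                 # take this node out of rotation
        "cd /srv/myapp && git pull --ff-only",  # deploy the changes via VCS
        "systemctl start myapp",                # bring the node back up
    ]

def rolling_deploy(hosts, run):
    """Deploy to each host in turn; run(host, cmd) executes one command.

    Because only one host is down at a time, the pool as a whole
    stays online throughout the deploy.
    """
    for host in hosts:                  # GOTO 1: move on to the next server
        for cmd in deploy_commands(host):
            run(host, cmd)
```

With Fabric itself, `run` would wrap something like `Connection(host).sudo(cmd)`, but any SSH executor fits the same shape.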
This way you are never offline: nginx notices a backend is down when it tries to round-robin a request to it, moves on to the next one, and as soon as the node/instance comes back up it goes back into production.
EDIT:
You can use the ip_hash directive in nginx to ensure that all requests from the same IP address go to the same server for the length of the session:
This directive causes requests to be distributed between upstream servers based on the client's IP address. The key for the hash is the class-C network address of the client. This method guarantees that a client's requests are always sent to the same server; only if that server is considered inoperative will the client's requests be transferred to another server. This makes it highly likely that clients will always connect to the same server.
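A minimal sketch of an upstream block using ip_hash; the hostnames are placeholders, not real servers:

```nginx
upstream backend {
    ip_hash;                      # pin each client IP to one backend
    server app1.example.com;
    server app2.example.com;
}

server {
    listen 80;
    location / {
        proxy_pass http://backend;
    }
}
```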
This means that once your web server is updated and a client has been moved to a new instance, all connections for that session will continue to be routed to the same server.
It does leave you open to one scenario:

- The client connects to the site and is served by server 1
- Server 1 is updated before the client finishes whatever they were doing
- The client is potentially left in an inconsistent state
This scenario raises the question: are you removing things from your API/site that could leave the client in an inconsistent state? If all you are doing is, for example, updating UI elements or adding pages, and you are not changing any front-end APIs, you should not have any problems. If you remove API functions that in-flight clients still depend on, you may.