Python Deployment for a Distributed Application

We are developing a distributed application in Python. We are about to reorganize some of our system components and deploy them on separate servers, so I want to understand deployment for this kind of application better. We will have several application code servers, several database servers (of different types), and possibly several front-end servers.

My question is: what are some good deployment patterns for distributed applications (in Python or in general)? How can I manage pushing code to several servers (whose IPs should be parameterized in the deployment system), managing static files across several front ends, starting and stopping processes on the servers, and so on? We are looking for an easy-to-use solution; essentially, something that, once set up, stays out of our way and lets us deploy as painlessly as possible.

To clarify: we know there is no single standard solution for this particular application; this question is aimed more at a guide to best practices for the different types and parts of deployment than at one unified solution.

Thank you very much! Any suggestions on this, or other deployment and architecture pointers, would be greatly appreciated.

1 answer

It all depends on your application.

You can:

  • use Puppet to provision the servers,
  • use Fabric to connect to the servers remotely and perform specific tasks (see the sketch after this list),
  • use pip to distribute Python modules (even non-public ones) and install dependencies,
  • use other tools for specific tasks (for example, boto to work with the Amazon Web Services APIs, e.g. to launch a new instance; a sketch follows further below).
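
As an illustration of the Fabric bullet, here is a minimal sketch of a fabfile, assuming Fabric 1.x. The hostnames, user, code directory, and service name are placeholders for your own setup, and it assumes the code is pulled from git and the process is managed by supervisord:

```python
# fabfile.py -- a minimal deployment sketch (Fabric 1.x).
# Hostnames, paths, and the service name below are hypothetical.
from fabric.api import cd, env, run, sudo

# Parameterize the target servers here (or load them from a config file).
env.hosts = ["app1.example.com", "app2.example.com"]
env.user = "deploy"

CODE_DIR = "/srv/myapp"  # where the application code lives on each server

def deploy():
    """Pull the latest code, install dependencies, and restart the app."""
    with cd(CODE_DIR):
        run("git pull origin master")
        run("pip install -r requirements.txt")
    # Assumes the application process is managed by supervisord.
    sudo("supervisorctl restart myapp")
```

Running `fab deploy` then executes the task once per host in `env.hosts`. If some of your modules are non-public, pip's `--index-url` option lets that `pip install` step pull them from a private package index instead of PyPI.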

It's not always that simple, and you will likely need some custom tuning. Just take a look at your system: it is not that "standard", so do not expect it to be handled in a completely standard way.
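
For the AWS bullet above, here is a minimal sketch using the classic boto 2 API to launch a new EC2 instance. The region, AMI ID, key pair, and security group are placeholder values, and it assumes your AWS credentials are already configured (environment variables or a boto config file):

```python
# Launch a new EC2 instance with boto 2 -- all identifiers are placeholders.
import boto.ec2

# Credentials are read from the environment or the boto config file.
conn = boto.ec2.connect_to_region("us-east-1")

reservation = conn.run_instances(
    "ami-12345678",              # hypothetical AMI ID
    instance_type="t1.micro",
    key_name="my-keypair",       # hypothetical key pair name
    security_groups=["my-app"],  # hypothetical security group
)
instance = reservation.instances[0]
print("Launched instance %s" % instance.id)
```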
