How can I automate deployment through multiple ssh firewalls (using PW auth)?

I am stuck in a slightly annoying situation.

There is a chain of machines between my desktop and the production servers. Something like this:

desktop -> firewall 1 -> firewall 2 -> prod_box 1 -> prod_box 2 -> ... 

I am looking for a way to automate deployment to the prod boxes via ssh.

I know that in general there are a number of solutions, but my limitations are:

  • Changes to the firewalls are not allowed
  • No configuration changes allowed on the prod boxes (content changes only)
  • Firewall 1 has a local account for me
  • Firewall 2 and the prod boxes are accessible as root
  • Port 22 is the only port open between each link

So, in the general case, the sequence of commands that I execute for deployment is:

    scp archive.tar user@firewall1:archive.tar
    ssh user@firewall1
    scp archive.tar root@firewall2:/tmp/archive.tar
    ssh root@firewall2
    scp /tmp/archive.tar root@prod1:/tmp/archive.tar
    ssh root@prod1
    cd /var/www/
    tar xvf /tmp/archive.tar

In reality it is a bit more complicated than this, but that is the gist of what needs to happen.

I have put my ssh key into /home/user/.ssh/authorized_keys on firewall1, so that hop is not a problem.

However, I cannot do the same for firewall2 or the prod boxes.

It would be great if I could run the commands above from a shell script locally, enter my passwords 4 times, and be done with it. Unfortunately, I cannot figure out how to do this.

I need to somehow chain the ssh commands. I spent all day trying to do this with Python and eventually gave up, because the ssh libraries did not seem to support password-prompt authentication.

What can I do here?

There must be some kind of library that I can use to:

  • log in via ssh using either a key file or a dynamically entered password
  • run remote shell commands through a chain of ssh tunnels

I'm not quite sure how to tag this question, so I have just left it as ssh, deployment for now.

NB. It would be great to use ssh tunnels and a deployment tool to push these changes out, but I would still have to manually log in to each box to set up the tunnels, and it would not work anyway because of the port restrictions.
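One possibility worth sketching here (untested against this exact setup, and assuming OpenSSH 5.4 or newer on the desktop with AllowTcpForwarding left at its default on the firewalls) is OpenSSH's ProxyCommand chaining. It rides entirely over the existing port-22 connections, needs no extra ports or configuration on the intermediate boxes, and prompts for each hop's password in turn:

    # ~/.ssh/config on the desktop (host aliases are illustrative)
    Host firewall1
        HostName firewall1
        User user

    Host firewall2
        HostName firewall2
        User root
        ProxyCommand ssh -W %h:%p firewall1

    Host prod1
        HostName prod1
        User root
        ProxyCommand ssh -W %h:%p firewall2

With that in place, each command below walks the whole chain, asking for the firewall2 and prod1 passwords as it goes (firewall1 uses the key already installed there):

    scp archive.tar prod1:/tmp/archive.tar
    ssh prod1 "cd /var/www && tar xvf /tmp/archive.tar"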

2 answers

I am working on Net::OpenSSH::Gateway, an extension for my other Perl module Net::OpenSSH, which does just that.

For instance:

    use Net::OpenSSH;
    use Net::OpenSSH::Gateway;

    my $gateway = Net::OpenSSH::Gateway->find_gateway(
        proxies => ['ssh://user@firewall1', 'ssh://password:root@firewall2'],
        backend => 'perl');

    for my $host (@prod_hosts) {
        my $ssh = Net::OpenSSH->new($host, gateway => $gateway);
        if ($ssh->error) {
            warn "unable to connect to $host\n";
            next;
        }
        $ssh->scp_put($file_path, $destination)
            or warn "scp for $host failed\n";
    }

It requires Perl to be installed on both firewalls, but it does not need write permissions or any additional software installed there.
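As a follow-up sketch of my own (not part of the original answer): once connected, the remote unpack step from the question could be run through the same Net::OpenSSH object, for example:

    # Runs the extraction on the prod box over the already established connection;
    # the /tmp path matches the scp destination used in the question.
    $ssh->system('cd /var/www && tar xvf /tmp/archive.tar')
        or warn "remote tar failed on $host: " . $ssh->error . "\n";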


Unfortunately, this cannot be done as a single shell script. I tried, but ssh requires an interactive terminal to negotiate passwords, which you won't get when chaining ssh commands. You could do it with passwordless keys, but since that is very insecure and you cannot do it here anyway, it doesn't matter.

The basic idea is that each server sends a bash script to the next one, which is then run there and sends the next script on (and so on) until the last hop is reached, which performs the deployment. However, since an interactive terminal is needed at each stage, you will still have to manually follow the payload up the chain, running each script as you go, just as you do now, but with far less typing.
Obviously you will need to tweak them a bit, but try these scripts:

script1.sh

    #!/bin/bash
    user=doug
    firewall1=firewall_1

    # Minimise password entries across the board.
    tar cf payload1.tar script3.sh archive.tar
    tar cf payload2.tar script2.sh payload1.tar

    scp payload2.tar ${user}@${firewall1}:payload2.tar
    ssh ${user}@${firewall1} "tar xf payload2.tar; chmod +x script2.sh"

    echo "Now connect to ${firewall1} and run ./script2.sh"

script2.sh

    #!/bin/bash
    user=root
    firewall2=firewall_2

    # Minimise password entries
    scp payload1.tar ${user}@${firewall2}:/tmp/payload1.tar
    ssh ${user}@${firewall2} "cd /tmp; tar xf payload1.tar; chmod +x script3.sh"

    echo "Now connect to ${firewall2} and run /tmp/script3.sh"

script3.sh

    #!/bin/bash
    user=root
    hosts="prod1 prod2 prod3 prod4"

    # The echo prefix leaves this as a dry run so the commands can be reviewed;
    # remove "echo" to actually execute them.
    for host in $hosts
    do
        echo scp archive.tar ${user}@${host}:/tmp/archive.tar
        echo ssh ${user}@${host} "cd /var/www; tar xvf /tmp/archive.tar"
    done
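To make the flow concrete, here is the manual sequence the three scripts imply (a sketch of mine, using the placeholder names from the scripts above):

    # On the desktop
    ./script1.sh                 # builds the payloads and copies them to firewall 1
    ssh doug@firewall_1

    # On firewall 1
    ./script2.sh                 # pushes payload1.tar on to firewall 2
    ssh root@firewall_2

    # On firewall 2
    /tmp/script3.sh              # fans archive.tar out to the prod boxes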

It is 3 password entries per firewall, which is a little annoying, but such is life.
Does that work for you?


Source: https://habr.com/ru/post/1386244/

