Concurrent Iteration of IP Addresses in Bash

I am dealing with a large private /8 network and need to list all web servers that are listening on port 443 and report a specific version in their HTTP response headers.

At first I thought of running nmap with connect scans and grepping through the output files myself, but that turned out to throw a lot of false positives, with nmap reporting ports as "filtered" or "open" (using a connect scan: nmap -sT -sV -Pn -n -oA foo 10.0.0.0/8 -p 443).

So now I am thinking the script should be something with bash and curl - the pseudocode would look like this:

    for each IP in 10.0.0.0/8 do:
        curl --head https://{IP}:443 | grep -iE "(Server\:\ Target)" > {IP}_info.txt
    done

Since I am not familiar with bash, I am not sure how to get the script right - I would have to:

  • iterate through all IP addresses
  • make sure that only X threads are running at a time (a rough sketch of what I mean follows this list)
  • ideally trim the output so that only the IP of each matching host is recorded in one file
  • ideally make sure that only the appropriate server versions are matched.
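
Here is a rough, untested sketch of the kind of thing I have in mind, using xargs -P as the thread limiter (the "Server: Target" pattern and the limit of 50 are placeholders):

    # Generate every 10.a.b.c address, check 50 at a time, and append
    # only the IPs whose Server header matches the wanted version.
    for a in {0..255}; do for b in {0..255}; do for c in {0..255}; do
        echo "10.${a}.${b}.${c}"
    done; done; done |
    xargs -P 50 -I{} sh -c \
        'curl -sk --head -m 2 "https://{}:443" | grep -qiE "Server: Target" && echo {}' \
        >> matches.txt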

Any suggestion or indication of direction is much appreciated.

2 answers

Small scale - iteration

For a smaller range of IP addresses, it is probably fine to just iterate:

    for ip in 192.168.1.{1..10}; do ... done

As indicated in this similar question.
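
Fleshed out with the curl check from the question, a sequential small-scale version might look like this ("Server: Target" again stands in for the real version header; -k and the 2-second cap are assumptions about an internal network with self-signed certificates):

    # Probe each host in turn and keep only the IPs of matching servers.
    for ip in 192.168.1.{1..10}; do
        if curl -sk --head -m 2 "https://${ip}:443" | grep -qiE "Server: Target"; then
            echo "${ip}" >> matches.txt
        fi
    done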


Large scale - parallel!

Given that your problem is with a huge range of IP addresses, you should probably consider a different approach.

This one calls for GNU parallel.

Iterating over a large range of IP addresses in bash with GNU parallel requires splitting the logic into multiple files (for parallel to invoke).

ip2int

    #!/bin/bash
    set -e

    # Convert a dotted-quad IPv4 address to its integer representation.
    function ip_to_int() {
        local IP="$1"
        local A=$(echo $IP | cut -d. -f1)
        local B=$(echo $IP | cut -d. -f2)
        local C=$(echo $IP | cut -d. -f3)
        local D=$(echo $IP | cut -d. -f4)
        local INT

        INT=$(expr 256 "*" 256 "*" 256 "*" $A)
        INT=$(expr 256 "*" 256 "*" $B + $INT)
        INT=$(expr 256 "*" $C + $INT)
        INT=$(expr $D + $INT)

        echo $INT
    }

    # Convert an integer back to dotted-quad form.
    function int_to_ip() {
        local INT="$1"

        local D=$(expr $INT % 256)
        local C=$(expr '(' $INT - $D ')' / 256 % 256)
        local B=$(expr '(' $INT - $C - $D ')' / 65536 % 256)
        local A=$(expr '(' $INT - $B - $C - $D ')' / 16777216 % 256)

        echo "$A.$B.$C.$D"
    }
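
A quick round-trip check of the two helpers (assuming the file above is saved as ip2int in the current directory):

    $ source ip2int
    $ ip_to_int 10.0.0.0
    167772160
    $ int_to_ip 167772160
    10.0.0.0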



scan_ip

    #!/bin/bash
    set -e
    source ip2int

    if [[ $# -ne 1 ]]; then
        echo "Usage: $(basename "$0") ip_address_number"
        exit 1
    fi

    CONNECT_TIMEOUT=2 # in seconds
    IP_ADDRESS="$(int_to_ip ${1})"

    set +e
    data=$(curl --head -vs -m ${CONNECT_TIMEOUT} https://${IP_ADDRESS}:443 2>&1)
    exit_code="$?"
    data=$(echo -e "${data}" | grep "Server: ") # wasn't sure what you are looking for in your servers
    set -e

    if [[ ${exit_code} -eq 0 ]]; then
        if [[ -n "${data}" ]]; then
            echo "${IP_ADDRESS} - ${data}"
        else
            echo "${IP_ADDRESS} - Got empty data for server!"
        fi
    else
        echo "${IP_ADDRESS} - no server."
    fi
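
Called directly with the integer form of an address (167772161 is 10.0.0.1, per ip_to_int above), it prints one of the script's three status lines, e.g. for a host that does not answer:

    $ ./scan_ip 167772161
    10.0.0.1 - no server.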



scan_range

    #!/bin/bash
    set -e
    source ip2int

    START_ADDRESS="10.0.0.0"
    NUM_OF_ADDRESSES="16777216" # 256 * 256 * 256

    start_address_num=$(ip_to_int ${START_ADDRESS})
    end_address_num=$(( start_address_num + NUM_OF_ADDRESSES ))

    seq ${start_address_num} ${end_address_num} | parallel -P0 ./scan_ip

    # This parallel call does the same as:
    #
    #     for ip_num in $(seq ${start_address_num} ${end_address_num}); do
    #         ./scan_ip ${ip_num}
    #     done
    #
    # only a LOT faster!
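
A minimal way to kick this off, assuming all three files sit side by side in the current directory:

    chmod +x scan_ip scan_range
    ./scan_range > results.txt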


Improving the iterative approach:

The naive loop (estimated to take around 200 days for 256 * 256 * 256 addresses) was brought down to about a day this way, according to @skrskrskr.


In short:

    mycurl() {
        curl --head https://${1}:443 | grep -iE "(Server\:\ Target)" > ${1}_info.txt
    }
    export -f mycurl

    parallel -j0 --tag mycurl {1}.{2}.{3}.{4} ::: {10..10} ::: {0..255} ::: {0..255} ::: {0..255}
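
One caveat: on a /8 most addresses will not answer, and curl's default timeout will hold those job slots for a long time. A capped variant of mycurl (the 2-second limit and -k for self-signed certificates are assumptions about the environment) could be:

    mycurl() {
        # -s: no progress output, -k: accept self-signed certs, -m 2: give up after 2s
        curl -sk --head -m 2 https://${1}:443 | grep -iE "(Server: Target)" > ${1}_info.txt
    }
    export -f mycurl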

A bit different, using --tag instead of many _info.txt files:

    parallel -j0 --tag curl --head https://{1}.{2}.{3}.{4}:443 \
        ::: {10..10} ::: {0..255} ::: {0..255} ::: {0..255} | \
        grep -iE "(Server\:\ Target)" > info.txt

Fan out to run more than 500 in parallel:

    parallel echo {1}.{2}.{3}.{4} ::: {10..10} ::: {0..255} ::: {0..255} ::: {0..255} | \
        parallel -j100 --pipe -N1000 --load 100% --delay 1 \
            parallel -j250 --tag -I ,,,, curl --head https://,,,,:443 | \
        grep -iE "(Server\:\ Target)" > info.txt

This can spawn up to 100 * 250 jobs, but it will try to find the optimal number of jobs at which none of the CPUs sits idle. On my 8-core system that is 7500. Make sure you have enough RAM to run the potential maximum (25000 in this case).


Source: https://habr.com/ru/post/974075/
