In short:
mycurl() { curl --head https://${1}:443 | grep -iE "Server: Target" > ${1}_info.txt; }
export -f mycurl
parallel -j0 --tag mycurl {1}.{2}.{3}.{4} ::: {10..10} ::: {0..255} ::: {0..255} ::: {0..255}
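On a range this large most addresses will not answer, so it can help to bound each probe. A sketch of the same function with timeouts added (not part of the original answer; --connect-timeout and --max-time are standard curl options, and the 2 s / 5 s values are arbitrary choices):

mycurl() {
  # Give up quickly on dead hosts; add -k/--insecure if the targets
  # use self-signed certificates (an assumption, not from the original).
  curl --head --connect-timeout 2 --max-time 5 "https://${1}:443" \
    | grep -iE "Server: Target" > "${1}_info.txt"
}
export -f mycurl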
A slightly different approach, using --tag instead of many _info.txt files:
parallel -j0 --tag curl --head https://{1}.{2}.{3}.{4}:443 ::: {10..10} ::: {0..255} ::: {0..255} ::: {0..255} | \
  grep -iE "Server: Target" > info.txt
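For reference, --tag prefixes every line of a job's output with that job's arguments, which is why a single info.txt still records which address produced each match. A minimal offline sketch, with echo standing in for curl:

parallel -j0 --tag echo "Server: Target" ::: 10.0.0.1 10.0.0.2
# Prints tab-separated lines such as:
# 10.0.0.1    Server: Target
# 10.0.0.2    Server: Target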
To fan out and run more than 500 in parallel:
parallel echo {1}.{2}.{3}.{4} ::: {10..10} ::: {0..255} ::: {0..255} ::: {0..255} | \
  parallel -j100 --pipe -N1000 --load 100% --delay 1 \
    parallel -j250 --tag -I ,,,, curl --head https://,,,,:443 | \
  grep -iE "Server: Target" > info.txt
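To see the mechanics of the three layers without touching the network, here is a toy version (my sketch, not from the original answer): the first parallel generates the addresses, the middle one chops them into blocks with --pipe -N, and the innermost one runs the probes, using -I ,,,, as its replacement string so it cannot clash with the {} notation of the outer layers. Again echo stands in for curl:

parallel echo {1}.{2} ::: {0..3} ::: {0..3} | \
  parallel -j2 --pipe -N4 \
    parallel -j4 --tag -I ,,,, echo probed ,,,,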
This will spawn up to 100 * 250 jobs, but it will try to find the optimal number of jobs at which none of the CPUs sit idle. On my 8-core system that is 7500. Make sure you have enough RAM to run the maximum potential number of jobs (25000 in this case).
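If you want to check the arithmetic and your headroom before launching, something like this works (a sketch; free is Linux-specific, and the memory needed per curl process varies, so treat the numbers as estimates):

echo "maximum simultaneous jobs: $((100 * 250))"   # 25000
free -h   # confirm available RAM covers that many curl processes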