I did not like any of the solutions above because (a) they required external libraries that I did not have and did not want to install, and (b) they did not return all the pages.
The Docker Hub API limits you to 100 items per request. This loops over each "next" link and fetches them all (for `python` that is 7 pages; other images may have more or fewer).
If you really want to spam yourself, delete `| cut -d '-' -f 1` from the last line and you will see absolutely everything.
```bash
# Initial URL
url=https://registry.hub.docker.com/v2/repositories/library/redis/tags/?page_size=100

(
  # Keep looping until the variable url is empty
  while [ -n "$url" ]; do
    # Every iteration prints a single dot (to stderr) to show progress through the pages
    >&2 echo -n "."
    # Curl the URL and pipe the output to Python. Python parses the JSON and prints
    # the "next" URL on the very first line (blank if there are no more pages), then
    # one tag name per line from the results; all of it is stored in $content
    content=$(curl -s "$url" | python -c 'import sys, json; data = json.load(sys.stdin); print(data.get("next", "") or ""); print("\n".join([x["name"] for x in data["results"]]))')
    # The first line of $content is the next URL for the loop to continue with
    url=$(echo "$content" | head -n 1)
    # Print the content without the first line (yes, +2 is counter-intuitive)
    echo "$content" | tail -n +2
  done
  # Finally, break the line of dots
  >&2 echo
) | cut -d '-' -f 1 | sort --version-sort | uniq
```
Sample output (the command is the same one-liner as above, echoed by the shell; the dots are the progress indicator):

```
...
2
2.6
2.6.17
2.8
2.8.6
2.8.7
2.8.8
2.8.9
2.8.10
2.8.11
2.8.12
2.8.13
2.8.14
2.8.15
2.8.16
2.8.17
2.8.18
2.8.19
2.8.20
2.8.21
2.8.22
2.8.23
3
3.0
3.0.0
3.0.1
3.0.2
3.0.3
3.0.4
3.0.5
3.0.6
3.0.7
3.0.504
3.2
3.2.0
3.2.1
3.2.2
3.2.3
3.2.4
3.2.5
3.2.6
3.2.7
3.2.8
3.2.9
3.2.10
3.2.11
3.2.100
4
4.0
4.0.0
4.0.1
4.0.2
4.0.4
4.0.5
4.0.6
4.0.7
4.0.8
32bit
alpine
latest
nanoserver
windowsservercore
```
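If you would rather skip the shell plumbing entirely, the same pagination logic can be sketched in plain Python. The function and parameter names below (`list_tags`, `fetch`, `page_size`) are my own, not part of any API; the fetcher is injectable so the paging loop can be exercised without hitting Docker Hub.

```python
import json
from urllib.request import urlopen

def list_tags(repo, fetch=None, page_size=100):
    """Collect every tag name for library/<repo>, following "next" links.

    fetch(url) must return the decoded JSON page; by default it does a
    real HTTP GET against Docker Hub.
    """
    if fetch is None:
        fetch = lambda url: json.load(urlopen(url))
    url = ("https://registry.hub.docker.com/v2/repositories/"
           "library/%s/tags/?page_size=%d" % (repo, page_size))
    tags = []
    while url:
        data = fetch(url)
        # Each page carries up to page_size results plus a "next" link
        tags.extend(item["name"] for item in data["results"])
        url = data.get("next")  # None/missing on the last page ends the loop
    return tags

if __name__ == "__main__":
    print("\n".join(list_tags("redis")))
```

The `| cut -d '-' -f 1 | sort --version-sort | uniq` post-processing from the shell version is left out here; apply it (or `tag.split('-')[0]`) if you want the same de-duplicated view.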
If you want a bash_profile version:
```bash
function docker-tags () {
  name=$1
  # Initial URL, parameterized by image name
  url=https://registry.hub.docker.com/v2/repositories/library/$name/tags/?page_size=100

  (
    # Keep looping until the variable url is empty
    while [ -n "$url" ]; do
      # Print a progress dot (to stderr) for every page fetched
      >&2 echo -n "."
      # Fetch the page; Python prints the "next" URL on the first line
      # (blank on the last page), then one tag name per line
      content=$(curl -s "$url" | python -c 'import sys, json; data = json.load(sys.stdin); print(data.get("next", "") or ""); print("\n".join([x["name"] for x in data["results"]]))')
      # The first line of $content is the next URL for the loop
      url=$(echo "$content" | head -n 1)
      # Print everything but the first line (yes, +2 is counter-intuitive)
      echo "$content" | tail -n +2
    done
    # Finally, break the line of dots
    >&2 echo
  ) | cut -d '-' -f 1 | sort --version-sort | uniq
}
```
And just call it: `docker-tags redis`
Sample output:

```
$ docker-tags redis
...
2
2.6
2.6.17
2.8
--trunc----
32bit
alpine
latest
nanoserver
windowsservercore
```
Javier Buzzi Feb 22 '18 at 15:36