If the site does not offer the documentation as a downloadable archive (many do), you can always use a spider tool, which follows the site's links starting from a landing page and downloads each linked page, much like a search engine crawler, to download all or part of the site.
For example, I think wget can do this, and HTTrack is a good GUI tool with a lot of options.
Warning: you must set the parameters carefully to avoid, for example, downloading the entire site (or even the entire Internet!) when all you need is the documentation.
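As a rough sketch (the URL below is just a placeholder), a wget invocation along these lines keeps the crawl confined to the documentation subtree and to a limited depth:

    wget --recursive --level=3 --no-parent \
         --convert-links --page-requisites \
         https://example.com/docs/

Here --no-parent stops wget from climbing above /docs/, --level caps the recursion depth, --convert-links rewrites links so the copy can be browsed locally, and --page-requisites pulls in the images and stylesheets each page needs.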
Some sites also protect themselves against such spiders, since they have limited bandwidth or a monthly cap on the data they serve, etc. You can set options to slow down the download, resume interrupted downloads, and so on.
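For example (again with a placeholder URL), the same kind of run can be made gentler on the server and restartable:

    wget --recursive --level=3 --no-parent \
         --wait=2 --random-wait --limit-rate=200k \
         --continue \
         https://example.com/docs/

--wait and --random-wait pause between requests, --limit-rate caps the bandwidth used, and --continue picks up partially downloaded files if the run is interrupted.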