Your robots.txt file is invalid:
There need to be blank lines between records (a record starts with one or more User-agent lines).
Disallow: bingbot blocks crawling of URLs whose paths begin with "bingbot" (i.e. http://example.com/bingbot), which is probably not what you want.
Not an error, but an empty Disallow: line is not required (allowing everything is the default anyway).
So, you probably want to use:
User-agent: *
Disallow: *.axd
Disallow: /cgi-bin/
Disallow: /member

User-agent: bingbot
User-agent: ia_archiver
Disallow: /
This blocks bingbot and ia_archiver from crawling anything. All other bots may crawl everything except URLs whose paths begin with /member, /cgi-bin/, or *.axd.
Please note that *.axd is interpreted literally under the original robots.txt specification (so such bots will not crawl http://example.com/*.axd, but will crawl http://example.com/foo.axd). However, many bots extend the specification and interpret * as a kind of wildcard.
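To illustrate that extension, here is a rough sketch of wildcard-style matching as wildcard-aware crawlers (e.g. Googlebot, bingbot) describe it: * matches any run of characters and $ anchors the end of the path. It is only an illustration of the idea, not any crawler's actual implementation:

import re

def matches(pattern: str, path: str) -> bool:
    # Translate a robots.txt path pattern into a prefix-anchored regex:
    # "*" -> ".*", everything else literal, an optional trailing "$" anchors the end.
    anchored = pattern.endswith("$")
    if anchored:
        pattern = pattern[:-1]
    regex = "^" + ".*".join(re.escape(part) for part in pattern.split("*"))
    if anchored:
        regex += "$"
    return re.search(regex, path) is not None

print(matches("*.axd", "/foo.axd"))           # True under the wildcard interpretation
print(matches("/*.axd$", "/foo.axd"))         # True: anchored variant
print(matches("/member", "/member/profile"))  # True: plain prefixes still work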