I am currently working on a project where parts of the website may be restricted depending on the area the user is located in. When a user accesses such a page, they are redirected to a form that they must fill out before they can view the content.
Since I want search engines to index the content, I am creating exceptions for search robots so that they can access it directly.
I have cherry-picked a few search engines from this page, and my idea is to check the IP address of the crawler (which can be found on the page I linked to) and grant access based on that.
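For reference, this is roughly what I mean by the IP-based check, as a minimal Python sketch. The ranges and the `is_whitelisted_crawler` helper are placeholders I made up for illustration, not the actual list from that page:

```python
import ipaddress

# Placeholder crawler ranges for illustration only; the real list would come
# from the page I linked to.
CRAWLER_RANGES = [
    ipaddress.ip_network("66.249.64.0/19"),   # example range attributed to Googlebot
    ipaddress.ip_network("157.55.39.0/24"),   # example range attributed to Bingbot
]

def is_whitelisted_crawler(client_ip: str) -> bool:
    """Return True if the request IP falls inside one of the whitelisted crawler ranges."""
    addr = ipaddress.ip_address(client_ip)
    return any(addr in net for net in CRAWLER_RANGES)

# Usage: skip the redirect-to-form step for whitelisted crawler IPs.
if is_whitelisted_crawler("66.249.66.1"):
    print("serve content directly")
else:
    print("redirect to the form")
```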
Is this solution viable enough? I ask because I read an article on the official Google Webmaster Central blog that recommends performing a reverse DNS lookup on the bot to confirm its authenticity.
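If it helps, here is how I understand that recommendation, as a rough Python sketch of a forward-confirmed reverse DNS check. The accepted host suffixes are an assumption on my part based on what Google documents for Googlebot:

```python
import socket

def is_verified_googlebot(client_ip: str) -> bool:
    """Reverse-resolve the IP, check the hostname suffix, then forward-resolve
    the hostname and confirm it maps back to the same IP."""
    try:
        host, _, _ = socket.gethostbyaddr(client_ip)       # reverse DNS lookup
        # Assumed suffixes for Googlebot hosts; verify against Google's docs.
        if not host.endswith((".googlebot.com", ".google.com")):
            return False
        forward_ips = socket.gethostbyname_ex(host)[2]     # forward DNS lookup
        return client_ip in forward_ips
    except (socket.herror, socket.gaierror):
        return False
```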
I should mention that this has no security implications.
TL;DR: Will I be penalized if I let the search bot go straight to the content while regular users are redirected? And which approach is best in terms of cost/benefit: user agent, IP address, or reverse DNS lookup?