How to handle ?_escaped_fragment_= for AJAX crawlers?

I am trying to make an AJAX-based website SEO-friendly. As recommended in online tutorials, I added "pretty" href attributes to the links: <a href="#!site=contact" data-id="contact" class="navlink"></a>, and in the div where the content is normally loaded by AJAX I put a PHP fallback for crawlers:

    // Build a whitelist of available pages from ./pages/
    $files = glob('./pages/*.php');
    foreach ($files as &$file) {
        // Strip the "./pages/" prefix (8 chars) and the ".php" suffix (4 chars)
        $file = substr($file, 8, -4);
    }

    // Include the requested page only if it is on the whitelist
    if (isset($_GET['site'])) {
        if (in_array($_GET['site'], $files)) {
            include("./pages/".$_GET['site'].".php");
        }
    }

I have a feeling that I first need to strip the _escaped_fragment_= part out of (...)/index.php?_escaped_fragment_=site=about , because otherwise the script will not be able to read the site value from $_GET. Am I right?
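
For reference, this is roughly what $_GET contains for such a request; a minimal sketch, assuming the crawler asks for index.php?_escaped_fragment_=site=about :

    <?php
    // Sketch: index.php requested as index.php?_escaped_fragment_=site=about
    // PHP splits the query string at the first "=", so everything after
    // "_escaped_fragment_=" ends up as the value of that single GET parameter.
    var_dump($_GET);
    // Roughly: array(1) { ["_escaped_fragment_"]=> string(10) "site=about" }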

But be that as it may, how can I verify that the crawler really converts the pretty links (those with #! ) into ugly links (containing ?_escaped_fragment_= )? I was told that this happens automatically and that I do not need to provide this mapping myself, but Fetch as Googlebot does not give me any information about what happens to the URL.

1 answer

Googlebot will automatically request the ?_escaped_fragment_= URLs.

So, from www.example.com/index.php#!site=about Googlebot will request: www.example.com/index.php?_escaped_fragment_=site=about

On the PHP side, you receive it as $_GET['_escaped_fragment_'] = "site=about"

If you want to get the value of "site", you need to do something like this:

    // Handle the crawler's "ugly" URL: ?_escaped_fragment_=site=about
    if (isset($_GET['_escaped_fragment_'])) {
        // Split "site=about" into ["site", "about"]
        $escaped = explode("=", $_GET['_escaped_fragment_']);
        if (isset($escaped[1]) && in_array($escaped[1], $files)) {
            include("./pages/".$escaped[1].".php");
        }
    }
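
As a small variation (not part of the original answer), parse_str() can do the splitting for you and also copes with URL-encoded values and with several key=value pairs in the fragment. This is just a sketch that assumes $files is the whitelist of page names built with glob() in the question:

    <?php
    // Sketch: treat the escaped fragment as a normal query string.
    // Assumes $files holds the whitelisted page names from ./pages/.
    if (isset($_GET['_escaped_fragment_'])) {
        parse_str($_GET['_escaped_fragment_'], $params); // "site=about" -> ['site' => 'about']
        if (isset($params['site']) && in_array($params['site'], $files, true)) {
            include("./pages/".$params['site'].".php");
        }
    }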

Take a look at the documentation:

https://developers.google.com/webmasters/ajax-crawling/docs/specification


Source: https://habr.com/ru/post/954674/

