Building AJAX crawlable apps without backend control

5 answers

The question is: can Googlebot execute basic JavaScript?

If not, then no. From your description, your application requires JS support to display any page, which leaves you with no bot-friendly way to access it.

If yes, then yes:

Since JavaScript can access URL parameters through location.search, you can create plausible URLs for Google to crawl in href attributes, which your JS application interprets, while redefining the click behavior for users in onclick attributes:

 <a href="/?a=My-Blog-Post" onclick="someFunc(this.href);return false;"> 

Couple this with onload code in your application that checks location.search and, after parsing the query string, determines which .md file the URL parameter refers to, in the hope that Googlebot will trigger that onload and receive the corresponding content. This is an alternative to building a domain.com/#!ajax/path style of site. Both options are fully client-side, but the query-string option signals to Googlebot that each page should be indexed as a separate URL.
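A minimal sketch of that onload routing might look like the following. The parseQuery() helper is a plain query-string parser; loadPost() is a hypothetical function standing in for whatever fetches and renders the post, not part of any library:

```javascript
// Parse a query string like "?a=My-Blog-Post" into a plain object.
function parseQuery(search) {
  var params = {};
  search.replace(/^\?/, '').split('&').forEach(function (pair) {
    if (!pair) return;
    var parts = pair.split('=');
    params[decodeURIComponent(parts[0])] =
      decodeURIComponent(parts[1] || '');
  });
  return params;
}

// On load, check whether the visitor (user or Googlebot) arrived via a
// crawlable URL such as /?a=My-Blog-Post, and render that post.
if (typeof window !== 'undefined') {
  window.onload = function () {
    var params = parseQuery(window.location.search);
    if (params.a) {
      // loadPost is hypothetical: fetch e.g. posts/My-Blog-Post.md
      // and inject the rendered result into the page.
      loadPost(params.a);
    }
  };
}
```

The same parseQuery() can be reused by the onclick handler, so clicked links and directly-requested URLs go through one code path.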

You can test this with http://google.com/webmasters , which has a "Fetch as Googlebot" feature.

+2

I created a small module that helps with this. Take a look at http://alexferreira.imtqy.com/seojs/

+1

Without a server executing some logic, this is a bit complicated...

But maybe take inspiration from what is described at http://meta.discourse.org/t/seo-compared-to-other-well-known-tools/3914 and http://eviltrout.com/2013/06/19/adding-support-for-search-engines-to-your-javascript-applications.html

You can use your build script to generate copies of your index file throughout the tree, following your post/:post_slug route, e.g. /post/slug/index.html . Each page would contain a <noscript> block with very basic content and links for the current post. You could even embed the current post's JSON on the page to save an XHR.

This means using the History API, which is not very IE-friendly, but that is perhaps not a big problem.

+1

You've sat down to dinner, eaten your dessert first, and only then looked at your vegetables.

What you really want to do is build the pages without AJAX first. Once the pages load correctly without any need for JavaScript, just add ?ajax=1 to all your requests. If isset($_GET['ajax']) on the server, you can skip rendering headers, footers, sidebars, etc. Then just hook up an anonymous window.onclick handler and take it from there.
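The client side of this approach can be sketched as follows. withAjaxParam() is a plain helper; fetchFragment() is a hypothetical stand-in for the XHR that swaps the page fragment into the DOM, and the server-side isset($_GET['ajax']) check is assumed to exist as described above:

```javascript
// Append ajax=1 to a URL, preserving any existing query string.
function withAjaxParam(url) {
  return url + (url.indexOf('?') === -1 ? '?' : '&') + 'ajax=1';
}

// Intercept link clicks so JS-enabled users load only the fragment,
// while bots and no-JS users follow the plain href to a full page.
if (typeof document !== 'undefined') {
  document.addEventListener('click', function (e) {
    var link = e.target.closest('a');
    if (!link) return;
    e.preventDefault();
    // fetchFragment is hypothetical: request the ?ajax=1 version of
    // the page (no header/footer/sidebar) and swap it into the DOM.
    fetchFragment(withAjaxParam(link.getAttribute('href')));
  });
}
```

Because every href is a real, fully-rendered page, nothing special is needed for crawlers; the AJAX layer is purely an enhancement.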

Here is a video example of this kind of Web 3.0 technique, built without third-party software (including frameworks) and with the strictest code of the kind you are trying to create...

http://www.youtube.com/watch?v=hZw8t-GVCB4

Feel free to look at the JavaScript on my website. I will be happy to help you this weekend.

0

You already have a build script, so why not use PhantomJS in it to generate static web pages?

You can serve the static pages by default and redirect to the AJAX version if JS is enabled.

The only catch is that Ember-router-style hyperlinks are not available to search engine bots. But I think there is simply no way to handle that without server-side code!

-2

Source: https://habr.com/ru/post/951422/

