Fetch As Google Is Getting Better, Now Renders Pages As Googlebot Sees Them

Google announced on their Webmaster Central blog today that they have updated the Fetch as Google tool, which now gives users the ability to render a page exactly as Googlebot sees it.

How It Works

Before using Fetch as Google, you’ll need to have added and verified your site in Webmaster Tools. Then, follow these instructions:

  • On the Webmaster Tools Home page, click the site you want.
  • On the Dashboard, under Crawl, click Fetch as Google.
  • In the text box, type the path to the page you want to check.
  • In the dropdown list, select the type of fetch you want. To see what our web crawler Googlebot sees, select Web. To see what our mobile crawler for smartphones sees, select Mobile Smartphone. To see what our mobile crawler for feature phones sees, select Mobile cHTML (this is used mainly for Japanese web sites) or Mobile XHTML/WML.
  • Click Fetch to have Googlebot fetch the path you entered, or click Fetch and Render to have Googlebot both fetch the path and render it as a webpage.
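
If you just want to approximate the raw fetch step outside of Webmaster Tools, one rough substitute is to request the page yourself with Googlebot’s user-agent string. Here is a minimal Python sketch under that assumption; the URL is a placeholder, and this reproduces only the fetch, not Google’s rendering.

```python
# Minimal sketch: fetch a page while identifying as Googlebot.
# This approximates only the fetch step, not Google's rendering.
import urllib.request

URL = "https://www.example.com/page.html"  # placeholder URL

req = urllib.request.Request(
    URL,
    headers={"User-Agent": "Mozilla/5.0 (compatible; Googlebot/2.1; "
                           "+http://www.google.com/bot.html)"},
)
with urllib.request.urlopen(req) as resp:
    print(resp.status, resp.reason)
    print(resp.read(500).decode("utf-8", errors="replace"))  # first 500 bytes
```

Note that this only tests how the server reacts to the user-agent header; servers that vary content by IP address will still treat you as a regular visitor.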

You can use this tool to fetch up to 500 URLs a week per Webmaster Tools account. When rendering a page, Googlebot will try to fetch all of the external files as well, such as images, CSS, and JavaScript files. These files are then used to render a preview image that lets you see your page as Googlebot sees it.
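
If you want to anticipate which external files a rendering crawler would request from one of your pages, you can enumerate the resources the markup references. A hedged sketch using Python’s standard html.parser (the tag handling is deliberately simplified, and real pages load resources in other ways too):

```python
# Sketch: list external files (images, CSS, JavaScript) referenced by a page.
from html.parser import HTMLParser

class ResourceLister(HTMLParser):
    def handle_starttag(self, tag, attrs):
        attrs = dict(attrs)
        if tag == "img" and "src" in attrs:
            print("image: ", attrs["src"])
        elif tag == "script" and "src" in attrs:
            print("script:", attrs["src"])
        elif tag == "link" and attrs.get("rel") == "stylesheet":
            print("css:   ", attrs.get("href"))

# Example markup; in practice, feed in the HTML fetched from your page.
ResourceLister().feed(
    '<img src="/logo.png"><link rel="stylesheet" href="/site.css">'
    '<script src="/app.js"></script>'
)
```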

Practical Uses Of This Tool

Google suggests this is useful for diagnosing a page’s poor performance in search results, because it lets you spot crawling errors. If Google is not able to render the page the way you intend Googlebot to see it, that could have a negative effect on your ranking in search results.

Google also suggests that this new feature is useful for identifying problematic pages in the event that your site has been hacked. For example, if your site is appearing in search results for popular spam terms when those terms don’t exist in your source code, then you can use the Fetch as Google tool to understand exactly what Google is seeing on your site.

Something like the above example can happen when the security of your site is compromised by a hacker. Hackers can disguise injected content so that it is served only to Googlebot and never shown to normal users. Since the site appears normal to everyone else, the problem is difficult to diagnose without the Fetch as Google tool.
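
You can also run a rough version of this check by hand: request the page once as a regular browser and once as Googlebot, and compare the two responses. A sketch under those assumptions (the URL and user-agent strings are illustrative, and note that cloaking keyed to Googlebot’s IP addresses rather than the user-agent won’t be caught this way):

```python
# Rough cloaking check: compare the page served to a browser user-agent
# with the page served to Googlebot's user-agent.
import urllib.request

URL = "https://www.example.com/"  # placeholder site to check

def fetch(user_agent):
    req = urllib.request.Request(URL, headers={"User-Agent": user_agent})
    with urllib.request.urlopen(req) as resp:
        return resp.read()

browser = fetch("Mozilla/5.0 (Windows NT 6.1; rv:30.0) Gecko/20100101 Firefox/30.0")
bot = fetch("Mozilla/5.0 (compatible; Googlebot/2.1; +http://www.google.com/bot.html)")

if browser != bot:
    print("Responses differ (%d vs %d bytes) - inspect the Googlebot copy."
          % (len(browser), len(bot)))
else:
    print("Both user-agents received identical markup.")
```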

Fetch As Google will not render anything blocked by robots.txt. If you are disallowing the crawling of some of your files, Google won’t be able to show them to you in the rendered view. For more information, see this support article.
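
You can check which files are affected before fetching by asking your robots.txt the same question locally. A small sketch with Python’s standard urllib.robotparser (the URLs are placeholders):

```python
# Check whether Googlebot may fetch given URLs under this site's robots.txt.
from urllib.robotparser import RobotFileParser

rp = RobotFileParser()
rp.set_url("https://www.example.com/robots.txt")  # placeholder
rp.read()  # download and parse robots.txt

for url in ("https://www.example.com/page.html",
            "https://www.example.com/assets/app.js"):
    verdict = "allowed" if rp.can_fetch("Googlebot", url) else "blocked"
    print(url, "->", verdict)
```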

3 thoughts on “Fetch As Google Is Getting Better, Now Renders Pages As Googlebot Sees Them”

  1. I just tried it. Upon rendering, it gave a “Partial” status. Googlebot couldn’t get a few scripts (blocked by robots.txt), including the AdSense, Twitter, StumbleUpon, and Pinterest scripts. Should I be worried? What is the ideal “status” for “Fetch and Render”?

    Thanks in advance!

    1. @Pretti,

      External scripts won’t have any effect on your SEO ranking. Only Google would know for sure, but I would think that if external scripts are blocked through robots.txt, you’re going to receive a better ranking from Google for quality and relevance of links, because the random .js files filled with what Googlebot would see as gibberish aren’t being looked at. But they’re probably smart enough to factor that sort of thing in. Either way, it’s nothing negative.

      What you want to be looking at is: 1. the actual relevance of your site to your target audience, and 2. your ratio of relevant content to useless code.

      The most important thing we can learn from the render feature of Fetch as Google is that the natural horizontal viewport is what Google considers the most relevant content to index. So the most important SEO content needs to be prioritized from top to bottom, and never pushed off to the side or anywhere else a normal viewer would skip over it.

      If you want more info about the technical side of SEO and an outsider’s perspective on how Google’s internal scoring system (probably) works, feel free to shoot me an email – it’s something I’ve been studying heavily for the last few years.