SEO · WebMaster Resources

Google’s “Claim Your Content”?

At the end of last week, Google acquired a large number of domains around the phrases “Claim Your Content”, “Claim My Content” and “Claim Our Content”:

CLAIMYOURCONTENT
CLAIM-YOUR-CONTENT
CLAIMMYCONTENT
CLAIM-MY-CONTENT
CLAIMOURCONTENT
CLAIM-OUR-CONTENT

Registered TLDs: .COM, .NET, .ORG
Country specific TLDs: .FR, .DE, .CH, .CO.UK etc.

WWWCLAIMYOURCONTENT
WWWCLAIM-YOUR-CONTENT

Registered TLDs: .COM, .ORG, .NET

Not registered were domains for:

WWWCLAIMMYCONTENT
WWWCLAIM-MY-CONTENT
WWWCLAIMOURCONTENT
WWWCLAIM-OUR-CONTENT

This implies that ClaimYourContent.* will be used as the primary domain. 

Garrett Rogers from the Googling Google blog speculates that the domains could be used to offer webmasters a tool to fight scrapers and others who steal content from their websites.

Sam Harrelson from CostPerNews.com speculates that this might be an attempt to allow users to claim (and thereby easily monetize) content from the wide variety of content-producing platforms.

An effective system to fight content theft and scraping would be great.

Webmasters today fight an uphill battle against content theft, especially against scrapers. Scraper sites are sites that display “scraped” content from other sources, such as SERPs, RSS feeds, blogs and other websites.

The scraper “mashes up” and “scrambles” the content as well as he can to circumvent the search engines’ duplicate content filters. Only as much as absolutely necessary is done on the site, which usually consists of thousands of auto-generated pages. Nothing is done by hand, because the poorly converting pages that litter the engines’ indexes are only profitable if you generate a lot of them.

While tools like CopyScape can be helpful for finding duplicate content, and vehicles like the federal Digital Millennium Copyright Act (DMCA) might work against single cases of content theft by other webmasters with a real website, those methods are pretty much worthless against scrapers, who produce websites using your content faster than you can act on them. That is not to mention the problem of finding out the scrapers’ identity in order to send them a DMCA notice in the first place.

You can also send a DMCA notice to the search engines every time a scraper site with your content appears in the SERPs, but that can turn into a full-time job in itself.

The most effective tool available to webmasters today against scrapers that pull content directly from your website is to identify their scraper scripts and block them from accessing your site.

Those scripts are basically “bad robots” that ignore the robots.txt exclusion protocol and robots meta tags. David Naylor provided information and source code on how to identify and block bad robots at his blog.
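To illustrate the idea, here is a minimal sketch of user-agent-based blocking in Python. The signatures in the blocklist are illustrative examples only, not David Naylor’s actual list, and a real bad-bot trap would also look at request behavior (hit rate, honeypot links), since scrapers can fake their User-Agent string:

```python
# Illustrative blocklist; real bad-robot lists are much longer and
# are usually combined with behavioral checks.
BAD_BOT_SIGNATURES = [
    "httrack",          # offline website copier often used for bulk downloads
    "wget",             # command-line downloader frequently used by scrapers
    "python-requests",  # default UA of a common scripting library
]

def is_bad_robot(user_agent: str) -> bool:
    """Return True if the User-Agent matches a known scraper signature."""
    ua = (user_agent or "").lower()
    return any(sig in ua for sig in BAD_BOT_SIGNATURES)
```

In a web application you would call something like this from the request handler and answer matching requests with a 403; the same effect can be achieved at the server level, e.g. with rewrite rules in an .htaccess file.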

This method does not help you if scrapers use the content of your RSS feed. The only thing you can do there is to not make full articles available in your feeds, but only a brief summary or the first 100-200 characters of the post with a “more” link to the full article on your website.
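A feed excerpt like that can be produced with a few lines of code. This is a hypothetical helper, not part of any particular blog platform; the function name, the 200-character limit and the link markup are all assumptions for the sake of the example:

```python
def truncate_post(body: str, url: str, limit: int = 200) -> str:
    """Return a feed-safe excerpt: roughly the first `limit` characters
    of the post plus a "more" link back to the full article."""
    if len(body) <= limit:
        return body
    # Cut at the last space before the limit so we don't split a word.
    cut = body.rfind(" ", 0, limit)
    if cut == -1:
        cut = limit
    return body[:cut].rstrip() + f'... <a href="{url}">more</a>'
```

The scraper then only ever gets the teaser, and every republished excerpt carries a link back to the original article.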

Anything Google would come up with to solve or at least reduce those problems would be helpful, but if that is what those domains might be used for, I would like to know how they would solve problems like:

  • Verify that a site that claims content as its own is actually the rightful owner of that content
  • Prevent scrapers or rogue webmasters who steal content from claiming others’ content as their own
  • Allow content owners to white-list sites that do have permission to re-purpose some of their content (press releases, free-to-reprint articles etc.)

This is a very complicated subject and a hot one at the same time. I think it would already be a good start if webmasters had a way to tell the search engines when their site’s content gets suppressed or removed from the SERPs due to a duplicate content penalty or filter caused by content theft. This would especially help new domains, which are the most likely to become victims of this because of their lack of trust compared to older domains (the Google Sandbox Effect).

A scraper who acquires an old domain to put up somebody else’s content will most likely be considered the content owner by the search engines, while the original content owner gets penalized or filtered out.

I guess we will have to wait a bit longer to see what Google will be using the newly registered domains for. But that does not stop people from speculating. Google might get some new and useful ideas from what people speculate.

Cheers!

Carsten Cumbrowski
Cumbrowski.com: internet marketing resources, including resources on duplicate content issues, legal resources and much more.

Quick Update (for everybody who does not read comments): ClaimYourContent appears to be the name of YouTube’s copyright protection service. See here. Thanks, Pete, for pointing that out. However, I hope that Google does not stop there. The scraper issues mentioned above remain unresolved, and options should be considered to find a solution for them.

Carsten Cumbrowski has years of experience in affiliate marketing and knows both sides of the business, as an affiliate and as an affiliate manager. Carsten has over 10 years of experience in web development and 20 years in programming and computers in general. He has a personal internet marketing resources site at Cumbrowski.com. To learn more about Carsten, check out the "About Page" at his website. For additional contact options see this page.

Comments are closed.

7 thoughts on “Google’s “Claim Your Content”?”

  1. Google “scraped” 100% of the net, except its own very rare pages, which represent 0.0000000000000000001% of the Internet.

  2. Thanks for the info Pete.

    Well, the YouTube thing seems to be the main purpose for those domains, but it would be sad if Google stopped there.

    The points I made in my post are certainly valid and include unresolved issues that should be addressed eventually.

  3. Great news for new sites from within the last few years. I look up and search for pre-2000 domains and they’re TOO expensive or aren’t selling.

  4. Writing unique content can take hours to days of research, and it is very annoying and unfair for such efforts to be taken and devalued by scraper sites. To date, I still constantly report any scraper sites I find online.

  5. This can be used successfully against scrapers. However, I do understand that there are tons of content spinners out there online that can generate “unique” content from existing content.