
5 Google Algorithm Changes I’d Like to See

Recently, Google killed off (or at least substantially reduced) the SEO benefit associated with building websites on exact match domains (EMDs). And while I think that’s a great step forward, it’s clear that Google still has a ways to go in terms of wiping away low-quality listings from the SERPs.

And since you all know how much I love algorithm changes, here are a few additional steps that I hope Google will consider as it rolls out future changes to its search results:

Change #1: More Accurate Measurement of On-page Quality

It’s not exactly a secret that Google struggles to measure on-page quality objectively. The whole reason we have search engine algorithms in the first place is that the Googlebot can’t just go out, read a couple of pages, and decide which one is qualitatively the best.

And because of that, I realize that saying, “Google should do a better job of measuring page quality” is somewhat redundant. Obviously, this is a priority for the Web’s largest search engine, and obviously, it’s something that Google is constantly addressing.

That said, is anyone else disappointed by the pace at which these changes are occurring? Perhaps one of the engine’s most obvious missteps came immediately after the rollout of the Penguin algorithm update (a change that was implemented explicitly to weed Web spam out of the results), when a Blogger site with no content earned a temporary first-page ranking for the competitive keyword phrase, “make money online.”

Mistakes like these limit the public’s confidence in Google’s ability to provide a good search experience, not to mention their impact on SEOs whose sites are either demoted in favor of unworthy results like these or caught in the crossfire of updates whose effects on the SERPs weren’t fully anticipated.

Given this diminished confidence, I believe that it will be important for Google to speed up the pace of algorithm updates designed to provide high-quality SERP listings, while also minimizing the impact that these future changes have on sites that are unintentionally penalized. Doing so will be critical to the engine’s ability to maintain public trust and its competitive advantage.

Change #2: Penalization for Spun Content

In particular, one type of low-quality content I want to call out is “spun content”—pages in which words, phrases, and sentences have been automatically replaced using spinning tools in order to generate “unique” content. Sure, the content’s unique, but that doesn’t mean that it’s fit for human consumption!

Although Google has made it its mission to weed out low-value results from the SERPs, I still encounter plenty of instances of spun content while I’m making my rounds on the Internet. While I recognize that the text produced by this process may look unique to search engine bots, surely Google’s language processing algorithms make it possible to uncover phrasing patterns that indicate content has been spun (at least in the most egregious examples)?
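Just to make that concrete: spun copies keep the original article’s sentence skeleton while swapping in scattered synonyms, and that fingerprint is measurable. Here’s a minimal sketch in Python (my own toy thresholds and example texts, not anything Google has published) that flags a suspect pair of pages:

```python
from difflib import SequenceMatcher
import re

def tokenize(text):
    """Lowercase word tokens; punctuation is dropped."""
    return re.findall(r"[a-z']+", text.lower())

def spin_fingerprint(text_a, text_b):
    """Return (shared, swapped): the fraction of tokens the two texts share
    in sequence, and the fraction replaced one-for-one (synonym swaps)."""
    a, b = tokenize(text_a), tokenize(text_b)
    matcher = SequenceMatcher(None, a, b)
    shared = sum(block.size for block in matcher.get_matching_blocks())
    swapped = sum(i2 - i1 for tag, i1, i2, j1, j2 in matcher.get_opcodes()
                  if tag == "replace" and (i2 - i1) == (j2 - j1))
    total = max(len(a), len(b))
    return shared / total, swapped / total

original = "Building links from profile pages can improve your search rankings."
spun = "Constructing links from profile pages may boost your search rankings."

shared, swapped = spin_fingerprint(original, spun)
# Hypothetical thresholds: the sentence skeleton survives (>60% shared)
# while scattered single words were substituted (>10% swapped).
if shared > 0.6 and swapped > 0.1:
    print(f"Likely spun pair: {shared:.0%} shared, {swapped:.0%} swapped")
```

A real system would obviously have to work at Web scale with proper language models, but even this toy version separates a spun rewrite from an independently written article on the same topic.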

Really, I’m sure that this is something that Google is working on, as it does the company a disservice to have this type of junk content appear alongside legitimate results in the natural SERPs. I’m just hoping that they’re able to speed the process up and get this update rolled out ASAP!

Change #3: Better Integration of Facebook and Twitter Data

Sure, we all know that Google has a social integration problem, given that Bing has the contracts with Facebook and Twitter, and Google is left to rely primarily on social data generated by the fledgling Google+ network.

However, I, for one, would love to see the engine take one for the team and make the compromises necessary to bring this data set to the Google results (wishful thinking, I know…). Or hell, if Google even bothered to use the public data made available by these two primary social networks, I bet that it’d be able to seriously improve upon the quality of its existing personalized results.

Here’s the deal: If Google wants to claim that its results are the best, it simply can’t do that without the social access granted to Bing by Facebook and Twitter. In this case, relying on information from the Google+ network is like a coach claiming that his minor league baseball team is the best, even compared to heavy hitters like the Cardinals or the Nationals. You’ve got to know something’s wrong when the Google SERPs initially list Mark Zuckerberg’s unused Google+ profile over his own Facebook page…

Do I think this is going to happen? No, probably not. But since this is my wish list, I get to say that I believe the Google SERPs would be stronger with either the integration of readily available Facebook and Twitter public data or with the types of contracts these networks currently enjoy with Bing.

Change #4: An End to the Benefit of Profile Links

Honestly, this one’s bugged me for a long time. And despite all that Google has done to diminish the benefit earned by low-value linking schemes, it seems that the value given to profile links is still alive and well.

In theory, assigning value to the links that users place in their website profile accounts makes sense. In many cases, these accounts are tied to participation on prestigious, high-PageRank sites, so it’s natural to assume that users who add their own website links to their profiles are the ones who are actively contributing to their communities.

Unfortunately, once word got out that these profile links conferred an SEO benefit, the digital marketing community saw a surge in automated tools that would create profiles on as many of these high-ranking websites as possible. The result wasn’t ranking value given to active, contributing site participants. It was a link scheme that wasted website bandwidth and storage space by pumping otherwise well-intentioned sites full of spam profiles that gave nothing back to the community.
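If I were sketching the “active contributor” test myself, it might look something like the minimal Python sketch below. The Profile fields and thresholds are purely illustrative assumptions on my part, not anything Google has published:

```python
from dataclasses import dataclass

@dataclass
class Profile:
    posts: int             # contributions made under the account
    account_age_days: int  # how long the account has existed
    profile_links: int     # outbound links the profile page carries

def profile_link_weight(p: Profile) -> float:
    """Scale a profile link's value by evidence of real participation.
    All thresholds here are illustrative guesses, not Google's."""
    if p.posts == 0:
        return 0.0                               # drive-by registration: no value
    activity = min(p.posts / 50, 1.0)            # saturates at 50 contributions
    longevity = min(p.account_age_days / 365, 1.0)
    dilution = 1.0 / max(p.profile_links, 1)     # more links, each worth less
    return activity * longevity * dilution

# A spam profile created by an automated tool earns nothing...
print(profile_link_weight(Profile(posts=0, account_age_days=2, profile_links=5)))    # 0.0
# ...while a long-standing, active member's single link keeps full weight.
print(profile_link_weight(Profile(posts=200, account_age_days=900, profile_links=1)))  # 1.0
```

The design goal is simple: a drive-by registration earns nothing, and even a legitimate account sees each link diluted as it stuffs more of them into its profile.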

I have to assume that Google’s already on this one, as it’s a clear violation of the engine’s Webmaster Guidelines, which prohibit link schemes that intentionally manipulate the search engines without providing any type of value in return. However, if it isn’t, consider this my impassioned plea that this loophole be closed as quickly as possible!

Change #5: Less Reliance on About.com, Wikihow.com, and Wikipedia 

One last thing I want to mention is that I’m sick to death of typing in a search query and seeing results from About.com, Wikihow.com, and Wikipedia fill up the top spots. As far as I’m concerned, with the possible exception of Wikipedia (in some cases), the articles that come from these sites tend to be barely higher in quality than the articles found in the mass content directories that Google slapped down last year with the Panda update.

Now, if I were an SEO conspiracy theorist, I might suggest that Google preferentially lists these websites over other pages that offer better information, but without any existing Google monetization (as in the case of industry experts who write exceptional blog posts in order to sell their own products).

But whether that’s the reality or not doesn’t actually matter. What matters, from a search engine perspective, is that the people using a given engine find its results to be as useful as possible. I don’t think I’m alone in growing increasingly frustrated with being served up poorly written, error-riddled About.com articles, so I’m crossing my fingers that future algorithm changes bring a greater diversity of voices and sources of information.

What about you? What specific changes would you like to see Google roll out next? Share your own ideas for future algorithm changes in the comments section below.

Sujan Patel is a passionate internet marketer and entrepreneur. Sujan has over 10 years of internet marketing experience and started the digital marketing agency Single Grain. Currently, Sujan is the CMO at Bridge U.S., a company that makes the complex immigration process easy and affordable.


17 thoughts on “5 Google Algorithm Changes I’d Like to See”

  1. For the most part I think Wikipedia pages are pretty useful and a great resource, simply because so many people spend a lot of time making sure they are that way! If you had that many people working on the authenticity and accuracy of your site it would probably do pretty well. But I also agree that most of About.com is basic fluff and not much better than a lot of article spinning sites out there.

  2. I would also add that they should do a better job with EMDs. A lot of old timers, like myself, have sites on exact match domains not because we wanted to spam, but because we grabbed them while they were available (a good few years ago) and put up content that is useful for visitors, not bots… It’s frustrating to see 4 to 5 years of work go down the drain overnight because of a botched EMD update.

    Of course, if the content is really spammy then it makes sense, but good, editorial content… give me a break… If Google is able to figure out good versus spammy content, the EMD update is a real bummer…

  3. Hi Sujan,

    Excellent article. I agree with all your points, but these updates have done some harm as well. Google is penalizing websites that are good and deserve to rank well, with nothing wrong with their content and nothing wrong with their backlinks.
    In particular, profile links still play an important role if you make them real, and duplicate content shouldn’t be in the search results. I just love this point. ;)

    Thank you

    Zane

  4. I definitely agree with #5. I always hate clicking on a link (after not paying attention to what site it actually is) and landing on About.com or Wikihow.com. I never find their articles helpful.

  5. I’d like to see Google absolutely nail down the problem of stealing and reprinting content so that all the credit goes to the person who originated a story. I hate seeing it when people are ranking more highly in the search results for content they copied from my site. I hope the Authorship system can finally solve this.

  6. I also agree with the points in your article. But Google can’t easily figure out a solution to the article spinning problem, and nowadays links from these articles can really change rankings. If the problem gets solved, the search results will be more reliable for sure.

    And about the profile accounts, I think it depends on whether the administrators can detect when a profile will be used for spamming, like registering an account today, posting links (in posts or signatures), and then leaving the forum forever. But if an account participates and contributes information willingly, I guess a profile like that can deserve the links it has made.

    Thanks for sharing your ideas, and I agree with your points of view.

  7. I would love to see Google confine its results to one listing per domain. If they are as good as they say they are at picking relevant pages for any given search query, surely they are good enough to pick the most relevant page on a single website for that search result. I think most of us are smart enough to navigate the website from there and find other things of interest to us.

    This would not only clear up all of the clutter in the listings, but it would help with bounce rates and page views.

  8. #3 will happen when doofuses finally realize that Google designs better and bigger web experiments. Google has been crawling the web for AI design conditions since its founding. Multi-factor design of web-scale experiments is the name of the game. Hoarding Facebook and Twitter data makes about as much sense as squirrels burying rocks for the winter.

  9. I understand where you are coming from and am certain G is working on all of this, around the clock. That said, since I’m a linking strategist for multiple sites across multiple industries, I think it’s crucial to distinguish that the appropriate signal set will have to vary depending on the search. To give an example, if I search for “great gift ideas for Valentine’s Day,” I don’t mind if social stream signals are included in those search results. However, if I’m searching for “How to properly adjust a timing chain on a 1972 Triumph Spitfire 1500,” then the signal set I need will not be found at FB or TW. Therein lies one of Google’s challenges: recognizing WHERE the signal sets should come from and the WEIGHT TO GIVE EACH ONE based on every searcher’s individual intent. Given the enormity of that task, I think they do quite well.

  10. Hi Sujan, thanks for this great article. I agree with #1; it bothers me a lot when I find websites that don’t have any right to be in the top 5 of Google’s SERPs because their content is not good quality, especially compared to other content that is ranked lower but is actually high quality.

    Also, I want Google to reduce its huge dependency on backlinks when ranking a site. While you are busy creating high-quality content, you may find your content ranking lower than content that is not as good in quality but comes from a very good backlink-building scheme.

  11. I have to agree with Scott McKirahan: multiple results from the same domain are simply stupid. In the last few months it has become the biggest Google flaw for me; in some industries up to 90% of the results in the first 3 pages are all from the same domain, even if completely unrelated to the search term.

  12. Change #2: Penalization for Spun Content

    Guest bloggers are the worst.

    Rarely do they base content on personal experiences. They rewrite other people’s posts and steal ideas.

    And there’s plenty of big names in there.

    The sites they post on are just as bad for allowing it and knowing it happens. Plenty of big names in there too.

    #2: Domain authority is out of control. I’m finding SEOMoz and other high-DA sites in the results for keywords they have no related content for.

    Also seeing the same domains in up to 8 results on one page.

    I thought Matt Cutts had stated he’d fixed this?

  13. I would like to say that Google doesn’t deliberately favor any particular website in the rankings. As for relying less on About, Wikihow, and Wikipedia, Google is only trying to give users the right info that they want. I agree mostly with #3 and #5.

  14. I agree with most of these, especially #1. However, I think we also have to get smarter with how we search. If we want better results, we have to learn how to use the search engine more effectively. I agree that better content should be shown on the SERPs. But when we don’t get the answers we want from our initial searches, we should learn how to search better to get the info we want. Just my $.02

  15. I am sick of this keyword stuffing. I’ve found many bloggers doing keyword stuffing at the beginning of their articles, and amazingly their articles rank high despite the fact that the quality is very poor.

    I am wondering why Google still prefers them and why there’s no update for them yet? One is badly needed.

  16. Spun content is not inherently bad. There is no reason why spun content can’t be totally fit for human consumption. You also can’t say that manually written articles are always grammatically correct. It’s about having quality guidelines. Unfortunately, there are so many poor spinners and robot spins that they give a bad name to those who actually do produce quality spun articles. I think the problem is more about low-quality spins being pushed to what are essentially link farms; “spinning” a few versions of an article to publish on a couple of different sites is not inherently bad.

  17. One question: did Google launch any algorithm update this week? One of my blogs was affected badly, which dropped its ranking to the 10th page for its own keyword. What the heck is Google doing? I really do not understand.