
PageRank is dead and NOFOLLOW will NOT Save it!

It’s time once again for a discussion about the use of the nofollow attribute for external links at the English-language Wikipedia. Many of the Wikipedias in other languages use the attribute, and the MediaWiki platform, which powers Wikipedia and thousands of other wikis, has the nofollow attribute ON by default for new installations.

There were a lot of discussions about this in the past, and I got involved in April this year. It did not help, and nofollow remains off at the English Wikipedia to this day.

Matt Cutts from Google was recently asked for his opinion on this, and he responded that he would also like to see nofollow enabled, “but work out ways to allow trusted links to remove the nofollow attribute”. Well, that is the same old line we hear time and again, and it is no solution to the problem. I took the time to address some of the previous discussions and what I believe the real problem is. This is not Wikipedia-specific, which is why I post it here. Google already took a step in the right direction when it demoted the value of Wikipedia’s talk pages; I blogged about that earlier this summer.

Here are the parts of my argument that I believe are relevant beyond Wikipedia.

When it comes to links, I think a link MUST first and foremost be relevant to the article and add value to it. For whom? The human visitor reading the article. Search engines are secondary. I will come back to that a bit later.

If some external links carry the attribute and some don’t, the effectiveness of the attribute is reduced considerably. If the rule is universal, with no exceptions, even the spammer in the furthest corner of the planet will learn about it. It would be big news and send out a message that today is only a whisper.

“PageRank is dead!” The original principle is becoming less and less of a factor for the major search engines because of massive abuse. Of the big three, MSN has the least developed algorithms and is the most susceptible to attacks, as was “nicely” demonstrated this summer.
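
For readers who never looked under the hood: the original principle is nothing more than an iterative redistribution of link “votes” over the link graph. Below is a minimal sketch in Python with an invented three-page toy graph; the damping factor of 0.85 comes from the original PageRank paper, everything else is purely illustrative.

    # Minimal PageRank sketch: iteratively redistribute link "votes"
    # across a toy link graph until the ranks settle.
    def pagerank(links, damping=0.85, iterations=50):
        pages = list(links)
        rank = {p: 1.0 / len(pages) for p in pages}
        for _ in range(iterations):
            new = {p: (1.0 - damping) / len(pages) for p in pages}
            for page, outlinks in links.items():
                targets = outlinks or pages  # dangling page: spread its vote evenly
                for target in targets:
                    new[target] += damping * rank[page] / len(targets)
            rank = new
        return rank

    # Invented toy graph; a nofollow'd link would simply be dropped
    # from the outlink lists before the calculation.
    print(pagerank({"a": ["b", "c"], "b": ["c"], "c": ["a"]}))

A link carrying nofollow is simply not counted as a vote, which is the whole point of the attribute.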

But even Google and Yahoo have problems with this that are still not solved effectively. The search engines know it, and the only tactic they have today is to scare people to death to reduce the problem as much as they can and buy themselves time to come up with an actual solution: don’t buy links, don’t exchange links, don’t cross-link sites you own, add nofollow, etc.

They could have said: “Either don’t add, or at least flag, any link that you would not protect with your life or at least would not ‘co-sign’ for, because we do a bad job of determining the intent and relevance of links at the moment. We are working on it, but in the meantime please help us to suck less.”

Okay, let us help them and add nofollow to any link that does not point to somebody I would trust with my life. Let that become standard practice, and Google will be able to calculate PageRank in real time, because there will not be much left to calculate. Maybe that would push the search engines to increase their efforts to come up with a solution that works better.
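
To make “standard practice” concrete, here is a rough Python sketch of a blanket “nofollow unless trusted” rewrite. The TRUSTED whitelist and all domain names are hypothetical, and a real implementation should use a proper HTML parser instead of regular expressions.

    import re
    from urllib.parse import urlparse

    # Hypothetical whitelist of sites you would "co-sign" for.
    TRUSTED = {"example.org"}

    def add_nofollow(html, own_host, trusted=TRUSTED):
        """Add rel="nofollow" to every external link not on the trusted list."""
        def rewrite(match):
            tag = match.group(0)
            host = urlparse(re.search(r'href="([^"]+)"', tag).group(1)).netloc
            if host and host != own_host and host not in trusted and "nofollow" not in tag:
                tag = tag[:-1] + ' rel="nofollow">'
            return tag
        return re.sub(r'<a\b[^>]*href="[^"]+"[^>]*>', rewrite, html)

    print(add_nofollow('<a href="http://spam.example.com/">pills</a>', "myblog.example"))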

Would spammers still care, until then, about links that nobody sees, even if only 1 in a thousand creates results? If the result is $0.0000000 after a week of work, nobody would SPAM in that fashion, unless it is for research purposes.

Spammers would then need to shift their focus ENTIRELY to places where at least one human being sees the spam, and where there is at least a remote chance that this human will act on it in a way that benefits the spammer. If you spam a site with a readership of 100% devout Muslims and offer delicious pork chops, your conversion will be 0, regardless of whether you spam once or a billion times.

Adult entertainment merchandise involving young and pretty women with little or no clothing would have the same result when promoted to an audience of 100% Catholic priests.

The more targeted it becomes, the less it actually is SPAM. SPAM that actually benefits me is not really annoying, and I will forgive the fact that I did not ask for it. The more spam moves into the visible space, the more relevant it needs to become, or the easier it is to detect automatically without a human ever seeing it. The latest blog spam plug-ins are very good proof of this.

Also, “learning” email spam filters work extremely efficiently and become almost 100% accurate over time. It is obviously not currently feasible or possible for the major search engines to use the same principles to solve their spam problem. If it is relevant, the spam filter will not catch it, but then it is not really spam anymore either.

If spam must become more relevant and closer to good content, it must become less spam-like in nature. It is already possible today to detect spam that is too far off topic. Very efficient filters could be built on the principles of existing blog comment/trackback spam filters and email spam filters, removing obvious spam automatically.
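
As a sketch of the principle those filters work on, here is a toy naive Bayes classifier in Python, in the spirit of learning email and comment spam filters. The training comments are invented; a real filter would be trained on thousands of labeled examples.

    import math
    from collections import Counter

    # Invented training data: two spam comments, two legitimate ones.
    spam = ["cheap pills buy now", "buy cheap links now"]
    ham = ["interesting point about pagerank", "nofollow is a workaround"]

    def train(docs):
        counts = Counter(w for d in docs for w in d.split())
        return counts, sum(counts.values())

    spam_counts, spam_total = train(spam)
    ham_counts, ham_total = train(ham)
    vocab = set(spam_counts) | set(ham_counts)

    def spam_score(text):
        """Log-odds that a comment is spam, with add-one smoothing."""
        score = 0.0
        for w in text.split():
            p_spam = (spam_counts[w] + 1) / (spam_total + len(vocab))
            p_ham = (ham_counts[w] + 1) / (ham_total + len(vocab))
            score += math.log(p_spam / p_ham)
        return score

    print(spam_score("buy cheap pills"))         # positive: looks like spam
    print(spam_score("a point about nofollow"))  # negative: looks legitimate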

Those filters could be developed already, and they would also help under the current situation, by the way. If content survives in the wiki after all of that, the validation of what remains will happen on a very different premise than it does today. That would become a very healthy process, in my opinion, and would probably increase the popularity of Wikipedia.

Search engines are trying to get there; I am absolutely convinced of that. They do not have a practical solution yet, but why should we make their lives easier so that they have to work less hard on finding one?

What are your thoughts about this?

Cheers,
Carsten Cumbrowski

Carsten Cumbrowski has years of experience in affiliate marketing and knows both sides of the business, as an affiliate and as an affiliate manager. Carsten has over 10 years of experience in web development and 20 years in programming and computers in general. He has a personal internet marketing resources site at Cumbrowski.com. To learn more about Carsten, check out the “About” page at his website. For additional contact options, see this page.



20 thoughts on “PageRank is dead and NOFOLLOW will NOT Save it!”

  1. Sorry, this is nonsense. Wikipedia is manipulated by people who run their own businesses. I had a link from Wikipedia some time ago, and they removed it just because they didn’t want their competitor’s business listed there. Wikipedia shouldn’t be trusted anymore. Moreover, the number of links within any article on Wikipedia doesn’t mean anything for anyone’s rankings. My traffic has increased since I’m no longer listed there…

    BTW, the nofollow attribute is the most evil thing out there. Since the nofollow attribute has existed, paying for links without this “useful” attribute has become good business… All the Indians with their directories would prove it!

  2. Hi Jan,

    thanks for your comments.

    I would consider your initial statement incorrect, because you generalize. I am sorry to hear that you are disgruntled because of your personal experience in this user-driven space.

    The new web, social networks, etc. do not change most of people’s core traits. People are still human, with all the good and the bad that comes along with that.

    I pay for links on Google’s website; that is evil too. CNN is plastered with paid links as well. I guess I am in hell already. :) No, nofollow is not evil, but it is a crappy workaround for an issue the search engines cannot solve today.

    Crap? Yes! Evil? Nope.

  3. I agree and think nofollow is silly. Google etc. need to do a better job of understanding how topical and relevant a link is. Asking a webmaster to use a “link condom” seems like a patchwork solution that is inconsistently applied. If someone contributes a relevant link and the person in control of the site deems it relevant, let it pass.

    Of course people will put in some commercial links, but as you are saying, the better the filtering, the less spam looks like spam.

    Over time, the best sites, the ones that provide the most quality control over links, comments, and the like, should be rewarded and valued. If Google could tell which pages are curated, valuable sources of information and give them priority over auto-generated stuff, I believe that would ultimately give it a leg up.

  4. With regards to:

    ———————————————–
    Also, “learning” email spam filters work extremely efficiently and become almost 100% accurate over time.
    ———————————————–

    I really don’t think email spam filters will ever reach 100%. Bayesian filters won’t catch it when random words from dictionaries are thrown in, nor will they catch things like Markov-generated text.

    In general, they do a decent job, but I’ve seen a large number of false positives from the systems I’ve used, which generally leads me to just not use them at all.

    G-Man

  5. G-Man: nothing is 100% in IT, especially in software. With hardware you can at least add double, triple or more redundancy to get virtually to 100%, but with software there is no chance. This applies to software’s security, reliability and functionality.

    The above statement is true even for well-defined, yes/no (computer-processable) tasks. Email, its content and its purpose, which includes the definition of what is and is not spam, is not well defined. There is no yes/no decision chain that gets you to the answer for every single email.

    This makes it irrelevant whether a user receives an email he considers junk from somebody he gave permission to in the past (whether he remembers that or not) or from somebody he never gave permission to at all.

    A lot of email flagged as spam by automated filters, and by the inbox owner himself, is actually not spam (technically, legally, whatever), but the user does not care about that.

    By the way, this fact pollutes a lot of statistics that measure spam as a percentage of the total amount of email sent over the internet.

    It’s the perception of the reader/owner that matters. A working spam filter makes sure that I get the emails I want and filters out the ones I don’t want or don’t care about.

    Over the past decade-plus I have used several filters, from built-in to commercial ones, and two of them do such a great job that they filter out virtually all the crap I don’t want and none of the stuff I actually do want to get. It does not matter to me whether I specifically opted in to something or not. Using both in conjunction is even better.

    I have over a dozen email accounts and have signed up for tons of things over the years; some of the accounts are over a decade old, and the combined amount of crap I need to delete manually is only a handful of emails per day. The spam folders receive between 500 and 750 emails every day, courtesy of the filters.
    Over 100 emails I care about get through every day.

    Sure, it’s technically not 100%, but who’s counting? I am not a machine; what I “feel” is the only thing that matters to me, and I “feel” that the email filters work pretty much 100%, even if in reality it is only 99.2%.

    P.S. The two filters are the Gmail spam filter and the Cloudmark Desktop filter for Outlook and Outlook Express. Both have a learning curve and get better the longer you use them. Gmail is free, and Cloudmark is only $40 per year and good for 2 computers. Use this link and I will get a month of free service (no cash): http://www.cloudmark.com/?rc=kk4m4
    Click here if you don’t want to give me a free month of service :)
    http://www.cloudmark.com/

    Cheers!

  6. Good-quality one-way links provide good traffic to a website, and through that the site gets enough PageRank, as per my study…!!!!

  7. I agree, nofollow is silly and PR is dead. Most importantly, links should only be placed on relevant sites. This is my policy, and following it, I managed to outrank sites with much higher PR than mine because of the relevancy.

  8. We all say PageRank is dead, but whenever there’s an update we all start to write about it. Just check the last update, where lots of blogs and directories lost some of their PR. I agree that actual PR (not the indication on the toolbar) is not THE factor for getting top rankings in the SERPs, but it still has its value, that’s for sure.

    Concerning the nofollow, the original purpose of the attribute changed and according to Matt it can be used to control the flow of PR within your own site:

    Quote:
    The nofollow attribute is just a mechanism that gives webmasters the ability to modify PageRank flow at link-level granularity.
    http://www.seomoz.org/blog/questions-answers-with-googles-spam-guru

  9. Hi Dave, thanks for your comment. It’s actually a good time to revive the discussion of this one-year-old post of mine.

    “We all say PageRank is dead, but whenever there’s an update we all start to write about it”

    Yep, but not everybody writes the same. Here was my take on the latest mess.

    http://www.searchenginejournal.com/the-oracle-of-mountain-view/5923/

    “Concerning the nofollow, the original purpose of the attribute changed”

    Most people haven’t gotten that memo yet, obviously. I think the default “nofollow” on comment links in WordPress is still meant to indicate that a link is user-generated rather than editorial content, with the goal of reducing blog spam, not of hoarding PR. The same goes for outbound links from Wikipedia.

    “and according to Matt it can be used to control the flow of PR within your own site”

    …and it should be used to control the flow of PR to other sites based on what Google thinks, not what the webmaster thinks (or else).

    http://www.searchenginejournal.com/reign-of-bread-and-whip-the-new-google-aristrocrathy-i/5549/

    “The nofollow attribute is just a mechanism that gives webmasters the ability to modify PageRank flow at link-level granularity.”

    That was not Google’s idea, but a smart SEO trick somebody came up with to use nofollow for something useful.

  10. Hi Carsten,

    You’re right, the purpose of nofollow didn’t change, but it got an extra “function”, so to say. The point is, it was long discussed whether using nofollow within your own site would (or would not) hurt your site/rankings. And the funny part was that Google wasn’t clear about it… was it that hard to say?

    I agree that PR doesn’t play a big role anymore in getting top rankings, but whenever a new page gets some “PR love” after an update, we notice a jump in the SERPs, and with the last update we saw a lot of second listings under the original one. And I think we can all agree that you won’t get a top-3 ranking for a competitive word/phrase with a PR 1-2 page…

    Another small thing, maybe… some people swear that even with a nofollow, Google follows the link and still passes some kind of juice through it. I’d love to hear your thoughts about that.

    dave

  11. Hi Dave,

    “it was long discussed whether using nofollow within your own site would (or would not) hurt your site/rankings.”

    I remember that discussion as well, and also the debate about whether using nofollow extensively for outbound links would hurt your site’s rankings, since having to nofollow many links could be read as a sign of poor editorial control or judgment on your part. Those concerns were absolutely legitimate if you consider the original purpose of nofollow. The high rankings of Wikipedia content show that there is no issue with doing it.

    “And I think we can all agree that you won’t get a top-3 ranking for a competitive word/phrase with a PR 1-2 page…”

    The more competitive the phrase, the more important every factor becomes, not just PR: things like keyword density, use of image alt tags, anchor text of inbound links (internal and external), etc.

    “…some people swear that even with a nofollow, Google follows the link and still passes some kind of juice through it. I’d love to hear your thoughts about that.”

    I have not seen anything conclusive yet that convinces me that nofollow links pass any “juice” or that they are followed by Google. I do believe they cause those weird listings in Google that show only a page’s URL, without title and without description. Those are created when Google knows about the existence of a page but cannot crawl it, even though the page is not blocked by robots.txt. Those entries are the result of nofollow links on an indexed page, or of pages with nofollow in the “robots” META tag (the small sketch at the end of this comment shows both variants).
    Keep in mind that nofollow is treated differently by the different search engines.

    See
    http://en.wikipedia.org/wiki/Nofollow

    That’s my take on that one.
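
    To make the two variants concrete, here is a small Python sketch (standard library only, sample HTML invented) that distinguishes link-level nofollow from the page-level “robots” META tag:

        from html.parser import HTMLParser

        class NofollowAudit(HTMLParser):
            """Collect nofollow'd links and check the page-level robots META tag."""
            def __init__(self):
                super().__init__()
                self.nofollow_links = []
                self.meta_nofollow = False

            def handle_starttag(self, tag, attrs):
                a = dict(attrs)
                if tag == "a" and "href" in a:
                    if "nofollow" in (a.get("rel") or "").lower().split():
                        self.nofollow_links.append(a["href"])
                elif tag == "meta" and (a.get("name") or "").lower() == "robots":
                    self.meta_nofollow = "nofollow" in (a.get("content") or "").lower()

        page = ('<meta name="robots" content="index, nofollow">'
                '<a rel="nofollow" href="http://example.com/">x</a>')
        audit = NofollowAudit()
        audit.feed(page)
        print(audit.meta_nofollow, audit.nofollow_links)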

  12. Should I “dofollow” a link to Google and “nofollow” a link to my clients or sponsors? Google, it is time to wake up before you lose the trust of the webmasters who contributed to what you are today. Has Google ever wondered what would happen if one day more than 60% of sites blocked Googlebot with their robots.txt?

  13. After Google’s PageRank update last week, I have pretty much concluded that PR = N/A. Nada, zero. Twitter’s current PR = 0. Sure, people still obsess over inbound links with high PR, but I really don’t think it matters like it used to. My vote: PageRank is DEAD.

    I have not seen PR make any real difference in the search terms I aim for… I know I am just a pest control guy… but when my PR = 4 and the 3 guys directly in front of me have PR = N/A, what joy is there in having a higher PR?

    -Thomas
