SEO

Anticipating SEO in 2012 – Competitive Advantage

Hi there!  This is an odd situation for me – as this article is being published, I’m attending SMX Advanced in Seattle – and tomorrow morning, I’ll be speaking on the Google Survivor Tips panel.  In my presentation, I’ll be showing people how to weather the Google roller coaster, and how to dramatically increase organic search traffic beyond Google.  So I felt bad that you all couldn’t attend, and would have to wait until afterward for a full article on it.

To compensate, I’m sharing with you some insights that I only briefly touch on in that presentation – insights I believe will help those who act on them gain a competitive advantage.  And you’re getting this advice early enough that by this fall, you’ll be ahead of the curve!

Get Your Competitive Advantage On

When the May Day update happened last year, I saw the writing on the wall, and spent the next few months taking my client site planning in a direction I believed would be the future.  It turned out that future was Panda.  And guess what?  Not one of my clients who adapted according to my recommendations lost organic traffic this spring; in fact, some saw a bump.

That’s the power of the ability to anticipate and adapt.  And Panda showed us that with some changes, you can’t easily adapt AFTER the fact.  Your site could be significantly hammered with no clear path to recovery.

Schema.org – A New Paradigm

While there have already been some great articles breaking it down to basics, as well as making a strong case for why you should care about Schema.org at all, I want to touch on specific aspects of this new structure and how I believe they’re going to help search engines – and, in turn, SEOs who adopt them – do what we all do for a living.  That means those who adopt early will have (at least initially) a competitive advantage over those who don’t.

The Bad News (depending on the color of your hat)

If you are like me, you’ll already see how these will be considered fair game for people wanting to use tactics of the “hat color that shall remain nameless”.  But hey – that’s part of the nature of search rankings already – this is just going to be yet one more sub-arena people will try and game.

The Good News

Even though people from the “hat color that shall remain nameless” camp will creatively look to push the boundaries of fairness and all that entails, I also believe this new system will help the search engines do a better job, believe it or not, of detecting such tactics.  And I’ll touch on that concept at the end of this article.

The Really Annoying News

If you thought it was challenging trying to get people to implement even the simple semantic markup for breadcrumbs, products, events, recipes, and the like, wait until you get a load of how complex Schema.org is.  I mean, we’re talking about dozens of content (data) types, each one potentially having dozens of elements to provide content for.
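
To make that scale concrete, here’s a minimal sketch of microdata for a single Product type – just a handful of the properties Schema.org defines for it, with made-up names and values:

    <div itemscope itemtype="http://schema.org/Product">
      <h1 itemprop="name">Acme Widget</h1>
      <p itemprop="description">A hypothetical widget, for illustration only.</p>
      <!-- "offers" nests a whole second type (Offer), with its own set of properties -->
      <div itemprop="offers" itemscope itemtype="http://schema.org/Offer">
        <span itemprop="price">$19.99</span>
        <link itemprop="availability" href="http://schema.org/InStock" />In stock
      </div>
      <!-- "aggregateRating" nests yet another type (AggregateRating) -->
      <div itemprop="aggregateRating" itemscope itemtype="http://schema.org/AggregateRating">
        <span itemprop="ratingValue">4</span> out of <span itemprop="bestRating">5</span>,
        based on <span itemprop="ratingCount">27</span> ratings
      </div>
    </div>

Multiply that by every type a site uses, and you can see why the implementation burden described below is real.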

And if you’re creating web pages by hand – or even if you’re developing your own sites using a CMS – it’s going to be a bitch and a half to get this stuff implemented properly.  Every major CMS, from WordPress to Joomla to Drupal to Magento and beyond, is going to need to be reworked.  Every dev company that has its own custom CMS is going to have to find the budgetary room to do so as well…

The Semi-Good News

The light at the end of this developer’s-worst-nightmare tunnel comes from a few points.  It’s a “standard” that the big three have embraced.  No more picking and choosing between microformats, RDFa, or some other competing schema that isn’t fully recognized by all three.

Additionally, for the major CMSs, once they’ve updated their core systems to work with this, the majority of the heavy lifting will be a one-time shot on their end.  Except, of course, every time a new schema type or element-set comes along.  And except, of course, for any plug-ins that were developed for any of them prior to this roll-out.  And except, of course, for any major changes the CMS communities want to make in the future, which will then have to factor in schema considerations.

But hey – at least it’s one methodology that everyone (who is wise) will be able to embrace!

Common Elements – Doing The Work For The Search Engines

Many people say things like “you don’t need to submit your site to the search engines – they’ll discover it.”  I’ve never held that view – I’ve instead always taken the perspective that it’s better to help the search engines along.  The more I can do this, the more likely my client sites will be indexed and ranked sooner – and more accurately, in line with my vision of what I want them to be found for.

Many will say to wait on Schema.  Let’s see if this thing really takes off, or whether it’ll even be worthwhile.

I believe that’s also a very big mistake, because the Schema.org microdata structure will do an amazing job of helping search engines.

Core Elements You Should Care About

Here are the common elements, shared across many content types, that I see as critical for SEO moving forward (a markup sketch follows the list).

  • aggregateRating
  • author
  • datePublished
  • genre
  • headline
  • interactionCount
  • keywords
  • offers
  • publisher
  • reviews
  • breadcrumb
  • isPartOf
  • mainContentOfPage
  • primaryImageOfPage
  • significantLinks
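
As a rough illustration of how these land in actual markup, here’s a minimal sketch of an Article carrying several of the properties above (the headline, author, date, and keywords are placeholders, not recommendations):

    <div itemscope itemtype="http://schema.org/Article">
      <h1 itemprop="headline">Anticipating SEO in 2012</h1>
      By <span itemprop="author" itemscope itemtype="http://schema.org/Person">
        <span itemprop="name">Alan Bleiweiss</span></span>
      <meta itemprop="datePublished" content="2011-06-07" />
      <meta itemprop="keywords" content="seo, schema.org, microdata" />
      <div itemprop="articleBody">…the article text itself…</div>
    </div>

Notice that author is itself a nested Person type – that nesting pattern repeats throughout Schema.org.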

Page Segmentation in 2012

And now the most relevant of all (see the sketch after this list):

  • SiteNavigationElement
  • WPAdBlock
  • WPFooter
  • WPHeader
  • WPSideBar
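
Here’s a rough sketch of how those segmentation types might wrap a typical page skeleton (the contents are placeholders; each block gets its own itemscope/itemtype):

    <body itemscope itemtype="http://schema.org/WebPage">
      <div itemscope itemtype="http://schema.org/WPHeader">…site banner, logo, tagline…</div>
      <div itemscope itemtype="http://schema.org/SiteNavigationElement">…main menu links…</div>
      <div itemprop="mainContentOfPage" itemscope itemtype="http://schema.org/WebPageElement">
        …the actual article content…
      </div>
      <div itemscope itemtype="http://schema.org/WPSideBar">…widgets, blogroll…</div>
      <div itemscope itemtype="http://schema.org/WPAdBlock">…ad unit…</div>
      <div itemscope itemtype="http://schema.org/WPFooter">…copyright, footer links…</div>
    </body>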

Why These Are So Relevant

Think about it – the more clearly defined your content, the less guessing search engines have to do when trying to figure out what the topic of the page is, what the relationships are between pages and sections, and who the person or organization originating the content is…

How Spam Detection Will Be Easier

Okay, so there are all sorts of ways to try and abuse this.  The keywords element alone is going to be played.  And yes, scrapers will write scripts to replace author/publisher elements.

Except that with so many aspects of any single page able to be clearly defined, in a standardized structure, I also believe search engines will be able to look at any single element, or any combination of elements, and ask: “Does this type of data/content belong in this place, compared to what normally goes here across the majority of sites in this market?”

And the more sites they pull into their search indexes that use microdata, the more accurate their model will become over time.

Adopt Or Perish

Remember how, early on, I said how challenging it’s going to be to get developers and CMS creators to implement this stuff?  Well, whoever gets it right in the development arena is also going to be ahead of the pack.  So keep your eyes open on that front.  Or start campaigning for your preferred CMS platform’s development team / community to get on board.

And I also believe that everyone who adopts early on will have that much of a lead over whoever ignores it or struggles to adopt.  It’s that significant a change to search at the semantic level, which is at the heart of SEO.

My Plan

The first day this broke on Twitter, I immediately sent out an email to my biggest client’s development team.  I let them know how important this is, and that we need to start planning NOW for adoption over the next few months.  They have their own CMS, so it’s going to involve a lot of work.  Yet I am quite confident it’ll pay off.

Think I’m Right?  Think I’m Overreacting?

I’d love to hear your thoughts on this.  So please – leave a comment and share them here.  Just forgive me if I am not able to respond to them right away, as I’ll be caught up in the whirlwind that happens at most conferences, and especially this one since I’m both a panel speaker and putting on one of my #EpicDinner events!

Alan Bleiweiss is a Forensic SEO audit consultant whose audit client sites consist of upwards of 50 million pages and tens of millions of visitors a month.  A noted industry speaker, author, and blogger, his posts are often as controversial as they are thought-provoking.

18 thoughts on “Anticipating SEO in 2012 – Competitive Advantage”

  1. Alan, to be perfectly honest, the only thing I see on the Schema site is code bloat.  I look at some of the examples, and it just seems they are suggesting we add extra attributes to every tag.  And I am having a hard time seeing the purpose.  Let’s take the “review” schema as an example.  We can add an attribute to identify which element is the product, which is the review, etc.  But how does this help the search engines?  If the page does not already have the word review somewhere on it, then the website owner is already letting the search engines and the searchers down.  And what is the purpose of adding the attribute specifically at the point where each review starts?  The search engine only needs to know that there is a review at this URL, not at what point the review portion of a page starts.  Perhaps I am missing something, but this seems like a massive make-work project.

    1.  David,

      The issue is consistency.  The more consistent the structure of the data, the more readily search engines will be able to identify it.  That reduces the confusion that happens when people use words, labels, and tags that are NOT based on the same intended purpose.  I have seen too many sites where the developers slap random words into ID, Name, and Label fields at the code level – words that have nothing to do with search-based intent, but are instead the coders’ own internal language.

      I do agree that they’ve taken it to the extreme, and it’s not going to be a perfect system.  It is, however, in my opinion, a move in the right direction. 

      1. Hi Alan.  I agree that all those “name” and “ID” things are superfluous, but not because they are inconsistent – rather they are superfluous because they serve no purpose.  The search engines can easily ignore them.  The words on the page tell the search engines what is on the page.  Let’s keep in mind that the focus needs to be on “content”, not on “data”.  Sure, the search engines could categorize each element of a page in all sorts of ways, but the only value to doing so is if they can deliver content that people are searching for.  I might eventually be convinced otherwise, but right now I see all this as just a lot of extra code that doesn’t do anything to respond better to what people are searching for.

      2. I happen to disagree regarding search engines’ ability to ignore them.  And that is one of the very reasons they came out with schema.  Search engines look everywhere, at all levels, to try and figure out what a page is really about.  That’s why Google continues to decipher javascript, CSS, ajax, Flash, and even text within images.  They do a lousy job of it, but they try nevertheless.  With schema, THEN they can ignore all the other code that happens to contain words – but not until then.

          Of course, time will tell with all of this.

  2. I have to agree with David here, Alan… I’m still reading various bloggers on SEO/SM, and many say that such a “scheme” will not be of value – especially those who push/promote the RDF framework, i.e., the W3C folks.  The jury’s still out for my client roster, but I am trying to fully understand the gains vs. the effort involved…

    Jim

    1.  Jim,

      That’s why I’m presenting this as my prediction – anticipating that it will be important.  Just because SEOs gripe about something new, something that changes or turns their world upside down is exactly why I see it as a competitive advantage. 

      Many in our industry will continue to ignore the warning signs, and instead, continue to look for easy magic bullets.  And that will, in my opinion, allow those who adopt to have yet one more competitive advantage. 

  3. Quote: “But how does this help the search engines?”

    Because microdata can be used by search engines to display rich snippets.  Google can currently use microdata (and microformats & RDFa) to display rich snippets for recipes, reviews, and people.

    It’s worth optimizing for this, since rich snippets are likely to have better click-through rates.

    Given that Google, Bing & Yahoo all now support the same format, it’d be a mistake not to optimize for it.

  4. Really?  You understand the big 3 SEs could decide to license this?  And it could be just them trying to take over the semantic web… kinda like M$ did during the browser wars with their funkified DOM, which gave us the infamous IE6?

    Alan, you keep talking about a competitive advantage – please explain where that is, because if I have the same words on the page, I doubt they are going to weight the marked-up stuff heavier than, say, the same text bolded, in a different font, in a bulleted list, or in an anchor link.  Come on, man… this is just the SEs telling people how to build sites for them… not users.

  5. Alan, I think you have hit the nail on the head here.  Anything a site can do to format the data and tell the search engine what type of data it is… is important.  I think this concept will be especially important in the area of NAP (because there is a local business information type).

  6. It is indeed good news that the big search engines are coming together and agreeing on one standard of schema; it was a kind of catch-22 earlier, having to justify one standard and leave the others alone.  It’s more work for designers and developers moving forward, but if site owners want to be on top, they have to live today and plan for tomorrow.  The majority of business happens on the internet because of searches on search engines, so no one can afford to ignore future changes in the SEs.

  7. I always feel like this is an industry full of intelligent and creative people.  That said, scientific principles are usually left by the wayside, and I’m compelled to dig my feet in for some evidence.  There are more than a few ways to accomplish this, and I’m only going to suggest one: pick a few pages on the same site with similar rank, add schema to half, and watch for an effect.

    If schema mark-up plays a significant role, the page will have some new significance in the eyes of the omniscient SE, for better or worse.  IF the additional mark-up is significant, there WILL be a change.

    If I end up running this test in the next few weeks, I’ll return to post some results.

  8. Just seeing if there is any news about this, and any facts/tests.

    I was reviewing it, and I agree with David that it seems like a lot of code bloat for just a few extra tags.  I’ll maybe give it a shot, but it seems like overkill.  We assume the engines have a hard time reading things and determining what a page is about, but if you have lean code with little bloat and mainly well-written content, shouldn’t that be enough not to confuse the engine?  Besides, Google is one of the largest market-cap companies in the world, employing thousands of PhDs; I think their algorithm is smart enough to sort through whatever code we write and figure out what it’s about.

    Either way, it’s definitely something to keep an eye on – just my thoughts at these early stages.