A relatively new type of tool analyzes search engine results pages (SERPs) and provides recommendations based on statistical analysis of similarities shared by the top ranked sites. But some in the search community have doubts about the usefulness of this kind of tool.
SERP Correlation Analysis and Lack of Causation
This kind of analysis is called Search Engine Results Page (SERP) correlation analysis: research that studies Google's search results to identify factors shared by the ranked web pages.
The SEO community has found startling correlations in the past by studying search results.
One analysis discovered that top ranked sites tended to have Facebook pages with a lot of likes.
Of course, those top ranked sites were not top ranking because of the Facebook likes.
Just because the top ranked sites share certain features does not mean that those features caused them to rank better.
And that gap between the factors the sites have in common and the actual reasons those sites are top ranked is a real problem.
Just because web pages ranked in the search results share a similar word count, keyword density, or set of keywords does not mean that those word counts, keyword densities, and keywords are causing those pages to rank.
SERPs Are No Longer Ten Blue Links
Another problem with analyzing the top ten search results is that the SERP is no longer a simple list of ten ranked web pages, the proverbial ten blue links.
Bill Slawski (@bill_slawski) of GoFishDigital expressed little confidence in search results correlation analysis.
“The data in correlation studies may be cleaned so that One Boxes and Featured Snippets don’t appear within them, but it’s been a long time since we lived in a world of ten blue links.”
I asked an AI-based content optimization company (@MarketMuseCo) about SERP Analysis tools.
“Content optimization tools that scrape SERPs and use term frequency calculations to tell you what to write about are misleading at best.
Most of these tools will scrape content from the top 10-30 search results, extract common terms, and rate their relevance using Google AdWords Keyword Planner from Google’s public API.
Adding words to your content from these types of tools will never lead to comprehensive, expertly written content that, over time, becomes a competitive advantage for your business.”
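As a rough sketch of the term-frequency approach the quote describes, and not any vendor's actual implementation, such a tool pools the text of top-ranked pages, counts shared terms, and surfaces the most frequent ones. The sample strings below stand in for scraped page content:

```python
import re
from collections import Counter

# Sample strings standing in for scraped top-ranked page content.
top_ranked_pages = [
    "best running shoes for trail running and road running",
    "trail running shoes buying guide for beginners",
    "how to choose running shoes for marathon training",
]

# A tiny stopword list; real tools use much larger ones.
STOPWORDS = {"for", "and", "to", "how", "the", "a", "of"}

def common_terms(pages, top_n=5):
    """Count terms shared across pages and return the most frequent."""
    counts = Counter()
    for page in pages:
        words = re.findall(r"[a-z]+", page.lower())
        counts.update(w for w in words if w not in STOPWORDS)
    return counts.most_common(top_n)

print(common_terms(top_ranked_pages))
# "running" tops the list because every sample page repeats it.
```

The output is simply the vocabulary the top pages happen to share. Nothing in the calculation distinguishes terms that cause rankings from terms that merely co-occur with them, which is the critics' point.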
SERP Analysis Tools and LSI Keywords
Some of these SERP Analysis tools promote outdated concepts like LSI Keywords as being important for ranking in Google.
This is a concept that is well known to have little relevance for ranking in Google’s search results.
There's no such thing as LSI keywords — anyone who's telling you otherwise is mistaken, sorry.
— 🍌 John 🍌 (@JohnMu) July 30, 2019
User Reviews of SERP Analysis Tools
One user offered an opinion of these tools based on hands-on experience:
“Using those tools do not promote reader satisfaction, which I think is the core of on-page SEO. It more promotes a copycat style of content which mimics 1,000 other sites within your niche.”
Jeff Ferguson (@CountXero), a marketer with over 20 years of experience, Partner/Head of Production, Amplitude Digital (@AmplitudeAgency) and Adjunct Professor, UCLA offered his opinion based on his own experience with these kinds of tools.
“I’ve played with a few of these before, and I can see the appeal; however, all too often, their reasoning for doing certain things is based on SEO myths, outdated info, or just flat out made up.
Most of them are great at doing a word count of the content for a given keyword, but word count isn’t a ranking factor. Others are pushing things like “LSI Keywords,” which don’t actually exist in the Google universe.”
More Data Does Not Give You Better Results
Some of these tools analyze more than the top ten search results; they may analyze the top 30 or more.
But more data does not automatically translate into better analysis. The idea that it does is a common misconception.
“More data is not better if much of that data is irrelevant to what you are trying to predict. Even machine learning models can be misled if the training set is not representative of reality.
…Is inclusion of certain data relevant to the problem that we are trying to solve? …it should not be assumed that blindly introducing more data into a model will improve its accuracy.”
Wikipedia has an entry about accuracy and precision, where accuracy is how close an experiment or analysis comes to the truth, and precision is how reproducible the results are, regardless of whether they are accurate.
“For example, if an experiment contains a systematic error, then increasing the sample size generally increases precision but does not improve accuracy. The result would be a consistent yet inaccurate string of results from the flawed experiment.”
Accuracy is a problem for SERP analysis because the typical analysis does not account for all the variables responsible for why a web page ranks in the search results.
The reason they don’t account for all the variables is because nobody outside of Google knows what those variables are.
Analyzing Search Results Yields Flawed Results
Analyzing the search results has consistently yielded questionable conclusions. One can study the results and tease out something like a possible search intent.
But to claim to identify factors that are responsible for why a site is ranking is questionable.
I mentioned to Bill Slawski that I was writing about SERP Analysis Tools and he quipped:
“I laughed my head off after reading the —– website. Word count has never been a ranking signal at Google. Neither has keyword density.”
Everyone has their opinion about this kind of tool. Some people may find value in it.
It’s up to you to research and determine if this kind of tool is useful for you.