An SEO shared on social media that his testing showed that omitting the meta description resulted in a lift in traffic. Coincidentally, another well-known SEO published an article claiming that SEO tests misunderstand how Google and the internet actually work and lead to the deprioritization of meaningful changes. Who is right?
SEO Says Pages Without Meta Descriptions Received A Traffic Lift
Mark Williams-Cook posted on LinkedIn the results of an SEO test comparing pages with and without meta descriptions, concluding that pages lacking a meta description received an average traffic lift of approximately 3%.
Here’s some of what he wrote:
“This will get some people’s backs up, but we don’t recommend writing meta descriptions anymore, and that’s based on data and testing.
We have consistently found a small, usually around 3%, but statistically significant uplift to organic traffic on groups of pages with no meta descriptions vs test groups of pages with meta descriptions via SEOTesting.
I’ve come to the conclusion if you’re writing meta descriptions manually, you’re wasting time. If you’re using AI to do it, you’re probably wasting a small amount of time.”
Williams-Cook asserted that Google rewrites around 80% of meta descriptions and insisted that the best meta descriptions are query-dependent, meaning that the ideal meta description is one that's custom-written for the specific queries the page is ranking for, which is what Google does when the meta description is missing.
He expressed the opinion that omitting the meta description increases the likelihood that Google will step in and inject a query-relevant meta description into the search results, one that will "outperform" a normal meta description optimized for whatever the page is about.
Although I have reservations about SEO tests in general, his suggestion is intriguing and has the ring of plausibility.
Are SEO Tests Performative Theater?
Coincidentally, Jono Alderson, a technical SEO consultant, published an article last week titled "Stop testing. Start shipping." in which he discusses his view of SEO tests, calling the practice "performative theater."
Alderson writes:
“The idea of SEO testing appeals because it feels scientific. Controlled. Safe…
You tweak one thing, you measure the outcome, you learn, you scale. It works for paid media, so why not here?
Because SEO isn’t a closed system. …It’s architecture, semantics, signals, and systems. And trying to test it like you would test a paid campaign misunderstands how the web – and Google – actually work.
Your site doesn’t exist in a vacuum. Search results are volatile. …Even the weather can influence click-through rates.
Trying to isolate the impact of a single change in that chaos isn’t scientific. It’s theatre.
…A/B testing, as it’s traditionally understood, doesn’t even cleanly work in SEO.
…most SEO A/B testing isn’t remotely scientific. It’s just a best-effort simulation, riddled with assumptions and susceptible to confounding variables. Even the cleanest tests can only hint at causality – and only in narrowly defined environments.”
Jono makes a valid point about the unreliability of tests where the inputs and the outputs are not fully controlled.
Statistical tests are generally done within a closed system where all the data being compared follow the same rules and patterns. But if you compare multiple sets of pages, where some pages target long-tail phrases and others target high-volume queries, then the groups will differ in their potential outcomes. External factors, such as daily traffic fluctuations and how users click on the search results, aren't controllable. As Jono suggested, even the weather can influence click-through rates.
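To make that point concrete, here's a minimal, hypothetical sketch in Python. This is not Williams-Cook's actual methodology, and it has nothing to do with the SEOTesting tool; the click counts, group sizes, and baseline difference are all invented for illustration. It shows how a "statistically significant" ~3% lift can emerge purely from how the two groups of pages were composed:

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical daily organic clicks for two groups of pages.
# By construction, the "no meta description" group happens to
# contain slightly higher-traffic pages -- a confounder baked
# into the group composition, not an effect of the change itself.
control = rng.normal(loc=100.0, scale=10.0, size=500)  # with meta descriptions
variant = rng.normal(loc=103.0, scale=10.0, size=500)  # without meta descriptions

observed_lift = variant.mean() / control.mean() - 1.0

# Permutation test: how often does a random reassignment of pages
# to the two groups produce a lift at least as large as observed?
pooled = np.concatenate([control, variant])
n = len(control)
trials = 10_000
count = 0
for _ in range(trials):
    rng.shuffle(pooled)
    lift = pooled[n:].mean() / pooled[:n].mean() - 1.0
    if lift >= observed_lift:
        count += 1

p_value = count / trials
print(f"observed lift: {observed_lift:.1%}, p-value: {p_value:.4f}")
# Prints a ~3% lift with a tiny p-value, i.e. "statistically
# significant" -- yet the lift was caused by the group mix,
# not by removing meta descriptions.
```

The test duly reports a significant uplift, but by construction the cause was which pages landed in which group, not the missing meta descriptions. Significance alone can't distinguish the treatment from a confounder, which is exactly the problem Jono describes.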
Although Williams-Cook asserted that he had a control group for testing purposes, it's extremely difficult to isolate a single variable on live websites because of uncontrollable external factors, as Jono points out.
So, even though Williams-Cook asserts that the 3% change he noted is consistent and statistically significant, the unobservable factors within Google's black-box algorithm that determine the outcome make it difficult to treat that result as a reliable causal finding in the way one could with a truly controlled and observable statistical test.
If it's not possible to isolate a single change, then it's very difficult to make reliable claims from the results of an SEO test.
Focus On Meaningful SEO Improvements
Jono's article calls out the shortcomings of SEO tests, but the larger point of his essay is that what can be tested and measured tends to get prioritized over the "meaningful" changes that should be made but aren't, because they can't be measured. He argues that it's important to focus on the things that matter in today's search environment, which are related to content and a better user experience.
And that's where we circle back to Williams-Cook, because even if A/B SEO tests are "theatre," as Jono suggests, it doesn't mean that Williams-Cook's suggestion is wrong. He may actually be correct that it's better to omit meta descriptions and let Google write them.
SEO is subjective, which means what's good for one site might not be a priority for another. So the question remains: is removing all meta descriptions a meaningful change?