Most SEO forecasts are useless
This week’s newsletter is sponsored by the Digital PR agency Search Intelligence. See their case study linked at the end of the newsletter.
Are you interested in taking a cohort-based course for product managers to learn about SEO that I am building with Kevin Indig? Please share your insights in this survey to help inform what we should teach.
Warning: This will be a long post because I want to do justice to the topic. My follow-up post will share how I build better forecasts with TAM.
The ability to forecast a marketing investment is a critical business need on many levels. While it is easy to spend money on what seem to be good ideas, knowing what will come from them is crucial for decision-makers to prioritize and approve them.
SEO forecasts have been the Achilles heel of any investment request for most of my career. Most SEO managers and consultants can wax eloquent about the requirements for any SEO task, but when asked to present and commit to a hard forecast, they clam up.
There is good reason for this; nearly every forecast I have ever seen is absolute BS.
I have never, in my 17 years of putting together SEO forecasts, seen an SEO effort's impact even remotely resemble the forecast. Done right, SEO returns surpass the forecast by many multiples; done wrong, they miss by a considerable margin. No matter what, there will always be variables in SEO results that were never predicted.
To compare this to a weather forecast, it’s like a meteorologist saying there’s a 10% chance of showers tomorrow, and then a Category 5 hurricane shows up. While that forecast might have been technically accurate because there were showers, it will not keep that meteorologist from being fired.
Weak forecasts kill investments
With this kind of ambiguity and fuzzy data, it is impossible to make proper decisions, and SEO will ALWAYS lose budget to the more predictable and reliable channels even when those are less effective and more expensive.
SEO is hard to forecast because most SEO forecasts begin with a single signal - an assumed monthly search volume - and then layer more assumptions on top of it. If any of these assumptions are inaccurate, and most are, as I will explain below, the forecast is no longer a forecast but a math equation that does not resemble reality.
To elaborate on these assumptions, let’s break down the primary components of a typical SEO forecast.
Forecast methodology
Suppose someone wants to estimate the search traffic potential for a piece of content about beach homes in Florida. First, they would go to a keyword research tool and grab the monthly search volume for the keywords they believe the content could rank for on Google.
This is the primary piece of data, and when it’s wrong, the whole equation is instantly wrong.
While keyword research tools are valuable for overall SEO research, they are only good for high-level directional data. When it comes to granular, keyword-specific data, it would be impossible to be accurate because of the many variables that go into the queries people search.
Seasonal variables
While the tools try to account for seasonality, they would have to be psychic to nail future user trends. Going back to the Florida beach houses example, a tool can certainly account for higher search volume in winter than in summer, but only in a general way. No tool can know whether Florida will have an unseasonably cold winter that suppresses search volume or whether a TikTok trend will make the beaches in Alabama more popular. The tools report a keyword’s past performance, but they are not a reliable predictor of future results.
These details matter because they could be the difference between 5,000 monthly searches for that keyword and 2,000. Since this number is the foundation for the entire forecast, its margin of error must be minimal, which is impossible.
Brand search as a datapoint
As one data point, I have been fortunate to consult for many brands that have substantial brand searches on Google. I have never seen any keyword tool come even close to reporting the actual impressions I saw in Google Search Console for any of the individual brand keywords.
Even before estimating the percentage of traffic that might come from this keyword, most SEO forecasts assume that the monthly search volume from the SEO tools doesn’t account for all of the variations of that keyword that might also trigger a relevant search result. To account for this, the forecast will typically gross this number up by an arbitrary multiplier. I have seen people use 2x, 5x, and even 20x.
This practice alone will destroy the forecast, because an arbitrary multiplier applied uniformly to all keywords is, of course, not mathematically sound. Some keywords represent only a small set of variations, while others represent many.
Estimating traffic share with CTR
Continuing with the grossed-up monthly search number, the next piece of data is an estimation of the percentage of traffic that a site might get from those monthly searches.
For example, if monthly searches for all keyword variations are assumed to be 50,000, the forecast would have to estimate how many of those searches would click through to the target website. Years ago, there was a concept of a “click curve” where, based on the ranking position of a result, one could estimate how many clicks the site might receive.
Those click curves are no longer valid; if you don’t believe me, look at your own Google Search Console data. You will see some first positions with an 80% CTR (clickthrough rate), while others might have 15%. The same goes for lower positions: some position 9s might have a 0.01% CTR, while others will have a 5% CTR.
Here’s an example from my site that shows the wide gaps between CTR’s for the same position.
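To see how much that spread matters, here is a quick illustrative calculation. The search volume and CTR figures are hypothetical, chosen only to match the ranges mentioned above:

```python
# Illustrative only: how the position-9 CTR spread described above
# swings a forecast built on the same "average position" assumption.
monthly_searches = 50_000  # the grossed-up monthly search estimate

low_ctr = 0.0001   # 0.01% -- the low end observed for position 9
high_ctr = 0.05    # 5% -- the high end observed for position 9

low_forecast = monthly_searches * low_ctr    # about 5 visits/month
high_forecast = monthly_searches * high_ctr  # about 2,500 visits/month

print(low_forecast, high_forecast)
```

Same keyword set, same "position 9" assumption, and the two forecasts differ by a factor of 500.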
Average positions are a guess
Regardless, this is where things get REALLY fuzzy with an SEO forecast. Taking the by-now mostly guessed number of total monthly searches for a query set, an average position is then targeted, and its assumed CTR applied as a factor. If the model wasn’t already broken at this point, this is where it diverges from anything tied to reality and simply becomes a dream.
As anyone who has ever tried to reverse engineer specific ranking positions on Google knows, it’s impossible to determine why one particular URL “ranks” in a specific slot while others rank higher or lower. There may be some correlative data around the number and quality of links or usage of text, but it will be, at best, an educated guess. If you tried to use these exact same metrics on a different URL, there’s no guarantee that the same results would reoccur.
Essentially, there’s no way to predict or force a specific ranking position, so the position used in the forecast is really just a hope or dream and is not grounded in fact. Even if the position were to somehow be accurate, the assumed clickthrough rate, which is the more important part of the forecast, is impossible to guess. (If a query already has traffic, there will likely be a stable CTR that can be relied upon, but this doesn’t work without existing data.)
Putting this all together, the typical SEO forecast begins with:
A monthly search volume number for a single keyword, based only on that keyword’s past performance
The keyword’s monthly search volume grossed up by an arbitrary multiplier
The total searches multiplied by a clickthrough rate tied to a hoped-for ranking position
This total will give some random traffic number that potentially has better odds of being the next Powerball number than it does of being the precise traffic volume expected from SEO efforts.
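Put in code, the typical forecast described above is nothing but multiplication, which is why the error in every guess compounds. Here is a minimal sketch; all of the input numbers are hypothetical:

```python
# A sketch of the typical (flawed) SEO forecast pipeline described above.
# Because the model is pure multiplication, the error in each guessed
# input compounds into the final number.

def naive_seo_forecast(monthly_volume: float,
                       variation_multiplier: float,
                       assumed_ctr: float) -> float:
    """Monthly search volume x arbitrary gross-up x assumed CTR."""
    return monthly_volume * variation_multiplier * assumed_ctr

# The forecast as usually built (hypothetical inputs):
# 5,000 searches, a 5x variation gross-up, 15% CTR at the hoped-for position.
forecast = naive_seo_forecast(5_000, 5, 0.15)   # 3,750 visits/month

# Now suppose each input was off by a plausible amount: real volume of
# 2,000 (seasonality), real gross-up of 2x, real CTR of 3%.
reality = naive_seo_forecast(2_000, 2, 0.03)    # 120 visits/month

print(forecast / reality)  # the forecast overshoots reality ~31x
```

Three individually defensible-looking guesses, each off by a modest amount, combine into a forecast that misses by more than an order of magnitude.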
To restate my original point, all SEO forecasts are BS if they are based on keyword data. The fact that SEO is complicated to forecast is a huge component of why SEO teams get the budget scraps, and solving this would help companies prioritize SEO investments. In my next newsletter, I will give an alternative method to build a better SEO forecast tied to reality; please subscribe to get this in your inbox.
Thanks as always for reading!
[Sponsored]
We got top-tier links for our HR client by combining datasets.
Campaign name: Happiest Employees in America.
Goal: Earn white-hat links with Digital PR.
Result:
• Entrepreneur: DR 91
• Benzinga: DR 88
• PC Mag: DR 92
• ... & lots more
This is how we've done it: READ MORE