SEO without a hypothesis is just a to-do list
Stop doing SEO based on guesswork
This week’s newsletter is sponsored by North Star Inbound and Airops
Paid subscribers can download my THRICE prioritization template here.
Finchling helps PR teams find story opportunities before their competitors do by scanning the news, filtering the noise, and surfacing the stories worth reacting to, along with why they matter and angles to pitch. Try it free for 30 days.
Upcoming events: I will be on a webinar with Duda today, one with Surfer next week on April 28, and I'm attending Google I/O on May 19-20. Please join me at any or all of these!
In many organizations, the SEO roadmap is a list of tasks dressed up as a strategy. The to-do list is based on guesswork, with ideas sourced from a tweet, a LinkedIn post, an agency audit, or past experience implementing a best practice. Someone decides the site needs more internal links, or the title tags need rewriting, or a new content cluster would improve SEO. There is no “do this work because X will happen.”
[Sponsored by Airops]
Every marketing conference right now talks about AI search. Very few of them show you what’s actually working.
AirOps is doing something different with NEXT. One day. One stage. NYC on May 13th.
The agenda is built around what’s actually working. New research on how AI is reshaping discovery. Playbooks from teams like Ramp, Later, HelloFresh, and Anthropic who are already running AEO at scale. Live product demos, not slides about a product launch. And hands-on building sessions with the AirOps team.
I’ve been watching this space closely for the last year, and the gap between the teams who are building for AI search and the teams who are still talking about it is widening fast. The people in that room are the ones building.
Kevin Indig is speaking. The room is CMOs, VPs, and growth leaders who are already doing this work.
Space is limited and registration is application-based.
What happens after the work is implemented is even worse. Sometimes teams monitor Google Search Console for a few weeks and either declare victory or blame an algorithm update for obscuring the change. But, from my experience as a consultant over the last seven years, most companies never check back on progress at all. They just do the work and move on.
Worse than the lack of follow-up, there was no logical basis for the task before it was suggested. Nobody stopped to ask the most important question before any of it started: what, specifically, do I expect to happen, and why?
Add a hypothesis to your roadmap
Whenever I see one of these roadmaps built as a to-do list (check out my PRD template for a better structure), I immediately add a column for “hypothesis.” A hypothesis is not complicated and does not need to be statistically rigorous. It is just a sentence that says “if I do X, I expect Y to happen.” You can add a third part explaining why you expect it, but that isn’t strictly necessary.
The reason this matters is not philosophical, nor is it busy work. Without a stated hypothesis before you start, you cannot determine if the work is even worth doing. More than that, without a hypothesis, you close the door to learning afterward because you don’t know what you are supposed to learn. A company I worked with wanted to create an expensive process to update their XML sitemap daily because it seemed like a best practice for a site of their size.
They were already shifting their prioritization efforts toward THRICE, so naturally, they needed a hypothesis to support this recommendation. The hypothesis they formulated was that it would improve crawl depth by a certain percentage. Without that hypothesis forcing the question of what they were actually trying to prove, they would have built the daily updater, shipped it, and kept maintaining a tool with no measurable impact indefinitely, out of fear of unknown downsides.
Since this was an expensive engineering effort, I talked them down from building a daily updater and convinced them to build a single, comprehensive, up-to-date sitemap instead, because this would prove or disprove the hypothesis with far less investment.
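For scale, a single comprehensive sitemap is a small amount of code compared to a daily-updating pipeline. Here is a minimal sketch in Python using only the standard library; the URLs and output filename are placeholders for illustration, not the team's actual implementation:

```python
# Build one comprehensive sitemap.xml from a full list of URLs.
# The URL list and file path here are hypothetical placeholders.
from xml.etree import ElementTree as ET

def build_sitemap(urls, path="sitemap.xml"):
    ns = "http://www.sitemaps.org/schemas/sitemap/0.9"
    urlset = ET.Element("urlset", xmlns=ns)
    for url in urls:
        entry = ET.SubElement(urlset, "url")
        ET.SubElement(entry, "loc").text = url
    ET.ElementTree(urlset).write(path, encoding="utf-8", xml_declaration=True)

build_sitemap([
    "https://example.com/",
    "https://example.com/products/widget",
])
```

Regenerating this file on demand is usually enough to test a crawl-depth hypothesis before anyone commits to building and maintaining automation around it.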
After some internal pushback from team members who were attached to their technology solution, they agreed to the test. It was obviously an easier way to prove the hypothesis, and if they were right, they’d have greater justification to build the daily updater later.
Most results will be inconclusive
The result was inconclusive, with no meaningful change to crawl depth in the following month. Crawling did improve six months later, but it was so far removed from the work that connecting the change was impossible. Having seen that the test was inconclusive, the team moved on to work that drove meaningful revenue.
This is why a hypothesis is necessary. It helps prioritize work and builds a knowledge base of what actually works on your specific site, rather than doing something just because it’s a best practice. It forces you to have a reason. Not “I think this will help” but a logical explanation for why the change should produce the result. If you can’t write that sentence, you don’t understand your own strategy well enough to execute it.
The SEO and AEO industry is swimming in false confidence. Practitioners point to ranking bumps and traffic increases that were coming regardless of what they did. This is also why so much SEO advice misses the mark when applied to sites other than the one it was originally observed on.
For example, many social media case studies (I am deliberately not linking to them) show that adding FAQ schema increased LLM traffic by some huge percentage. Another company reads one, adds FAQ schema to its site, and nothing happens. Both teams walk away with the wrong conclusion. The first team thinks FAQ schema is a reliable driver of traffic, while the second thinks it doesn’t work.
Neither team understands what they actually learned because neither team started with a hypothesis about why the FAQ schema would help their specific site answer their specific queries better than the current format.
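For context, FAQ schema is just a JSON-LD block embedded in the page. A minimal sketch of generating one follows the schema.org FAQPage format; the question and answer text are placeholders, not a recommendation for any specific site:

```python
import json

def faq_jsonld(pairs):
    """Build a schema.org FAQPage JSON-LD script tag from (question, answer) pairs."""
    data = {
        "@context": "https://schema.org",
        "@type": "FAQPage",
        "mainEntity": [
            {
                "@type": "Question",
                "name": q,
                "acceptedAnswer": {"@type": "Answer", "text": a},
            }
            for q, a in pairs
        ],
    }
    return '<script type="application/ld+json">' + json.dumps(data) + "</script>"

# Placeholder content for illustration only
snippet = faq_jsonld([("Does the widget ship internationally?", "Yes, to 40 countries.")])
```

The markup itself is trivial; the hypothesis about why it should help a specific set of pages is the part that takes real thought.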
As a bonus, successful efforts in which you proved or disproved a hypothesis are the strongest resume builders for in-house employees or case studies for agencies. You have a solid example of work that you can recommend that doesn’t come from something you read; it is your own experience.
Here’s how a proper hypothesis sounds
For an AEO project: “Product pages lack an FAQ section and are therefore underperforming relative to competitors. If I add an FAQ section, I expect Google traffic to those pages to increase within 45 days.” That hypothesis is SMART: it has a timeframe, a specific mechanism, and a specific set of pages rather than the entire site. You can run that test, wait out the timeframe, and actually learn something from whatever happens. This is a real test I have run many times over the last two years; the results have been mixed, but it is still worth trying.
The best example of a clean hypothesis test I have run was adjusting title tags to attract more clicks. The test set URLs saw a 100% increase in CTR while the control stayed flat. Could there have been noise in the data? Maybe, but it was solid enough that we rolled it out across the site and saw a similar bump everywhere.
On the negative side, removing canonicals from the test set caused impressions to drop immediately. Both outcomes were obvious enough that the science barely mattered. That kind of clarity only happens when you define what you are measuring before you touch anything. Scientific testing in SEO is extremely difficult because many variables are unknown, so most results will be anecdotal.
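A before/after comparison like the title tag test reduces to simple arithmetic on clicks and impressions, e.g. from a Google Search Console export. The numbers below are illustrative only, not the actual test data:

```python
# Compare CTR lift between a test group and a control group.
# All figures are hypothetical examples, not real GSC data.
def ctr(clicks, impressions):
    return clicks / impressions if impressions else 0.0

def ctr_lift(before, after):
    """Relative CTR change between two (clicks, impressions) tuples."""
    b, a = ctr(*before), ctr(*after)
    return (a - b) / b if b else 0.0

test_lift = ctr_lift(before=(120, 10_000), after=(240, 10_000))     # CTR doubled
control_lift = ctr_lift(before=(300, 25_000), after=(301, 25_100))  # essentially flat
```

If the test group moves and the control doesn't, you have a result worth acting on; if both move together, you are probably looking at seasonality or an algorithm shift rather than your change.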
The reality is that, in most cases, SEO tests will be inconclusive, so doing something purely for SEO is hard to justify if it requires considerable engineering effort. In those cases, I would prioritize UX or other KPIs over work that will be unproven either way. If it performs better for users, then do it, even if there’s no SEO upside.
Hypothesis writing leads to compounding learning
Writing a hypothesis is quick and delivers outsized value because it builds a knowledge repository. This compounding is where you get the real value because every confirmed hypothesis teaches you something about how your site, in your specific category, with your distinct competitive set, behaves in search. That knowledge doesn’t come from a blog post or a conference talk.
A narrow, falsifiable hypothesis produces something useful regardless of outcome. If it’s confirmed, you’ve identified a repeatable pattern on your site. If it’s rejected, you’ve ruled something out, and ruling things out is underrated. When it’s inconclusive, you have learned something too.
Your biggest surprise will be how often you sit down to write the hypothesis and find you can't. That inability is more diagnostic than any post-project review. It means you don't actually have a reason for the work, and the right call is not to proceed on instinct.
When a team commits to not recommending a change without a hypothesis, the conversation shifts from tasks to value. That is not a small thing when the SEO team's contribution is already being questioned in this AI-obsessed world. Don’t neglect the hypothesis.
[Sponsored by North Star Inbound]
What happens when you combine best-in-class SEO with conversion optimization?
BigRentz’s traffic increased by 186% (85k visits) and added 1,950 conversions in 12 months.
Self Financial’s traffic increased by 50k visits/month, and the company added 685 new customers.
Lastly, Secure Data added 1,968 phone calls.
North Star Inbound’s SEO strategies earn leads, conversions, and revenue.
Book a call for a free content audit and 10% off any engagement.
Intrigued? [Learn more here]



