Agencies need an SEO forecasting tool to translate their clients’ goals into potential business results, but choosing the right tool poses difficulties because they all forecast differently, using various algorithms.
This study compares SEOmonitor with 6 other tools and platforms that approach SEO forecasting within a similar framework: taking potential SEO performance (ranking targets) into account to forecast potential traffic growth (and business results).
That way, SEO agencies can understand SEOmonitor's capabilities against its competitors in the market and can make an informed decision on the right forecasting tool for their business.
The study proposes a decision framework that agencies can use to evaluate existing forecasting tools, maps these tools along the proposed dimensions of performance, and outlines the main takeaways from this comparison. It further compares the tools on relevant additional features and on pricing. All charts and scoring systems are available to examine.
Clients will always want results.
Oftentimes, as an agency, you interact with clients who are reluctant to invest in SEO because seeing results takes a long time. Plus, they might find it hard to correlate SEO with gains in traffic and revenue.
Showing your prospective client what kind of growth they can expect to achieve in a certain period could make them more open, and help them understand why and how an investment in SEO would be worthwhile for their overall business goals.
This is where forecasting comes into play.
It takes your goal of improving a website's rankings on specific keywords within a set timeframe and translates it into palpable business results, modeling the potential organic traffic improvements over time if you reach that SEO goal. For the client, this further translates into concrete business metrics: sessions, conversions, revenue.
Having a reliable and robust forecasting tool to work with can ultimately make you stand out in the crowd and prove your proposal’s value.
“All models are wrong, but some are useful”, as the British statistician George Box put it in the ‘70s. We conducted this market research because we wanted to analyze and compare the current SEO forecasting tools out there. Gradually, a larger gap in the industry emerged, and the opportunity to address it: the need for a decision framework that can inform how SEO agencies choose a forecasting tool. The process by which you compare a set of tools can be just as relevant, or more, than the actual comparison.
We initially researched 9 tools that did SEO forecasting alongside SEOmonitor: Authoritas (the same tool as Linkdex), SEO Arcade, Future Thought, SE Ranking, Dragon Metrics, Ahrefs, BrightEdge, SEOclarity and Prophet. These were either SEO platforms with a forecasting solution, or tools specialized in SEO forecasting.
It became obvious quite soon that there are 2 main ways in which they approach forecasting.
1. Extrapolating past performance
The extrapolation method uses past website performance as input and projects it into the future, assuming performance will stay the same and acknowledging no other forces that influence organic traffic. It does not start with setting a ranking goal based on what the client wants to achieve.
This approach has its use cases and works when nothing changes: for example, when you're planning a paid media campaign or strategy based on last year's experience.
2. Setting a desired ranking target
This forecasting framework starts from a desired future performance (the goal) and calculates potential business results if the targeted ranks are achieved, while taking into account other keyword variables that will influence the formula (CTR, search volume, seasonality, device segmentation), based on user behavior statistical data mainly provided by Google products. Because it relies on current and future rankings, this is usually a feature of, or connected to, a rank tracker.
This type of forecasting involves planning, according to an established goal. Also, when you want to effect change, it’s essential to consider not only keyword rankings but also seasonality, year-over-year search volume trends, CTRs and other variables that will impact the precision of the modeled potential future.
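The goal-based framework described above can be sketched in a few lines. This is a minimal, hypothetical illustration: the CTR-by-rank curve and the keyword data are assumed values for demonstration, not figures taken from SEOmonitor or any other tool, and real algorithms would also factor in seasonality, year-over-year trends, and device segmentation.

```python
# Hypothetical sketch of goal-based SEO forecasting:
# estimated visits = CTR(rank) * search volume.
# All numbers below are illustrative assumptions.

# Assumed average CTR by organic position (not from any specific study).
CTR_BY_RANK = {1: 0.32, 2: 0.15, 3: 0.10, 4: 0.07, 5: 0.05,
               6: 0.04, 7: 0.03, 8: 0.025, 9: 0.02, 10: 0.018}

def estimated_visits(rank, monthly_search_volume):
    """Expected monthly organic visits for a keyword at a given rank."""
    return CTR_BY_RANK.get(rank, 0.01) * monthly_search_volume

def forecast_additional_traffic(keywords):
    """Sum the monthly traffic gained by moving each keyword from its
    current rank to its targeted rank."""
    total = 0.0
    for kw in keywords:
        current = estimated_visits(kw["current_rank"], kw["volume"])
        target = estimated_visits(kw["target_rank"], kw["volume"])
        total += target - current
    return total

keywords = [
    {"keyword": "running shoes", "volume": 12000,
     "current_rank": 8, "target_rank": 3},
    {"keyword": "trail shoes", "volume": 4000,
     "current_rank": 5, "target_rank": 2},
]
print(round(forecast_additional_traffic(keywords)))  # prints 1300
```

The estimated additional monthly sessions can then be multiplied by a conversion rate and an average order value to model conversions and revenue, which is how a ranking goal becomes a business projection.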
Because setting a goal and taking other keyword variables into account are not fundamental to how they forecast, we classified Future Thought, Prophet, and Ahrefs as extrapolation tools and didn't analyze them further.
We continued the comparison between SEOmonitor and the remaining 6 — Authoritas/Linkdex, SEO Arcade, SE Ranking, Dragon Metrics, BrightEdge, SEOclarity — as they fit the industry need we're also addressing: defining a realistic forecast scenario and explaining its potential ROI.
All of them come with different variables and methodologies. The more complex they are, the more elaborate navigating them can become. We need a set of critical dimensions of performance to make sense of them through a common-ground lens.
In the following section, we list and explain these dimensions of performance. Then, we show how we scored each tool based on these dimensions, and how we further mapped the tools based on these scores.
Let’s dive into them.
To choose a tool, the essential condition is to trust it. When you trust a tool, you rely on the quality of the data it uses, its process, and the output it delivers. But what would reliability look like in the field of SEO forecasting tools?
To turn this notion from abstract to specific, we broke it down into two key dimensions of performance that can ultimately help you decide which tool to trust.
One measures the Precision of the tools, while the other measures their Transparency.
An SEO forecast is as solid as the data it uses. The variables the algorithm takes into account to estimate potential additional traffic based on your targeted SEO performance, together with the combination in which they are used, build the first key dimension: Precision.
This is what goes into the forecast – the input.
Keywords are indispensable, along with their current position, search volumes and corresponding targeted ranks; each tool has these as input, so there’s no point in using them as criteria for the comparison.
What sets the forecasting tools apart from each other are the keyword attributes and variables that further enter the algorithm. The fewer the tool uses, and the less calibrated they are, the less precise the forecast is.
These Precision variables are:
For our evaluation, Precision is thus a percentage score made of the variables above, segmented into two main categories:
The Precision of the CTR and the Precision of the Search Volume can each earn a maximum of 50%, according to the level of precision and granularity they add to the forecast.
Following the formula for organic traffic — CTR (rank) * search volume — and the varying degrees of impact these variables can have on the estimated results, as explained above, we’ve scored them as follows:
1. The Precision of the CTR (50%) is split into:
2. The Precision of the Search Volume (50%) is split into:
The closer the total Precision score gets to 100%, the more accurate the input, and therefore the forecast.
Let’s say you know which variables the tool takes as input for the SEO forecast; next, it’s important to understand how these variables are used in the algorithm and where the data comes from, based on the information the tool provides.
We define this as Transparency.
An SEO forecasting algorithm is designed based on a set of intentions, hypotheses, and thought processes. These exist, but not all of them are made known to the user. If they are made known, and if they’re explained clearly and exhaustively, the ideal outcome is that you could replicate it.
To measure Transparency, we followed these questions as main guidelines and tried to answer them during our research:
Understanding the algorithm allows you to further explain how it works to your team, and then to your client. If it proves difficult to communicate how the tool forecasts, then your team won’t know how to advocate for the forecast to the client, who might thus lose trust in your proposal. Furthermore, could your colleagues or your client research the algorithm on their own? This could also increase trust, as well as autonomy in your team when it comes to using the forecast.
For the tool scoring, Transparency is a percentage that measures whether the tool offers the user enough clear information on the main Precision variables: CTR and Search Volumes.
So they’re segmented in a similar way:
1. The Transparency of the CTR (50%) measures each tool’s research on the clickthrough rate — knowing the data source for the CTR, and how up-to-date it is, is essential for trusting the tool. It’s split into:
2. The Transparency of the Search Volume (50%) measures the available information that is offered about each of the Search Volume variables, as long as the tool uses them as input for the forecast:
This is how we translated Precision and Transparency into a map having them as dimensions of performance:
How much each tool scores in both Precision and Transparency gives an answer to the initial question: can I trust the output generated by this tool’s forecast?
This is Trustworthiness — the essential guiding principle that we used to compare the tools.
To further map them on a graph, Trustworthiness had to be measurable.
Both Precision and Transparency determine whether the user can trust a tool, so Trustworthiness lies at their intersection. Once we had a scoring system in place and scored the tools based on it, we placed each tool on one of these quadrants on a map:
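The quadrant placement can be expressed as a simple rule. This is a hypothetical sketch: it assumes a 50% midpoint on each axis, which is an assumption for illustration rather than the study's documented threshold.

```python
# Hypothetical mapping of (Precision, Transparency) scores onto the
# four quadrants of the Trustworthiness map. The 50% midpoint on
# each axis is an assumed cutoff, not taken from the study.

def quadrant(precision, transparency, midpoint=50):
    """Return the Trustworthiness quadrant for a tool's two scores (0-100)."""
    if precision >= midpoint and transparency >= midpoint:
        return "high precision / high transparency"
    if precision >= midpoint:
        return "high precision / low transparency"
    if transparency >= midpoint:
        return "low precision / high transparency"
    return "low precision / low transparency"

print(quadrant(80, 70))  # prints: high precision / high transparency
```

A tool in the top-right quadrant scores well on both dimensions, which is where Trustworthiness — the intersection of the two — is highest.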
In the next chapter, you’ll see how each tool scores based on this scoring methodology, and how we placed the tools on this map according to Precision and Transparency.
In short, our scoring system allocates a percentage score to each of these variables if the forecasting algorithm uses it, weighted by the variable’s importance in the algorithm.
If the research shows the tool doesn’t use a variable, neither a Precision score nor a Transparency score is allocated for it.
If, from the resources the tool provides, the user can understand how the algorithm uses a variable, the corresponding Transparency score is allocated.
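The allocation rules above can be sketched as follows. The variable names and per-variable weights here are hypothetical stand-ins (the study's actual sub-variables and weights are in its charts); the sketch only shows the mechanism: Precision points are earned when a variable is used, Transparency points only when its use is also explained.

```python
# Illustrative scoring sketch. Variable names and weights are
# hypothetical; only the CTR-50% / Search-Volume-50% split and the
# "used vs. explained" rule follow the study's methodology.

PRECISION_WEIGHTS = {
    "ctr_by_rank": 25, "ctr_by_device": 25,    # CTR group: max 50%
    "sv_seasonality": 25, "sv_yoy_trend": 25,  # Search Volume group: max 50%
}

def score(tool):
    """Return (precision %, transparency %) for a tool profile.
    `tool` maps each variable to {"used": bool, "explained": bool}."""
    precision = sum(w for var, w in PRECISION_WEIGHTS.items()
                    if tool.get(var, {}).get("used"))
    transparency = sum(w for var, w in PRECISION_WEIGHTS.items()
                       if tool.get(var, {}).get("used")
                       and tool.get(var, {}).get("explained"))
    return precision, transparency

example_tool = {
    "ctr_by_rank": {"used": True, "explained": True},
    "sv_seasonality": {"used": True, "explained": False},
}
print(score(example_tool))  # prints (50, 25)
```

In this made-up example, the tool uses two of the four variables (Precision 50%) but documents only one of them (Transparency 25%).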
This is how the forecasting tools fared:
And this is how we mapped the tools:
Our research highlighted several other characteristics of the forecast that don’t critically determine Trustworthiness but add to the complexity of the algorithm and can influence whether the tool is the right fit for your agency.
We defined them as Additional features.
Below we elaborate on each of them and show separate charts comparing the tools.
There is no SEO forecast without an output. This defines what you see in the dashboard: the estimation of organic traffic and other metrics. We don’t consider Output by itself a nice-to-have, but there are significant differences in what the tools provide as the outcome of the forecast. For example, all the tools forecast traffic, but not all of them show inertial, improved, and additional traffic separately.
Some other components that we added:
An SEO forecast can only prove accurate if you can track how close the estimated additional organic traffic came to the actual organic traffic in the respective timeframe: did you or did you not achieve the targeted ranks? The alternative would be to verify this manually. Here we assessed whether each tool offers this as an integrated feature in its product.
The level of control that the user has in adjusting the forecast algorithm can be highly important to some and less so for others. We mapped out the main differences in how the tools approach their interaction with a prospective user to help you build your own preferences.
These are the most important questions that emerged from our research:
The tools take very different approaches to their pricing plans. We used two criteria to help you focus on the forecast.
The path to deciding you can trust an SEO forecasting tool is hard, so we tried to make it more straightforward.
That’s how the decision-making framework was born.
We also wanted to make the methodology of our own research as transparent as possible. The limits of this research are partly influenced by how much each tool makes its variables and algorithm known.
The current form of this research is a starting point for us as well, and we will continue to update it based on agency feedback.
Illustrations by DAIA