When beginning a new SEO campaign, a crucial first step is keyword research. As part of this step, we want not only to compile a comprehensive list of keywords that could serve as potential targets on search engines, but also to predict the amount of traffic each keyword will bring to the site. So we usually turn to keyword prediction tools that provide this kind of information. The most popular such tool is the Google AdWords Keyword Tool, but there are others: the Google Search-Based Keyword Tool, SEMRush, SpyFu, the Wordtracker MSN Excel-based addon (check out the great review and tutorial on SEOmoz about this one), etc.
When we look at these tools, we see only a single value for each keyword, representing the predicted monthly number of searches performed for that keyword. However, there are important issues to take into account when considering this data: which position will bring that amount of traffic? Can a change in the title (such as adding a brand name, special characters, a call to action, etc.) increase the amount of traffic the site gets from its current position? How do universal search results affect the percentage of traffic I get from the first page of the Google SERPs? To consider these issues, we need to be able to estimate the percentage of traffic that each position will bring.
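To make this concrete, here is a minimal sketch of how a per-position click-through-rate curve turns a keyword tool's monthly search volume into a traffic estimate for each position. The CTR values below are invented placeholders for illustration only; real values would come from studies like the ones summarized in this post.

```python
# Illustrative CTR curve: the fraction of searchers who click each organic
# position on page one. These numbers are made-up placeholders, not figures
# from any of the studies discussed here.
CTR_BY_POSITION = {1: 0.40, 2: 0.17, 3: 0.10, 4: 0.06, 5: 0.05,
                   6: 0.04, 7: 0.03, 8: 0.03, 9: 0.02, 10: 0.02}

def estimate_monthly_traffic(monthly_searches, position):
    """Estimate the visits a keyword sends to a site ranking at `position`.

    Positions beyond the modeled curve are treated as sending no traffic.
    """
    return monthly_searches * CTR_BY_POSITION.get(position, 0.0)

# A keyword tool reports 10,000 searches/month; compare a few positions:
for pos in (1, 3, 10):
    print(pos, round(estimate_monthly_traffic(10_000, pos)))
```

The same single "monthly searches" number thus yields very different traffic forecasts depending on the position assumed, which is exactly why the single value the tools report is not enough on its own.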
Now, when reading about the importance of getting to the first page on Google, there is one piece of information that always gets quoted: 90+% of users do not go past the first page of SERPs (I don’t know who was the first to come up with that statistic, so I will just link to this SEM article). While probably true, this statement creates a misconception that if we do succeed in getting our site to the first page of results, we have achieved exposure to those magical 90+% of users.
I am sure that many people rationally understand that this is probably not the case: each listing in the top 10 does not get the same amount of attention, and the introduction of ads on top of the SERPs and universal search results within the SERPs changes the distribution of clicks among the top 10 results. That said, it is hard to find valuable and reliable data that confirms or refutes these premises. For people to check it themselves, they would need to control every site in the top 10 for a keyword that provides enough traffic for the measurements to be statistically significant. Furthermore, at least one more such setup from a different niche would probably be needed to confirm the validity of the results in the first niche. In short, it is close to impossible to produce such research with the means available to an average website owner.
There are two ways such research can be done. The first is eye-tracking studies: a group of people is fitted with an instrument that follows their eye movements and records them against what the participants are seeing on the screen. It also records the clicks they make on websites and correlates that data with the “heatmaps” created through the eye tracking. For more on those studies and their shortcomings, check out this great post from 2006 by Bill Slawski.
The second way is to get hold of search engine log data, which provides not only the keywords that were searched and the results that were clicked, but also the position a URL occupied when it was clicked.
Luckily, both kinds of data are available. There are quite a few people out there interested in web usability who test these concepts through eye-tracking studies. As I mentioned in my Affilicon 2009 presentation, all of the major search engines have cooperated with universities and conducted academic research in order to investigate new concepts or improve existing ones. As for the search engine data, remember the AOL data dump of 2006? That dataset, it turns out, included all of the data needed for estimating the number of clicks each website would get if it were located at a different position on the SERPs.
In this post I will summarize several such studies, spanning 2005 to 2008. Only the 2006 study is based on the user data accidentally released by AOL; the remaining studies used eye-tracking technology. All of them investigate the distribution of clicks among the top 10 results on Google. Interestingly, one of the researchers who co-authored the majority of the eye-tracking studies is Laura A. Granka. She did her PhD in Communications at Stanford and has been on the User Experience team at Google since 2005.
Here are the links to the sources of data for all five studies according to the years:
| Year of Publication | Type of Data | URL |
|---|---|---|
Note that the above dates are publication dates, not necessarily the dates when the studies were conducted.
And here is the graph presenting the results of these studies (click to enlarge):
I have added the CTR for each position to the 2008 data.
It is obvious that the vast majority of users (60–80%) click on the first 3 results and that anything below these positions will bring low to negligible traffic. These results should make us reconsider our overall optimization tactics: instead of massive investment into links for the sake of promoting a few highly searched, highly competitive keywords, it may be worth our while to go for long-tail traffic with near-certain top-3 positions, through concerted efforts in wide-scope content creation, site architecture optimization, deep linking, etc. Obviously, the shape of the above curves will depend on the nature of the search query and of the target audience, and the data should also be cross-compared with the expected ROI from each term.
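As a rough back-of-the-envelope comparison of the two tactics, here is a sketch contrasting one competitive head term stuck at a mid-page position against many long-tail terms held in the top 3. All the numbers below (CTRs, search volumes, keyword counts) are invented for illustration, not drawn from the studies:

```python
# Hypothetical CTR figures for illustration only; the real curve should be
# taken from studies like those summarized above and will vary by query type.
HEAD_CTR_AT_POS_6 = 0.04   # competitive head keyword stuck at position 6
TAIL_CTR_AT_POS_2 = 0.17   # long-tail keywords holding position 2

head_searches = 50_000          # one big keyword: 50,000 searches/month
tail_searches = [800] * 40      # forty small keywords: 800 searches/month each

head_traffic = head_searches * HEAD_CTR_AT_POS_6
tail_traffic = sum(s * TAIL_CTR_AT_POS_2 for s in tail_searches)

print(round(head_traffic), round(tail_traffic))
```

Under these assumed numbers, the forty long-tail terms collectively out-deliver the single head term by a wide margin, which is the arithmetic behind the tactic suggested above.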
There is a very clear difference between the 2008 data and the previous years: a sharp shift of the majority of users towards the top 3 positions, making the differences described above even sharper. There is also a spike in users clicking on the bottom results in the 2008 data. This can be explained by the introduction of universal/blended search results: video, news and image results. By being visually different, these results naturally draw more clicks than the bland organic listings. Furthermore, additional research by iProspect shows that 36%, 31% and 17% of users click on the image, news and video results respectively. This points towards opportunities to insert content into those three categories of results and thus capitalize on more SERP real estate.
There also seems to be a pronounced difference between the 2004 data and the later studies: a much stronger gravitation of users towards the #1 position than in the following years. To understand the reason for this, we should look at the way the research was conducted (was there any difference in the methods, was there a pronounced difference in the queries the test subjects used, etc.). However, we can speculate on other explanations, such as a possibly higher quality of SERPs in those years, which put more relevant results in the top position for more searches; in the later years, more users may have had to click on lower-ranking results because the top-ranking sites did not offer the best match to what they were searching for.