Why Google SERP CTR studies are a waste of time

June 20, 2011 – 8:09 pm

Historically, one of the most quoted, used and abused SEO metrics has been ranking. It is a cornerstone of the reports we send to clients, one of the first things we check on a client’s site and, when it drops, the usual cause of alarmed phone calls. Over time, SEOs have learned to steer away from rankings as the major reported metric, although I have yet to find someone in the industry who doesn’t check them.

Another exercise we like to engage in is predicting how much traffic a top-10 ranking will bring to a website. It quickly became obvious that not every position in the top 10 brings the same amount of traffic. It is self-evident that positions 1-3 will bring more traffic than positions 7-10, but how much more? The effort involved in pushing a site from #5 to #2 can be significantly larger than the effort needed to bring it from #50 to #10, so is the traffic (and the conversions) worth the effort?

To answer this, people started investigating how traffic divides across the top 10 positions. Several methods were used in these investigations, the two most prominent being eye-tracking and averaging clicks-per-position data taken from analytics or from (mistakenly) released search engine/ISP logs. I have reviewed a number of such studies and their predictions varied: the first 3 positions get somewhere in the neighbourhood of 60%-80% of clicks, while the rest divides over positions #4-#10. You can already see the high degree of variability in the data from these studies – 60%-80% is a pretty wide range.

All of that sounds well and good on paper, and the numbers can be turned into pretty charts. More importantly, we have numbers we can translate into business goals, and all clients like to hear that: “if we reach #3, we will get X visitors, and if we put in more effort (AKA increase the SEO budget), we can get to the #1 position, which will mean X+Y visitors, conversions, whatnot”. However, when one starts looking at the numbers out in the wild, a significantly different picture emerges. The numbers do anything but fall into precalculated percentage brackets, and the division fluctuates so wildly that pretty soon you find yourself tossing your predictions into the bin; all you can say is that improving your rankings will get you more traffic by some unknown factor. This has not escaped the original researchers of CTR distribution: they constantly try to improve the accuracy of their findings by focusing on similar types of queries (informational, transactional, brand, etc.) or similar types of searchers, but a much greater number of parameters influences the spread of clicks over different positions. I will try to outline a number of such parameters, as I encountered them with some of our own or our clients’ websites:

Universal search (local, shopping, images, video, etc.)

The introduction of different types of results in SERPs was Google’s attempt to cover as many user intentions as possible. This is particularly true of short search queries: when someone searches for [flowers], it is hard to tell whether they want pictures of flowers, instructions on how to care for flower pots, places to buy flowers or time-lapse videos of blooming flowers. Therefore, Google tries to include as many types of results as possible, hoping to provide something for everyone. iProspect’s 2008 study shows that 36%, 31% and 17% of users click on Image, News and Video results respectively, which means these results will seriously skew the CTR distribution over the SERPs. This is particularly true of queries with local intent, where Google Places results are featured prominently. Compare the following 3 SERPs (click to enlarge):


There is no chance that the #3 position gets the same percentage of clicks on each of these SERPs, and the same holds true for every other position. Since the serving of universal search results depends on geo-specific (IP geolocation) and personalized (location, search history) parameters, there is no way to accurately predict how clicks will distribute over the different positions in SERPs.

Search intent

Different people mean different things when searching, and the shorter the query, the wider the scope of possible intentions. We can be pretty sure that someone searching for [online pizza order NY] is looking to order dinner tonight, but a person searching for [pizza] may be looking for a recipe, in which case a pizza joint with an online ordering website at #1 will not get the recipe seekers’ clicks; those searchers will either perform a second search or browse deeper into the SERP and on to the 2nd and 3rd pages of results. If the majority of searchers are looking for a recipe, then the pizza vendor at #1 will most definitely not get 40-50% of the clicks. We saw this happen with one of our clients, who was ranked #1 for a single-word keyphrase. The Google AdWords Keyword Tool predicts 110,000 exact-match monthly searches in the US. During the month of May, while ranked #1 on Google.com for that keyword, the site received about 7,000 US organic visits. That is a 6.36% CTR in the #1 position. Google Webmaster Tools shows a 10% CTR at #1 for that keyword. Compare that to the 50.9% CTR that the 2008 eye-tracking study predicts for the #1 position (which would mean around 56K monthly visits), and you get a feel for the gap between prediction and reality.
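The gap described above is easy to quantify. A minimal sketch, using only the figures quoted in this post (the AdWords volume, the observed visits and the study's 50.9% prediction), illustrative rather than general:

```python
# Compare predicted vs. observed CTR for a #1 ranking,
# using the figures quoted in the post above.

monthly_searches = 110_000   # AdWords exact-match estimate, US
observed_visits = 7_000      # US organic visits while ranked #1
predicted_ctr = 0.509        # eye-tracking study's CTR for position #1

observed_ctr = observed_visits / monthly_searches      # ~6.36%
predicted_visits = monthly_searches * predicted_ctr    # ~56K

print(f"Observed CTR:     {observed_ctr:.2%}")
print(f"Predicted visits: {predicted_visits:,.0f}")
print(f"Prediction overshoots by a factor of {predicted_visits / observed_visits:.1f}")
```

The roughly eightfold overshoot is the point: a single averaged CTR figure tells you almost nothing about an individual keyword.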


The problem with guessing the intent behind single-word queries is that it cannot be done before reaching a position that generates significant traffic, and getting there can be an enormous task. Low CTRs in the first position also call into question the perceived market value of single-keyword exact match domains: any prediction of the traffic expected from reaching top positions for the EMD keyword should account for how hard it is to predict searchers’ intent.

One possible benefit of ranking for single-word queries is exposure in SERPs even when your listing is not clicked. If your Title is unique and memorable, there is a higher chance that people will click through from the SERP of a second, refined query if they saw your site on the first one. For example, if you rank for [widgets] and your title says “The Cheapest Blue Widgets in the World!”, then visitors who perform a follow-up search for [blue widgets] are more likely to click on your site, even if it sits in a lower position (granted that your title is not pushed below the fold by Local, Video, Image or other Onebox results). Yahoo has a nice keyword research tool that shows the most common previous and next queries around your keyword of choice (I think it is only relevant for US searches).

Yahoo Clues

Brands outranking you

According to one of the CTR studies, dropping from #3 to #4 should cost you about 30% of your traffic (if 3rd place gets 8.5% and 4th place gets 6.06% of total clicks), but is that always the case? Let’s see how a drop from #3 to #4, and then back to #3, played out on one of the SERPs:

Outranked by a brand - traffic levels

The SERP for this keyword has no local results and no universal search (images, videos, news). Pay attention to the difference between the original #3 traffic and the later #3 traffic. The initial drop to #4 cost this site 76% of its original traffic (instead of the 30% the CTR studies predict), and the return to #3 recovered only about 47% of the traffic the site got at #3 originally. So what happened? The site got outranked by a famous brand. Seeing the brand name in the title and URL of the #1 listing enticed searchers to click through that listing in much higher numbers than the 42% the study predicts. Again, it is not only where you are ranked, it is also who is ranked with you that influences your CTR. My guess is that even when a big, well-known brand doesn’t outrank you but sits just below your listing, your CTR will also be negatively affected. It seems Google’s preference for brands is based on actual user behaviour. It would be interesting to see whether there are any significant differences in bounce rates from brand listings too.
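The mismatch is straightforward to put in numbers. A quick sketch using the study's per-position shares quoted above (8.5% for #3, 6.06% for #4) against the drop we actually measured:

```python
# Predicted vs. observed traffic loss when dropping from #3 to #4.
study_share = {3: 0.085, 4: 0.0606}  # CTR study's share of total clicks per position

predicted_drop = 1 - study_share[4] / study_share[3]  # ~29%
observed_drop = 0.76  # measured on the SERP discussed above

print(f"Predicted loss: {predicted_drop:.0%}")
print(f"Observed loss:  {observed_drop:.0%}")
```

The study predicts losing under a third of the traffic; the brand sitting at #1 made the real loss two and a half times larger.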

Optimized Listing

There are many ways to make your listing on a SERP more attractive to clicks. The linked article outlines the most important ones; however, I would like to add some points to three of those tactics:

  • Meta Description – An eye-tracking study performed by a team including, among others, Google’s Laura Granka (published in the book Passive Eye Tracking, in the chapter “Eye Monitoring in Online Search”) found that the snippet captured most of the searchers’ attention (43% of eye fixations), followed by the Title and the URL (30% and 21% of fixations respectively). The remaining 5% was distributed over the other elements of the listing, such as the Cached Page link. This finding shows the importance of controlling the content and appearance of your snippet for your main targeted keywords: an eye-catching snippet can tip the CTR scales in your favour and increase the number of clicks you get relative to your site’s ranking.

  • Rich Snippets – the above figure of 43% of attention dedicated to the snippet probably gets even higher for rich snippets. Google and other search engines have been investing a lot of effort into encouraging webmasters to add a layer of classification to the data presented on web pages; when the information is labelled in an accepted way, it is presented right in the SERP snippet. Check out the following SERP:
    Rich snippets in office chair reviews SERP
    Showing review-grade stars in the snippet itself makes this listing more attractive than its neighbors, and it probably increases the CTR far beyond the few percent that position #6 would get according to CTR studies. A similar situation can be observed with other microformatted data, such as event dates, recipes, airplane tickets, etc. Rich snippets will become even more widespread if and when Google starts accepting websites that implement Schema.org tagging.

  • Optimized Titles – adding a call to action to the Title, or making it stand out in some other way, can increase a listing’s CTR beyond what its ranking would usually earn. This tactic, however, has become less reliable since Google started deciding what the best Title for your page should be, based on, among other things, the anchor text of incoming links. As these changes are applied at Google’s discretion, it becomes very hard to predict how many people will click through to your site.

These are only some of the reasons that CTR percentages can differ greatly from the values predicted by eye-tracking or other studies. It is important to remember that as search engines index and incorporate more and more types of results into SERPs, predicting the CTR per position will only get harder. Even averaging the figures over a large number of results gives us a number that is not at all useful for predicting the traffic volume received from each position.

(Image courtesy of iamyung)

SEO Spider Review – Xenu on SEO steroids

February 24, 2011 – 6:35 pm

One of the first tasks when getting into a brand new SEO project is to assess the state of on-site optimization. That is where we usually take the first measurement of “how bad things really are” or “how badly the previous SEO company screwed up”. One of the most basic tools I have always used for this task is Xenu Link Sleuth. It was great for spotting possible crawling problems, getting server headers, checking alt tags on images, etc.

However, the problem with Xenu is that it is not an SEO tool. As on-site SEO changed, so did the requirements of the indexing tool. All of a sudden, I wanted to see which pages have a canonical tag, which pages have duplicate meta issues, whether any pages have robots metatag issues, and Xenu just doesn’t do this.

Enter Screaming Frog SEO Spider. If Xenu had a brother that did SEO, this is what it would look like. It is a great tool, and one of the hallmarks of a great tool is the constant discovery of new ways it can be useful. It has saved me numerous hours, mostly at the initial site audit stage.

I will not go into the nuts and bolts of how the tool works. The interface is very intuitive, and if you have used Xenu in the past, you will feel at home with SEO Spider. Furthermore, there is a helpful video that explains how it works (British accent alert!). So instead of a comprehensive review, here are a few ways SEO Spider has helped me in my SEO tasks recently:

(DISCLAIMER: SEO Spider has a free version that will index up to 500 pages. The full version – an individual licence – costs £99 per year. I am paying for the full version and am not being reimbursed in any way, shape or form for writing this review.)

  • Investigating the distribution of on-site links – When large websites want to promote a certain section of their site, one way to do it is to increase the number of links to that page/section from other pages on the site. If the promoted page sits deep within the navigational structure, this brings it closer to the root and increases the linkjuice flowing to it. An overview of internal category/product pages that have a larger-than-usual number of on-site links pointing to them can help us identify a competitor’s SEO campaign targets. Furthermore, if we cross-reference these promoted pages with the number of off-site links to internal pages, we can gain some nice insights about promoted products, targeted keywords, linking strategies and other aspects of their SEO campaigns.
  • SEO Spider - onsite link distribution


  • Identifying images without alts – especially important for images that serve as links. In addition to filtering images without ALTs, there are other options, such as filtering images larger than 100 KB or images with an overly long ALT text.
  • SEO Spider - image alt tags

  • Searching link prospects for broken links you can replace – as part of the link building process, you can run the spider on a link prospect’s site and get a list of all outgoing links that result in a 404. Go to the linking pages and find out what the purpose and context of the broken link was. Then either create a similar resource on one of your sites or tweak your existing content to fit the linking intention, and contact the prospect, alerting them to the broken link and pointing out the similar helpful info on your site.
  • SEO Spider - broken links

  • Identifying “bait and switch” links on your site – webmasters sometimes create useful content that you want to link out to. After a while, some of those websites may be taken over by other companies that want to leverage the existing links and authority and redirect them to their own resources. Thus, unwittingly, you may find your site linking to content you never intended to link to. In mild cases, your visitors click through and don’t find the useful content you intended for them. In worse cases, your site may be flagged as linking to adult or even illegal content. With SEO Spider you can get a list of all external links that result in a redirect and check them out. In many cases these will be simple canonicalization fixes the webmaster made (adding a trailing slash, pointing to the www version of the site, reflecting a change in directory structure, etc.)
  • SEO Spider - bait and switch links

  • Discovering noindex/nofollow metas – a great way to get an overview and identify screw-ups left behind by your developers. Leaving a noindex meta in place sounds like a beginner’s mistake, but you would be surprised how many experienced SEOs stumble here, either through their own oversight or due to forgetful developers.
  • SEO Spider - noindex nofollow metas

  • Listing all the canonicals – canonical tags are a great way to get rid of duplicate content issues on a site (and across sites), but they can be used for other, more sinister purposes too (that’s for another post). Conveniently, SEO Spider will spit out all the canonicals across the site.
  • SEO Spider - canonical tags

  • Searching inside the HTML source of pages – one of the recent additions to SEO Spider. It can be very useful if you want to find all the pages linking to a target page with a specific anchor text, or to look for tracking code within pages. Up to 5 custom searches can be defined in the Configuration section.
  • SEO Spider - HTML source search
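Much of the broken-link prospecting described above boils down to filtering a crawl export by status code. A hedged sketch of that step (the column names `source`, `destination` and `status_code` are hypothetical; adjust them to match whatever your crawl export actually uses):

```python
import csv  # in practice, feed the function csv.DictReader(open("outlinks.csv"))

def broken_external_links(rows):
    """Return (source_page, target_url) pairs where an outgoing
    link returned a 404 in the crawl export."""
    return [(row["source"], row["destination"])
            for row in rows
            if row["status_code"] == "404"]

# Example with a tiny in-memory "export":
sample = [
    {"source": "/resources", "destination": "http://example.com/guide", "status_code": "404"},
    {"source": "/blog/post", "destination": "http://example.org/tool", "status_code": "200"},
]
for page, url in broken_external_links(sample):
    print(f"{page} links to dead URL {url}")
```

From there, the manual part of the workflow takes over: visit each linking page, work out the original linking intent, and offer a replacement resource.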


These are just a few things I have found in the last few weeks. I am sure this list will grow, so check back at a later stage; I may add more great ways to use SEO Spider.


SEO is Dead

January 7, 2010 – 4:41 pm

In the past few years, we have seen a flurry of articles celebrating the forthcoming death of SEO (AKA linkbait) and an even larger flurry of angry rebuttal articles from the SEO community (AKA retweetbait).

So not wanting to be left out of that link party, I decided to weigh in with our clients’ opinion on that matter (click for larger version):

Have a nice day.

Reading List #4

December 1, 2009 – 10:53 pm

In between having an awesome time at Pubcon and a not-so-awesome time catching pneumonia, quite a few articles have gathered in my Reading List bookmark folder. So here are the articles that caught my eye in the last few weeks.

If you think I missed something groundbreakingly important or that your article has covered one of the discussed topics in a better way, please let me know in the comments and I will make sure to review it and post it in the next Reading List.



I like articles that explain different tools and techniques through concrete examples. This one explains 4 different metrics that can be analyzed by creating segments in Google Analytics.


A little bit of circumstantial evidence supporting the claim that Google will boost your rankings if you feed it fresh content. Which is probably true under very specific conditions, so I wouldn’t treat this as an article outlining a controlled experiment, but rather as an interesting direction to check.


A commentary on Google’s move of replacing the URLs in SERPs with breadcrumbs. Interesting insights.


Post on Bing link building guidelines, with links to the original Bing post and WMW discussion thread on the topic.


A great post on how to measure and analyze the frequency of Google’s visits to your site. With visible PR being an outdated or unimportant parameter, spidering frequency seems a likely contender for the title of the parameter that defines the importance of your site. The article describes a way to measure and compare this frequency between different pages of your site.


A list of 5 not-so-well-known analytics tools that (seem to) have been created with SEO in mind.


An outline of a site-wide Title tag tweaking process for achieving maximum rankings and increasing overall CTR for those rankings. I don’t get the kung-fu metaphors; for some reason SEOs seem to be infatuated with them.


OK, this is probably going to be my last mention of the Vince (or Brand) update on these lists. Here is a comprehensive guide to everything meaningful that has been published on the subject.


A nice case study explaining why you shouldn’t automatically 301 all your 404 pages back to your homepage. I do not necessarily agree with the way they solved the problem (they could have done it without losing the links to the non-existing pages), but that is another issue.


Nice comparison of the old and the new beta Google Adwords keyword tool and the differences in the numbers they output for keywords with different search volumes


A piece describing some reasons why country code TLDs have an advantage over .com, .net, .org, etc. in local results.


Examination of breadcrumbs from the designer and user experience point of view


Examination of breadcrumbs from the SEO point of view. A very nice overview of all the pros and cons of different ways to utilize breadcrumbs on your site. One point that the article does not mention is duplicating links back home or to inner pages by using breadcrumbs. Remember those “second link doesn’t count” issues? Make sure that the link you want to be counted comes before breadcrumbs in the code.


Good overview of all the SEOMoz tools. I like the way that the tools are presented – through a specific SEO problem that each of them solves. Still waiting for part 2 of the post.


Awesome review of one of my favorite SEO Tools – Xenu. Combined with Excel (who else?) this little FREE tool can do anything. Do. Read. This.


A very interesting comScore research showing that 4% of internet users account for 85% of the clicks. That means you don’t have to target everyone. Some demographic research should be able to profile the heavy clickers for you and help you target those audiences that are most likely to click on your ads.


A few tips on how to use Analytics to monitor for canonicalization issues on your site. Nothing Earth-shattering here but it is always good to have a variety of tools in your belt for doing the same task.


An interesting analysis by Eric Enge (haven’t heard from him in a while) on why deep links to your site’s inner pages increase the amount of juice your homepage passes around.


Ari explains how to “influence” Google Local results. Not sure about the lifetime of this one, but I trust Ari that it worked at the time of writing 🙂


A very interesting case study of one failed and two successful offline campaigns intended to support online marketing efforts. Includes a what-to-do-to-get-it-right list.


A list of link-tracking tools. Besides the obvious, popular ones (Majestic, Linkscape, Raven and Yahoo backlinks), includes some less known tools that are worth a try.


A novel look at how to harness Google’s personalized SERPs to your benefit using some social engineering, bordering on psychology.


An interesting discussion of Google’s practice of limiting the number of pages it indexes on large sites. Something tells me this is a very old phenomenon once known as the “Supplemental Index”. Anyway, there are some tips here on how to deal with the situation.

Pubcon’s Twitter Fail

November 12, 2009 – 10:21 am

I didn’t get a chance to write any updates about Pubcon, so it is a pity that my first update is going to be a complaint. But since everyone is talking about the good stuff (of which there is plenty), and it is the complaints that get things improved, here it goes:

On the first day of Pubcon, almost a whole session track was devoted to Twitter. The sessions rocked, the presentations rocked, it was really great. You would think that the leading search marketing conference would plug into Twitter and use it in ways a bit more creative than scrolling tweets with the #pubcon hashtag on plasma screens in the expo area. The most significant failure in understanding Twitter, IMHO, is failing to understand that my Twitter profile is my point of contact, part of my online identity, my way of plugging into what my friends and the people I follow are doing in real life. Instead of letting me integrate Pubcon into the fabric of my online presence, I am given only one way of participating: through the hashtag.

So what could have been done better? Here are three things off the top of my head that seemed obvious from the first day of the conference:

  1. My Twitter profile is part of my online identity – would it have been hard to ask people for a Twitter username at registration and print it on the name tags? I do not talk to Ben Cook on Twitter, I talk to @skitzzo, and the connection between the two should be made immediate through the name tag. The ability to upload an avatar image and have that printed too would be ubercool.
  2. More session-specific hashtags. Moderators were doing this on their own initiative, but it would have been really helpful to have a specific hashtag for each session and to be able to follow the sessions that way. If tomorrow I find a tweet saying that @sugarrae gave an awesome tip about redirecting affiliate links, it would be really helpful to know which session it was in, what the context was, and maybe to find the presentation as well. As it stands, I have to sift and search, since I have no way of knowing when it happened, whether it was an answer to a question from the crowd or part of the presentation, etc.
  3. Make sure all the panelists have their Twitter username on each and every slide of their presentations. When I sit in on a panel where Lyndsay Walker is rocking an awesome presentation on website architecture and SEO, how can I share that with my tweeps from England who are eagerly following every piece of info on Pubcon? What I had to do was search Google for her Twitter username in order to give proper credit. With the WiFi speed surging to as much as 1Kb/s at certain points during the conference, I had to miss a whole lot of slides to be able to tweet about one. One exception I saw was Jill Sampey (who is awesome, regardless of the presentation), who had her username on each and every slide. The majority of presentations used the default Pubcon background; it shouldn’t have been too hard to insert a space for the speaker’s Twitter handle.

So as I said, the intention of this post is to help improve things for the next conference. I am having a great time so far, and while some of the panel speakers do leave a lot to be desired in terms of the novelty of the info they deliver, there are some really awesome panels, and the networking is as good as it gets. So Pubcon, keep rocking, and help us be active Pubcon participants on Twitter too.

Reading List #3

November 1, 2009 – 6:00 pm

Here we go, another reading list. It has been a bit more than a week so I hope I haven’t missed anything. If you think I am missing an article from these lists, let me know in the comments and I will include it in the next one.



A nice basic sum-up of things to keep in mind when evaluating a potential inbound link you are about to acquire.


On the value of outgoing links. Yes, I wrote outgoing, not incoming. Look for a link to a relevant experiment written up on the same site.


Sometimes it is important to read articles that do not teach you anything new but can serve as an example of jumping to conclusions prematurely. It looks like the author chose the outcome of his experiment in advance and did not let anything put him off track from reaching it. While exact match domains can provide an advantage when optimizing for the exact-match keyword (by evading the parts of the algorithm that are supposed to prevent over-optimization of on-site and off-site backlink anchors), buying an old domain, outfitting it with WordPress and providing quality content can hardly be considered a sterile testing environment. Designing an experiment is not a simple task, and one should always beware of engineering test conditions that will inevitably lead to a specific answer over all other possible ones.


Traffic analysis is one of the most important aspects of optimization; it can be a source of significant improvement in the conversion of your existing traffic as well as expose ways of bringing in new visitors. This article gives a very nice explanation of the difference between a visitor who bounced and a visitor who exited your page.


Interview with 3 SEO professionals (not calling them gurus on purpose) about the possible benefits of press releases for SEO. You can find some nuggets in there, but don’t forget to put on your critical reading glasses 😉


Another article about PRs. I thought this one was worthwhile because it gives good ideas about the research one should do BEFORE writing and publishing the PR, together with screenshots of the different research techniques to use.


A nice little tactic for discovering your competition’s long tails. It shows an interesting approach in which you value keywords not only by traffic but also by the willingness of natural linkers to use them in anchors.


Some interesting insights into how people from the UK search for travel-related terms. It links to a recording of a related webinar, so if this niche is of interest to you, I recommend devoting 41 minutes to watching it at the link provided at the bottom of the article (free registration required).


A nice summary by Google’s John Mueller (@johnmu) of different ways to deal with duplicate content on your site.


A bunch of tips for link builders. Some of them are even new (to me). Others have been heard before, but it is good to have them all in one place, and some bear constant repeating.


This should be required reading for every manual link builder out there. A list of guidelines for writing a link request email. The rules are so simple, yet so rarely implemented by the vast majority of link builders, judging by the link requests I get.


Chances are that most of you will at some point have to contact search engine representatives for help getting your site out of a penalty, or with some other technical issue related to how your site is crawled or ranked in SERPs. This article provides a list of guidelines for establishing contact in a way that gives you the best chance of getting your problem solved. Important read.


Did you mean Deciphering? A solid post from the Aaron Wall factory on understanding the Google AdWords Quality Score.


This is my favorite post of the whole list. Usually these lists are pretty much regurgitated stuff, showcasing already-seen tools or providing a stage for self-promotion. Ari’s list is fresh, well organized and, above all, damn useful. Since the article was published I have already used a number of tools from it that I had never used before, and it scored me a couple of nice domains. Don’t just read it; bookmark it and keep it for later use.


A very good piece on potential pitfalls in marketing your site in foreign markets. I particularly liked the “not giving consideration to how you will manage content” and “taking the agency’s international claims at face value”.


SEOs around the world have been bashing their heads trying to understand the nature of the Vince (or Brand) update, including how Google defines brands. This is another shot at explaining the nature of these algo changes. For additional reading on the topic, check out Shark SEO’s article on the same subject.


All you needed to know about geotags.


Another great article by Garrett French on how to create content that attracts natural links. I suggest you find a quiet spot, remove all other distractions and dig into this one, including all the links in the article.

Reading List #2

September 29, 2009 – 5:48 pm

Hey, I got some nice reactions to my reading list, so here we go again. Make sure to pay attention to a special insight at the end of the list; I think it is worth it 😉




A very nice article on the type of questions you should ask when interviewing a potential link builder. Actually, it is more about the types of personalities you may encounter when hiring a link builder, and which of them you should be looking for.


A nice review of the problems a Flash-based site may encounter in search engines, and possible search-engine-friendly solutions. It can serve as a decent reference for clients. One thing though: when writing SEO articles, you want to be as precise and non-vague with your words as possible: “But if you go with this option remember to submit only the HTML version to the search engine.” Really? Submit the HTML version to the search engine? Come on, you know better.


I cannot understand why this article is called “5 steps…” and not “Hundreds of practical steps and …just print it out and learn it by heart already”. I love articles which not only give you a list of concrete things to do, but also open your mind to numerous additional possibilities to develop. A must read for every link builder out there. Includes a download link to a link building outreach worksheet.


A nice overview of how to get the most out of Majestic SEO backlink reports. While very useful, Majestic is not always the easiest tool to use, so any additional information on getting the most out of it is welcome.


A very nice nugget. After actually convincing a client that it is important to blog, there is always the stage where his writer (or your writer working for him) hits a solid wall looking for ideas on what to write about. Here is a nice list that can be explored to get ideas for business blog writing.


Stephen Spencer alert! Outlining 7 steps to successful linkbaiting. If you ignore the overnightprints.com hype (which was really a great case study, but one article was enough 🙂 ) you will come up with some great starting points for your linkbaiting campaign.


A post bursting with information. Analysis of the most linked-to posts on SEOMoz. Compares different types of posts (list posts vs. non-list posts), post categories and whether there was an image or video included in the post. Very inspirational.


While it doesn’t offer specific link building tips, this post tries to put the link builder (and not only the link builder) into a link building state of mind: see everything as a link building opportunity. One thing I would like to add: when doing this, it is important not to forget your inner moral compass, which should alert you to those link opportunities that are better left untaken.


A good reminder of how relationships can be used to build links.


Some nice tips on using Wikipedia for your SEO purposes. Beware, not all of them are white-hattish 🙂


This is cool. Google starts showing links to specific sections of a page. This is something that the good people at the Huomah blog have been discussing for quite a while now – Page Segmentation – and this is another way for Google to get even more granular in the way it assigns link juice and evaluates relevancy on a page level.


Pay attention – it is extremely important to doubt each and every one of the tools in your box, especially the keyword research ones. Here is a list of reasons why your keyword research tool is probably giving you wrong data. There is another thing worth paying attention to in this article, and that is who wrote it. I will expand on that at the end of the list.


Google comes out with a new Adwords Keyword tool. Looks much nicer and much easier to use. Does it provide more information? Not so sure…


An important part of on-page SEO – site architecture – does not bring maximal benefit if the structure does not reflect the keyword focus each section of the site is trying to target. This post explains how Wordstream’s free keyword analysis tool helps you perform this task. Again, notice where this post is being published and check out the end of this list for further discussion.


A very nice overview of how to use .htaccess file for your SEO needs. Bookmark for future reference.


I usually don’t like putting interviews on this list, but this interview with Dixon Jones from Majestic SEO outlines some of the important features of Majestic, information that the current and future users can find useful.


You may have noticed already that I like articles that provide ideas on how to do certain aspects of SEO, especially when they open up directions rather than expose specific tactics that would be rendered useless pretty quickly. This is another one of those articles, and (with the exception of listing Google Search as a potential linkbait idea finding resource) some of the ideas in there will make you go hmmm.


And now for all the mysterious mentions of a discussion in some of the links above: it is amazing how one can learn from any activity, even when learning is not the intention. In the past few weeks, while gathering articles for this list, I have noticed a growing trend of guest articles written by people from Wordstream – a new keyword research tool (yes, I am linking to them with a targeted anchor, since their efforts deserve some reward). In addition to the guest posts, I have also noticed some very interesting links that they scored from very respectable places (pay attention to the very targeted anchor text in the first two paragraphs of this article 😉 ). It is very interesting to follow the rankings of the Wordstream website for the keywords they are using to link to it, both in guest posts and in other mentions around the web, and to see how those rankings change over time, as they are competing in a niche that at first glance looks very saturated and competitive. It can be a compelling case study in optimization through content syndication and other tactics. One can learn a lot by watching what others are doing 🙂

Weekly Reading Lists

September 22, 2009 – 10:17 am

One of the responsibilities I have in the new company is to organize a weekly list of recommended reading and send it out to everyone. I have been getting good feedback about these lists and since I am already doing them, I thought it would be a good idea to put them up on the blog.

Obviously, not everyone will agree with the importance of the posts I put up here, and they will not be new for everyone, but I thought you would enjoy reading the stuff that has kept me interested for the past week (or so).

I will also add a short commentary to each link, sometimes referring to related articles, sometimes arguing the points made in the article – if nothing else, just to make it less of a regurgitation of someone else’s content, which we see too often on the web.

So here is the list for the last week:


Another instant classic by Stephen Spencer (I think I should just RSS his stuff into this list every week). Gives all the important attributes that SEOs want in their dream CMS. This list can be used as-is when comparing existing CMS packages or, even better, when creating a CMS from scratch. The items on his list that made me go hmmm: Declared search term, Paraphrasable excerpts and Multi-level categorization structure.



Everyone knows that in order to increase the long-tail traffic to your site, you need to add relevant content, but few people know how to do this in an educated way. This article explains how to harness Google Adwords’ new reporting abilities to learn more about the long-tail queries that bring traffic and conversions to your site, and how to use that information to guide your content creation.


This is one of those things that a lot of people were saying and that made a lot of sense, until a friendly Googler came out on record and said it is not so. The length of the domain registration period is apparently too noisy to serve as a signal in the quality assignment part of the algorithm.


Rand seems to have quite a bit of time on his hands lately, so he writes these epic posts summarizing a lot of basic SEO data. This one describes 17 possible ways that search engines look at links. While some of the statements in the article could be argued with (like the way search engines treat nofollow), all in all it is a keeper.


This post actually complements Rand’s post above very nicely – if Rand looks at how search engines evaluate every single link, Bill Slawski provides a “high level overview” of how the search engines may classify links and understand the linking structure of whole websites and groups of websites. And he discusses the search patents related to his observations.


A nice checklist of all the ways to improve the internal linking structure of your site.


Link building is a practice that makes it easy to fall into worn-out patterns, which makes every new and fresh idea very rewarding. This article takes a very nice approach to gaining links by doing small favors for webmasters. It is usually approaches like this that make the difference between a successful link builder and a gray, bland, average one, so pay attention to Melanie Nathan.


Michael Gray publishes this series of posts on WordPress SEO, with some interesting tidbits in there. This post discusses all the different ways you can configure your URL structure in WordPress.


To quote Michael: “While there are hundreds of thousands of posts on maximizing external links for your blog, like a red headed step child no one pays much attention to internal links.” The more successful your blog becomes, the more impact internal links will have on your rankings. You’d better take care of them early.


How to create posts that get their content updated without losing the link love that the original content acquired over time.


This is an issue that gets neglected a lot by blog owners, who usually cringe at the thought of someone scraping their content. Michael shows not only how to encourage scrapers, but also how to make sure you reap the most benefit from them and score links. Includes mentions of plugins that help you with the job.


Directory submission is largely considered an SEO tactic that belongs in history. Debra Mastaler explains why this is an unjust generalization and gives some guidelines on how to do directory submission right.


Every once in a while, Google comes out with a new feature in Webmaster Tools that makes webmasters’ lives much easier. Whenever that happens, you can bet on Vanessa Fox writing up an extensive review of the new feature on SEL. This time it is about Google enabling webmasters to define which URL parameters Google should disregard. This is something that was usually done through URL rewrites, robots.txt, the canonical tag or other options that involved a webmaster changing something on their site. It seems like Google wants to enable webmasters to control the way a website is crawled and indexed without making those on-site changes. In addition to presenting the new WMT feature, Vanessa presents all the other options for URL canonicalization and their pros and cons.


OK, this is not an SEO article but I thought it was interesting enough to make the list. This article gives some tips on how to put your LinkedIn profile to better use. Besides having a potential for being an amazing traffic source, LinkedIn should play a very important role in everyone’s personal online identity management. Read this article.


This is an interesting observation: link authority and anchor relevancy outweigh


While the title of this post suggests that the reader will come out with a clear preference for one over the other, the result is actually quite different: the article explains how those two marketing tactics can feed into and support each other.


Another of Google’s revelations about the algorithm, covered by, who else if not, Barry. While important for a whole lot of other reasons, the fact that your site validates will not give your pages a boost in Google.


One of the first signs that tells me a website has been SEOed is a dissonance between the keywords featured in the meta tags and the rest of the content. While it is an SEO flag for me, it can be a serious turn-off for your visitors, affecting conversion rates, bounce rates and whatnot. This article gives some guidelines on how to align your keywords with your content.


A short post with a great chart explaining all the ins and outs of how search engine bots crawl and index pages and what the differences between these two processes are.


A great review of all the factors that can influence your geotargeting on search engines. Clears up a lot of terms and reasons why your rankings can vary between different countries.


And finally, a post that gives a list of top 40 SEOs to follow on Twitter. There are three reasons for listing this post:

  1. Following SEOs and engaging them in conversation on Twitter is an invaluable tool. Really. Cannot-put-a-price-on-it kind of tool.
  2. While not everyone on that list is an SEO, I went over it and found I was already following 90% of the people there, meaning that they were already providing real SEO value to the conversation.
  3. I am on that list 🙂


Google Ranking and CTR – how clicks distribute over different rankings on Google

July 12, 2009 – 11:17 am

When beginning a new SEO campaign, a crucial first step is keyword research. As part of this step, one wants not only to find a comprehensive list of keywords that could serve as potential target markets on search engines, but also to be able to predict the amount of traffic each keyword will bring to the site. So we usually turn to the different keyword prediction tools that provide this kind of information. Obviously, the most popular such tool is the Google Adwords Keyword Tool, but there are others – the Google Search Based Keyword Tool, SEMRush, Spyfu, the Wordtracker MSN Excel-based addon (check out a great review and tutorial on SEOMOZ about this one), etc.

When we look at these tools, we see only a single value for each keyword, representing a predicted monthly number of searches performed for that keyword on search engines. However, there are important issues that need to be taken into account when considering this data: what position will bring that amount of traffic? Can a change in the title (such as adding a brand name, special characters, a call to action, etc.) increase the amount of traffic the site gets from its current position? How do Universal Search results affect the percentage of traffic I get from the 1st page of the Google SERP? In order to consider these issues, we need to be able to estimate the percentage of traffic that each position will bring.
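To make the kind of estimate being discussed here concrete, here is a minimal sketch of the calculation: predicted monthly visits are just the tool’s monthly search volume multiplied by an assumed click-through rate for the target position. The CTR figures in the table below are illustrative placeholders I made up for the example, not results from any of the studies discussed in this post:

```python
# Rough per-position traffic estimate. The CTR values are hypothetical
# placeholders; the studies discussed in this post are exactly about
# what these numbers really look like.
ASSUMED_CTR = {
    1: 0.35, 2: 0.18, 3: 0.10, 4: 0.06, 5: 0.05,
    6: 0.04, 7: 0.03, 8: 0.03, 9: 0.02, 10: 0.02,
}

def estimated_visits(monthly_searches, position):
    """Predicted monthly visits for a keyword ranking at a given position."""
    return round(monthly_searches * ASSUMED_CTR.get(position, 0.0))

# For a keyword with 10,000 monthly searches, compare a few positions:
for pos in (1, 3, 10):
    print(pos, estimated_visits(10_000, pos))
```

Plugging in different CTR tables (one per study, or one per query type) is then a one-line change, which is exactly why the variability between studies matters so much for these predictions.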

Now, when reading about the importance of getting to the first page on Google, there is one piece of information that always gets quoted: 90+% of users do not go past the first page of SERPs (I don’t know who first came up with that statistic, so I will just link to this SEM article). While probably true, this statement creates a misconception that if we do succeed in getting our site to the first page of results, we have achieved exposure to those magical 90+% of users.

I am sure that many people rationally understand that this is probably not the case: each listing in the top 10 does not get the same amount of attention, and the introduction of ads on top of the SERPs and universal search results within the SERPs changes the distribution of clicks among the top 10 results. That said, it is hard to find valuable and reliable data that confirms or refutes these premises. For people to check it themselves, they would need to control every site in the top 10 for a keyword that provides enough traffic for the measurements to be statistically significant. Furthermore, they would probably need at least one more such setup in a different niche to confirm the validity of the results from the first one. To summarize, it is close to impossible to produce such research with the means available to an average website owner.

There are two ways such research can be done. The first is eye-tracking studies. In these studies, a group of people is fitted with an instrument that follows their eye movements and records them against what the subjects are seeing on the screen. It also records the clicks they make on websites and correlates that data with the “heatmaps” created through the eye tracking. For some more info on those studies and their shortcomings, check out this great post from 2006 by Bill Slawski.

The second way is to get hold of search engine log data that provides not only the keywords that were searched and the results that were clicked, but also the position a URL occupied when it was clicked.

Luckily, both kinds of data are available. There are quite a few people out there interested in web usability who test those concepts through eye-tracking studies. As I mentioned in my Affilicon 2009 presentation, all of the major search engines have cooperated with universities and conducted academic research in order to investigate new concepts or improve existing ones. As for the search engine data, remember the AOL data dump of 2006? Apparently that dataset included all of the data needed for estimating the number of clicks each website would get if it were located in a different position on the SERPs.

In this post I will try to summarize several such studies, spanning the years 2004 to 2008. Only the 2006 study is based on the user data accidentally released by AOL; the remaining studies were done with eye-tracking technology. They all investigate the distribution of clicks among the top 10 results on Google. It is interesting that one of the researchers behind the majority of the eye-tracking studies is Laura A. Granka. She did her PhD in Communications at Stanford and has been on the User Experience Team at Google since 2005.

Here are the links to the sources of data for the studies, by year:

Year of Publication | Type of Data | URL
2004 | Eye Tracking | http://www.seoresearcher.com/distribution-of-clicks-on-googles-serps-and-eye-tracking-analysis.htm
2005 | Eye Tracking | http://www.cs.cornell.edu/People/tj/publications/joachims_etal_05a.pdf
2006 | AOL Data | http://www.redcardinal.ie/search-engine-optimisation/12-08-2006/clickthrough-analysis-of-aol-datatgz/
2008 | Eye Tracking | http://www3.interscience.wiley.com/journal/117935668/abstract

Notice that the above dates are the dates of publication, not necessarily the dates when the studies were done.

And here is the graph presenting the results of these studies (click to enlarge):

I have added the CTR for each location for the 2008 data

Some Conclusions:

SEO Strategy

It is obvious that the vast majority of users click on the first 3 results (60-80%) and that anything below these positions will bring low to negligible traffic. These results should make us reconsider our overall optimization tactics: instead of massive investment in links for the sake of promoting a few highly searched keywords with high competition, it may be worth our while to go for long-tail traffic, with almost certain top 3 positions, through concerted efforts in wide-scope content creation, site architecture optimization, deep linking, etc. Obviously, the shape of the above curves will depend on the nature of the search query and the nature of the target audience, and the data should also be cross-compared with the expected ROI from each term.
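The head-versus-long-tail trade-off described above can be sketched with a couple of lines of arithmetic. All the numbers here (search volumes, positions, CTRs) are made up purely for illustration; the point is the shape of the comparison, not the specific values:

```python
# Hypothetical comparison: one competitive head term stuck mid-page
# vs. a batch of easier long-tail terms ranking near the top.
# All figures are illustrative assumptions, not study data.
ASSUMED_CTR = {1: 0.35, 2: 0.18, 3: 0.10, 5: 0.05}

# One head keyword: 50,000 searches/month, ranking at position 5.
head_traffic = round(50_000 * ASSUMED_CTR[5])

# Forty long-tail keywords, 500 searches/month each, at position 2.
long_tail_traffic = round(40 * 500 * ASSUMED_CTR[2])

print(head_traffic, long_tail_traffic)
```

Under these assumptions the long-tail batch wins, even though each individual term looks insignificant next to the head term, which is the argument for wide-scope content and architecture work over brute-force link buying.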

Universal Search

There is a very clear difference between the data in 2008 and the previous years – there seems to be a sharp shift of the majority of users towards the top 3 positions, making the differences described above even sharper. There is also a spike in users who click on the bottom results in the 2008 data. This can be explained by the introduction of universal/blended search results – video, news and image results. By being visually different, these results naturally draw more clicks than the bland organic listings. Furthermore, additional research done by iProspect shows that 36%, 31% and 17% of users click on the Image, News and Video results respectively. This points towards the possibility of inserting content into those three categories of results and thus capitalizing on more SERP real estate.

The 2004 data also differs noticeably from the later studies: users gravitated much more strongly towards the #1 position than in the following years. To understand the reason for this, we should look at the way the research was conducted (was there any difference in the methods, was there a pronounced difference in the queries the test subjects used, etc.). We can, however, speculate on other explanations, such as a possibly higher quality of SERPs in those years, which put more relevant results in the top position for more searches; in later years, more users had to click lower-ranking results because the top-ranking sites did not offer the best match for what they were searching for.

Using BlackBerry Bold as a Tethered Cellular Modem on Orange-Israel Network

June 21, 2009 – 12:32 am

My new company got me a snazzy BlackBerry Bold to help me keep up to date with all the emails and tasks I need to handle, since I am working the majority of the time from home. Since I live in Jerusalem and the new office is in Tel Aviv, on those days that I do commute, I have about 3 daily hours of train time to kill. Needless to say, the optimal way to utilize that time is to have an internet connection for the whole ride, but without a wireless modem, there is nothing to do but watch How I Met Your Mother 🙂

The BlackBerry Bold has changed all that, and one of the first things I did when I got it was to set out and find instructions on how to tether its internet connection to my laptop. After a long search and a lot of dot connecting, I managed to find a solution:

  1. Connect your Blackberry Bold through the USB and start the Desktop Manager app (came with your phone). Without the Desktop Manager running, you will not be able to connect to the internet.
  2. Go to Control Panel -> Phone and Modem Options -> "Modems" tab.

  3. You should be able to see a list of all the modems you ever installed on your computer. Blackberry modem should be marked as a Standard Modem on COM14 or COM16. Notice how all the other standard modems are Bluetooth modems. I am guessing that Blackberry can serve as a modem even over Bluetooth, but have not checked this.

    Blackberry Bold Modem Setup 1

  4. Click Properties (if on Vista, now click on Change Settings in the new window) and go to Advanced tab. Note: If you are on Vista and did not click on the "Change Settings" button, you will not be able to perform the next step

  5. Under the Extra Settings in the Advanced Tab, enter the following line:


    Make sure that there are no spaces before and after. Click on OK to close the window

    Blackberry Modem Setup 3

  6. Now go to your Network Connections and create a new connection (Set Up a New Connection on Vista). Choose Set Up a Dialup Connection and pick the Standard Modem (not the Bluetooth one).

  7. In the next step, you will be asked for a dialup phone number. Enter the following number:


    No username or password are needed.

  8. Go to the list of your network connections, right click the new connection you created and click on Properties. Go to Networking tab, pick the TCP/IP protocol (on Vista this is the TCP/IPv4 protocol) and click on Properties. Click on Advanced. Uncheck the "Use IP Header Compression" checkbox (marked in red in the below picture).

    Blackberry Bold Modem Setup 3


That is about it. Before connecting, make sure that you have the 3G network enabled on your BlackBerry. I personally keep 3G disabled otherwise, since it devours my battery within about 6 hours. The connection speed I measured was a bit under 2Mbps download, which is not bad at all. I also managed to use my VPN client and have yet to find a significant difference between this connection and a regular WiFi connection.


I’m sure that not everyone will manage to connect smoothly by just following the above instructions, but I am hoping that at least some will get some benefit from them. In case you hit some bumps, let me know in the comments.