When we analyze incoming links, we tend to focus on more or less the same set of link parameters: PageRank, anchor text, position in the linking document, surrounding text, etc. However, sometimes looking beyond the usual can provide an opportunity not only to succeed in link building, but to dominate the niche you are competing in. One of these uncommon parameters is the link freshness factor.
The temporal aspect of links is not a new issue. It first appeared in official Google writing in this patent, which lists Matt Cutts among its authors, making it all the more interesting to the SEO community. The patent itself is an interesting read, and revisiting it could make a worthy blog post of its own, but here I would like to focus on one very specific aspect of it that I have recently noticed with several of our websites.
Some of our sites have been enjoying top positions for their main targeted phrases over the past few months. While that is primarily good for the company's bank account, it also gives us some freedom in how we spend our time and how we divide work priorities on those specific sites. So, as part of the secondary-phrase optimization stage, I decided to slow the link acquisition process down drastically, so I could try to gauge the relative value of each of the linking resources I was using at the time. Instead of throwing all my weight behind several link acquisition techniques at once, I used one, waited for the increase in rankings, then used the next one and compared. It is hardly a sterile experimental environment, but I thought it was a decent start…
While comparing the impact of the different link sources produced some interesting data by itself, plotting the change in positions over time and marking the addition of links from each source on the graph provided even more interesting information (click on the image below to enlarge):
As can be seen on the graph above, every addition of a link (or batch of links) resulted in a position increase, followed by a gradual slip to a lower position (albeit still higher than the starting one). This phenomenon occurred after several link additions from different sources, on different sites and in different niches, so I believe it is not an isolated occurrence.
So what do we have here? From the graph above, it can be theorized that there are at least two different scores a link can pass to the page it points to:
- A “fresh link” score. Since the link is new, Google does not yet know how much link juice it should pass. Even if this was the only link added to the linking page at the time of the observation, the number of outgoing links on that page has changed, so the proportion of PageRank that this link (and every other link on the page) passes on must change as well. Since even Google cannot recalculate all of that on the fly, an “artificial” value is assigned to the link. From the graph above, it can be concluded that this “artificial” value can be higher than…
- … a “real” link score. This score kicks in after Google reruns the PageRank calculations and assigns an objective value to the link.
In the graph above, I have marked the “fresh link” score with an A and the “real” score with a B. It is obvious that in this case A > B, which means that once the real value takes effect, the links are worth a bit less, the page's ranking score is adjusted accordingly, and the site slips in the SERPs.
Based on this analysis, there are three possible relations between A and B:
- A > B – as in the example above, the “real” link score is lower than the “fresh” link score. This usually happens with low-quality comment, forum-signature, reciprocal, or unrelated links.
- A ≈ B – the two link values are approximately equal. The switch between them will usually not produce a significant change in positions.
- A < B – the “real” link value is actually higher than the “fresh” link score. This usually happens with high-quality links from on-topic, authoritative websites. The result is that a site gets an initial boost in rankings, stagnates for a while, and then improves further.
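The three relations above can be illustrated with a toy simulation. To be clear, this is purely a sketch of the theory, not anything Google has published: the score values, the 14-day recalculation delay, and the `ranking_score` function are all invented for illustration.

```python
def ranking_score(day, base=100.0, fresh=8.0, real=5.0, recalc_day=14):
    """Hypothetical page score: a new link passes its 'fresh' score (A)
    until an assumed PageRank recalculation on recalc_day, after which
    it passes its 'real' score (B)."""
    if day < 0:
        return base                      # before the link exists
    return base + (fresh if day < recalc_day else real)

# The three A-vs-B relations from the list above (all values assumed):
cases = {
    "A > B (low-quality link)":   dict(fresh=8.0, real=3.0),
    "A = B (average link)":       dict(fresh=8.0, real=8.0),
    "A < B (authoritative link)": dict(fresh=8.0, real=12.0),
}

for label, params in cases.items():
    early = ranking_score(5, **params)    # while the fresh score applies
    late = ranking_score(20, **params)    # after the recalculation
    trend = "slips" if late < early else ("holds" if late == early else "improves")
    print(f"{label}: day 5 = {early}, day 20 = {late} -> ranking {trend}")
```

Running this prints one line per case, showing the initial bump from the fresh score and then a slip, a plateau, or a further climb, matching the three scenarios described above.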
So, what can we do with this information? Well, if you have a large pool of authoritative websites that can give you on-topic incoming links, remember that (more often than not) the initial improvement in positions from such a link is only temporary, and is bound to improve even more. Viewed through this scenario, it is easy to see how the phenomenon dubbed the “inbound link sandbox” came into existence. The situation where quality links take a while to affect rankings can be explained by the A value of those links not being high enough to overcome the ranking score of competing websites, so positions do not improve. When the (higher) B value kicks in, the score gap between the two sites is closed and the positions improve.
However, if you belong to the majority of people who have only a less sophisticated link pool to dip into, you may want to add links at a rate at which each new link's “fresh” score keeps stacking on top of the previous link's “fresh” score, continuously improving your positions.
Obviously, that rate will change from niche to niche and from link to link. Furthermore, you should be careful not to overdo it and raise red flags through an excessive link-addition rate, but some trial and error should outline the rules of the playing field for each particular niche.
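The stacking idea can also be sketched numerically. Again, everything here is an assumption for illustration: the 14-day “fresh” window, the A and B values, and the 10-day addition cadence are hypothetical numbers, not measured ones.

```python
RECALC_DAYS = 14         # assumed delay before a link's "real" score kicks in
FRESH, REAL = 8.0, 3.0   # assumed A and B values for a mediocre link (A > B)

def total_boost(day, add_every):
    """Sum the score passed by links added every `add_every` days:
    each link passes FRESH for its first RECALC_DAYS, then REAL."""
    boost = 0.0
    for added_on in range(0, day + 1, add_every):
        age = day - added_on
        boost += FRESH if age < RECALC_DAYS else REAL
    return boost

# Adding a link every 10 days keeps at least one link inside its "fresh"
# window at all times, so the total climbs instead of sagging back:
print([total_boost(d, add_every=10) for d in (0, 10, 20, 30)])
# -> [8.0, 16.0, 19.0, 22.0]
```

Even though every individual link here eventually drops from 8.0 to 3.0, the running total never decreases, which is the effect the paragraph above describes.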
As for the methods of reproducing that “fresh link” score over and over again, well, that is a hat of a completely different color…