Wednesday 29 May 2013

Monitoring Competitors’ Websites for SEO


It is fairly standard practice nowadays to do keyword rank checking with tools such as SEOmoz, Authority Labs or Conductor. It just makes sense to us as SEOs to keep an eye on rankings, whether or not you are of the school that says you should be reporting them to your clients/boss. However, we know that there are so many variables at play with rankings that reacting to big changes when you see them is more of an art than a science.
Rank tracking helps inform us of how our tactics are working, whether competitors are up to something, or if Google has been playing with the dials again. However, I’ve been thinking recently about what other things we should be routinely tracking, and which of these might be helpful in prompting more specific actions.
One thing that I know some SEOs do, on and off, but something I haven’t really done much of until now, is tracking my competitors’ sites (their markup, structure and content). Sure, I look at their rankings, and if there have been interesting changes then I might look at Open Site Explorer, Majestic or Ahrefs to establish whether they’ve been doing anything new on the link-building front. But if it is internal changes to their site, I probably won’t spot the exact changes unless it is something in-your-face (like a complete redesign).
However, Google have been rolling out an increasing array of technical features over the last few years that have enabled us to make all sorts of under-the-hood changes which can affect our SEO performance. Sometimes these changes can have a dramatic impact in the SERPs, and in those instances it can be really handy to know exactly what your competitor did. Furthermore, even when the changes are smaller, less dramatic, or even negative, it can still be helpful to know about them. By taking frequent snapshots of our competitors’ sites (and our own too, if there are other departments that might be making changes!), we can then look at our rankings data, competitive intelligence tools such as Searchmetrics, and any other data points we have, to identify any link between a change in rankings/visibility and any on-site changes that might have been made just before.
Sure, these changes probably aren’t going to be a frequent occurrence, but tracking a few sample pages from your competitors’ websites on a weekly or monthly basis need only take a few minutes, and on the occasions you refer back it can turn out to be a high-ROI activity.
Putting it to Use
Here is an (anonymised – sorry, it’s covered by NDA) view of the SERP visibility of one of my clients’ competitors, as measured by Searchmetrics:
[Image: Searchmetrics SEO visibility chart for the competitor site]
In early December we can see they took a decent ~15% step up in visibility (note the Y axis is not zero-based), which on a large site is quite significant, and likely equates to a very real increase in revenue. Now, normally, I’d start trying to work this out by looking at the link activity that had gone on recently. I’d also take a look at the site but, as I mentioned above, it is hard to identify all the changes without having a snapshot to refer back to.
In this instance there wasn’t much out of the ordinary with the linking activity, so at this point I’d look around their site a bit and maybe come away with one or two theories, but nothing concrete and nothing actionable…
However, in this instance one of our consultants, Mike Pantoliano, had been keeping an eye on things, so we had a before-and-after view of the code on the competitor’s site. They had rolled out an update to their front end which changed a lot of the visuals but kept the IA of the site pretty much intact. Hidden amongst all the changes, however, was an under-the-hood tweak: they had added a recently introduced Schema.org vocabulary to their pages. Suddenly our view of the situation looks quite different:
[Image: the same visibility chart, this time annotated with the date the Schema.org markup was rolled out]
By regularly taking snapshots of your competitors’ sites, you can overlay your data with annotations based on what they changed, and suddenly the picture becomes a lot clearer. In the example above, we can be far more confident about the likely cause of the change, and we can examine it in more detail to see if we can replicate it (or do an improved version of the same thing).
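To make that concrete: once you have dated copies of a page, spotting what changed between two snapshots can be as simple as a diff, and a change like the Schema.org rollout above shows up as new microdata attributes in the markup. The folder layout and dates below are purely hypothetical – a rough sketch, assuming you archive snapshots in dated folders:

    # Show everything that changed in a page between two snapshot dates
    diff -u snapshots/2012-11-26/product.html snapshots/2012-12-03/product.html | less

    # Count the Schema.org microdata types in each snapshot; a sudden
    # appearance of itemtype attributes is exactly the kind of tweak we spotted
    grep -o 'itemtype="http://schema.org/[^"]*"' snapshots/2012-11-26/product.html | sort | uniq -c
    grep -o 'itemtype="http://schema.org/[^"]*"' snapshots/2012-12-03/product.html | sort | uniq -c

A full redesign will make the diff noisy, but even then quick spot checks like the grep above make specific markup changes easy to confirm.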
I won’t labour the point with examples based on rankings data; hopefully you see why this might be useful.
What to Track
For some small sites it might be feasible to download the whole site, but more often than not that simply won’t be practical. Our goal here isn’t really to track the content across the whole site; it is more focused on code changes and IA updates.
My suggestion is to pick a handful of the ‘money’ pages that represent the main types of page on the site:
•    Homepage
•    Category pages (1-2)
•    Product pages (1-2)
•    Search page
•    Sitemap file(s)
•    Robots.txt file
                                           
The exact details of what you track will depend, of course, but the aim is to get a decent sample of the makeup of the main pages on the site. A tracking list for an e-commerce competitor might look something like the one below.
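This is a purely hypothetical example (the domain and paths are made up) of a plain urls.txt file, one URL per line, covering the page types listed above – a format that also feeds neatly into the download script later in the post:

    http://www.example.com/
    http://www.example.com/category/garden-furniture
    http://www.example.com/category/lighting
    http://www.example.com/product/oak-dining-table
    http://www.example.com/product/brass-floor-lamp
    http://www.example.com/search?q=table
    http://www.example.com/sitemap.xml
    http://www.example.com/robots.txt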
How to Track Changes
Ok, so now we know we want to download a snapshot of a page that we can examine again at a later date – how do we go about that?
The most straightforward solution is simply to use the “Save as…” option available in most browsers nowadays. For HTML pages (though not for sitemap files and robots.txt files) you’ll often be given a choice between saving just the HTML of the page, or the complete page with its accompanying files (images, CSS files, JS files etc.):
[Image: a browser “Save as…” dialog showing the complete-page option]
I recommend saving the complete page, and archiving the copies by date and URL. Having just the HTML is useful, but having the dependent files as well allows you to load the page up again at a later date, and I find that ability makes spotting some changes easier. You can do this in Chrome, Firefox and even IE!
If you’re looking to be a bit more geeky, you can write your own scripts that use curl or similar command-line utilities to download the URLs you’ve selected, and then schedule them to run automatically. A full tutorial is a little outside the scope of this post, but a rough sketch of the idea follows.
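This is a minimal sketch rather than a production tool: it assumes the hypothetical urls.txt file from earlier in the post and the dated snapshots/ folder layout used in the diff example, and it does no error handling at all:

    #!/bin/bash
    # snapshot.sh – fetch each tracked URL and file it away under a dated folder.
    # Expects a urls.txt file in the current directory, one URL per line.

    OUTDIR="snapshots/$(date +%Y-%m-%d)"
    mkdir -p "$OUTDIR"

    while read -r URL; do
      [ -z "$URL" ] && continue    # skip blank lines
      # Turn the URL into a filesystem-safe filename
      FILE=$(echo "$URL" | sed -e 's|^http://||' -e 's|^https://||' -e 's|[/?&=]|_|g')
      # -s: silent, -L: follow redirects, -o: write to the given file
      curl -sL "$URL" -o "$OUTDIR/$FILE"
    done < urls.txt

Scheduling it is then a single cron entry – for example, 0 6 * * 1 /path/to/snapshot.sh would take a snapshot every Monday morning.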
If you’re interested in downloading the whole site, then I recommend SiteSucker for Mac, which has worked really well for me and has various options for what types of files to download. On the Windows side I’ve not used anything myself, but my research has turned up Fresh WebSuction, which looks to be approximately the same deal. If people have alternatives, I’d love to hear about them in the comments.
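For what it’s worth, wget is another option that runs almost anywhere (including Windows builds), and its standard mirroring options cover the same ground – a sketch, with the target domain obviously being a placeholder:

    # Mirror a whole site: fetch page requisites (images, CSS, JS), rewrite
    # links so the copy browses locally, and wait a second between requests
    wget --mirror --page-requisites --convert-links --wait=1 http://www.example.com/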
