The good news is that Liliputing appears to have recovered much (but not all) of its search engine traffic. The bad news is that I have no idea if any of the hundreds of changes I made to the site had any impact at all, or if I just wasted an awful lot of time.
For the past few years Liliputing has been one of the top resources on the web for news and information about netbooks, tablets, and other affordable mobile computers. But a few months ago search engine traffic started falling off a cliff, and the drop seemed to coincide with Google’s launch of a new search algorithm code-named Panda.
While Panda was designed to help separate high quality web sites from low quality ones, there are plenty of examples of false positives. I could be biased, but I’m pretty sure Liliputing was among them. The site is full of original articles, including dozens of the most detailed computer reviews you’ll find anywhere on the web. We never copy and paste content from other sites, and in fact, I’ve avoided even posting press releases or block quotes longer than a few words.
Anyway, I’ve been documenting the steps I’ve taken to try to convince Google that Liliputing is a “high quality” website — and in fact, some of those changes really have improved the user experience, so I’m glad I made them. For instance, there’s no reason you should be redirected to a new URL every time you click on an image in a gallery.
A week ago I started to notice that search engine referrals were close to what they had been a few months ago. I was cautiously optimistic, but figured I’d wait a few days before declaring victory.
The last few changes I’d made to Liliputing really seemed like they might have made a difference. I reduced the number of links in the sidebar, header, and other areas of the site so that overall there were far fewer links on each page. I used Google’s new tool for specifying how URL parameters should be handled, to reduce the number of pages that were essentially being indexed twice. And I identified every post on the site that had fewer than 100 words and either added a “noindex” attribute or updated the post with new information or more details.
That last change was one I’d been putting off because it just seemed so daunting for a website with more than 7,000 posts. As it turns out, I only had to edit a few hundred articles — and when I did, I realized that some weren’t just light on information — they were also outdated.
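The thin-post audit described above is the kind of thing that can be scripted rather than done by hand. Here’s a minimal sketch of the idea, assuming posts are available as (slug, HTML body) pairs, for example pulled from a WordPress export; the 100-word threshold matches the one mentioned above, but the function names and data layout are my own illustration, not any official tool:

```python
import re

# Standard robots meta tag for keeping a page out of search indexes.
NOINDEX_TAG = '<meta name="robots" content="noindex">'

def word_count(html):
    """Count the words in a post after stripping HTML tags."""
    text = re.sub(r"<[^>]+>", " ", html)
    return len(text.split())

def thin_posts(posts, threshold=100):
    """Return slugs of posts whose visible text falls under the threshold.

    posts: iterable of (slug, html_body) pairs.
    """
    return [slug for slug, body in posts if word_count(body) < threshold]

# Hypothetical sample data: one long review, one very short news blurb.
posts = [
    ("big-review", "<p>" + "word " * 250 + "</p>"),
    ("quick-note", "<p>New netbook spotted. More soon.</p>"),
]
print(thin_posts(posts))  # → ['quick-note']
```

A script like this only produces the list of candidates; each post on the list still needs a human decision, since some short posts deserve a noindex tag while others are worth expanding with fresh details.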
That little exercise has changed the way I think about blogging. While Liliputing is still largely a news site, every time I write a new article I try to imagine what someone would think if they found this post 3 years from now. Would it still offer useful information?
Because search engines don’t surface posts in the chronological order they were written, you have to assume that someone will find your article with absolutely no context. That means you should give them enough information in each post to fully understand the topic — or at least links to follow for more information.
This doesn’t mean you need to post a full review each and every time you mention a tablet or netbook. But it does mean it’s not enough to just mention the model number and assume everyone knows what you’re talking about.
Anyway, when traffic started to pick up almost immediately after I made those recent changes, I figured I’d finally cracked the code. After all, Google’s only real advice for dealing with Panda so far had come down to: write better content, and get rid of low quality content.
Today I found out that Google rolled out a Panda 2.3 update at pretty much the exact time that Liliputing was regaining its search engine traffic.
So while I’m glad I made many of the changes I did, it’s possible web traffic would have recovered anyway.
It’s also possible that Liliputing has started to recover only because I made those changes and Google rolled out a Panda update that recognized them.
From what I understand, Google pushes out Panda updates manually. So even if I made dramatic changes to my site in early July, just days after the Panda 2.2 update rolled out in June, Google may not have noticed until Panda 2.3 was ready to go.
In other words… I wish I could tell you I’ve cracked the Panda code, but I’m not sure that I have. All I know is that Liliputing is doing better than it was a week or two ago. The biggest thing I’ve learned is that we need to continue building an audience of loyal readers, by broadening the scope of our coverage and using social media and other tools, so that we get enough traffic to keep the site running even when search traffic is on the low side.
Liliputing still hasn’t had a full recovery. I’ll be honest: I never monitored my traffic stats closely enough before Panda to know how much of my traffic came from Google and how much came from other sources. But overall traffic is a bit lower than it used to be. That could be from a decline in Google search. It could be because I’ve essentially told Google not to even bother looking at hundreds of pages on my site. Or traffic could just be dipping the way it tends to in the summer anyway.
Overall, something’s changed, and it would be nice to know whether I had anything to do with it.
As I watched my visits and page views climb over the past week, I imagined I’d write a post this weekend declaring victory and explaining how I managed to show Google that my site was a high quality site (which isn’t the same thing as gaming the system or tricking Panda… because Liliputing really does have thousands of pages of high quality, well-researched, original articles).
Instead, all I can say is that I’m happy things are improving. If you’re lucky and/or have worked hard, maybe you’ll see the same thing on your site?
Update: So it occurs to me that there’s a framework under which this all makes sense: Google is intentionally trying to kill SEO (Search Engine Optimization).
That would explain why there have been so few stories about publishers making changes to their sites and then recovering after they were hit by Panda. It would also explain why most recovery stories that we do hear seem to happen at a moment when Google is rolling out a major algorithm change: because it’s not us… it’s Google.
In other words, Google doesn’t want webmasters to make specific changes to their sites in order to make their content easier for search engines to discover. It’s Google’s job to find the best content, no matter how your web pages are formatted. Instead, Google wants publishers to focus on creating good content. Period.
Which is all fine and dandy. But when Google changes its algorithm so that publishers who have been cranking out good content for years are suddenly penalized, the only rational response is to try to figure out why they’ve been penalized and take specific steps to get back into Google’s good graces.
I never paid much attention to SEO before Panda. I figured if I continued to publish content that was of value to my readers, Google would find it and direct even more readers to it. Once Panda hit, I suddenly had to start looking for SEO best practices.
If you’d asked me a few months ago whether I would be OK with Google rolling out an update that specifically makes it tougher for people to game the system using SEO practices, I would have said sure. But if you had told me that the way Google was going to do this was to make dramatic changes to try to separate high quality sites from low quality sites, and that there was a decent chance some websites would get caught in the seventh circle of Hell by accident for up to 5 months, I probably would have felt differently.