Google Panda penalty from content syndication?
Content syndication (sharing your blog content with permission) can be a great way for small businesses to get more website traffic. Publishing content on authority sites helps establish authority and trust, and drives visitors back to your own site.
While I recommend syndicating your content, it needs to be done with caution and consideration for your own SEO (Search Engine Optimization) and Google search rankings. There is a hidden danger to sharing content that can lead to automatic Google penalties - especially from the Google Panda algorithm update.
This article will show how content syndication on SME Pals led to a Google penalty, despite having only high quality content, with a low bounce rate, plenty of authority, and everything else a site needs to succeed in driving plenty of organic search traffic.
Duplicate content and Google Panda
In general, small business bloggers have nothing to fear from "scrapers".
A content scraper is a site that republishes whole articles or blog posts without permission. These sites generally have very little authority and very low Google rankings.
Google Panda and content scrapers
In the past, scrapers were able to make money because Google did a poor job of deciding which version of an article was the original. Scrapers could happily reproduce great content that would often outrank the original author, driving valuable organic traffic to the scraper instead.
Google's Panda algorithm did a good job of cutting out content scrapers and restoring quality content producers to the top of search results.
In fact, SME Pals benefited greatly from Google Panda.
Panda makes us safe from content scrapers
In effect, Google Panda penalizes sites that it perceives to be content scrapers - offering little to no new valuable content. This is precisely why you no longer need to worry about sites copying and publishing your content - Panda ensures they have a low rank and won't appear above your content in search results.
Content syndication and Google Panda
Syndicating content (sharing content with permission) is different from content scraping. Genuine content curation sites will always add a byline linking back to the original post - a clear signal to Google that the content originated somewhere else.
The idea is that Google will find multiple versions of your content, and follow the link signals back to the original version - returning it above all other versions in Google search. That's the theory, but it doesn't work in practice.
High authority trumps originality
In practice, Google looks at a piece of content and looks at who links back to it, where it is shared, and a number of other signals (such as the authority of the site it is posted on), and chooses which version to show.
What this means is that despite being the originator of a piece of content, Google will often return a copy if that copy is published to a site with higher authority.
Visibility and authority vs. Google search rankings
The dilemma for small business bloggers is whether the benefits of having content syndicated to an authority site outweigh the benefit of appearing higher in Google search results.
If an article republished to an authority site drives 1000 visits back to your own blog, it may well be worthwhile. This is a choice everyone has to make, and is a part of growing a business or blog online.
The problem comes when Google makes a mistake and wrongly identifies the original content as a content scraper, and penalizes it.
How Google Panda can incorrectly penalize original content
SME Pals incurred a Google penalty despite being a high quality, original content producer because of automated syndication to several high authority websites.
I was not concerned about occasionally being outranked by syndicated copies of my own content, because our Google search traffic had been doubling roughly every quarter for over a year and a half. At that rate, SME Pals would soon outrank the high authority content curation sites anyway.
That's not how Google saw things.
June 25th Google Panda refresh
Google Panda and Penguin have always been good to SME Pals, largely because I play by the SEO rule book. But, in the latest refresh, Panda must have mistakenly identified SME Pals as a content scraper because all our content was republished to a higher authority site.
Despite numerous signals (including backlinks to SME Pals in every republished article), Panda decided to drop SME Pals from the rankings.
Organic search traffic coming from Google dropped by 70% instantly.
The quality of search traffic decreased substantially too, with hits from only obscure keywords and phrases, leading to a 20% increase in bounce rate.
Evidence for Panda's mistake
Google Webmaster Tools showed that over the last month or so, SME Pals ranked at number 1 or 2 for a number of key SEO phrases.
Prior to the penalty, for example, a search in Google for "Anatomy of a blog post" would return the article "Anatomy of a blog post: How to get more traffic and social engagement from your content" in first or second place (an average position of 1.0 reported by Webmaster Tools).
Subsequent to the June 25th update, a search for this phrase returns the syndicated version of the content only.
This means that the content itself has not been penalized (otherwise the syndicated version would also be dropped). It is SME Pals that has been penalized.
Since all SME Pals does (from Google's perspective) is create new content, it is unlikely that another infringement is to blame: all prior Panda and Penguin updates (before content syndication began in earnest) had only a positive effect.
Dealing with Google's penalty
Naturally, I was anxious to find out why, after meticulously playing by Google's SEO rules, I was now being penalized. My research uncovered the following:
- Google's content syndication guidelines
- Why content curators don't follow Google's syndication guidelines
Google's syndication guidelines
Google offers guidelines on how to properly syndicate content. Content curators are supposed to:
- Use a cross domain rel="canonical" tag to point back to the original article
- Use noindex meta tags to prevent the reproduced content from being indexed
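In markup terms, the two options Google recommends look something like the following sketch. These tags would go in the `<head>` of the republished (syndicated) page; the URL here is a placeholder, not an actual SME Pals or B2C address.

```html
<!-- Option 1: cross-domain canonical on the syndicated copy,
     pointing back to the original article (placeholder URL) -->
<link rel="canonical" href="https://www.original-site.example/anatomy-of-a-blog-post" />

<!-- Option 2: keep the syndicated copy out of Google's index entirely,
     while still letting crawlers follow its links -->
<meta name="robots" content="noindex, follow" />
```

Either signal tells Google to treat the original article, not the republished copy, as the version to rank.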
In reality, no content curator wants to do this because it effectively passes the SEO juice away from their own sites, or prevents them from appearing in the search index altogether.
Why Google's syndication guidelines aren't followed
I wrote the following letter to B2C:
For example, SME Pals ranks at number 1 for the phrase "Anatomy of a blog post" according to Google webmaster. That’s now changed. If you do a search on Google, you will see B2C’s version of my article in first place instead.
The problem is that you have set your canonical URLs to point to your own versions of my articles, instead of the original version (which is the point of a canonical URL).
Obviously, I would like to keep contributing to B2C, but I can’t do it at the expense of my own livelihood. If you guys can change the canonical URL to point to the original article, or make similarly appropriate index changes, then I’d be happy to continue as we are. Otherwise, I’m afraid, I’ll have to stop sharing my content in order to restore my Google rankings.
Fortunately, B2C were very quick to respond, and very helpful (which is why I will continue to syndicate content with them - albeit in moderation). But, their response shows clearly why Google's guidelines are not practical:
We don't point our canonicals back to the original site because this essentially tells Google that we want to give away all of our SEO juice for these articles, which we don't.
Please let me know if this works for you and I'll make the update.
I thought this response was fair enough - after all, they are only re-publishing content with my permission. I don't mind competing with content curation sites. My only concern is not to be unfairly penalized by Google in the process.
I thought Google's algorithms were smarter than this
Google authorship allows you to link your Google+ account with the content you create, so that Google can personalize your results in search - displaying an avatar and a link to more articles written by you.
Google recognizes me as the rightful author of my own content even when published on B2C. Despite this, and the fact that I delayed syndication to B2C by a full week in order to give Google time to index the original version first, Google seems unable to determine where the content originates.
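For reference, authorship was typically declared by linking the page to the writer's Google+ profile - a signal Google could, in principle, use to identify the rightful author. The profile ID below is a placeholder, not my actual profile.

```html
<!-- Authorship markup: link the page to the author's Google+ profile
     (placeholder profile ID) -->
<link rel="author" href="https://plus.google.com/PROFILE_ID" />
```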
Content syndication, SEO, and Google search lessons learned
Sitting with a Google penalty can be demotivating and frustrating. But, I guess it is important to learn the lessons and try to avoid repeating the mistakes in the future. Here's what I take away from this experience:
- Don't syndicate all your content: Be selective in which articles you choose to share.
- Weigh syndication pros and cons carefully: Ensure that the benefits you derive from sharing content are greater than the organic search traffic you give up.
- Google makes mistakes: Despite numerous signals indicating SME Pals as the originator of the articles, Google's Panda algorithm chose incorrectly.
- Be selective about which sites you syndicate to: While B2C won't follow the recommended guidelines set by Google, they were fortunately willing to help out.
All in all, I am glad to have learned a valuable lesson, and, provided Google restores my search rankings at some point in the near future, I'll be able to continue growing and driving revenue through content, SEO and conversions successfully.
Are you thinking of syndicating your blog content? Have you shared content with success? Failure? Share your content syndication, SEO and Google search (and Panda) fears, advice, tips and experiences in the comments, or join me on Twitter and LinkedIn to continue the conversation.