
Potential Benefits of Duplicate Content

This is a guest post by Duncan, a digital marketing expert.

Last week, I wrote a guest post here on duplicate content, detailing some of the ways Google determines the original source of content and which versions have been scraped or syndicated. Whilst it is often the case that you don’t want your content appearing all over the web due to duplicate content issues, someone raised an interesting point in the comments about the positives of having your content out there, duplicate or not. He said that not only did he like the fact that his content was everywhere, but also that he often didn’t mind when the original was outranked by the copies, or even filtered out of Google’s index. It was a very good point, so I thought I would elaborate on it and explore some more reasons why dupe content might not always be a bad thing.

Better Visibility

If you’re sending your content marketing out to the right places, you can often find that the sites it appears on are more powerful and receive more traffic than your own. It’s easy to become over-proud of your content and your site, and to lose focus on what your goals for the content actually are. Often it is the message contained within a blog post or article that is the important thing, and the more people who get to read that message, the better. By having your content in many different places you cover a much larger expanse of the web, not only because people stand a better chance of finding it just by browsing around, but also because the other sites might have large numbers of followers and subscribers.

Furthermore, there is even a potential benefit to being outranked in the SERPs by syndicated versions of your content. If the content sits on a powerful and respected site, it not only stands a better chance of ranking above other similar content, but it could also receive a higher click-through rate than your more ‘unknown’ site would have received.


How Google Handles Duplicate Content

This is a guest post from Duncan, an internet marketer who blogs about everything from on-site optimization to finding the best links on the net.

Duplicate content is a hot topic at the moment, with much speculation about whether it can harm your site, or whether you can actually benefit from scraping content from other sites and placing it on your own. Most webmasters, bloggers and SEO experts agree that accidental internal dupe content, caused by pagination, categorization and the like, won’t harm your site’s power (apart from diluting internal linking power across the duplicate pages), unless it is interpreted as manipulative duplication, which can lead to penalties.
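If you want to check your own site for this kind of accidental internal duplication, one quick approach is to compare the main text of your pages directly. The sketch below is a minimal, hypothetical illustration in Python: it assumes you already have each page’s extracted body text in a dictionary keyed by URL, and it simply groups URLs whose normalized text is identical. It is not how Google assesses duplication, just a way to spot obvious internal dupes such as paginated or category copies of the same post.

import hashlib
import re
from collections import defaultdict

def normalize(text):
    # Lower-case and collapse whitespace so trivial formatting
    # differences don't hide an otherwise identical page.
    return re.sub(r"\s+", " ", text.lower()).strip()

def group_duplicates(pages):
    # `pages` is a dict of {url: extracted_body_text}.
    # Returns only groups containing more than one URL,
    # i.e. likely internal duplicate content.
    groups = defaultdict(list)
    for url, text in pages.items():
        digest = hashlib.sha1(normalize(text).encode("utf-8")).hexdigest()
        groups[digest].append(url)
    return [urls for urls in groups.values() if len(urls) > 1]

# Hypothetical example: the same post reachable from three internal URLs.
pages = {
    "/blog/duplicate-content-tips/": "Duplicate content is a hot topic...",
    "/category/seo/page/2/": "Duplicate content is a hot topic...",
    "/blog/2012/03/duplicate-content-tips/": "Duplicate content is a hot topic...",
}

for urls in group_duplicates(pages):
    print("Possible internal duplicates:", urls)

On a real site you would pull the body text with a crawler, and you might prefer a fuzzier comparison than an exact hash, since boilerplate such as headers and sidebars can make otherwise identical pages differ slightly.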

Aside from the legal ramifications, there seems to be little negative effect from content theft – taking content from other people’s sites and publishing it on your own. Indeed, many people populate their pages with RSS or other feeds from external sites, and do not report any ranking problems for the pages of their site that do contain unique copy.

One thing that many people are not clear on, though, is how Google determines which page is the original source of the copy and which are the duplicate versions. Here are some of the things it looks at when separating original from duplicate content.