Wednesday, October 2, 2013

SEO - Avoid Getting Duped

The first thing to understand about avoiding search engine penalties is that the duplicate content that counts against you is your own. What other websites do with your content is largely out of your control, much like who links to you. When your content is duplicated, you risk fragmentation of your rank, anchor text dilution, and a host of other side effects. But how do you tell in the first place? Use the "value" factor and ask yourself: Does this content add value, or am I just copying it for no reason? Is this version of the page essentially new, or just a slight rewrite of a previous one? Make sure you are adding unique value. Am I sending the engines a bad signal? Search engines can identify duplicate content candidates from a number of signals, and much like ranking, the most popular version gets identified while the rest are marked as duplicates.

Every site is likely to have some variants of duplicate content, and that is fine. The key is how you manage them. There are legitimate reasons to have duplicate content, including: 1) alternative document formats, such as a print or PDF version of a page, and 2) syndicated content, such as RSS feeds. In the first case, we have alternate ways of delivering our content. We should pick a default format and disallow the engines from crawling the others, while still letting users access them. We can do this by adding the proper rules to the robots.txt file, and by making sure we exclude the URLs of those versions from our sitemaps as well. Speaking of URLs, you should also use the nofollow attribute on your own internal links to duplicate pages; just keep in mind that this alone will not remove them from consideration, because other people can still link to them.
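As a concrete sketch of that idea, suppose the print-friendly and RSS-rendered versions of our articles live under /print/ and /feed-view/ (hypothetical paths, just for illustration). The robots.txt rules might look like this:

    # robots.txt - keep the engines out of the alternate formats;
    # users who have the link can still reach them directly
    User-agent: *
    Disallow: /print/
    Disallow: /feed-view/

And an internal link pointing at one of those versions could carry the nofollow attribute like so:

    <!-- hypothetical internal link to the print version -->
    <a href="/print/article-123.html" rel="nofollow">Printer-friendly version</a>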

As for the second case, if you own a page that consists of a rendering of an RSS feed from another website, and ten other sites also have pages based on that feed, then this may look like duplicate content to the search engines. The bottom line is that you are probably not at risk for duplication unless a substantial part of your site relies on such feeds. With your CSS as an external file, make sure you put it in a separate folder and exclude that folder from being crawled in your robots.txt, and do the same for your JavaScript or any other common external code. Any URL has the potential to be counted by the search engines, so unless you manage them properly, two URLs referring to exactly the same content will look like duplicates.
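A minimal sketch of the exclusion, assuming the stylesheets and scripts live in /css/ and /js/ folders (again, hypothetical paths):

    # robots.txt - keep shared external code out of the crawl
    User-agent: *
    Disallow: /css/
    Disallow: /js/

And for two URLs that serve exactly the same content, say /shoes?color=red and /shoes, the standard mechanism (not named above, but supported by the major engines since 2009) is a canonical link element in the page's head, telling the engines which URL should be the one that counts:

    <!-- hypothetical: point both URL variants at the preferred one -->
    <link rel="canonical" href="http://www.example.com/shoes" />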