
Duplicate content is a common problem online. It occurs when large chunks of identical or very similar text appear on more than one web page, either within a single domain or across domains. Duplicate content is a concern in search engine optimization (SEO) because search engines avoid including several similar pages in search results, which can hurt a site's visibility and the user experience.
Some people believe that duplicate content only hurts your search ranking when it is spammy. Either way, anyone who wants to reduce the risk of duplicate content on their website can benefit from the following points:
- Use 301 redirects to send traffic from old or archived URLs to the new URLs. This significantly reduces the risk of duplicate content.
- Use the noindex tag. If there are pages with duplicate content on your site that you do not want to appear in search results, use the noindex tag to keep them out of the index.
- Use the rel="canonical" tag. If there are multiple versions of a page with very similar content, use the canonical tag to tell Google which version is the "canonical" URL, i.e. the original.
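As a rough sketch of the last two points (the filenames and URLs here are made up for illustration), both the noindex and the canonical approach are declared in the page's head section:

```html
<!-- Hypothetical near-duplicate page, e.g. "product-b.html" -->
<head>
  <!-- Option 1: tell search engines not to index this page at all -->
  <meta name="robots" content="noindex">

  <!-- Option 2: point search engines to the original ("canonical") version -->
  <link rel="canonical" href="https://example.com/product-a.html">
</head>
```

In practice you would pick one approach per page, not both: noindex removes the page from results entirely, while a canonical tag consolidates ranking signals onto the original URL. A 301 redirect, by contrast, is usually configured on the server rather than in the page itself; in Apache, for example, a single line such as `Redirect 301 /old-page.html https://example.com/new-page.html` in the site configuration or .htaccess file does the job.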
Below you will find an interesting article about duplicate content:
The myth of the duplicate content penalty
How much duplicate content is available online?
How much of the web is duplicate content? According to Matt Cutts, 25-30 percent of the web is duplicate content. A recent study by Raven Tools, based on data from their website auditing tool, reached a similar result: 29 percent of the pages on the web are duplicate content.
Synonyms for duplicate
- multiply, make copies of; double