Webmasters of large dynamic sites often can’t do without pagination. Beyond the usability questions, though, pagination raises another serious issue: duplicate content.
The problem is that all paginated pages normally share identical titles and meta descriptions. While some SEOs don’t consider this an issue at all (arguing that search engines “don’t see identical titles as duplicate content”), I believe it can at least hurt a site’s crawl rate and crawl depth.
The problem has a number of possible solutions, none of which, unfortunately, is perfect:
Add a unique portion to the beginning of each page’s title/description,
e.g. <title>A-G Blue Widgets</title>
This unique element can’t always be derived from the paginated content (most often the only distinguishing variable will be the page number).
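As a sketch of that first approach, a template could build a distinguishing title fragment from the item range shown on each page. The function name and title pattern below are illustrative assumptions, not a prescribed format:

```python
def page_title(base, items_per_page, page):
    """Build a unique <title> for a paginated listing.

    The first page keeps the clean base title; later pages append the
    item range and page number so no two titles are identical.
    (This exact wording is just an example pattern.)
    """
    if page == 1:
        return base  # keep the main listing's title untouched
    start = (page - 1) * items_per_page + 1
    end = page * items_per_page
    return f"{base} - Items {start}-{end} (Page {page})"

print(page_title("Blue Widgets", 20, 1))  # → Blue Widgets
print(page_title("Blue Widgets", 20, 3))  # → Blue Widgets - Items 41-60 (Page 3)
```

The same idea works for meta descriptions: inject the range or page number so the tag differs on every page.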
Alternatively, avoid pagination altogether and list all results on one page. This can be done only if there are not too many results to list; otherwise the page becomes enormous and search engines won’t be able to crawl all of it.
Add a noindex meta tag to each page except the first, which keeps search engines from indexing those pages while still allowing them to crawl and follow the links:
<meta name="robots" content="noindex, follow">
Naturally, none of the pages except the first will ever rank, and this still doesn’t solve the “PageRank leak” problem (i.e. too many links to follow and pass weight through).
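A minimal sketch of the noindex approach, assuming the template knows the current page number (the helper name here is hypothetical):

```python
def robots_meta(page):
    """Return the robots meta tag for a paginated page.

    Pages after the first get noindex, follow: they are kept out of the
    index, but crawlers may still follow the links they contain.
    """
    if page > 1:
        return '<meta name="robots" content="noindex, follow">'
    return ""  # page 1 stays fully indexable

print(robots_meta(1))  # → (empty string, nothing emitted)
print(robots_meta(2))  # → <meta name="robots" content="noindex, follow">
```

The tag would be emitted inside the <head> of each paginated template.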
Now, while each of these solutions has its pros and cons, I have a question for you: how do you handle these issues?
- Do you try to avoid pagination at all?
- Do you believe pagination doesn’t create duplicate content?
- Can you think of any other solution not listed here?