Measuring redundancy level on the Web

Afanasyev, A; Wang, J; Peng, C et al. (2011). Measuring redundancy level on the Web. 81-88. 10.1145/2089016.2089030

cited authors

  • Afanasyev, A; Wang, J; Peng, C; Zhang, L

abstract

  • This paper estimates the level of content redundancy on the Web using information collected from existing search engines. To make the measurements feasible, a representative set of Internet sites was collected by randomly sampling the Internet catalogs DMOZ and Delicious. Each page in the set was identified by a random 32-word phrase extracted from its content; these phrases were used as search engine queries to infer the number of pages with the same content. Although the presented method is far from perfectly accurate, it approximates a lower bound on the visible redundancy of the Web: long phrases will most likely belong only to duplicate pages, and only pages indexed by search engines are really visible to users. The results showed a surprisingly low level of duplication averaged over all content types, with fewer than ten duplicates for most pages. This indicates that, aside from well-known classes of highly redundant content (news, mailing-list archives, etc.), content duplication and plagiarism are not globally widespread across all types of webpages. © 2011 ACM.
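
A minimal sketch of the phrase-probing step described in the abstract (not the authors' code; page fetching and the actual search-engine querying are assumed to happen elsewhere): pick a random 32-word phrase from a page's text and quote it for an exact-phrase query, whose result count approximates the number of duplicate pages.

    import random
    import re

    PHRASE_LEN = 32  # phrase length used in the paper

    def random_phrase(page_text, length=PHRASE_LEN):
        """Return a random `length`-word phrase from the page text."""
        words = re.findall(r"\S+", page_text)
        if len(words) < length:
            raise ValueError("page too short to extract a phrase")
        start = random.randrange(len(words) - length + 1)
        return " ".join(words[start:start + length])

    def exact_match_query(page_text):
        """Quote the phrase so a search engine matches it verbatim."""
        return '"' + random_phrase(page_text) + '"'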

publication date

  • December 1, 2011

Digital Object Identifier (DOI)

  • 10.1145/2089016.2089030

start page

  • 81

end page

  • 88