Rules of Thumb for Information Acquisition from Large and Redundant Data
Computer Science – Information Retrieval
Scientific paper
2010-12-16
Full version of upcoming ECIR 2011 conference paper
40 pages, 17 figures; for details see the project page: http://uniquerecall.com
We develop an abstract model of information acquisition from redundant data. We assume a random sampling process from data that provide information with bias, and we are interested in the fraction of information we expect to learn as a function of (i) the sampled fraction (recall) and (ii) the varying bias of information (redundancy distributions). We develop two rules of thumb with varying robustness. We first show that, when information bias follows a Zipf distribution, the 80-20 rule or Pareto principle surprisingly does not hold: rather, we expect to learn less than 40% of the information when randomly sampling 20% of the overall data. We then analytically prove that, for large data sets, randomized sampling from power-law distributions leads to "truncated distributions" with the same power-law exponent. This second rule is very robust and also holds for distributions that deviate substantially from a strict power law. We further give one particular family of power-law functions that remains completely invariant under sampling. Finally, we validate our model with two large Web data sets: link distributions to domains and tag distributions on delicious.com.
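The first rule of thumb can be illustrated with a small simulation. The sketch below is a hypothetical setup, not the paper's actual model: it builds a synthetic corpus in which the i-th distinct piece of information occurs with Zipf-like redundancy (proportional to 1/i), samples a fraction of the corpus uniformly at random, and reports the fraction of distinct pieces recovered ("unique recall"). The exact number learned depends on the chosen redundancy profile and corpus size, but the qualitative effect — sampling 20% of redundant data recovers well under 80% of the information — is visible.

```python
import random

def unique_recall(num_facts=1000, sample_frac=0.20, seed=0):
    """Simulate unique recall under Zipf-like redundancy.

    Hypothetical model: fact i (1..num_facts) appears in the data
    roughly num_facts/i times (Zipf exponent 1, floor of one copy).
    We sample sample_frac of the data uniformly without replacement
    and return the fraction of distinct facts observed.
    """
    rng = random.Random(seed)
    data = []
    for i in range(1, num_facts + 1):
        copies = max(1, round(num_facts / i))
        data.extend([i] * copies)
    sample = rng.sample(data, int(sample_frac * len(data)))
    return len(set(sample)) / num_facts

if __name__ == "__main__":
    # Sampling 20% of the (redundant) data recovers far less
    # than 80% of the distinct facts.
    print(f"unique recall at 20% sample: {unique_recall():.2f}")
```

Because frequent facts are seen many times while rare facts are often missed entirely, recall of distinct information grows sublinearly in the sampled fraction, which is the intuition behind the paper's first rule.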