Computer Science – Information Retrieval
Scientific paper
2010-06-23
Journal of Computing, Vol. 2 No. 6, June 2010, NY, USA, ISSN 2151-9617
IEEE Publication Format, https://sites.google.com/site/journalofcomputing/
This study considers the extent to which users with the same query agree on what is relevant, and how what is considered relevant may translate into a retrieval algorithm and results display. To combine user perceptions of relevance with algorithm rank and to present results, we created a prototype digital library of scholarly literature. We confine our studies to one population of scientists (paleontologists), one domain of scholarly scientific articles (paleo-related), and a prototype system (PaleoLit) that we built for the purpose. Based on the principle that users do not presuppose answers to a given query but will recognize what they want when they see it, our system uses a rules-based algorithm to cluster results into fuzzy categories with three relevance levels. Our system matches at least one-third of our participants' relevance ratings 87% of the time. Our subsequent usability study found that participants trusted our uncertainty labels but did not value our color-coded horizontal results layout above a standard retrieval list. We posit that users make such judgments in limited time, and that time optimization per task might help explain some of our findings.
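The abstract describes bucketing rule-based scores into three fuzzy relevance levels and measuring whether at least one-third of participants agree with the system's label. The paper does not publish its rules, so the following is only a minimal illustrative sketch under assumed thresholds (the function names, cutoffs of 0.33/0.66, and label strings are all hypothetical, not PaleoLit's actual implementation):

```python
# Hypothetical sketch of three-level relevance bucketing and the
# one-third agreement criterion described in the abstract.
# Thresholds and labels are assumptions, not the paper's values.

def relevance_level(score, high=0.66, low=0.33):
    """Map a normalized rule score in [0, 1] to a coarse relevance label."""
    if score >= high:
        return "highly relevant"
    if score >= low:
        return "possibly relevant"
    return "marginally relevant"

def agrees_with_a_third(system_label, user_labels):
    """True if at least 1/3 of user ratings match the system's label,
    mirroring the abstract's agreement criterion."""
    matches = sum(1 for label in user_labels if label == system_label)
    return matches * 3 >= len(user_labels)
```

For example, if the system labels a result "highly relevant" and one of three participants agrees, `agrees_with_a_third` counts that as a match under the abstract's one-third criterion.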
Dong Cao, Jaime Carbonell, Judith Gelernter
Studies on Relevance, Ranking and Results Display