A Simple Linear Ranking Algorithm Using Query Dependent Intercept Variables
Computer Science – Information Retrieval
Scientific paper
2008-10-15
5 pages
The LETOR website contains three information retrieval datasets used as a benchmark for testing machine learning ideas for ranking. Algorithms participating in the challenge are required to assign score values to search results for a collection of queries, and are measured using standard IR ranking measures (NDCG, precision, MAP) that depend only on the relative, score-induced order of the results. Like many of the participating algorithms, we train a linear classifier. In contrast to the other participating algorithms, we define an additional free variable (an intercept, or benchmark) for each query. This makes explicit the fact that results for different queries are incomparable for the purpose of determining relevance. The cost of this idea is the addition of relatively few nuisance parameters. The approach is simple, and we tested it using a standard logistic regression library. The results beat those reported for the participating algorithms. It therefore seems promising to combine our approach with other, more complex ideas.
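The following is a minimal sketch of the query-dependent intercept idea as described in the abstract, not the authors' actual implementation. It assumes binary relevance labels, integer query identifiers, and scikit-learn's LogisticRegression as the "standard logistic regression library"; the per-query intercepts are realized as one-hot indicator columns appended to the feature matrix.

```python
# Sketch of linear ranking with one free intercept (nuisance parameter) per query.
# Feature data, query IDs, and the choice of scikit-learn are illustrative assumptions.
import numpy as np
from sklearn.linear_model import LogisticRegression

def fit_ranker(X, y, query_ids):
    """Fit a linear classifier with a separate intercept for each query.

    X         : (n_samples, n_features) features of query-result pairs
    y         : (n_samples,) binary relevance labels
    query_ids : (n_samples,) integer query identifiers
    """
    queries = np.unique(query_ids)
    # One-hot columns act as the per-query intercepts (nuisance parameters).
    intercepts = (query_ids[:, None] == queries[None, :]).astype(float)
    X_aug = np.hstack([X, intercepts])
    # fit_intercept=False: the global intercept is absorbed by the per-query ones.
    model = LogisticRegression(fit_intercept=False, max_iter=1000)
    model.fit(X_aug, y)
    # Keep only the shared feature weights; the intercepts are discarded at test time.
    return model.coef_[0][: X.shape[1]]

def rank_results(X_query, w):
    """Order the results of a single query by the linear score w·x.

    The query's intercept is constant within the query, so it does not
    affect this ordering and can be dropped when ranking."""
    scores = X_query @ w
    return np.argsort(-scores)

# Toy usage with random data (purely illustrative).
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 5))
query_ids = rng.integers(0, 20, size=200)
y = (X[:, 0] + 0.1 * rng.normal(size=200) > 0).astype(int)
w = fit_ranker(X, y, query_ids)
order = rank_results(X[query_ids == 3], w)
```

Since the IR measures used (NDCG, precision, MAP) depend only on the within-query ordering, dropping the per-query intercept at ranking time changes nothing: it shifts every score for that query by the same constant.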