Learning preference distributions is a critical problem in many areas (e.g., recommender systems, IR, social choice). However, many existing learning and inference methods impose restrictive assumptions on the form of user preferences that can be admitted as evidence. We relax these restrictions by considering as data arbitrary pairwise comparisons of alternatives, which represent the fundamental building blocks of ordinal rankings. We develop the first algorithms for learning Mallows models (and mixtures thereof) from pairwise comparison data. At the heart of our technique is a new algorithm, the generalized repeated insertion model (GRIM), which allows sampling from arbitrary ranking distributions, and conditional Mallows models in particular. While we show that sampling from a Mallows model with pairwise evidence is computationally difficult in general, we develop approximate samplers that are exact for many important special cases (and have provable bounds with pairwise evidence), and derive algorithms for evaluating log-likelihood, learning Mallows mixtures, and non-parametric estimation. Experiments on real-world data sets demonstrate the effectiveness of our approach. (Some parts of this paper appeared in: T. Lu and C. Boutilier, Learning Mallows Models with Pairwise Preferences, Proceedings of the Twenty-Eighth International Conference on Machine Learning (ICML 2011), pp. 145-152, Bellevue, WA (2011).)
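As context for the abstract's GRIM algorithm: GRIM generalizes the standard repeated insertion model (RIM), which draws an exact sample from a Mallows model (with dispersion parameter phi in (0, 1] and Kendall-tau distance) by inserting each item of the reference ranking, one at a time, at a randomly chosen position. The sketch below illustrates plain RIM only, not the conditional/GRIM extension developed in the paper; the function name and interface are illustrative assumptions.

```python
import random

def rim_sample(reference, phi, rng=random):
    """Sample one ranking from a Mallows model via the repeated
    insertion model (RIM). `reference` is the modal ranking;
    `phi` in (0, 1] is the dispersion (phi -> 0 concentrates on
    the reference; phi = 1 is uniform over all rankings)."""
    ranking = []
    for i, item in enumerate(reference, start=1):
        # Insert the i-th item at (1-based) position j with
        # probability proportional to phi^(i - j), so positions
        # nearer the end (j = i) are favored when phi < 1.
        weights = [phi ** (i - j) for j in range(1, i + 1)]
        j = rng.choices(range(i), weights=weights)[0]
        ranking.insert(j, item)
    return ranking

# Example usage (hypothetical items):
# rim_sample(['a', 'b', 'c', 'd'], phi=0.5)
```

With a very small phi the sampler returns the reference ranking almost surely, which gives a quick sanity check of the insertion probabilities.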