CE23 - Data, Knowledge, Big Data, Multimedia Content, Artificial Intelligence

Preference Learning under Severe Uncertainties – PreServe

Submission summary

Learning user preferences plays an essential role in many problems, from helping a decision maker choose between a few complex alternatives to helping a user pick the best option among thousands or millions of them. A common problem in preference learning is that the preferences collected from users may be subject to strong uncertainty and imprecision, because users may not be completely sure about their preferences, or because preferences are only given over a very small subset of objects (e.g., pairwise preferences on a small set of pairs when the objects live in large combinatorial domains). Faithfully accounting for such uncertainties can be a difficult task, unless one is ready to make extra, sometimes hard-to-check assumptions (e.g., that the decision maker behaves probabilistically). Such extra assumptions may lead to biased inferences, which in turn can result in unwanted or sub-optimal decisions.

Imprecise probability (IP) theories allow the modeller to avoid such assumptions when they are unwarranted, by explicitly representing imprecision in the uncertainty model (e.g., by considering the set of all probabilities consistent with the imprecisely or uncertainly observed values). In PreServe, we propose to investigate the advantages of using these theories in two different learning problems involving preferences, or more generally rankings:

1. Uncertainties in elicited preferences of multi-criteria models: a common problem when confronted with a multi-criteria choice is to collect meaningful preferences from a user. If the alternatives are complex, the user may have difficulty comparing them, and may therefore be uncertain about their answers. PreServe proposes to explore the potential of IP theories to model such uncertainties, to integrate them within the elicitation protocol, and to solve issues such as inconsistency removal or model choice.

2. Fitting a statistical ranking model (Mallows, Plackett-Luce, …) to observed preferences (e.g., coming from a population of users) is particularly challenging when those preferences are incomplete. Imprecise probabilistic approaches, which can treat imprecise data with a minimal amount of assumptions, seem well suited to such challenges, yet there are still very few applications of IP to these problems. A second axis of PreServe will be to develop IP inference methods for such models.
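As a rough illustration of the second axis, the sketch below fits a (precise) Plackett-Luce model to fully observed rankings by gradient ascent on the log-likelihood, which is concave in the log-weights. The function name, learning rate, and iteration count are illustrative choices of ours; the imprecise and incomplete-data extensions targeted by the project are deliberately not part of this toy example.

```python
import math

def fit_plackett_luce(rankings, n_items, n_iter=1000, lr=0.05):
    """Maximum-likelihood Plackett-Luce weights via gradient ascent on
    log-weights. Each ranking is a list of item indices, best first."""
    theta = [0.0] * n_items
    for _ in range(n_iter):
        grad = [0.0] * n_items
        for r in rankings:
            for j in range(len(r) - 1):        # last stage is uninformative
                stage = r[j:]                  # items still unranked
                total = sum(math.exp(theta[k]) for k in stage)
                grad[r[j]] += 1.0              # r[j] "wins" this stage
                for k in stage:                # expected-win correction
                    grad[k] -= math.exp(theta[k]) / total
        theta = [t + lr * g for t, g in zip(theta, grad)]
        mean = sum(theta) / n_items            # fix scale (identifiability)
        theta = [t - mean for t in theta]
    return [math.exp(t) for t in theta]

# Item 0 is ranked first in 8 of 10 rankings, so it should get the
# largest weight, followed by item 1.
rankings = [[0, 1, 2]] * 5 + [[0, 2, 1]] * 3 + [[1, 0, 2]] * 2
w = fit_plackett_luce(rankings, 3)
```

Gradient ascent is used here only for simplicity; dedicated minorize-maximize schemes are the usual choice for fitting Plackett-Luce models at scale.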

Once solutions adapted to these two problems have been obtained, we will investigate how imprecise probability theories can help transfer preference data obtained from a large population to an individual who is related to this population (but not necessarily drawn from it), and about whose preferences we have no knowledge. In short, how should we transfer (and possibly revise and/or weaken) generic knowledge to a single person, in order to efficiently solve cold-start problems (how to recommend without any knowledge of the individual) without suffering from too strong a bias in our inferences?
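As a minimal sketch of the "set of probabilities" idea underlying IP theories, the toy function below (the name and the example bounds are our own, not the project's) computes the lower and upper expectation of a utility function when each outcome's probability is only known to lie in an interval. It uses the standard greedy solution of the underlying box-constrained linear programme, assuming the intervals are consistent (sum of lower bounds ≤ 1 ≤ sum of upper bounds).

```python
def bounded_expectation(utilities, lower, upper, maximize=False):
    """Lower (or upper) expectation of `utilities` over all probability
    vectors p with lower[i] <= p[i] <= upper[i] and sum(p) == 1."""
    p = list(lower)                    # start every outcome at its lower bound
    slack = 1.0 - sum(lower)           # probability mass still to distribute
    order = sorted(range(len(utilities)), key=lambda i: utilities[i],
                   reverse=maximize)   # fill worst (or best) outcomes first
    for i in order:
        take = min(upper[i] - lower[i], slack)
        p[i] += take
        slack -= take
    return sum(pi * ui for pi, ui in zip(p, utilities))

u = [0.0, 0.5, 1.0]                    # utility of each outcome
l = [0.1, 0.2, 0.3]                    # lower probability bounds
h = [0.5, 0.5, 0.6]                    # upper probability bounds
low = bounded_expectation(u, l, h)                  # -> 0.4
high = bounded_expectation(u, l, h, maximize=True)  # -> 0.75
```

The gap between the lower and upper expectations (here 0.4 vs 0.75) is precisely the imprecision that a single probability distribution would hide, and that IP-based decision rules take into account.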

Project coordination

Sébastien Destercke (Heuristique et diagnostic des systèmes complexes)

The author of this summary is the project coordinator, who is responsible for its content. The ANR declines all responsibility with respect to it.

Partner

HEUDIASYC Heuristique et diagnostic des systèmes complexes

ANR funding: 271,866 euros
Beginning and duration of the scientific project: - 48 months
