The goal of this project is to develop this theory so that it can address realistic application settings. First, many risk-sensitive applications (such as agronomy) require considering other utility functions. Second, many applications, such as clinical trials, have a natural structure on the set X and restrictions on the set of possible functions f. For example, it may be known in advance that F[f(x, .)] is a monotonic, unimodal, or slowly varying function. Third, in cases such as recommender systems, the set X may be very large but possess a graph structure that can be exploited to speed up the optimization enormously. Finally, the space X can be continuous (as is the case for some parameters in autoML); we will treat this case by combining the analyses of the two previous cases with classical ideas from non-parametric statistics.
This chair is also intended to develop the teaching of artificial intelligence at the ENS de Lyon, a very selective institution with a limited number of students intending to work in research. This is an important aspect of the mission for which I was recruited last year at the intersection of the mathematics and computer science departments. Building on the students' excellent background in theoretical computer science and general mathematics, I want to train a group of statistical learning specialists each year, with a strong foundation in mathematical statistics and learning theory in general, and a specialization in noisy optimization and computer experiments in particular.
This project develops a field of expertise of French research. This approach to optimization has had many successes over the last ten years and is one of the essential building blocks of recent major achievements in artificial intelligence: personalized search engines, recommendation and advertising systems, reinforcement learning and MCTS, online auction systems, autoML, etc. This project contributes to all these aspects, providing both a general complexity theory for noisy optimization and algorithms or meta-algorithms to deal with particular cases.
Publications :
Non-Asymptotic Sequential Tests for Overlapping Hypotheses and Application to Near-Optimal Arm Identification in Bandit Models
Aurélien Garivier, Emilie Kaufmann
Sequential Analysis, vol. 40, Mar. 2021, pp. 61–96
A Non-asymptotic Approach to Best-Arm Identification for Gaussian Bandits
Antoine Barrier, Aurélien Garivier, Tomáš Kocák
International Conference on Artificial Intelligence and Statistics (AISTATS'22)
Navigating to the Best Policy in Markov Decision Processes
Aymen Al Marjani, Aurélien Garivier, Alexandre Proutiere
Neural Information Processing Systems (NeurIPS), Dec. 2021
Sequential Algorithms for Testing Closeness of Distributions
Aadil Oufkir, Omar Fawzi, Nicolas Flammarion, Aurélien Garivier
Neural Information Processing Systems (NeurIPS, spotlight), Dec. 2021
A/B/n Testing with Control in the Presence of Subpopulations
Yoan Russac, Christina Katsimerou, Dennis Bohle, Olivier Cappé, Aurélien Garivier, Wouter M Koolen
Neural Information Processing Systems (NeurIPS), Dec. 2021
Fast Rate Learning in Stochastic First Price Bidding
Juliette Achddou, Olivier Cappé, Aurélien Garivier
13th Asian Conference on Machine Learning (ACML), Nov. 2021
Epsilon Best Arm Identification in Spectral Bandits
Tomáš Kocák, Aurélien Garivier
30th International Joint Conference on Artificial Intelligence (IJCAI), Aug. 2021
Self-Concordant Analysis of Generalized Linear Bandits with Forgetting
Yoan Russac, Olivier Cappé, Aurélien Garivier
24th International Conference on Artificial Intelligence and Statistics (AISTATS'21), Apr. 2021
Efficient Algorithms for Stochastic Repeated Second-price Auctions
Juliette Achddou, Olivier Cappé, Aurélien Garivier
32nd Conference on Algorithmic Learning Theory (ALT), Mar. 2021
The recruitment of new collaborators (postdocs and PhD students) will allow us to pursue the project, in particular work package 4 (continuous optimization), which had previously been on standby. The initial objectives are enriched by a larger-than-expected axis on reinforcement learning (Markov decision processes), as well as by the growing importance of work on algorithms guaranteeing user confidentiality (differential privacy).
The goal of this project is to develop a new complexity theory for sequential and active optimization, to implement it in industrial projects, and to teach the students of Ecole Normale Supérieure de Lyon the theory and practice of sequential decision making in particular and machine learning in general.
An agent is given a stochastic numeric code f: X × E → R, where X is a set of controllable variables and E a probability space representing the uncontrollable variables. At time t, she chooses a control x_t based on all past observations and observes f(x_t, w) for some random w in E. What is the best possible strategy to optimize x ↦ F[f(x, .)], where F is some utility function (e.g., the expectation)?
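This interaction protocol can be sketched in a few lines of Python; the stochastic code below, its three controls, and the round-robin strategy are hypothetical illustrations, not part of the project:

```python
import random

def noisy_code(x, rng):
    # Hypothetical stochastic code f(x, w): a fixed mean per control
    # plus Gaussian noise playing the role of the uncontrollable w.
    means = {"a": 0.3, "b": 0.5, "c": 0.45}
    return means[x] + rng.gauss(0.0, 1.0)

def play(strategy, horizon, seed=0):
    """Run the sequential protocol: at each time t the agent picks a
    control x_t from the past observations only, then observes f(x_t, w)."""
    rng = random.Random(seed)
    history = []
    for _ in range(horizon):
        x_t = strategy(history)      # decision based on the past only
        y_t = noisy_code(x_t, rng)   # one noisy evaluation of the code
        history.append((x_t, y_t))
    return history

# A naive strategy: cycle through the three controls in round-robin order.
round_robin = lambda hist: "abc"[len(hist) % 3]
hist = play(round_robin, horizon=6)
print([x for x, _ in hist])  # ['a', 'b', 'c', 'a', 'b', 'c']
```

Any adaptive strategy fits this interface: it is simply a function of the history, which is exactly the constraint the theory places on the agent.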
We consider the PAC setting, with a given risk δ and a tolerance parameter ε. In the simplistic case where X is a finite set, f can be any function, and F is the mean, I have shown with my co-author and former student Emilie Kaufmann that identifying an ε-optimal x with probability at least 1-δ requires at least T(f)·log(1/δ) observations, where T(f) is the information-theoretic optimization complexity of the function. Moreover, we have proposed an algorithm (called Track-and-Stop) that achieves this rate (with multiplicative constant 1) when δ is small.
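Track-and-Stop itself is too involved for a short sketch, but the (ε, δ)-PAC objective can be illustrated with a standard successive-elimination baseline, which does not match the lower bound; the three Gaussian arms in the usage example are hypothetical:

```python
import math
import random

def eps_best_arm(sample, n_arms, eps, delta, seed=0):
    """Successive elimination for (eps, delta)-PAC arm identification.
    Illustrative baseline only: unlike Track-and-Stop it is not
    asymptotically optimal, but its Hoeffding-based confidence radius
    keeps the probability of a non-eps-optimal answer below delta."""
    rng = random.Random(seed)
    alive = list(range(n_arms))
    means = [0.0] * n_arms
    t = 0
    while True:
        t += 1
        for a in alive:  # sample every surviving arm once per round
            means[a] += (sample(a, rng) - means[a]) / t
        # Confidence radius with a union bound over arms and rounds.
        rad = math.sqrt(math.log(4 * n_arms * t * t / delta) / (2 * t))
        best = max(alive, key=lambda a: means[a])
        # Discard arms that are provably suboptimal.
        alive = [a for a in alive if means[best] - means[a] < 2 * rad]
        if len(alive) == 1 or 2 * rad <= eps:
            return best

# Hypothetical instance: three Gaussian arms with means 0.2, 0.5, 0.45
# (noise std 0.5, consistent with the radius used above).
arm_means = [0.2, 0.5, 0.45]
result = eps_best_arm(lambda a, rng: arm_means[a] + rng.gauss(0.0, 0.5),
                      n_arms=3, eps=0.2, delta=0.05)
print(result)  # an eps-optimal arm: here 1 or 2
```

With ε = 0.2, arms 1 and 2 are both acceptable answers (their means differ by only 0.05), which is precisely the relaxation that the ε-optimal objective buys.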
The goal of this project is to develop this theory so as to address more realistic problems. First, risk-sensitive applications such as agronomy require considering other utility functions. Second, many settings, such as clinical trials, naturally carry some structure on the set X and restrictions on the set of possible functions f. For example, F[f(x, .)] may be known in advance to be monotonic, unimodal, or slowly varying (the means at two consecutive points cannot differ by more than a prescribed quantity). Third, in cases like recommender systems, the set X can be huge, but its graph structure can be exploited to greatly accelerate optimization. Fourth, the space X can be continuous (as, for example, certain parameters in autoML); we will address this case by combining our analyses of the second and third cases with classical non-parametric ideas.
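As an illustration of how such structure helps, a unimodal mean allows a trisection-style search that evaluates only O(log |X|) points instead of all of them. The sketch below uses a hypothetical mean curve and a crude fixed number of repetitions in place of proper confidence bounds; it is only meant to convey the idea:

```python
import random

def unimodal_search(sample, n, reps=300, seed=0):
    """Noisy trisection over {0, ..., n-1} for a unimodal mean function.
    Illustration only: `reps` repetitions crudely average out the noise,
    whereas a real algorithm would use adaptive confidence bounds."""
    rng = random.Random(seed)
    est = lambda x: sum(sample(x, rng) for _ in range(reps)) / reps
    lo, hi = 0, n - 1
    while hi - lo > 2:
        m1 = lo + (hi - lo) // 3
        m2 = hi - (hi - lo) // 3
        if est(m1) < est(m2):
            lo = m1 + 1   # a unimodal peak cannot lie at or left of m1
        else:
            hi = m2 - 1   # symmetrically, it cannot lie at or right of m2
    return max(range(lo, hi + 1), key=est)

# Hypothetical unimodal mean curve on 100 points with its peak at x = 60.
f = lambda x, rng: -abs(x - 60) / 100 + rng.gauss(0.0, 0.02)
print(unimodal_search(f, 100))
```

Each round discards a constant fraction of the candidates, which is the source of the logarithmic savings that the graph-structured and continuous cases aim to generalize.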
This chair also aims at developing the teaching of artificial intelligence at Ecole Normale Supérieure de Lyon, a very selective institution with a limited number of extremely motivated students willing to train for research. This is an important aspect of my mission since I was recruited, last year, at the intersection of the mathematics and computer science departments. Relying on the existing, very good programs in theoretical computer science and in general mathematics, I want to grow a group of statistical learning specialists every year, with a strong background in mathematical statistics and learning theory in general, and a specialization in noisy optimization and computer experiments in particular.
This project develops a field of expertise of French research. This approach to optimization has proven very successful over the past ten years and is one of the building blocks of significant recent successes of AI: personalized search engines, recommender and ads systems, reinforcement learning and MCTS, online auction systems, autoML, etc. The present project is relevant to all these fields and will have a great impact by providing both a general theoretical notion of complexity for noisy optimization and new algorithms or meta-algorithms for solving practical tasks.
This project is part of the academic program in Machine Learning of the University of Lyon and Saint-Étienne, now united in the research federation SciDoLySE. This federation encourages joint work among all its participants: in addition to UMPA and LIP, I can rely on and propose collaborations with my colleagues from the Institut Camille Jordan, the Dante INRIA team, and the Laboratoire Hubert Curien (to name just a few), and disseminate the research and teaching outcomes of my project across the whole site.
Monsieur Aurélien GARIVIER (Unité de mathématiques pures et appliquées de l'ENS de Lyon)
The author of this summary is the project coordinator, who is responsible for its content. The ANR declines all responsibility for it.
UMPA – Unité de mathématiques pures et appliquées de l'ENS de Lyon
ANR funding: 539,568 euros
Beginning and duration of the scientific project: August 2020 – 48 months