PAC-Bayesian Agnostic Learning – BEAGLE
BEAGLE is rooted in statistical learning theory (see the monographs by Vapnik, 2000, and Shalev-Shwartz and Ben-David, 2014), which can be viewed as the theoretical foundation of machine learning. Statistics and its subfield statistical learning are now core parts of numerous research domains, especially since the abundance of data (the so-called big data era) is a strong incentive for researchers to rely on automated information processing and learning. This movement, partly driven by the necessity to mine the data gold, is striking in its magnitude and has led to an increasing gap between practitioners, who use off-the-shelf statistical and machine learning methods, and theoreticians, who design and study the properties of such algorithms. In particular, the mathematical assumptions needed to derive interesting properties of learning algorithms (such as consistency, or rates of convergence) are mostly ignored, or even violated, by the extensive use of such algorithms in many settings. Conversely, very few theoretical works investigate the proper tuning of algorithm parameters, which has led practitioners to develop a rather rich body of heuristics, sadly often contradicting the aforementioned mathematical assumptions. While the extensive use of statistical and machine learning algorithms has led to significant advances in many domains, the potential of statistics and statistical learning is far from fulfilled. This calls for a steady research effort from the mathematical community to reduce the striking mismatch between theoreticians and users. BEAGLE aims at bridging this gap by providing an integrated framework called agnostic learning. The aforementioned gap originates in the fact that, for both theoreticians and practitioners, subjective elements play a decisive part in learning. Indeed, human decisions (such as parameter tuning) that carry no theoretical meaning, and theory resting on unrealistic assumptions, are equally harmful to establishing a common framework.
Building an efficient learning algorithm therefore relies on the following four key ingredients: assumptions on the data (such as independent and identically distributed observations, or noise), assumptions on the model (such as computational constraints, or the specification of its analytic form), parameter tuning (where heuristic recipes are often favored over sound mathematical arguments), and a proper metric (a loss function to assess the mathematical and practical performance of an algorithm). This gnostic setting assumes some underlying truth, which can at best be approximated and modeled, with no possible validation. BEAGLE proposes to remove all subjectivity from the learning process. The key ingredient is PAC-Bayesian theory, coupled with metric learning, objective Bayes and computational statistics. BEAGLE targets both a theoretical and an algorithmic impact.
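To give a flavour of the key ingredient mentioned above (the following bound is not part of the project summary itself, but a classical illustration of PAC-Bayesian theory): such results bound the true risk of any posterior distribution over hypotheses in terms of its empirical risk and its divergence from a prior. Writing $\pi$ for the prior, $\rho$ for any posterior, $r$ for the empirical risk on an i.i.d. sample of size $n$, $R$ for the true risk, and $1-\delta$ for the confidence level, McAllester's bound (in Maurer's refined form) reads:

```latex
% Classical PAC-Bayesian bound (McAllester, 1999; Maurer, 2004).
% With probability at least 1 - \delta over the draw of the n-sample,
% simultaneously for every posterior distribution \rho:
\mathbb{E}_{h \sim \rho} R(h)
  \;\le\; \mathbb{E}_{h \sim \rho} r(h)
  \;+\; \sqrt{\frac{\mathrm{KL}(\rho \,\|\, \pi) + \ln \frac{2\sqrt{n}}{\delta}}{2n}}
```

The bound holds uniformly over all posteriors, so it can be optimized over $\rho$; this optimization is what connects the theory to concrete, tuning-free learning algorithms.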
Mr Benjamin Guedj (Inria Lille - Nord Europe)
The author of this summary is the project coordinator, who is responsible for its content. The ANR declines any responsibility for its content.
Inria Lille - Nord Europe (Inria LNE)
ANR grant: 181,116 euros
Beginning and duration of the scientific project: February 2019 - 48 months