Over the past few years, many problems of automatically computing sparse representations of data have been addressed through convex optimization formulations. However, aiming for sparsity actually involves an l0 “norm” regularization or constraint, and convex optimization is essentially a proxy for achieving this sparsity. Here, we want to focus on another way of dealing with the sparsity objective which, in our opinion, has been overlooked: greedy methods.
In essence, the aim of GRETA is to draw, and take advantage of, formal and precise connections between machine learning and signal processing from the greedy approximation standpoint. More specifically, we will address, and bring original solutions to, the following partially solved or unsolved problems: <br />• the development of new MP/OMP-like procedures for large (unions of) dictionaries that do not naturally allow for fast computations: i) algorithmic developments are expected, relying on the creation of appropriate data structures, ii) questions related to structured sparsity will be of primary interest, and iii) the problem of learning appropriate dictionaries will be tackled; <br />• the development of new MP/OMP-like procedures for arbitrary loss functions other than the square loss: i) the speed of convergence of the proposed algorithms will be central to this problem, and ii) the relevance of the algorithms for various ML tasks (e.g. semi-supervised learning/transduction and its connection to inpainting) will be studied; <br />• the blessings of greediness: beyond mere computational efficiency, we anticipate that there are a number of situations where greediness has beneficial side effects from the generalization standpoint in ML; as a consequence, we will work on proposing new tools for ML that formally link greediness and generalization; these tools may rely heavily on quantities such as the spark or the incoherence. <br />
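As a concrete point of reference for the MP/OMP-like procedures mentioned above, here is a minimal sketch of Orthogonal Matching Pursuit for the square loss, assuming a dictionary with unit-norm columns. The function name and the lstsq-based projection step are illustrative choices, not the project's actual implementation:

```python
import numpy as np

def omp(D, y, n_nonzero):
    """Orthogonal Matching Pursuit: greedily select atoms (columns of D,
    assumed unit-norm) and re-fit the coefficients by least squares."""
    residual = y.copy()
    support = []
    coef = np.zeros(D.shape[1])
    for _ in range(n_nonzero):
        # greedy step: atom most correlated with the current residual
        j = int(np.argmax(np.abs(D.T @ residual)))
        if j not in support:
            support.append(j)
        # orthogonal projection of y onto the span of the selected atoms
        sol, *_ = np.linalg.lstsq(D[:, support], y, rcond=None)
        coef[:] = 0.0
        coef[support] = sol
        residual = y - D[:, support] @ sol
    return coef, support
```

The greedy selection step above is a full scan over all atoms; the data-structure work mentioned in the first bullet targets precisely this bottleneck for large dictionaries.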
The work of the Greta team heavily relies on theories and accompanying tools from convex optimization, combinatorial optimization, computational learning theory and fundamental computer science.
The strategy implemented by the group is to revisit well-known results and algorithms in light of the new perspectives provided by the aforementioned fields. This way, we try to understand the extent to which existing methods can be extended and, if not, to clearly pinpoint their intrinsic limits.
There are a few results of the Greta project that can be highlighted:
- A new principle to accelerate optimization methods that find sparse solutions of convex problems has been devised under the name of “dynamic screening”. It earned Antoine Bonnefoy, a PhD student funded by the project, the Best Student Paper award at EUSIPCO 2014.
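For illustration, screening rules of the kind dynamic screening builds on rest on a simple geometric test: an atom can be safely discarded from a Lasso problem if its correlation with every dual point in a "safe sphere" stays below the regularization level. The sketch below is a simplified, hypothetical version (the actual procedure refines the sphere along the solver's iterations, which is what makes the screening dynamic):

```python
import numpy as np

def sphere_test(X, center, radius, lam):
    """Safe sphere test for the Lasso min 0.5*||y - Xw||^2 + lam*||w||_1.
    Atom j can be discarded if max of |x_j^T theta| over the sphere
    {theta : ||theta - center|| <= radius} is below lam, i.e. if
    |x_j^T center| + radius*||x_j|| < lam."""
    scores = np.abs(X.T @ center) + radius * np.linalg.norm(X, axis=0)
    return scores < lam  # boolean mask of atoms that cannot be active
```

Discarded atoms shrink the dictionary seen by the inner solver, which is where the acceleration comes from.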
- A greedy optimization method has been developed, based on an active-set strategy, to identify, among a set of infinitely many features, those that are essential to encode a signal.
- A family of greedy algorithms designed to minimize a quadratic cost has been generalized to the problem of finding zeros of Hilbertian operators. The method, combined with the Moreau-Yosida regularization of the Poisson likelihood, has been applied to restore images corrupted by Poisson noise.
In addition, a successful workshop gathering people from signal processing, machine learning and combinatorial optimization was held in Marseille in September 2014. A workshop with an even wider audience is planned for 2015.
A few international researchers have visited (or plan to visit) the Greta sites.
The main prospects are:
- to revisit the question of inpainting from the transduction point of view and from the greedy optimization point of view;
- to see how theoretical results on the recoverability of signals can be derived for the very efficient Frank-Wolfe optimization procedure;
- to make the connections between greedy methods and machine learning techniques that have been devised for bandit problems, knowing that these techniques primarily revolve around local decisions, just as greedy methods do;
- to understand how greedy methods can be used to create active learning methods.
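Regarding the Frank-Wolfe prospect above, the algorithm's greedy flavour is easy to see in code: each iteration moves toward a single vertex of the constraint set, so iterates over an l1 ball stay sparse. The following is a minimal sketch for least squares over an l1 ball, with the standard 2/(t+2) step size; the function name and setup are illustrative, not the project's implementation:

```python
import numpy as np

def frank_wolfe_l1(X, y, tau, n_iters=200):
    """Frank-Wolfe for min 0.5*||y - Xw||^2 s.t. ||w||_1 <= tau.
    The linear minimization oracle over the l1 ball returns a signed,
    scaled coordinate vector, so each step touches a single feature."""
    n_features = X.shape[1]
    w = np.zeros(n_features)
    for t in range(n_iters):
        grad = X.T @ (X @ w - y)
        # oracle: vertex of the l1 ball most aligned with -grad
        j = int(np.argmax(np.abs(grad)))
        s = np.zeros(n_features)
        s[j] = -tau * np.sign(grad[j])
        gamma = 2.0 / (t + 2.0)  # standard diminishing step size
        w = (1 - gamma) * w + gamma * s
    return w
```

The per-iteration selection of a single coordinate is exactly the kind of local, greedy decision whose recoverability properties the prospect proposes to analyze.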
- 2 journal articles, 1 under minor revision
- 9 international conference papers
Over the past few years, many problems of automatically computing sparse representations of data have been addressed through convex optimization formulations. This holds for contributions both in machine learning and in signal processing. However, aiming at sparsity actually involves an l0 “norm” regularization or constraint, and convex optimization is essentially a proxy to achieve this sparsity – through recourse to an l1 norm, which is itself a proxy for the l0. In this project, we want to focus on another way of dealing with the sparsity objective which, in our opinion, has been neglected recently: greedy methods. From a broad perspective, greedy algorithms constitute another way of approximating the solution of an l0-based problem, and we think many interesting questions accompany the use of greedy methods. In particular, we will carry out our research along three different axes. First, we will devote time to developing new projection/matching-pursuit-based algorithms, focusing on two specific points: on the one hand, we will make use of elaborate data structures and algorithms (e.g. kd-trees, hash tables, graph algorithms) to speed up matching pursuit learning, and, on the other hand, we will devise new pursuit algorithms capable of handling structured sparsity. Second, we aim at analyzing the properties of greedy methods in terms of generalization ability: results exist that establish conditions for (perfect) recoverability of the ‘hidden’ or unobserved signal, but few draw the connection with the generalization ability of the learned function – the question of using a loss function different from the squared error is one issue directly connected to this line of research.
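To illustrate how a data structure such as a kd-tree can speed up the matching pursuit selection step: for unit-norm atoms, maximizing |⟨d_j, r⟩| over the dictionary is equivalent to a nearest-neighbour query for the normalized residual among the signed atoms {+d_j, −d_j}. The sketch below uses SciPy's cKDTree; the helper names are hypothetical, and this is only one possible instantiation of the idea:

```python
import numpy as np
from scipy.spatial import cKDTree

def build_atom_index(D):
    """Index the unit-norm atoms (columns of D) and their negations,
    so that a nearest-neighbour query realizes argmax_j |<d_j, r>|."""
    signed = np.hstack([D, -D]).T  # each row is +d_j or -d_j
    return cKDTree(signed)

def best_atom(tree, residual, n_atoms):
    """Greedy MP selection step via a kd-tree query instead of a
    full scan of the dictionary."""
    r = residual / np.linalg.norm(residual)
    _, idx = tree.query(r)         # nearest signed atom
    return idx % n_atoms           # map back to the original atom index
```

Exact kd-tree queries degrade in high dimension, which is partly why the project also considers hashing and graph-based alternatives.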
Finally, on a broader scope, we would like to elucidate connections between pivotal quantities such as (mutual) incoherence, the Babel function and the spark, which appear in the signal processing literature, and the VC-dimension, Rademacher complexities and algorithmic stability, which are notions widely used in machine learning. The former quantities are essential to prove recoverability results, while the latter play a central role in demonstrating the consistency of learning procedures: we anticipate that these notions are closely related, and our purpose is to precisely establish their connections.
Monsieur Liva RALAIVOLA (Laboratoire d'Informatique Fondamentale de Marseille) – firstname.lastname@example.org
The author of this summary is the project coordinator, who is responsible for the content of this summary. The ANR declines any responsibility as for its contents.
CNRS - LATP Centre National de la Recherche Scientifique Délégation Provence et Corse - Laboratoire d'Analyse, Topologie, Probabilités
LITIS - Laboratoire d'Informatique, du Traitement de l'Information et des Systèmes
LIF Laboratoire d'Informatique Fondamentale de Marseille
ANR grant: 372,330 euros
Beginning and duration of the scientific project: August 2012 - 36 Months