Michael Unser (Biomedical Imaging Group, EPFL, CH): Representer theorems for machine learning and inverse problems

Abstract: Regularization addresses the ill-posedness of the training problem in machine learning or of the reconstruction of a signal from a limited number of measurements. The standard strategy is to augment the original cost functional with an energy term that penalizes solutions with undesirable behaviour. In this presentation, I will give a general representer theorem that characterizes the solutions of a remarkably broad class of optimization problems in Banach spaces and helps us understand the effect of regularization. I will then use the theorem to recover some classical characterizations, such as the celebrated representer theorem of machine learning for RKHS, Tikhonov regularization, and representer theorems for sparsity-promoting functionals, as well as a few new ones, including a result for deep neural networks.
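
As a minimal sketch of this setting (standard notation; the symbols below are not taken from the talk), a regularized training problem over data pairs (x_m, y_m) and the classical RKHS representer theorem read

    \min_{f \in \mathcal{H}} \; \sum_{m=1}^{M} \ell\bigl(y_m, f(x_m)\bigr) + \lambda \,\|f\|_{\mathcal{H}}^{2},
    \qquad
    f^{\star}(x) = \sum_{m=1}^{M} a_m \, k(x, x_m),

where \mathcal{H} is a reproducing-kernel Hilbert space with kernel k: although the optimization runs over an infinite-dimensional space, any minimizer is a finite linear combination of kernels centred on the data points. The talk generalizes this type of characterization to regularization in Banach spaces.
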
Materials: video, slides
Vincent Duval (Inria, FR): Representing the solutions of total variation regularized problems

Abstract: The total (gradient) variation is a regularizer that has been widely used in inverse problems arising in image processing, following the pioneering work of Rudin, Osher and Fatemi. In this talk, I will describe the structure of the solutions to total variation regularized variational problems when one has a finite number of measurements.
First, I will present a general representation principle for the solutions of convex problems; then I will apply it to the total variation by describing the faces of its unit ball.

This is joint work with Claire Boyer, Antonin Chambolle, Yohann De Castro, Frédéric de Gournay and Pierre Weiss.
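
As a minimal sketch of the problem class described above (standard notation; the symbols are not taken from the talk), the total variation regularized problem with a finite number M of linear measurements reads

    \min_{u} \; F(\Phi u) + \lambda \,\mathrm{TV}(u),
    \qquad
    \Phi u = \bigl( \langle \varphi_1, u \rangle, \ldots, \langle \varphi_M, u \rangle \bigr) \in \mathbb{R}^{M},

with F a convex data-fidelity term, for instance F(\Phi u) = \tfrac{1}{2}\|\Phi u - y\|_2^2. The representation principle mentioned in the abstract says, roughly, that such a problem admits a solution that is a convex combination of at most M extreme points of the unit ball of the regularizer (up to its kernel, here the constants); describing the faces of the TV unit ball then makes those extreme points explicit.
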
Materials: video, slides


Video Recordings

Michael Unser: Representer theorems for machine learning and inverse problems

Vincent Duval: Representing the solutions of total variation regularized problems
