Andrea Braides (Università Roma Tor Vergata, IT)
Continuum limits of interfacial energies on (sparse and) dense graphs

I review some results on the convergence of energies defined on graphs. My interest in such energies comes from models in Solid Mechanics (where the bonds of the graph represent the relevant atomistic interactions) or Statistical Physics (Ising systems), but the nodes of the graph can also be thought of as a collection of data, with the bonds describing relations between the data.
The typical objective is an approximate (simplified) continuum description of minimal-cut problems as the number N of nodes of the graph diverges.
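For orientation (notation mine, not from the abstract): given a graph with node set V, |V| = N, and bond weights $w_{ij} \ge 0$, the cut functional of a set $S \subset V$ is

$$\mathrm{Cut}(S) \;=\; \sum_{i \in S,\ j \in V \setminus S} w_{ij},$$

and the minimal-cut problems in question ask to minimize $\mathrm{Cut}(S)$, typically under a volume-type constraint on $S$, as $N \to \infty$.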
If the graphs are sparse (i.e., the number of bonds is much smaller than the total number of pairs of nodes as N goes to infinity), such minimal-cut problems often (more precisely, when we have some control on the range or on the decay of the interactions) translate into minimal-perimeter problems for sets or partitions in the continuum. This description is easily understood for periodic lattice systems, but carries over to random distributions of nodes as well. In the case of a (locally) uniform Poisson distribution, the limit minimal-cut problems are in fact described by more regular energies than in the periodic-lattice case.
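As a concrete illustration of the sparse periodic case (a standard model problem, with notation and scaling chosen here for exposition), for nearest-neighbour spin systems on the scaled lattice $\varepsilon\mathbb{Z}^d$ the interfacial energy

$$E_\varepsilon(u)\;=\;\sum_{\substack{i,j\in\varepsilon\mathbb{Z}^d\cap\Omega\\ |i-j|=\varepsilon}} \varepsilon^{d-1}\,(u_i-u_j)^2,\qquad u_i\in\{0,1\},$$

Γ-converges as $\varepsilon\to 0$ (equivalently, as $N\to\infty$) to a crystalline perimeter

$$E(A)\;=\;c\int_{\Omega\cap\partial^*A}\|\nu_A\|_1\,d\mathcal{H}^{d-1},\qquad A=\{u=1\},$$

where $\nu_A$ is the measure-theoretic normal to $A$ and $c$ a lattice-dependent constant: minimal cuts in the graph become minimal (anisotropic) perimeters in the continuum.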
When we relax the hypotheses on the range of the interactions, the description of the limit of sparse graphs becomes more complex, as it depends subtly on geometric characteristics of the graph, and is only partially understood. Some simple examples show that, even though the continuum limit remains in a similar analytical environment, the description as (sharp) interfacial energies can be lost in this case, and more “diffuse” interfaces must be taken into account.
If instead we consider dense sequences of graphs (i.e., the number of bonds is of the same order as the total number of pairs of nodes as N goes to infinity), then a completely different limit environment must be used, that of graphons (abstract limits of graphs), for which sophisticated combinatorial results are available. We can reread the existing notion of convergence of graphs to graphons as a convergence of the associated cut functionals to non-local energies on a simple reference parameter set. This convergence provides an approximate description of the corresponding minimal-cut problems.
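In the graphon formalism this limit can be sketched as follows (notation mine): if a dense graph sequence $G_N$ converges in cut metric to a graphon $W:[0,1]^2\to[0,1]$, then the cut functionals, normalized by $N^2$, converge to the non-local energy

$$F_W(A)\;=\;\int_A\int_{[0,1]\setminus A}W(x,y)\,dy\,dx,\qquad A\subset[0,1]\ \text{measurable},$$

so that the discrete minimal-cut problems are approximated by the minimization of $F_W$ over subsets of the reference parameter set $[0,1]$.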
Works in collaboration with Alicandro, Cicalese, Piatnitski and Solci (sparse graphs), and with Cermelli and Dovetta (dense graphs).
video
slides
Nicolás García Trillos (University of Wisconsin-Madison, US)
Regularity theory and uniform convergence in the large data limit of graph Laplacian eigenvectors on random data clouds

Graph Laplacians are omnipresent objects in machine learning that have been used in supervised, unsupervised and semi-supervised settings, due to their versatility in extracting local and global geometric information from data clouds. In this talk I will present an overview of how the mathematical theory built around them has deepened, layer by layer, from the appearance of the first pointwise-consistency results in the 2000s to the most recent developments; this line of research has found strong connections between PDEs built on proximity graphs on data clouds and PDEs on manifolds, and has given a more precise mathematical meaning to the task of “manifold learning”. In the first part of the talk I will highlight how ideas from optimal transport made possible some of the initial steps, which provided L2-type error estimates between the spectra of graph Laplacians and Laplace-Beltrami operators. In the second part of the talk, which is based on recent work with Jeff Calder and Marta Lewicka, I will present a newly developed regularity theory for graph Laplacians which, among other things, allows us to bootstrap the L2 error estimates obtained through optimal transport and upgrade them to uniform convergence and almost C^{0,1} convergence rates. The talk can be seen as a tale of how a flow of ideas from optimal transport, PDEs and, more generally, analysis has made possible a finer understanding of concrete objects popular in data analysis and machine learning.
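To make the convergence statements tangible, here is a small self-contained numerical sketch (my own illustration; the test manifold, parameters and scaling constant are choices made for exposition, not taken from the talk). On the unit circle the Laplace-Beltrami eigenvalues are 0, 1, 1, 4, 4, 9, 9, …, and a suitably rescaled ε-graph Laplacian built on random samples reproduces this low-lying spectrum:

```python
# Spectral consistency of an epsilon-graph Laplacian on a random data cloud:
# n points sampled uniformly from the unit circle, compared against the
# Laplace-Beltrami eigenvalues 0, 1, 1, 4, 4, 9, 9, ... (illustration only).
import numpy as np
from scipy.spatial.distance import pdist, squareform

rng = np.random.default_rng(0)
n, eps = 2000, 0.15                 # eps must shrink slowly as n grows
theta = rng.uniform(0.0, 2.0 * np.pi, size=n)
X = np.column_stack([np.cos(theta), np.sin(theta)])   # data cloud on S^1

D = squareform(pdist(X))            # pairwise Euclidean (chord) distances
W = (D < eps).astype(float)        # adjacency of the epsilon-proximity graph
np.fill_diagonal(W, 0.0)            # remove self-loops

L = np.diag(W.sum(axis=1)) - W      # unnormalized graph Laplacian

# For the uniform density on the circle (length 2*pi) and the indicator
# kernel, a Taylor expansion gives L f ~ -(n * eps^3 / (6 * pi)) f'', so
# this rescaling makes the spectrum comparable to that of -d^2/dtheta^2:
lam = np.linalg.eigvalsh(6.0 * np.pi / (n * eps**3) * L)

print(np.round(lam[:7], 2))         # approximately [0, 1, 1, 4, 4, 9, 9]
```

The L2 and uniform convergence results described in the abstract quantify precisely in which norms, and how fast, such spectral approximations improve as n grows and ε shrinks.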
video
slides


Video Recordings

Andrea Braides: Continuum limits of interfacial energies on (sparse and) dense graphs

Nicolás García Trillos: Regularity theory and uniform convergence in the large data limit of graph Laplacian eigenvectors on random data clouds
