In many instances of data analysis, we are confronted with the so-called Manifold Hypothesis, that is, the belief that our data do not lie scattered throughout a high-dimensional Euclidean space, but instead live on an (intrinsically low-dimensional) geometric object embedded in that space. Such a geometric object is usually thought of as a manifold (i.e. we rule out the possibility of singular, or non-smooth, points).

What is the advantage of such a viewpoint? Well, as a subset of, say, $\mathbb{R}^N$, our data set is described by means of $N$ parameters; as a manifold of dimension $d$, it can be understood (at least locally) in terms of $d$ parameters, and ideally $d\ll N$.
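The gap between ambient and intrinsic dimension can be seen in a toy computation. The sketch below (using NumPy; the circle, the ambient dimension $N=10$, and the neighbourhood size are all illustrative choices, not part of the course material) embeds a one-dimensional manifold, a circle, isometrically into $\mathbb{R}^{10}$: every sample needs $10$ coordinates to be written down, yet a single angle generates it, and a local principal component analysis reveals that one direction dominates.

```python
import numpy as np

rng = np.random.default_rng(0)

# Intrinsic description: d = 1 parameter (an angle) per sample.
t = rng.uniform(0, 2 * np.pi, size=200)

# Embed the circle in an ambient space R^N, N = 10, via a random
# orthonormal (hence isometric) linear map -- an arbitrary choice.
N = 10
Q, _ = np.linalg.qr(rng.normal(size=(N, 2)))          # orthonormal frame, shape (10, 2)
data = np.column_stack([np.cos(t), np.sin(t)]) @ Q.T  # ambient coordinates, shape (200, 10)

# Locally, the singular values of a centred neighbourhood expose the
# intrinsic dimension: one dominant direction, the tangent line.
i = 0
idx = np.argsort(np.linalg.norm(data - data[i], axis=1))[:15]
nbrs = data[idx]
_, s, _ = np.linalg.svd(nbrs - nbrs.mean(axis=0))
print(data.shape)  # 10 ambient coordinates per point
print(s[:3])       # one singular value dominates: d = 1 locally
```

This is exactly the situation points 1-3 below address: detecting that $d=1$, working with the circle as a manifold, and exploiting the reduction from $10$ parameters to one.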

Once we acknowledge the possibility of such a hypothesis, we can interact with it in three ways:

  1. we may want to understand how to recognize a manifold,
  2. we may want to know how to deal with a manifold,
  3. we may want to think about how to exploit a manifold.

The aim of this course is to give an introduction to the applications of Differential and Riemannian Geometry, focusing on the aspects that enter most heavily into optimization problems, manifold learning techniques and information geometry; this presentation will be intertwined with examples of points 1, 2 and 3 above.

The precise extent of these examples will depend on the background and the interests of the audience.

A good familiarity with the concepts of real analysis in one and several variables and with linear algebra is required, as well as some foundations in differential geometry (manifolds, tangent spaces, ...); some basic knowledge of numerical analysis, probability and statistics will be helpful in understanding the various examples of application.

Staff

    Lecturer

  • Samuele Mongodi
