
Apologies for missing the question - hopefully better late than never.

In the context you mention, the "manifold hypothesis" refers to the observation that many natural datasets have high extrinsic but low intrinsic dimension. They can thus be represented in a low-dimensional space by means of a nonlinear dimensionality reduction (a.k.a. "manifold learning") method.

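To make the extrinsic/intrinsic distinction concrete, here is a minimal sketch (my illustration, not part of the original reply) using scikit-learn's Isomap on the classic Swiss roll: points that live in 3D but are really parametrised by a 2D surface.

```python
from sklearn.datasets import make_swiss_roll
from sklearn.manifold import Isomap

# Sample points on a 2D surface embedded in 3D space.
X, t = make_swiss_roll(n_samples=1000, noise=0.05, random_state=0)
print(X.shape)  # (1000, 3) -- extrinsic dimension is 3

# A nonlinear dimensionality reduction ("manifold learning") method
# recovers a 2D parametrisation that "unrolls" the surface.
embedding = Isomap(n_neighbors=10, n_components=2).fit_transform(X)
print(embedding.shape)  # (1000, 2) -- intrinsic dimension is 2
```
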
However, the term "manifold" is not used here in its strict topological/differential-geometric sense: such "data manifolds" may, for example, not have constant dimension, and may contain singularities. It is thus a convenient metaphor.

In contrast, in our proto-book we exploit the symmetry (structure group) of manifolds, so the usage is somewhat different. The two concepts are strongly related, though; in fact, in my ICLR talk I mention the connection between manifold learning and latent graph inference in GNNs.

Hope this helps

--

Michael Bronstein