Learning on Topological Spaces
A new computational fabric for Graph Neural Networks
Graph Neural Networks (GNNs) typically align their computation graph with the structure of the input graph. But are graphs the right computational fabric for GNNs? A recent line of papers challenges this assumption by replacing graphs with more general objects from algebraic topology, which offer both theoretical and computational advantages.

This post was co-authored with Cristian Bodnar and Fabrizio Frasca and is based on the papers C. Bodnar, F. Frasca, et al., Weisfeiler and Lehman Go Topological: Message Passing Simplicial Networks (2021) ICML and C. Bodnar, F. Frasca et al., Weisfeiler and Lehman Go Cellular: CW Networks (2021) NeurIPS. It is part of the series on Graph Neural Networks through the lens of Differential Geometry and Algebraic Topology. See also other posts from the series discussing Neural Diffusion PDEs, graph rewiring with Ricci flows, and cellular sheaves.
“Topology! The stratosphere of human thought! In the twenty-fourth century, it might possibly be of use to someone.” — Aleksandr Solzhenitsyn, In the First Circle (1968)
Graphs are used to model everything from computer networks to particle interactions in the Large Hadron Collider. What makes graphs so ubiquitous is their discrete and combinatorial nature, which lets them express abstract relations while remaining amenable to computation. One of the reasons for their popularity is that graphs abstract away the geometry, i.e. where the nodes are positioned in space or how the edges are curved, leaving only a representation of how nodes are connected. Graph theory itself originated from this very observation, made by Leonhard Euler in 1741 in his work on geometria situs (“geometry of location”) [1], in which he showed that the famous problem of the Seven Bridges of Königsberg has no solution.
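Euler's argument is easy to verify computationally: a connected multigraph admits a walk traversing every edge exactly once only if it has zero or two vertices of odd degree. As a minimal sketch (not from the original post; the vertex labels A–D for the four land masses are our own convention), the snippet below applies this degree-parity criterion to the Königsberg bridges:

```python
from collections import Counter

# The Seven Bridges of Königsberg as a multigraph: four land masses
# (A, B, C, D) joined by seven bridges, each listed as an edge.
bridges = [
    ("A", "B"), ("A", "B"),  # two bridges between A and B
    ("A", "C"), ("A", "C"),  # two bridges between A and C
    ("A", "D"),
    ("B", "D"),
    ("C", "D"),
]

# Degree of each land mass = number of bridge endpoints touching it.
degree = Counter()
for u, v in bridges:
    degree[u] += 1
    degree[v] += 1

# Euler's criterion: a connected multigraph has a walk crossing every
# edge exactly once iff it has 0 or 2 vertices of odd degree.
odd_vertices = [v for v, d in degree.items() if d % 2 == 1]
print(dict(degree))       # {'A': 5, 'B': 3, 'C': 3, 'D': 3}
print(len(odd_vertices))  # 4 -> no such walk exists
```

All four land masses have odd degree, so no route crosses each bridge exactly once. Note that only the connectivity matters here: the positions of the land masses and the shapes of the bridges play no role, which is precisely the abstraction graphs provide.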