
I disagree with this prediction. First, transformers are a particular instance of GNNs, so to a large extent it is a matter of semantics. Second, graphs are a natural way of reasoning about complex systems of related objects, and are therefore applicable to a wide range of problems. What I believe will happen is an evolution of GNNs beyond message passing, towards higher-order structures (in the spirit of topological data analysis), algorithmic reasoning, and latent/causal graph learning. This is currently beyond the reach of transformers and similar architectures.
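The claim that transformers are an instance of GNNs can be made concrete: single-head self-attention is attentional message passing on a fully connected graph, and restricting the adjacency recovers a GNN on a sparse graph. A minimal numpy sketch (function and variable names are illustrative, not from any particular library):

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def attention_as_message_passing(X, Wq, Wk, Wv, adj):
    """One round of attentional message passing on a graph.

    X   : (n, d) node features (tokens, in the transformer view)
    adj : (n, n) adjacency mask; all-ones adjacency reduces this to
          plain scaled dot-product self-attention.
    """
    Q, K, V = X @ Wq, X @ Wk, X @ Wv
    scores = Q @ K.T / np.sqrt(K.shape[-1])
    # Mask out non-edges: messages flow only along graph edges.
    scores = np.where(adj > 0, scores, -np.inf)
    weights = softmax(scores, axis=-1)  # per-node attention over neighbours
    return weights @ V                  # aggregate neighbour messages

rng = np.random.default_rng(0)
n, d = 4, 8
X = rng.normal(size=(n, d))
Wq, Wk, Wv = (rng.normal(size=(d, d)) for _ in range(3))

# Complete graph -> the transformer special case.
out_transformer = attention_as_message_passing(X, Wq, Wk, Wv, np.ones((n, n)))
```

The only difference between the two views is the adjacency: a transformer hard-codes the complete graph, while a GNN takes it as input. This is also why the richer structures mentioned above (higher-order, latent, causal graphs) are naturally expressed in the GNN framing.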

--


Written by Michael Bronstein

DeepMind Professor of AI @Oxford. Serial startupper. ML for graphs, biochemistry, drug design, and animal communication.
