Dynamically Rewired Delayed Message Passing GNNs
Message-passing graph neural networks (MPNNs) tend to suffer from the phenomenon of over-squashing, which degrades performance on tasks that rely on long-range interactions. This can be largely attributed to message passing occurring only locally, over a node’s immediate neighbours. Traditional static graph rewiring techniques typically attempt to counter this effect by allowing distant nodes to communicate instantly (and, in the extreme case of Transformers, by making all nodes accessible at every layer). However, this incurs a computational cost and breaks the inductive bias provided by the input graph structure. In this post, we describe two novel mechanisms that overcome over-squashing while avoiding the side effects of static rewiring: dynamic rewiring and delayed message passing. These techniques can be incorporated into any MPNN and lead to better performance than graph Transformers on long-range tasks.
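To make the two mechanisms concrete, here is a toy sketch (not the authors' implementation) of how they might interact. It assumes a simple mean-aggregation MPNN over scalar node features: dynamic rewiring lets a node at layer ℓ hear from nodes up to ℓ+1 hops away, and the delay makes a message from a distance-k node use that node's state from ν·(k−1) layers earlier. All function names, the aggregation rule, and the delay schedule `nu` are illustrative assumptions.

```python
from collections import deque

def bfs_distances(adj, src):
    """Hop distances from src to every reachable node, via BFS."""
    dist = {src: 0}
    q = deque([src])
    while q:
        u = q.popleft()
        for v in adj[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                q.append(v)
    return dist

def delayed_mpnn(adj, feats, num_layers, nu=1):
    """Toy dynamically rewired MPNN with delay (scalar features).

    At layer ell (0-indexed), node v aggregates from nodes at hop
    distance k <= ell + 1 (dynamic rewiring). A message from a
    distance-k node is delayed: it uses the state that node had
    nu * (k - 1) layers earlier (delayed message passing).
    """
    n = len(adj)
    dists = [bfs_distances(adj, v) for v in range(n)]
    history = [list(feats)]  # history[t][v]: state of node v after t layers
    for ell in range(num_layers):
        cur = history[-1]
        nxt = []
        for v in range(n):
            total, count = cur[v], 1  # self-contribution
            for u in range(n):
                k = dists[v].get(u)
                if u == v or k is None or k > ell + 1:
                    continue  # not yet reachable at this layer
                t = max(0, ell - nu * (k - 1))  # delayed state index
                total += history[t][u]
                count += 1
            nxt.append(total / count)  # mean aggregation
        history.append(nxt)
    return history[-1]
```

On a 4-node path graph with a unit signal on node 0, the signal needs three layers to influence node 3: with delay `nu=1`, two layers leave node 3 untouched even though rewiring already connects it to node 1, because the delayed message still carries node 1's initial (zero) state.

```python
adj = [[1], [0, 2], [1, 3], [2]]      # path graph 0-1-2-3
feats = [1.0, 0.0, 0.0, 0.0]
delayed_mpnn(adj, feats, num_layers=2)[3]  # still 0.0
delayed_mpnn(adj, feats, num_layers=3)[3]  # now positive
```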