Towards understanding graph neural networks: expressiveness and over-smoothing

April 28
Zhengdao Chen, Google
Friday April 28, 2023 10:00 am - 12:00 pm
Zoom meeting

Abstract: As graph neural networks (GNNs) have gained popularity in diverse application areas, from network analysis to computational chemistry, it becomes important to analyze them theoretically in order to understand their strengths and limitations and to make better design choices. I will first present a series of efforts to understand the expressiveness of various GNN models through perspectives including graph isomorphism testing and substructure counting. As highlights, we showed that both the message-passing GNN and the 2-Invariant Graph Network (2-IGN) are limited in their ability to count graph substructures, and we then proposed novel GNN models that are provably more expressive and also effective in molecular prediction tasks. Next, I will present an analysis of the over-smoothing phenomenon in linear GNN models through the Contextual Stochastic Block Model (CSBM) of random graphs. By disentangling two counteracting effects of graph convolutions, this analysis sheds light on the mechanism underlying over-smoothing and allows us to derive quantitative estimates of the model depth at which over-smoothing can occur.
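The over-smoothing effect mentioned in the abstract can be illustrated with a minimal sketch (not from the talk itself): in a deep linear GNN, each layer applies a normalized graph-convolution operator, and repeated application drives all node features toward a common value, erasing the information that distinguishes nodes. The specific graph, feature dimensions, and random-walk normalization below are illustrative assumptions.

```python
import numpy as np

# Adjacency matrix of a small connected graph (a 5-cycle); an
# illustrative choice, not a graph from the talk.
A = np.array([
    [0, 1, 0, 0, 1],
    [1, 0, 1, 0, 0],
    [0, 1, 0, 1, 0],
    [0, 0, 1, 0, 1],
    [1, 0, 0, 1, 0],
], dtype=float)

# Graph-convolution operator: add self-loops, then row-normalize,
# i.e. S = D^{-1}(A + I), so each layer averages a node's feature
# with its neighbors' features.
A_hat = A + np.eye(5)
S = A_hat / A_hat.sum(axis=1, keepdims=True)

rng = np.random.default_rng(0)
X = rng.normal(size=(5, 3))  # random initial node features

def node_spread(X):
    # Distance of node features from their mean across nodes;
    # 0 means the features are fully smoothed (all nodes identical).
    return np.linalg.norm(X - X.mean(axis=0))

spreads = []
for depth in range(30):
    spreads.append(node_spread(X))
    X = S @ X  # one linear graph-convolution layer (no nonlinearity)

# The spread shrinks geometrically with depth: after enough layers
# the node features become indistinguishable.
print(spreads[0], spreads[10], spreads[29])
```

On a connected graph this operator contracts the component of the features orthogonal to the constant vector, which is why the spread decays geometrically with depth; the talk's analysis quantifies when this smoothing effect overtakes the useful denoising effect of graph convolutions in the CSBM setting.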

Bio: Zhengdao Chen received his PhD in mathematics from the Courant Institute of New York University and joined Google as a Research Scientist in 2022. During his PhD, he mainly worked with Joan Bruna, Eric Vanden-Eijnden and Léon Bottou on topics including graph neural networks, deep learning theory and physics-informed learning.