Academic Lecture


Theoretical Foundations of Efficient and Scalable Graph Learning

Posted: 2024-12-30


Graph neural networks (GNNs) have demonstrated remarkable empirical success in processing and representing graph-structured data across a wide range of fields. Despite their effectiveness, foundational challenges remain, particularly regarding their expressive power and the limitations imposed by model depth.


One key issue is the limited expressive power of GNNs: the discriminative power of standard message-passing GNNs is bounded by the one-dimensional Weisfeiler-Leman (1-WL) test, so they cannot distinguish certain non-isomorphic graphs. We show that this limitation can be overcome for specific tasks: (1) message-passing GNNs and second-order folklore GNNs can differentiate (mixed-integer) linear and quadratic programs with distinct properties; (2) subgraph GNNs can effectively distinguish non-isomorphic graphs with bounded cycles.
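To make the expressivity barrier concrete, the minimal sketch below (our illustration, not a construction from the talk) runs 1-WL color refinement on the 6-cycle C6 and on two disjoint triangles 2xC3: the graphs are non-isomorphic, yet 1-WL, and hence any standard message-passing GNN, assigns them identical color histograms.

```python
from collections import Counter

def wl_colors(adj, rounds=3):
    """Run 1-WL color refinement; return the final multiset of vertex colors."""
    colors = [0] * len(adj)  # uniform initial coloring (no vertex features)
    for _ in range(rounds):
        # New color = (own color, sorted multiset of neighbor colors)
        sigs = [(colors[v], tuple(sorted(colors[u] for u in adj[v])))
                for v in range(len(adj))]
        # Relabel signatures with small integers for the next round
        palette = {s: i for i, s in enumerate(sorted(set(sigs)))}
        colors = [palette[s] for s in sigs]
    return Counter(colors)

# C6: one 6-cycle; 2xC3: two disjoint triangles (both 2-regular, non-isomorphic)
c6 = [[1, 5], [0, 2], [1, 3], [2, 4], [3, 5], [0, 4]]
two_c3 = [[1, 2], [0, 2], [0, 1], [4, 5], [3, 5], [3, 4]]

print(wl_colors(c6) == wl_colors(two_c3))  # True: 1-WL cannot separate them
```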


Another persistent challenge is oversmoothing, where vertex features become nearly indistinguishable as GNN depth grows. We theoretically demonstrate that residual connections can mitigate or completely prevent oversmoothing, providing a principled foundation for building deeper GNN architectures.
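The following numerical sketch illustrates the phenomenon on a toy 6-cycle with a linear propagation model; it is our illustration, not the talk's construction, and it uses one common residual form (a connection back to the initial features, in the style of APPNP/GCNII). Plain propagation collapses all vertex features toward a common value, while the residual variant keeps them separated.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy graph: 6-cycle with self-loops; A_hat is the row-normalized adjacency
n = 6
A = np.eye(n)
for v in range(n):
    A[v, (v + 1) % n] = A[v, (v - 1) % n] = 1.0
A_hat = A / A.sum(axis=1, keepdims=True)

def spread(X):
    """Mean pairwise distance between vertex features (0 = fully smoothed)."""
    return np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1).mean()

X0 = rng.normal(size=(n, 4))
X_plain, X_res, alpha = X0.copy(), X0.copy(), 0.1
for _ in range(50):                                    # 50 "layers"
    X_plain = A_hat @ X_plain                          # plain message passing
    X_res = (1 - alpha) * A_hat @ X_res + alpha * X0   # initial residual

print(f"plain:    spread = {spread(X_plain):.2e}")  # ~0: features collapse
print(f"residual: spread = {spread(X_res):.2e}")    # stays bounded away from 0
```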

Poster: 学术海报.pdf