Open Access

Limits of Depth: Over-Smoothing and Over-Squashing in GNNs

Department of Computer Science and Engineering, National Institute of Technology Srinagar, Srinagar 190006, India

Abstract

Graph Neural Networks (GNNs) have become a widely used tool for learning from and analyzing graph-structured data, largely because graph representation learning preserves graph structure and properties. However, the effect of depth on the performance of GNNs, particularly of isotropic and anisotropic models, remains an active area of research. This study presents a comprehensive exploration of the impact of depth on GNNs, focusing on the phenomena of over-smoothing and the bottleneck effect in deep graph neural networks. We investigate the trade-off between depth and performance, showing that increasing depth can lead to over-smoothing and to a drop in performance caused by the bottleneck effect. We also examine the impact of node degree on classification accuracy, finding that nodes with low degrees are harder to classify accurately. Our experiments use several benchmark datasets and a range of evaluation metrics to compare isotropic and anisotropic GNNs of varying depths, and also explore the scalability of these models. Our findings provide valuable insights into the design of deep GNNs and suggest avenues for future research to improve their performance.
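The over-smoothing phenomenon described in the abstract can be illustrated with a minimal numerical sketch (all names and the toy graph below are illustrative, not taken from the paper): the propagation step of an isotropic GNN such as GCN repeatedly averages neighbour features, so after many layers the node representations collapse toward (degree-scaled copies of) a single vector and become nearly indistinguishable.

```python
import numpy as np

# Toy 6-node path graph (illustrative only).
n = 6
A = np.zeros((n, n))
for i in range(n - 1):
    A[i, i + 1] = A[i + 1, i] = 1.0

A_hat = A + np.eye(n)                                # add self-loops
d_inv_sqrt = np.diag(A_hat.sum(axis=1) ** -0.5)
P = d_inv_sqrt @ A_hat @ d_inv_sqrt                  # GCN-style propagation matrix

rng = np.random.default_rng(0)
X = rng.normal(size=(n, 4))                          # random node features

def rank1_ratio(H):
    """sigma_2 / sigma_1 of the feature matrix; values near 0 mean the
    node rows are close to multiples of one vector (i.e., over-smoothed)."""
    s = np.linalg.svd(H, compute_uv=False)
    return float(s[1] / s[0])

shallow = rank1_ratio(np.linalg.matrix_power(P, 2) @ X)    # 2 propagation steps
deep = rank1_ratio(np.linalg.matrix_power(P, 64) @ X)      # 64 propagation steps
print(f"2 layers: {shallow:.3f}, 64 layers: {deep:.3f}")
```

With depth 64 the ratio shrinks sharply relative to depth 2, mirroring the depth-performance trade-off the study examines; learned weights and nonlinearities are omitted here to isolate the propagation effect.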


Big Data Mining and Analytics
Pages 205-216
Cite this article:
ud din AM, Qureshi S. Limits of Depth: Over-Smoothing and Over-Squashing in GNNs. Big Data Mining and Analytics, 2024, 7(1): 205-216. https://doi.org/10.26599/BDMA.2023.9020019


Received: 24 January 2023
Revised: 10 July 2023
Accepted: 18 July 2023
Published: 25 December 2023
© The author(s) 2023.

The articles published in this open access journal are distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/).
