Open Access

Betweenness Approximation for Edge Computing with Hypergraph Neural Networks

School of Management, Hefei University of Technology, Hefei 230009, China
School of Computer Science and Technology, Anhui University, Hefei 230601, China
Global Cognition and International Communication Laboratory, Anhui University, Hefei 230601, China

Abstract

Recent years have seen growing demand for edge computing to realize the full potential of the Internet of Things (IoT), as various IoT systems generate big data to support modern latency-sensitive applications. Network Dismantling (ND), a fundamental problem in network science, seeks an optimal set of nodes whose removal maximizes the degradation of a network's connectivity. However, current approaches mainly target simple networks that model only pairwise interactions between two nodes, whereas higher-order groupwise interactions among an arbitrary number of nodes are ubiquitous in the real world and are better modeled as hypernetworks. The structural difference between a simple network and a hypernetwork prevents the direct application of simple-network ND methods to hypernetworks. Although some hypernetwork centrality measures (e.g., betweenness) can be used for hypernetwork dismantling, they struggle to balance effectiveness and efficiency. Therefore, we propose a betweenness approximation-based hypernetwork dismantling method built on a Hypergraph Neural Network (HNN). The proposed approach, called "HND", trains a transferable HNN-based regression model on a large number of generated small-scale synthetic hypernetworks in a supervised way, and then uses the well-trained model to approximate node betweenness. Extensive experiments on five real-world hypernetworks demonstrate the effectiveness and efficiency of HND compared with various baselines.
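To make the idea above concrete, the following is a minimal, illustrative sketch (in PyTorch, which is an assumption; the framework is not specified here) of the approach described in the abstract: a hypergraph neural network is trained to regress node betweenness on a small synthetic hypernetwork, and the resulting scores are used to rank nodes for dismantling. The layer design (mean aggregation over the incidence matrix), the MSE objective, and the random placeholder targets are assumptions made for illustration, not the authors' implementation; in the actual HND pipeline the training targets would be exact betweenness values computed on the generated synthetic hypernetworks.

```python
# Illustrative sketch only: HNN-based betweenness regression for dismantling.
# Layer design, loss, and the random placeholder targets are assumptions.
import torch
import torch.nn as nn

class HypergraphConv(nn.Module):
    """Two-stage message passing (nodes -> hyperedges -> nodes) driven by the
    incidence matrix H of shape (|V|, |E|)."""
    def __init__(self, in_dim, out_dim):
        super().__init__()
        self.lin = nn.Linear(in_dim, out_dim)

    def forward(self, x, H):
        d_e = H.sum(dim=0, keepdim=True).clamp(min=1)    # hyperedge sizes
        d_v = H.sum(dim=1, keepdim=True).clamp(min=1)    # node degrees
        edge_feat = (H.t() @ x) / d_e.t()                # mean over member nodes
        node_feat = (H @ edge_feat) / d_v                # mean over incident edges
        return torch.relu(self.lin(node_feat))

class BetweennessRegressor(nn.Module):
    """Stacked hypergraph convolutions followed by a scalar regression head."""
    def __init__(self, in_dim, hid_dim=64, n_layers=2):
        super().__init__()
        dims = [in_dim] + [hid_dim] * n_layers
        self.convs = nn.ModuleList(
            [HypergraphConv(a, b) for a, b in zip(dims[:-1], dims[1:])])
        self.head = nn.Linear(hid_dim, 1)

    def forward(self, x, H):
        for conv in self.convs:
            x = conv(x, H)
        return self.head(x).squeeze(-1)   # one approximate betweenness per node

# Supervised training on one small synthetic hypergraph (toy data shown here).
torch.manual_seed(0)
n_nodes, n_edges = 30, 12
H = (torch.rand(n_nodes, n_edges) < 0.2).float()          # random incidence matrix
x = torch.stack([H.sum(1), torch.ones(n_nodes)], dim=1)   # degree + constant features
target = torch.rand(n_nodes)   # placeholder; HND would use exact betweenness labels

model = BetweennessRegressor(in_dim=2)
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
for _ in range(200):
    opt.zero_grad()
    loss = nn.functional.mse_loss(model(x, H), target)
    loss.backward()
    opt.step()

# Dismantling by ranking: remove the highest-scoring nodes first.
order = torch.argsort(model(x, H), descending=True)
```

Since dismantling only needs the relative order of nodes, a ranking-oriented objective could be substituted for the MSE loss in this sketch without changing the overall pipeline.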

Tsinghua Science and Technology
Pages 331-344
Cite this article:
Guo Y, Xie W, Wang Q, et al. Betweenness Approximation for Edge Computing with Hypergraph Neural Networks. Tsinghua Science and Technology, 2025, 30(1): 331-344. https://doi.org/10.26599/TST.2023.9010106

Received: 14 July 2023
Revised: 11 September 2023
Accepted: 27 September 2023
Published: 11 September 2024
© The Author(s) 2025.

The articles published in this open access journal are distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/).
