Publishing Language: Chinese

Two-stage fusion multiview graph clustering based on the attention mechanism

Xingwang ZHAO 1,2, Zhedong HOU 1,2, Kaixuan YAO 1,2, Jiye LIANG 1,2 (corresponding author)
1. School of Computer and Information Technology, Shanxi University, Taiyuan 030006, China
2. Key Laboratory of Computational Intelligence and Chinese Information Processing, Ministry of Education (Shanxi University), Taiyuan 030006, China

Abstract

Objective

Multiview graph clustering aims to uncover the inherent cluster structures in multiview graph data and has received extensive research attention in recent years. However, views differ in quality, yet existing methods treat all views equally during fusion rather than assigning each view a weight according to its quality. This can discard complementary information across views and ultimately degrade clustering quality. In addition, the topological structure and the node attribute information in multiview graph data differ significantly in content and form, which makes integrating the two types of information effectively a challenge. To address these problems, this paper proposes a two-stage fusion multiview graph clustering algorithm based on the attention mechanism.

Methods

The algorithm comprises three stages: feature filtering based on graph filtering, feature fusion based on the attention mechanism, and topology fusion based on the attention mechanism. In the first stage, graph filters combine the attribute information with the topological structure of each view; filtering out high-frequency noise yields a smoother embedding representation. In the second stage, the smooth representations of the individual views are fused through an attention mechanism into a consensus smooth representation that incorporates information from all views. In addition, a consensus Laplacian matrix is obtained by combining the Laplacian matrices of the views with learnable weights. The consensus Laplacian matrix and the consensus smooth representation are then fed into an encoder to obtain the final embedded representation, and the similarity matrix of this representation is computed. Training samples are selected from the similarity matrix, and the embedded representation and the learnable weights of the Laplacian matrices are optimized iteratively to obtain a more compact embedded representation. Finally, spectral clustering on the embedded representation yields the clustering results. The performance of the algorithm is evaluated with widely used clustering metrics, namely accuracy, normalized mutual information, adjusted Rand index, and F1-score, on three datasets: Association for Computing Machinery (ACM), Digital Bibliography & Library Project (DBLP), and Internet Movie Database (IMDB).
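To make the first two stages concrete, the following is a minimal sketch, not the authors' implementation. `graph_filter` applies the low-pass filter H = (I - 0.5L)^k X that is commonly used in graph-filter-based clustering, and `attention_fuse` combines the per-view smooth representations with softmax-normalized weights. The cosine-similarity scoring used here is a stand-in assumption for the learned attention module described in the paper; the consensus Laplacian, the encoder, and the contrastive training on sampled node pairs are omitted.

```python
import numpy as np

def sym_norm_adj(A):
    """Symmetrically normalized adjacency with self-loops: D^{-1/2}(A + I)D^{-1/2}."""
    A_hat = A + np.eye(A.shape[0])
    d_inv_sqrt = np.power(A_hat.sum(axis=1), -0.5)
    return A_hat * d_inv_sqrt[:, None] * d_inv_sqrt[None, :]

def graph_filter(A, X, k=2):
    """Stage 1: k rounds of low-pass filtering H = (I - 0.5 L)^k X, with L = I - A_norm."""
    n = A.shape[0]
    L = np.eye(n) - sym_norm_adj(A)
    H = X.astype(float).copy()
    for _ in range(k):
        H = (np.eye(n) - 0.5 * L) @ H
    return H

def attention_fuse(H_list):
    """Stage 2 (simplified): score each view's smooth representation by its agreement
    with the average view, then fuse with softmax weights over views."""
    H_mean = np.mean(H_list, axis=0)
    scores = np.array([
        (H * H_mean).sum() / (np.linalg.norm(H) * np.linalg.norm(H_mean) + 1e-12)
        for H in H_list
    ])
    w = np.exp(scores) / np.exp(scores).sum()
    H_consensus = sum(wi * Hi for wi, Hi in zip(w, H_list))
    return H_consensus, w

# Hypothetical usage on views that share the same n nodes:
# H_views = [graph_filter(A_v, X_v, k=2) for A_v, X_v in zip(adjacencies, features)]
# H, view_weights = attention_fuse(H_views)
```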

Results

1) The experimental results show that the proposed algorithm handles multiview graph data more effectively than existing methods, particularly on the ACM and DBLP datasets, although it does not outperform LMEGC and MCGC on the IMDB dataset. 2) An analysis of view quality shows that the algorithm learns view-specific weights that reflect the quality of each view. 3) Compared with the best-performing single view on each dataset (ACM, DBLP, and IMDB), the proposed algorithm achieves average performance improvements of 2.4%, 2.9%, and 2.1%, respectively, after fusing all views. 4) An examination of the number of graph filter layers and of the sampling ratios of positive and negative node pairs shows that the best performance is achieved with a small number of filter layers, and that the optimal ratios of positive and negative node pairs are approximately 0.01 and 0.5, respectively.
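The four reported metrics can be computed in a standard way; the sketch below is an assumption about the evaluation protocol rather than the authors' code. Accuracy and F1 require mapping predicted cluster labels to ground-truth classes, which is done here with Hungarian matching via SciPy.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment
from sklearn.metrics import (adjusted_rand_score, f1_score,
                             normalized_mutual_info_score)

def map_clusters_to_labels(y_true, y_pred):
    """Map predicted cluster ids to ground-truth classes via Hungarian matching."""
    k = int(max(y_true.max(), y_pred.max())) + 1
    hits = np.zeros((k, k), dtype=np.int64)
    for t, p in zip(y_true, y_pred):
        hits[p, t] += 1
    rows, cols = linear_sum_assignment(-hits)   # maximize total agreement
    mapping = dict(zip(rows, cols))
    return np.array([mapping[p] for p in y_pred])

def clustering_metrics(y_true, y_pred):
    """Accuracy, NMI, ARI, and macro F1 for a clustering against ground-truth labels."""
    y_mapped = map_clusters_to_labels(y_true, y_pred)
    return {
        "ACC": float((y_mapped == y_true).mean()),
        "NMI": normalized_mutual_info_score(y_true, y_pred),
        "ARI": adjusted_rand_score(y_true, y_pred),
        "F1":  f1_score(y_true, y_mapped, average="macro"),
    }
```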

Conclusions

The algorithm combines attribute information with topological information through graph filtering to obtain smoother representations that are better suited to clustering. The attention mechanisms learn weights from both the topological and the attribute perspectives according to view quality, so the consensus representation draws information from every view while limiting the influence of poor-quality views. The proposed method achieves the expected results and substantially improves clustering performance.

CLC number: TP301.6 Document code: A Article ID: 1000-0054(2024)01-0001-12

References

[1] LIU J H, WANG Y, QIAN Y H. Multi-view clustering with spectral structure fusion[J]. Journal of Computer Research and Development, 2022, 59(4): 922-935. (in Chinese)
[2] LIU X L, BAI L, ZHAO X W, et al. Incomplete multi-view clustering algorithm based on multi-order neighborhood fusion[J]. Journal of Software, 2022, 33(4): 1354-1372. (in Chinese)
[3] LIN Z P, KANG Z. Graph filter-based multi-view attributed graph clustering[C]//Proceedings of the 30th International Joint Conference on Artificial Intelligence. Montreal, Canada: IJCAI.org, 2021: 2723-2729.
[4] PAN E L, KANG Z. Multi-view contrastive graph clustering[C]//Proceedings of the 35th Conference on Neural Information Processing Systems. Cambridge, USA: MIT Press, 2021: 2148-2159.
[5] LIN Z P, KANG Z, ZHANG L Z, et al. Multi-view attributed graph clustering[J]. IEEE Transactions on Knowledge and Data Engineering, 2023, 35(2): 1872-1880.
[6] FAN S H, WANG X, SHI C, et al. One2Multi graph autoencoder for multi-view graph clustering[C]//Proceedings of the Web Conference 2020. Taipei, China: ACM, 2020: 3070-3076.
[7] CAI E C, HUANG J, HUANG B S, et al. GRAE: Graph recurrent autoencoder for multi-view graph clustering[C]//Proceedings of the 4th International Conference on Algorithms, Computing and Artificial Intelligence. Sanya, China: ACM, 2021: 72.
[8] LIANG J Y, LIU X L, BAI L, et al. Incomplete multi-view clustering via local and global co-regularization[J]. Science China Information Sciences, 2022, 65(5): 152105.
[9] CHUNG F R K. Spectral graph theory[M]. Providence: American Mathematical Society, 1997.
[10] WU D Y, XU J, DONG X, et al. GSPL: A succinct kernel model for group-sparse projections learning of multiview data[C]//Proceedings of the 30th International Joint Conference on Artificial Intelligence. San Francisco, USA: Morgan Kaufmann, 2021: 3185-3191.
[11] LI R H, ZHANG C Q, HU Q H, et al. Flexible multi-view representation learning for subspace clustering[C]//Proceedings of the 28th International Joint Conference on Artificial Intelligence. Macao, China: AAAI Press, 2019: 2916-2922.
[12] NIE F P, LI J, LI X L. Self-weighted multiview clustering with multiple graphs[C]//Proceedings of the 26th International Joint Conference on Artificial Intelligence. Melbourne, Australia: AAAI Press, 2017: 2564-2570.
[13] XIA W, WANG S, YANG M, et al. Multi-view graph embedding clustering network: Joint self-supervision and block diagonal representation[J]. Neural Networks, 2022, 145: 1-9.
[14] CHENG J F, WANG Q Q, TAO Z Q, et al. Multi-view attribute graph convolution networks for clustering[C]//Proceedings of the 29th International Joint Conference on Artificial Intelligence. Yokohama, Japan: IJCAI.org, 2021: 411.
[15] VASWANI A, SHAZEER N, PARMAR N, et al. Attention is all you need[C]//Proceedings of the 31st International Conference on Neural Information Processing Systems. Long Beach, USA: Curran Associates Inc., 2017: 6000-6010.
[16] DEVLIN J, CHANG M W, LEE K, et al. BERT: Pre-training of deep bidirectional transformers for language understanding[C]//Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers). Minneapolis, USA: ACL, 2019.
[17] KITAEV N, KAISER L, LEVSKAYA A. Reformer: The efficient transformer[C]//Proceedings of the 8th International Conference on Learning Representations. Addis Ababa, Ethiopia: OpenReview.net, 2020.
[18] SHUMAN D I, NARANG S K, FROSSARD P, et al. The emerging field of signal processing on graphs: Extending high-dimensional data analysis to networks and other irregular domains[J]. IEEE Signal Processing Magazine, 2013, 30(3): 83-98.
[19] WANG J, LIANG J Y, YAO K X, et al. Graph convolutional autoencoders with co-learning of graph structure and node attributes[J]. Pattern Recognition, 2022, 121: 108215.
[20] KIPF T N, WELLING M. Variational graph auto-encoders[EB/OL]. [2016-01-01]. https://arxiv.org/abs/1611.07308.
[21] TANG J, QU M, WANG M Z, et al. LINE: Large-scale information network embedding[C]//Proceedings of the 24th International Conference on World Wide Web. Florence, Italy: International World Wide Web Conferences Steering Committee, 2015: 1067-1077.
[22] LIU W Y, CHEN P Y, YEUNG S, et al. Principled multilayer network embedding[C]//Proceedings of the 2017 International Conference on Data Mining Workshops. New Orleans, USA: IEEE Press, 2017: 134-141.
[23] XIA R K, PAN Y, DU L, et al. Robust multi-view spectral clustering via low-rank and sparse decomposition[C]//Proceedings of the 28th AAAI Conference on Artificial Intelligence. Québec City, Canada: AAAI Press, 2014: 2149-2155.
[24] FETTAL C, LABIOD L, NADIF M. Simultaneous linear multi-view attributed graph representation learning and clustering[C]//Proceedings of the 16th ACM International Conference on Web Search and Data Mining. Singapore, Singapore: ACM, 2023: 303-311.
Journal of Tsinghua University (Science and Technology)
Pages 1-12
Cite this article:
ZHAO X, HOU Z, YAO K, et al. Two-stage fusion multiview graph clustering based on the attention mechanism. Journal of Tsinghua University (Science and Technology), 2024, 64(1): 1-12. https://doi.org/10.16511/j.cnki.qhdxxb.2024.21.001

Received: 12 August 2023
Published: 15 January 2024
© Journal of Tsinghua University (Science and Technology). All rights reserved.