Advances of Pipeline Model Parallelism for Deep Learning Training: An Overview
Journal of Computer Science and Technology 2024, 39 (3): 567-584
Published: 22 July 2024
Abstract

Deep learning has become a cornerstone of artificial intelligence, playing an increasingly important role in industry and daily life. However, as the problems being solved grow more complex, deep learning models have become increasingly intricate, leading to a proliferation of large language models with astonishing numbers of parameters. Pipeline model parallelism (PMP) has emerged as one of the mainstream approaches to the significant challenge of training such “big models”. This paper presents a comprehensive review of PMP. It covers the basic concepts and main challenges of PMP, comprehensively compares synchronous and asynchronous pipeline schedules for PMP approaches, and discusses the main techniques for achieving load balance in both intra-node and inter-node training. Furthermore, the main techniques for optimizing computation, storage, and communication are presented, and potential research directions are discussed.
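To make the scheduling trade-off concrete, the following is a minimal sketch (not taken from the survey) of a synchronous, GPipe-style forward schedule, in which stage s processes micro-batch m at time step s + m; the idle slots at the start and end of the timeline are the pipeline "bubble" that synchronous schedules try to amortize by using many micro-batches:

```python
def gpipe_forward_schedule(num_stages: int, num_microbatches: int):
    """Return a timeline: timeline[t] maps stage -> micro-batch id, or None
    when the stage is idle (a pipeline bubble). Stage s runs micro-batch m
    at step t = s + m, the classic synchronous forward pattern."""
    total_steps = num_stages + num_microbatches - 1
    timeline = []
    for t in range(total_steps):
        step = {}
        for s in range(num_stages):
            m = t - s
            step[s] = m if 0 <= m < num_microbatches else None
        timeline.append(step)
    return timeline


def bubble_fraction(num_stages: int, num_microbatches: int) -> float:
    """Fraction of stage-time slots left idle: (S - 1) / (S + M - 1)."""
    tl = gpipe_forward_schedule(num_stages, num_microbatches)
    slots = [v for step in tl for v in step.values()]
    return slots.count(None) / len(slots)
```

For 4 stages, raising the micro-batch count from 4 to 16 shrinks the bubble fraction from 3/7 to 3/19, which illustrates why synchronous schedules favor many small micro-batches.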

Two-stage fusion multiview graph clustering based on the attention mechanism
Journal of Tsinghua University (Science and Technology) 2024, 64 (1): 1-12
Published: 15 January 2024
Abstract
Objective

Multiview graph clustering aims to uncover the inherent cluster structures in multiview graph data and has received extensive research attention in recent years. However, views differ in quality, yet existing methods treat all views equally during fusion, without assigning weights according to each view's quality. This can discard complementary information across views and ultimately degrade clustering quality. Additionally, the topological structure and the node attribute information in multiview graph data differ significantly in both content and form, making it challenging to integrate the two types of information effectively. To address these problems, this paper proposes a two-stage fusion multiview graph clustering algorithm based on an attention mechanism.

Methods

The algorithm can be divided into three stages: feature filtering based on graph filtering, feature fusion based on the attention mechanism, and topological fusion based on the attention mechanism. In the first stage, graph filters are applied to combine the attribute information with the topological structure of each view; filtering out high-frequency noise yields a smoother embedding representation. In the second stage, the smooth representations of the individual views are fused using attention mechanisms to obtain a consensus smooth representation that incorporates information from all views. Additionally, a consensus Laplacian matrix is obtained by combining the views' Laplacian matrices with learnable weights. To obtain the final embedded representation, the consensus Laplacian matrix and the consensus smooth representation are fed into an encoder. The similarity matrix of the final embedded representation is then computed, training samples are selected from it, and the embedded representation and the learnable weights of the Laplacian matrix are optimized iteratively to obtain a more compact embedded representation. Finally, spectral clustering on the embedding representation yields the clustering results. The performance of the algorithm is evaluated with widely used clustering metrics, including accuracy, normalized mutual information, adjusted Rand index, and F1-score, on three datasets: Association for Computing Machinery (ACM), Digital Bibliography & Library Project (DBLP), and Internet Movie Database (IMDB).
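The first two stages above can be sketched as follows. This is a simplified illustration under common assumptions, not the paper's exact formulation: the low-pass filter repeatedly applies (I - L/2) with L the symmetric normalized Laplacian, and the view-fusion weights come from a softmax over a simple norm-based score standing in for the learned attention scores:

```python
import numpy as np

def low_pass_filter(adj: np.ndarray, X: np.ndarray, k: int = 2) -> np.ndarray:
    """Smooth node features by k applications of (I - L/2), where L is the
    symmetric normalized Laplacian; this suppresses high-frequency noise."""
    deg = adj.sum(axis=1)
    d_inv_sqrt = np.diag(1.0 / np.sqrt(np.maximum(deg, 1e-12)))
    L = np.eye(adj.shape[0]) - d_inv_sqrt @ adj @ d_inv_sqrt
    H = X.copy()
    for _ in range(k):
        H = H - 0.5 * (L @ H)
    return H

def attention_fuse(views: list) -> np.ndarray:
    """Fuse per-view smoothed representations with softmax weights.
    The norm-based score here is a hypothetical proxy; the actual model
    learns the attention scores during training."""
    scores = np.array([np.linalg.norm(H) for H in views])
    w = np.exp(scores - scores.max())
    w = w / w.sum()                      # softmax over views
    return sum(wi * H for wi, H in zip(w, views))
```

A real implementation would also learn the weights that combine the per-view Laplacians into the consensus Laplacian, and would train the encoder on node pairs sampled from the similarity matrix, as described above.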

Results

1) The experimental results show that the proposed algorithm handles multiview graph data more effectively than existing methods, particularly on the ACM and DBLP datasets, although it may not match LMEGC and MCGC on the IMDB dataset. 2) By exploring view quality, the algorithm learns a weight for each view according to its quality. 3) Compared with the best-performing single view on each dataset (ACM, DBLP, and IMDB), the proposed algorithm achieves average performance improvements of 2.4%, 2.9%, and 2.1%, respectively, after fusing all views. 4) Examining the effect of the number of graph filter layers and of the ratio of positive to negative node pairs shows that the best performance is achieved with a small number of filter layers, and that the optimal ratios for positive and negative node pairs are around 0.01 and 0.5, respectively.

Conclusions

The algorithm combines attribute information with topological information through graph filtering, yielding smoother representations that are better suited to clustering. The attention mechanisms learn weights from both the topological and the attribute perspectives according to view quality, so the fused representation draws information from every view while mitigating the influence of poor-quality views. The proposed method achieves the expected results, substantially improving clustering performance.
