Regular Paper Issue
SAIH: A Scalable Evaluation Methodology for Understanding AI Performance Trend on HPC Systems
Journal of Computer Science and Technology 2024, 39 (2): 384-400
Published: 30 March 2024
Abstract

Novel artificial intelligence (AI) technology has accelerated research in various scientific fields, e.g., cosmology, physics, and bioinformatics, and AI workloads have inevitably become a significant category of workload on high-performance computing (HPC) systems. Existing AI benchmarks tend to customize well-recognized AI applications so as to evaluate the AI performance of HPC systems under a predefined problem size, in terms of datasets and AI models. However, driven by novel AI techniques, most AI applications evolve rapidly in their models and datasets to achieve higher accuracy and cover more scenarios. Lacking scalability in problem size, static AI benchmarks may be inadequate for understanding the performance trend of evolving AI applications on HPC systems, in particular scientific AI applications on large-scale systems. In this paper, we propose a scalable evaluation methodology (SAIH) for analyzing the AI performance trend of HPC systems by scaling the problem sizes of customized AI applications. To enable scalability, SAIH builds a set of novel mechanisms for augmenting problem sizes. As the data and model scale continuously, we can investigate the trend and range of AI performance on HPC systems and further diagnose system bottlenecks. To verify the methodology, we augment a cosmological AI application to evaluate a real GPU-equipped HPC system as a case study of SAIH. With data and model augmentation, SAIH progressively evaluates the AI performance trend of HPC systems, e.g., increasing from 5.2% to 59.6% of the theoretical peak hardware performance. The evaluation results are analyzed and summarized into findings on performance issues. For instance, we find that the AI application constantly consumes the I/O bandwidth of the shared parallel file system while iteratively training its model, so under I/O contention the shared parallel file system may become a bottleneck.
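A minimal sketch of the evaluation loop this abstract describes, assuming nothing about the paper's actual implementation: grow the dataset and model together, time each training step, and report achieved throughput as a fraction of the system's theoretical peak. All names, cost models, and numbers below are illustrative placeholders, not SAIH's API or measured results.

import time

PEAK_TFLOPS = 100.0  # assumed theoretical peak of the target system

def train_step_flops(model_size, batch_size):
    # Placeholder cost model: FLOPs per step grow with model and data size.
    return 6.0 * model_size * batch_size

def run_step(flops, scale):
    # Stand-in for one real training iteration; the simulated throughput
    # improves with scale, mimicking the trend SAIH is designed to expose.
    simulated_tflops = PEAK_TFLOPS * min(0.6, 0.05 + 0.04 * scale)
    time.sleep(flops / (simulated_tflops * 1e12))

def evaluate_trend(scales):
    for s in scales:
        model_size = 10**7 * s   # model augmentation
        batch_size = 256 * s     # data augmentation
        flops = train_step_flops(model_size, batch_size)
        start = time.perf_counter()
        run_step(flops, s)
        elapsed = time.perf_counter() - start
        achieved_tflops = flops / elapsed / 1e12
        print(f"scale={s:2d}: {100 * achieved_tflops / PEAK_TFLOPS:5.1f}% of peak")

evaluate_trend([1, 2, 4, 8, 16])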

Open Access Issue
Evolutionary Multi-Tasking Optimization for High-Efficiency Time Series Data Clustering
Tsinghua Science and Technology 2024, 29 (2): 343-355
Published: 22 September 2023
Abstract

Time series clustering is a challenging problem due to the large volume, high dimensionality, and warping characteristics of time series data. Traditional clustering methods often use a single criterion or distance measure, which may not capture all features of the data. This paper proposes a novel method for time series clustering based on evolutionary multi-tasking optimization, termed i-MFEA, which uses an improved multifactorial evolutionary algorithm to optimize multiple clustering tasks simultaneously, each with a different validity index or distance measure. Therefore, i-MFEA can produce diverse and robust clustering solutions that satisfy various preferences of decision-makers. Experiments on two artificial datasets show that i-MFEA outperforms single-objective evolutionary algorithms and traditional clustering methods in terms of convergence speed and clustering quality. The paper also discusses how i-MFEA addresses two long-standing issues in time series clustering: the choice of an appropriate similarity measure and the number of clusters.
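To make the multifactorial idea concrete, here is a generic multifactorial-EA skeleton: one population, several tasks, each individual mating preferentially within its "skill factor" (the task it is best at). This is a hedged sketch under toy objectives, not the paper's i-MFEA; in i-MFEA each task would score a candidate clustering with a different validity index or distance measure.

import random

# Two toy stand-in tasks; i-MFEA's tasks would evaluate clusterings instead.
tasks = [
    lambda x: sum(v ** 2 for v in x),           # task 0: sphere
    lambda x: sum((v - 1.0) ** 2 for v in x),   # task 1: shifted sphere
]

POP_SIZE, DIM, RMP = 40, 5, 0.3  # RMP: random mating probability

def make_individual():
    return [random.uniform(-2, 2) for _ in range(DIM)]

def skill_factor(ind):
    # The task on which this individual performs best.
    return min(range(len(tasks)), key=lambda t: tasks[t](ind))

def crossover(a, b):
    return [x if random.random() < 0.5 else y for x, y in zip(a, b)]

def mutate(ind, rate=0.2):
    return [v + random.gauss(0, 0.3) if random.random() < rate else v
            for v in ind]

pop = [make_individual() for _ in range(POP_SIZE)]
for gen in range(60):
    offspring = []
    while len(offspring) < POP_SIZE:
        a, b = random.sample(pop, 2)
        # Assortative mating: parents skilled in different tasks mate only
        # with probability RMP, enabling controlled cross-task transfer.
        if skill_factor(a) == skill_factor(b) or random.random() < RMP:
            offspring.append(mutate(crossover(a, b)))
        else:
            offspring.append(mutate(a))
    merged = pop + offspring
    # Elitist selection per task so each task keeps its own specialists.
    pop = []
    for f in tasks:
        merged.sort(key=f)
        pop.extend(merged[: POP_SIZE // len(tasks)])

for t, f in enumerate(tasks):
    print(f"task {t}: best fitness = {min(map(f, pop)):.4f}")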

Regular Paper Issue
Harmonia: Explicit Congestion Notification and Credit-Reservation Transport Converged Congestion Control in Datacenters
Journal of Computer Science and Technology 2021, 36 (5): 1071-1086
Published: 30 September 2021
Abstract

Bursty traffic and thousands of concurrent flows incur inevitable network congestion in datacenter networks (DCNs) and thereby degrade overall performance. Various transport protocols have been developed to mitigate network congestion, including reactive and proactive protocols. Reactive schemes use congestion signals, such as explicit congestion notification (ECN) and round-trip time (RTT), to handle network congestion after it arises. However, with the growth of scale and link speed in datacenters, reactive schemes respond too slowly to congestion. In contrast, proactive protocols (e.g., credit-reservation protocols) are designed to avoid congestion before it occurs, offering zero data loss, fast convergence, and low buffer occupancy. Yet credit-reservation protocols have not been widely deployed in current DCNs (e.g., at Microsoft and Amazon), which mainly deploy ECN-based protocols such as data center transmission control protocol (DCTCP) and data center quantized congestion notification (DCQCN). In an actual deployment scenario, it is hard to guarantee that a single protocol is deployed on every server at once. When a credit-reservation protocol is deployed to DCNs step by step, the network turns into a multi-protocol state and faces the following fundamental challenges: 1) unfairness, 2) high buffer occupancy, and 3) heavy tail latency. Therefore, we propose Harmonia, which aims to converge ECN-based and credit-reservation protocols toward fairness with minimal modification. To the best of our knowledge, Harmonia is the first to address the problem of harmonizing proactive and reactive congestion control. Targeting the common ECN-based protocols, DCTCP and DCQCN, Harmonia leverages forward ECN and RTT to deliver real-time congestion information and redefines the feedback control. The evaluation results show that Harmonia effectively resolves unfair link allocation, eliminating timeouts and preventing buffer overflow.
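For context on the reactive side Harmonia targets, the textbook DCTCP rule adjusts the sender's window in proportion to the fraction of ECN-marked packets per RTT. The sketch below shows that standard rule only; it is not Harmonia's converged controller, whose mechanism is described in the paper.

# Standard DCTCP-style window update (textbook rule, not Harmonia itself).
class DctcpSender:
    def __init__(self, cwnd=10.0, g=1.0 / 16):
        self.cwnd = cwnd      # congestion window in packets
        self.alpha = 0.0      # moving estimate of congestion extent
        self.g = g            # EWMA gain

    def on_rtt_end(self, acked, ecn_marked):
        frac = ecn_marked / max(acked, 1)
        # Update the congestion estimate from the fraction of marked packets.
        self.alpha = (1 - self.g) * self.alpha + self.g * frac
        if ecn_marked > 0:
            # Cut the window in proportion to how congested the path is.
            self.cwnd *= (1 - self.alpha / 2)
        else:
            self.cwnd += 1    # additive increase when uncongested

s = DctcpSender()
for marked in [0, 0, 3, 5, 0]:
    s.on_rtt_end(acked=10, ecn_marked=marked)
    print(f"alpha={s.alpha:.3f} cwnd={s.cwnd:.2f}")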

Regular Paper Issue
Performance Evaluation of Memory-Centric ARMv8 Many-Core Architectures: A Case Study with Phytium 2000+
Journal of Computer Science and Technology 2021, 36 (1): 33-43
Published: 05 January 2021
Abstract

This article presents a comprehensive performance evaluation of Phytium 2000+, an ARMv8-based 64-core architecture. We focus on the cache and memory subsystems, analyzing the characteristics that impact high-performance computing applications. We provide insights into the memory-relevant performance behaviours of the Phytium 2000+ system through micro-benchmarking. With the help of the well-known roofline model, we analyze the Phytium 2000+ system, taking both memory accesses and computations into account. Based on the knowledge gained from these micro-benchmarks, we evaluate two applications and use them to assess the capabilities of the Phytium 2000+ system. The results show that this ARMv8-based many-core system is capable of delivering high performance for a wide range of scientific kernels.
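The roofline bound used in this kind of analysis is simply the minimum of the compute roof and the memory roof: attainable GFLOP/s = min(peak compute, arithmetic intensity x peak bandwidth). A small sketch follows; the peak numbers are placeholders, not measured Phytium 2000+ values.

# Roofline model: attainable GFLOP/s = min(peak compute, AI * peak bandwidth).
# Peak values below are assumed placeholders, not Phytium 2000+ measurements.
PEAK_GFLOPS = 500.0      # assumed peak compute throughput (GFLOP/s)
PEAK_BW_GBS = 80.0       # assumed peak memory bandwidth (GB/s)

def roofline(arithmetic_intensity):
    """Attainable performance for a kernel with the given FLOP/byte ratio."""
    return min(PEAK_GFLOPS, arithmetic_intensity * PEAK_BW_GBS)

for ai in [0.25, 1.0, 4.0, 16.0]:
    bound = "memory" if ai * PEAK_BW_GBS < PEAK_GFLOPS else "compute"
    print(f"AI={ai:>5} FLOP/byte -> {roofline(ai):7.1f} GFLOP/s ({bound}-bound)")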
