Regular Paper

SAIH: A Scalable Evaluation Methodology for Understanding AI Performance Trend on HPC Systems

School of Computer Science and Engineering, Sun Yat-sen University, Guangzhou 510006, China

Abstract

Novel artificial intelligence (AI) technology has accelerated scientific research in various fields, e.g., cosmology, physics, and bioinformatics, and AI workloads have inevitably become a significant category of workload on high-performance computing (HPC) systems. Existing AI benchmarks tend to customize well-recognized AI applications to evaluate the AI performance of HPC systems under a predefined problem size, in terms of datasets and AI models. However, driven by novel AI technology, most AI applications evolve rapidly in both models and datasets to achieve higher accuracy and broader applicability. Lacking scalability in problem size, static AI benchmarks may be inadequate for understanding the performance trend of evolving AI applications on HPC systems, particularly scientific AI applications on large-scale systems. In this paper, we propose a scalable evaluation methodology (SAIH) for analyzing the AI performance trend of HPC systems by scaling the problem sizes of customized AI applications. To enable scalability, SAIH builds a set of novel mechanisms for augmenting problem sizes. As the data and model constantly scale, we can investigate the trend and range of AI performance on HPC systems, and further diagnose system bottlenecks. To verify our methodology, we augment a cosmological AI application to evaluate a real GPU-equipped HPC system as a case study of SAIH. With data and model augmentation, SAIH progressively evaluates the AI performance trend of HPC systems, e.g., observing utilization increase from 5.2% to 59.6% of the theoretical peak hardware performance. The evaluation results are analyzed and summarized into insights on performance issues. For instance, we find that the AI application constantly consumes the I/O bandwidth of the shared parallel file system while iteratively training its model; under I/O contention, the shared parallel file system may become a bottleneck.
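The core evaluation loop described above can be sketched in a few lines of Python. This is a minimal illustration, not the paper's implementation: `PEAK_FLOPS`, the cost model in `step_flops`, and the doubling schedule in `scaling_sweep` are all hypothetical placeholders standing in for SAIH's data/model augmentation mechanisms and real measured step times.

```python
# Hypothetical sketch of a SAIH-style progressive-scaling sweep.
# PEAK_FLOPS and the FLOP cost model are illustrative assumptions,
# not values or formulas taken from the paper.

PEAK_FLOPS = 100e12  # assumed theoretical peak of one accelerator, FLOP/s


def step_flops(samples, params):
    """Rough cost model: forward + backward pass ~ 6 * params * samples FLOPs."""
    return 6 * params * samples


def peak_fraction(samples, params, elapsed_s):
    """Fraction of theoretical peak achieved by one measured training step."""
    return step_flops(samples, params) / (elapsed_s * PEAK_FLOPS)


def scaling_sweep(base_samples, base_params, step_times):
    """Double the data and model size each round, as SAIH scales the
    problem size, and record achieved peak fraction at every scale."""
    results = []
    samples, params = base_samples, base_params
    for elapsed in step_times:  # one measured step time per scale point
        results.append(peak_fraction(samples, params, elapsed))
        samples *= 2  # data augmentation step
        params *= 2   # model augmentation step
    return results
```

Plotting the returned fractions against problem size is what yields the performance-trend curve; a flattening or dropping curve at larger scales points at a system bottleneck (e.g., I/O contention on the shared parallel file system).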

Electronic Supplementary Material

JCST-2108-11840-Highlights.pdf (147.1 KB)

Journal of Computer Science and Technology
Pages 384-400
Cite this article:
Du J-S, Li D-S, Wen Y-P, et al. SAIH: A Scalable Evaluation Methodology for Understanding AI Performance Trend on HPC Systems. Journal of Computer Science and Technology, 2024, 39(2): 384-400. https://doi.org/10.1007/s11390-023-1840-y

Received: 16 August 2021
Accepted: 07 June 2023
Published: 30 March 2024
© Institute of Computing Technology, Chinese Academy of Sciences 2024