Regular Paper

A Case for Adaptive Resource Management in Alibaba Datacenter Using Neural Networks

State Key Laboratory of Computer Architecture, Institute of Computing Technology, Chinese Academy of Sciences, Beijing 100190, China
University of Chinese Academy of Sciences, Beijing 100049, China
Peng Cheng Laboratory, Shenzhen 518055, China
Alibaba Inc., Hangzhou 311121, China
Department of Computer Science, Wayne State University, Detroit, MI 48202, U.S.A.

Abstract

Resource efficiency and application QoS have long been major concerns of datacenter operators, yet they remain difficult to reconcile. High resource utilization increases the risk of resource contention among co-located workloads, causing latency-critical (LC) applications to suffer unpredictable, and even unacceptable, performance. Much prior work has been devoted to effective mechanisms that protect the QoS of LC applications while improving resource efficiency. In this paper, we propose MAGI, a resource management runtime that leverages neural networks to monitor and pinpoint the root cause of performance interference, and adjusts the resource shares of the corresponding applications to ensure the QoS of LC applications. MAGI is deployed in practice in the Alibaba datacenter to provide on-demand resource adjustment for applications using neural networks. Experimental results show that MAGI reduces the performance degradation of an LC application by up to 87.3% when it is co-located with antagonistic applications.
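
The abstract describes a monitor-diagnose-adjust loop: runtime metrics of co-located jobs are fed to a neural network that pinpoints the most likely source of interference, and the resource share of that antagonist is then reduced so the LC application's QoS can recover. The Python sketch below only illustrates this general loop; the metric set, the two-layer network, the throttling policy, and all names are assumptions made for illustration, not MAGI's actual design or interfaces.

import numpy as np

# Illustrative per-job metric vector for a co-located best-effort (BE) job.
# These counters (CPI, LLC miss rate, memory bandwidth) are assumptions for
# the sketch, not necessarily the features MAGI uses.
METRICS = ["cpi", "llc_miss_rate", "mem_bw_gbps"]


class InterferenceClassifier:
    """Tiny two-layer MLP standing in for MAGI's neural network.

    The weights here are random placeholders; a real deployment would load
    parameters trained offline on labeled interference traces.
    """

    def __init__(self, n_features, n_hidden=16, seed=0):
        rng = np.random.default_rng(seed)
        self.w1 = rng.normal(0.0, 0.1, (n_features, n_hidden))
        self.b1 = np.zeros(n_hidden)
        self.w2 = rng.normal(0.0, 0.1, (n_hidden, 1))
        self.b2 = np.zeros(1)

    def interference_score(self, x):
        h = np.maximum(0.0, x @ self.w1 + self.b1)   # ReLU hidden layer
        z = h @ self.w2 + self.b2                     # single logit
        return float(1.0 / (1.0 + np.exp(-z[0])))     # probability-like score


def collect_metrics(job):
    """Placeholder for sampling hardware counters (e.g., via perf) for a job."""
    return np.array([job["cpi"], job["llc_miss_rate"], job["mem_bw_gbps"]])


def throttle(job, factor=0.5):
    """Placeholder for shrinking a BE job's CPU share (e.g., a cgroup CPU quota)."""
    job["cpu_quota"] = int(job["cpu_quota"] * factor)
    print(f"throttled {job['name']}: cpu_quota -> {job['cpu_quota']}")


def control_loop(lc_latency_ms, slo_ms, be_jobs, model, threshold=0.5):
    """One iteration of the monitor-diagnose-adjust cycle."""
    if lc_latency_ms <= slo_ms:
        return  # LC application meets its SLO; leave resource shares untouched
    # Diagnose: score every BE job and throttle the most likely antagonist.
    scores = {job["name"]: model.interference_score(collect_metrics(job))
              for job in be_jobs}
    culprit = max(be_jobs, key=lambda job: scores[job["name"]])
    if scores[culprit["name"]] >= threshold:
        throttle(culprit)


if __name__ == "__main__":
    model = InterferenceClassifier(n_features=len(METRICS))
    be_jobs = [
        {"name": "batch-A", "cpi": 1.2, "llc_miss_rate": 0.02,
         "mem_bw_gbps": 3.0, "cpu_quota": 400_000},
        {"name": "batch-B", "cpi": 2.8, "llc_miss_rate": 0.15,
         "mem_bw_gbps": 18.0, "cpu_quota": 400_000},
    ]
    # Pretend the LC application's tail latency (45 ms) violates its 20 ms SLO.
    control_loop(lc_latency_ms=45.0, slo_ms=20.0, be_jobs=be_jobs,
                 model=model, threshold=0.0)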

Electronic Supplementary Material

Download File(s)
jcst-35-1-209-Highlights.pdf (1.1 MB)

Journal of Computer Science and Technology
Pages 209-220
Cite this article:
Wang S, Zhu Y-H, Chen S-P, et al. A Case for Adaptive Resource Management in Alibaba Datacenter Using Neural Networks. Journal of Computer Science and Technology, 2020, 35(1): 209-220. https://doi.org/10.1007/s11390-020-9732-x