Open Access

Task-Aware Flow Scheduling with Heterogeneous Utility Characteristics for Data Center Networks

Fang Dong, Xiaolin Guo, Pengcheng Zhou, and Dian Shen
School of Computer Science and Engineering, Southeast University, Nanjing 211189, China.

Abstract

With the continuous enrichment of cloud services, an increasing number of applications are being deployed in data centers. These emerging applications are often communication-intensive and data-parallel, and their performance is closely tied to the underlying network. Owing to their distributed nature, such applications consist of tasks, each involving a collection of parallel flows. Traditional techniques that optimize flow-level metrics are agnostic to task-level requirements, leading to poor application-level performance. In this paper, we address the heterogeneous task-level requirements of applications and propose task-aware flow scheduling. First, we model each task's sensitivity to its completion time with a utility function. Second, on the basis of Nash bargaining theory, we establish a flow scheduling model with heterogeneous utility characteristics and analyze it using the Lagrange multiplier method and the Karush-Kuhn-Tucker (KKT) conditions. Third, we propose two utility-aware bandwidth allocation algorithms under different practical constraints. Finally, we present Tasch, a system that enables tasks to maintain high utilities while guaranteeing fairness among them. To demonstrate the feasibility of our system, we conduct comprehensive evaluations with real-world traffic traces. Compared with per-flow mechanisms, Tasch completes communication stages up to 1.4× faster on average, increases task utilities by up to 2.26×, and improves the fairness of tasks by up to 8.66×.
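The utility-based allocation step described in the abstract can be illustrated with a minimal sketch. The code below solves a Nash bargaining problem over a single bottleneck link: it maximizes the sum of log-utilities subject to the link capacity, using the KKT stationarity condition U_i'(x_i)/U_i(x_i) = λ and bisection on the Lagrange multiplier λ. The exponential utility form U_i(x) = 1 − e^(−a_i·x), the steepness parameters, and all function names are assumptions made for this example only; this is not the paper's Tasch implementation.

```python
# Illustrative sketch (NOT the paper's Tasch system): Nash-bargaining bandwidth
# allocation over one bottleneck link. Each task i has an assumed concave
# utility U_i(x) = 1 - exp(-a_i * x) expressing its sensitivity to completion
# time. The Nash bargaining solution maximizes sum_i log U_i(x_i) subject to
# sum_i x_i <= C; the KKT condition U_i'(x_i) / U_i(x_i) = lam couples all
# tasks through a single Lagrange multiplier lam, found here by bisection.

import math

def rate_for_multiplier(a, lam, x_max):
    """Solve U'(x)/U(x) = lam for x in (0, x_max] by bisection.

    For U(x) = 1 - exp(-a*x) the ratio a*exp(-a*x)/(1 - exp(-a*x)) decreases
    in x, from +inf as x -> 0 down to a small value at x_max.
    """
    def ratio(x):
        return a * math.exp(-a * x) / (1.0 - math.exp(-a * x))

    if ratio(x_max) >= lam:          # stationary point lies beyond x_max: cap it
        return x_max
    lo, hi = 1e-12, x_max
    for _ in range(100):
        mid = 0.5 * (lo + hi)
        if ratio(mid) > lam:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

def nash_bargaining_allocation(steepness, capacity):
    """Split `capacity` among tasks with utility steepness a_i (Nash product)."""
    lam_lo, lam_hi = 1e-9, 1e9               # bracket for the link "price"
    for _ in range(100):
        lam = math.sqrt(lam_lo * lam_hi)      # geometric bisection across magnitudes
        total = sum(rate_for_multiplier(a, lam, capacity) for a in steepness)
        if total > capacity:
            lam_lo = lam                      # rates too large -> raise the price
        else:
            lam_hi = lam
    lam = math.sqrt(lam_lo * lam_hi)
    return [rate_for_multiplier(a, lam, capacity) for a in steepness]

if __name__ == "__main__":
    # Three tasks with different completion-time sensitivity sharing a 10 Gbps link.
    rates = nash_bargaining_allocation(steepness=[0.5, 1.0, 4.0], capacity=10.0)
    print([round(r, 3) for r in rates], "sum =", round(sum(rates), 3))
```

Bisection is used here purely for simplicity; since the objective is concave, any convex solver applied to maximizing the sum of log-utilities under the capacity constraint would return the same allocation.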

Tsinghua Science and Technology
Pages 400-411
Cite this article:
Dong F, Guo X, Zhou P, et al. Task-Aware Flow Scheduling with Heterogeneous Utility Characteristics for Data Center Networks. Tsinghua Science and Technology, 2019, 24(4): 400-411. https://doi.org/10.26599/TST.2018.9010122

Received: 15 July 2018
Accepted: 03 September 2018
Published: 07 March 2019
© The author(s) 2019