Open Access

On Peer-Assisted Data Dissemination in Data Center Networks: Analysis and Implementation

Google Inc., Mountain View, CA 94043, USA. This work was done while the author was with Temple University, Philadelphia, PA 19122, USA
Temple University, Philadelphia, PA 19122, USA
Sun Yat-Sen University, Guangzhou 510275, China

Abstract

Data Center Networks (DCNs) are the fundamental infrastructure of cloud computing. Driven by the massive parallel computing tasks of cloud computing, one-to-many data dissemination has become one of the most important traffic patterns in DCNs. Many architectures and protocols have been proposed to meet this demand; however, these proposals either require complicated configurations on switches and servers or fail to deliver optimal performance. In this paper, we propose peer-assisted data dissemination for DCNs. This approach exploits the rich physical connectivity of DCNs, with its high bandwidth and multiple parallel paths, to enable efficient one-to-many data dissemination. We prove that an optimal P2P data dissemination schedule exists for FatTree, a specially designed DCN architecture. We then present a theoretical analysis of this algorithm for the general multi-rooted tree topology, a widely used DCN architecture. Additionally, we explore the performance of an intuitive line structure for data dissemination; our analysis and experimental results show that this simple structure achieves performance comparable to that of the optimal algorithm. Since DCN applications rely heavily on virtualization to achieve optimal resource sharing, we also present a general implementation method for the proposed algorithms that mitigates the impact of the potentially high churn rate of virtual machines.
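A rough back-of-the-envelope model illustrates why a line structure can stay close to optimal. The sketch below is not the paper's implementation: it assumes chunk-based pipelined forwarding along a chain of peers, a uniform per-chunk transfer time of one step, and full-duplex server links, and it compares the chain's completion time with the trivial lower bound imposed by the source's uplink.

```python
# Minimal sketch (assumptions: unit chunk-transfer time, full-duplex links,
# no losses). Not the paper's implementation.

def line_completion_time(num_peers: int, num_chunks: int) -> int:
    """Source -> peer 1 -> ... -> peer n, each peer forwarding a chunk
    as soon as it arrives. Chunk i reaches the last peer at step
    num_peers + i - 1, so the last chunk arrives after the pipeline
    fills plus one step per remaining chunk."""
    return num_peers + num_chunks - 1

def broadcast_lower_bound(num_chunks: int) -> int:
    """No schedule can finish before the source has pushed every chunk
    out at least once, one chunk per step."""
    return num_chunks

if __name__ == "__main__":
    n, k = 100, 10_000  # hypothetical: 100 receivers, 10 000 chunks
    t_line = line_completion_time(n, k)
    t_opt = broadcast_lower_bound(k)
    print(f"line: {t_line} steps, lower bound: {t_opt} steps, "
          f"overhead: {(t_line - t_opt) / t_opt:.2%}")
```

Under these assumptions the chain finishes within an additive num_peers - 1 steps of the lower bound, so when the number of chunks dominates the number of peers the relative overhead vanishes, which is consistent with the abstract's claim that the simple line structure performs comparably to the optimal schedule.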

Tsinghua Science and Technology
Pages 51-64
Cite this article:
Zhao Y, Wu J, Liu C. On Peer-Assisted Data Dissemination in Data Center Networks: Analysis and Implementation. Tsinghua Science and Technology, 2014, 19(1): 51-64. https://doi.org/10.1109/TST.2014.6733208

Received: 13 December 2013
Accepted: 20 December 2013
Published: 07 February 2014
© The author(s) 2014