With the development of high-performance computing and the expansion of large-scale multiprocessor systems, studying system reliability has become increasingly important. Probabilistic fault diagnosis is of practical value to the reliability analysis of multiprocessor systems. In this paper, we design a linear-time diagnosis algorithm for multiprocessor systems with the threshold set to 3, under which the probability that any node is correctly diagnosed in the discrete state can be calculated. Furthermore, we give the probabilities that all nodes of a d-regular and d-connected graph can be correctly diagnosed in the continuous state under the Weibull fault distribution and the Chi-square fault distribution. We prove that these probabilities approach 1, which implies that our diagnosis algorithm correctly diagnoses almost all nodes of the graph.
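As a rough illustration of the continuous-state setting, the sketch below evaluates the per-node fault probability at a given time under a Weibull and a Chi-square fault distribution using their standard CDFs. The parameter values and the helper name are illustrative assumptions, not the paper's choices; the diagnosis probabilities in the paper are functions of such per-node fault probabilities.

```python
# Illustrative sketch only: per-node fault probability by time t under the
# two fault distributions named in the abstract. Shape, scale, and degrees
# of freedom are placeholder values, not the paper's parameters.
from scipy.stats import weibull_min, chi2

def node_fault_probability(t, dist="weibull", shape=1.5, scale=100.0, df=4):
    """Probability that a node has become faulty by time t.

    Weibull model: CDF is 1 - exp(-(t/scale)**shape).
    Chi-square model: CDF of the chi2(df) distribution.
    """
    if dist == "weibull":
        return weibull_min.cdf(t, shape, scale=scale)
    if dist == "chi-square":
        return chi2.cdf(t, df)
    raise ValueError("unknown fault distribution: " + dist)

if __name__ == "__main__":
    for t in (10, 50, 100):
        print(t,
              node_fault_probability(t, "weibull"),
              node_fault_probability(t, "chi-square"))
```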
BCube is an important kind of data center network. Hamiltonicity and Hamiltonian connectivity have significant applications in communication networks. So far, there have been many results on fault-tolerant Hamiltonicity and fault-tolerant Hamiltonian connectivity in data center networks. However, these results consider only faulty edges and faulty servers. In this paper, we study the fault-tolerant Hamiltonicity and the fault-tolerant Hamiltonian connectivity of BCube(n,k) when faulty servers, faulty links/edges, and faulty switches are all considered. For any integers n ≥ 2 and k ≥ 0, let BCn,k be the logic structure of BCube(n,k) and let F be the union of faulty elements of BCn,k. Let fv, fe, and fs be the numbers of faulty servers, faulty edges, and faulty switches of BCube(n,k), respectively. We show that BCn,k − F is fault-tolerant Hamiltonian if fv + fe + (n − 1)fs ≤ (n − 1)(k + 1) − 2 and that BCn,k − F is fault-tolerant Hamiltonian-connected if fv + fe + (n − 1)fs ≤ (n − 1)(k + 1) − 3. To the best of our knowledge, this is the first work that takes faulty switches into account when studying fault-tolerant Hamiltonicity and fault-tolerant Hamiltonian connectivity in data center networks.
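For concreteness, the following sketch checks the two sufficient conditions stated above for a given fault pattern of BCube(n,k). The bounds are taken verbatim from the result; the function name and the example fault counts are ours.

```python
def bcube_fault_tolerance(n, k, fv, fe, fs):
    """Check the sufficient conditions from the result above for BCn,k - F.

    fv, fe, fs: numbers of faulty servers, faulty edges, and faulty switches.
    Returns a pair (hamiltonian, hamiltonian_connected) of booleans.
    """
    weighted_faults = fv + fe + (n - 1) * fs
    budget = (n - 1) * (k + 1)
    return weighted_faults <= budget - 2, weighted_faults <= budget - 3

# Example: BCube(4, 1) allows up to (4-1)*(1+1) - 2 = 4 weighted faults for
# Hamiltonicity; one faulty switch alone consumes n - 1 = 3 of that budget.
print(bcube_fault_tolerance(4, 1, fv=1, fe=0, fs=1))   # (True, False)
```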
The 3-ary n-cube, denoted as
As users increasingly befriend others and interact online via their social media accounts, online social networks (OSNs) are expanding rapidly. Confronted with the big data generated by users, data storage must be distributed, scalable, and cost-efficient. One of the most significant challenges, however, is how to minimize cost without degrading system performance. Although many storage systems use distributed key-value stores, these cannot be directly applied to OSN storage systems: because users' data are highly correlated, hash-based storage leads to frequent inter-server communication, and the resulting high inter-server traffic costs limit the OSN storage system's scalability. Previous studies proposed conducting network partitioning and data replication based on social graphs. However, data replication increases storage costs and affects traffic costs. Here, we consider how to minimize costs from the perspective of data storage by combining partitioning and replication. Our cost-efficient data storage approach supports scalable OSN storage systems. The proposed approach co-locates frequently interacting users by performing partitioning and replication simultaneously while meeting load-balancing constraints. Extensive experiments on two real-world traces show that our approach achieves lower cost than state-of-the-art approaches. We therefore conclude that our approach enables economical and scalable OSN data storage.
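As a toy illustration of the co-location idea only (not the paper's algorithm, which additionally decides replication jointly with partitioning), the sketch below greedily places heavily interacting users on the same server subject to a per-server capacity. The function names, the fixed server count, and the simple capacity rule are our own assumptions.

```python
from collections import defaultdict

def greedy_colocation(edges, num_servers, capacity):
    """Toy social-graph-aware placement sketch.

    edges: list of (user_a, user_b, weight) interaction records.
    num_servers: fixed number of storage servers.
    capacity: target maximum users per server (the load-balancing constraint).
    Heavier edges are considered first, so frequently interacting users tend
    to land on the same server when capacity allows.
    """
    placement = {}                 # user -> server index
    load = defaultdict(int)        # server index -> number of users hosted

    def least_loaded():
        return min(range(num_servers), key=lambda s: load[s])

    def place(user, preferred):
        if user in placement:
            return
        # Prefer the partner's server; otherwise fall back to the least loaded.
        if preferred is not None and load[preferred] < capacity:
            server = preferred
        else:
            server = least_loaded()
        placement[user] = server
        load[server] += 1

    for a, b, _w in sorted(edges, key=lambda e: e[2], reverse=True):
        place(a, placement.get(b))
        place(b, placement.get(a))
    return placement

# Usage on a tiny interaction graph:
edges = [("alice", "bob", 9), ("bob", "carol", 7), ("carol", "dave", 2)]
print(greedy_colocation(edges, num_servers=2, capacity=2))
# -> {'alice': 0, 'bob': 0, 'carol': 1, 'dave': 1}
```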
The capability of the data center network largely determines the performance of cloud computing. However, the number of servers in data center networks is growing rapidly because of continuously increasing application requirements. Improving the performance of cloud computing therefore faces the great challenge of connecting a large number of servers into a data center network with promising performance. Traditional tree-based data center networks suffer from bandwidth bottlenecks, single points of failure at switches, and related issues. Recently proposed data center networks such as DCell, FiConn, and BCube offer larger bandwidth and better fault tolerance than traditional tree-based networks. Nonetheless, for DCell and FiConn, the length of fault-tolerant paths between servers increases when switches fail, and BCube requires higher-performance switches as its scale grows. Based on these considerations, we propose a new server-centric data center network with excellent performance, called BCDC, which is based on the crossed cube. We then study the connectivity of BCDC networks. Furthermore, we propose communication algorithms and a fault-tolerant routing algorithm for BCDC networks, and we analyze the performance and time complexities of the proposed algorithms. Our research provides a basis for the design and implementation of a new family of data center networks.
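Since BCDC is built on the crossed cube, the sketch below checks adjacency in the standard n-dimensional crossed cube CQ_n using the usual pair-related condition. It illustrates only the underlying topology that BCDC is based on; it is not the BCDC construction or its routing algorithm, and the function names are ours.

```python
# Two 2-bit strings x and y are pair-related iff
# (x, y) is in {(00, 00), (10, 10), (01, 11), (11, 01)}.
PAIR_RELATED = {("00", "00"), ("10", "10"), ("01", "11"), ("11", "01")}

def cq_adjacent(u: str, v: str) -> bool:
    """Adjacency of two equal-length binary strings u_{n-1}...u_0 in CQ_n."""
    n = len(u)
    if n != len(v) or n == 0:
        return False
    if n == 1:
        return u != v
    if u[0] == v[0]:
        # Same leading bit: both vertices lie in the same (n-1)-dimensional
        # sub-crossed-cube, so recurse on the suffixes.
        return cq_adjacent(u[1:], v[1:])
    # Cross edge between the two halves CQ_{n-1}^0 and CQ_{n-1}^1.
    us, vs = u[1:], v[1:]            # u_{n-2}...u_0 and v_{n-2}...v_0
    if n % 2 == 0 and us[0] != vs[0]:
        return False                 # u_{n-2} must equal v_{n-2} when n is even
    for i in range((n - 1) // 2):
        lo = len(us) - 2 * i - 2     # string index of bit u_{2i+1}
        if (us[lo:lo + 2], vs[lo:lo + 2]) not in PAIR_RELATED:
            return False
    return True

# Small checks in CQ_3:
print(cq_adjacent("011", "101"))   # True: 11 and 01 are pair-related
print(cq_adjacent("001", "101"))   # False
```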