Cover Article Issue
Towards High-Performance Graph Processing: From a Hardware/Software Co-Design Perspective
Journal of Computer Science and Technology 2024, 39(2): 245-266
Published: 30 March 2024
Abstract

Graph processing is widely used in many scenarios, from scientific computing to artificial intelligence. Unlike traditional workloads, graph processing exhibits irregular computational parallelism and random memory accesses. Therefore, running graph workloads on conventional architectures (e.g., CPUs and GPUs) often yields a significantly low compute-to-memory ratio and few performance benefits; in many cases it can even be slower than a specialized single-threaded graph algorithm. While domain-specific hardware designs are essential for graph processing, it remains challenging to translate hardware capability into performance gains without coupled software co-designs. This article presents a graph processing ecosystem spanning hardware and software. We start by introducing a series of hardware accelerators as the foundation of this ecosystem. We then present the co-designed parallel graph systems and their distributed techniques that support graph applications. Finally, we introduce our efforts on novel graph applications and hardware architectures. Extensive results show that various graph applications can be efficiently accelerated in this graph processing ecosystem.
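The irregular, data-dependent memory accesses the abstract describes show up even in the smallest vertex-centric traversal. This toy Python sketch (an illustration, not code from the article) makes the pattern concrete: each frontier vertex reads an adjacency list at an unpredictable offset, which is what defeats caches and prefetchers on CPUs and GPUs.

```python
from collections import deque

def bfs_levels(adj, src):
    """Vertex-centric BFS: each frontier vertex scatters to its
    neighbors, touching adjacency lists at unpredictable offsets --
    the random-access pattern that hurts conventional architectures."""
    level = {src: 0}
    frontier = deque([src])
    while frontier:
        u = frontier.popleft()
        for v in adj.get(u, ()):   # irregular, data-dependent reads
            if v not in level:
                level[v] = level[u] + 1
                frontier.append(v)
    return level

# A small undirected graph: edges 0-1, 0-2, 2-3
adj = {0: [1, 2], 1: [0], 2: [0, 3], 3: [2]}
print(bfs_levels(adj, 0))  # hop distance of every vertex from 0
```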

Regular Paper Issue
Minimal Context-Switching Data Race Detection with Dataflow Tracking
Journal of Computer Science and Technology 2024, 39(1): 211-226
Published: 25 January 2024
Abstract

A data race is one of the most important concurrency anomalies in multi-threaded programs. Emerging constraint-based techniques have been applied to race detection and can find all the races that any other sound race detector can find. However, the constraint-based approach has serious limitations in helping programmers analyze and understand data races. First, it may report a large number of false positives because it does not recognize the program's dataflow propagation. Second, it recommends a wide range of thread context switches to schedule each reported race (including false ones) whenever the race is exposed during constraint solving. This ad hoc recommendation imposes too many context switches, which complicates data race analysis. To address these two limitations of state-of-the-art constraint-based race detection, this paper proposes DFTracker, an improved constraint-based race detector that recommends each data race with minimal thread context switches. Specifically, we reduce false positives by analyzing and tracking the dataflow in the program, which lets DFTracker avoid unnecessary analysis of false race schedules. We further propose a novel algorithm to recommend an effective race schedule with minimal thread context switches for each data race. Our experimental results on real applications demonstrate that 1) without removing any true data race, DFTracker prunes 68% of false positives compared with the state-of-the-art constraint-based race detector, and 2) DFTracker recommends only 2.6–8.3 (4.7 on average) thread context switches per data race on real-world programs, 81.6% fewer than the state-of-the-art constraint-based race detector. DFTracker can therefore serve as an effective tool for programmers to understand data races.
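DFTracker's constraint encoding is beyond an abstract-sized sketch, but the basic flavor of race detection can be shown with the classic lockset check (a deliberately simpler technique than the paper's constraint-based approach): two accesses to the same variable race if they come from different threads, at least one is a write, and they hold no lock in common.

```python
from itertools import combinations

def lockset_races(trace):
    """Eraser-style lockset check over a recorded access trace.
    Much simpler than constraint-based detection: no happens-before
    or schedule reasoning, just a per-pair lockset intersection."""
    races = []
    for a, b in combinations(trace, 2):
        same_var = a["var"] == b["var"]
        diff_thread = a["tid"] != b["tid"]
        has_write = "w" in (a["op"], b["op"])
        no_common_lock = not (a["locks"] & b["locks"])
        if same_var and diff_thread and has_write and no_common_lock:
            races.append((a, b))
    return races

trace = [
    {"tid": 1, "op": "w", "var": "x", "locks": {"m"}},
    {"tid": 2, "op": "r", "var": "x", "locks": {"m"}},   # protected by m
    {"tid": 1, "op": "w", "var": "y", "locks": set()},
    {"tid": 2, "op": "w", "var": "y", "locks": set()},   # unprotected: races
]
print(len(lockset_races(trace)))  # 1 racy pair (on y)
```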

Regular Paper Issue
Evaluating RISC-V Vector Instruction Set Architecture Extension with Computer Vision Workloads
Journal of Computer Science and Technology 2023, 38(4): 807-820
Published: 06 December 2023
Abstract

Computer vision (CV) algorithms are now used in a myriad of applications. Because multimedia data are generally well formatted and regular, it is beneficial to leverage the massive parallel processing power of the underlying platform to improve the performance of CV algorithms. Single Instruction Multiple Data (SIMD) instructions, which perform the same operation on multiple data items in a single instruction, are widely employed for this purpose. In this paper, we evaluate the power and effectiveness of the RISC-V vector extension (RV-V) on typical CV algorithms such as grayscale conversion, mean filtering, and edge detection. Our experiments show that, compared with the baseline OpenCV implementation using scalar instructions, equivalent implementations using RV-V (version 0.8) can reduce the instruction count of the same CV algorithm by up to 24x when processing the same input images. However, the actual performance improvement, measured in cycle counts, depends heavily on the specific implementation of the underlying RV-V co-processor. In our evaluation, using the vector co-processor (with eight execution lanes) of the Xuantie C906, vector-version CV algorithms exhibit average speedups of up to 2.98x over their scalar counterparts.
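The kernels the paper vectorizes are uniform per-pixel arithmetic, which is exactly what SIMD rewards. As a hedged illustration (not the paper's code), here is a grayscale kernel using a common integer approximation of the BT.601 luma weights; under RV-V, one vector instruction would apply this same multiply-accumulate to many pixels at once.

```python
def grayscale(rgb):
    """Per-pixel luma, the kind of kernel RV-V vectorizes: the same
    multiply-accumulate is applied independently to every pixel.
    Uses the integer approximation (77*R + 150*G + 29*B) >> 8."""
    return [(77 * r + 150 * g + 29 * b) >> 8 for r, g, b in rgb]

pixels = [(255, 255, 255), (0, 0, 0), (255, 0, 0)]
print(grayscale(pixels))  # [255, 0, 76]
```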

Survey Issue
On the Security of Smart Home Systems: A Survey
Journal of Computer Science and Technology 2023, 38(2): 228-247
Published: 30 March 2023
Abstract

Among the plethora of IoT (Internet of Things) applications, the smart home is one of the fastest-growing. However, this rapid development has also made smart home systems a target for attackers. Recently, researchers have made many efforts to investigate and enhance the security of smart home systems. Toward a more secure smart home ecosystem, we present a detailed literature review of the security of smart home systems. Specifically, we categorize the security issues of smart home systems into platform, device, and communication issues. After exploring the research and specific issues in each of these security areas, we summarize the root causes of the security flaws in today's smart home systems: the heterogeneity of the systems' internal components, vendors' customization, the lack of clear responsibility boundaries, and the absence of unified security standards. Finally, to better understand the security of smart home systems and potentially provide better protection for them, we propose research directions including automated vulnerability mining, rigorous security checking, and data-driven security analysis.

Regular Paper Issue
Improving Entity Linking in Chinese Domain by Sense Embedding Based on Graph Clustering
Journal of Computer Science and Technology 2023, 38(1): 196-210
Published: 28 February 2023
Abstract

Entity linking links a string in a text to the corresponding entities in a knowledge base through candidate entity generation and candidate entity ranking. It is of great significance to NLP (natural language processing) tasks such as question answering. Unlike English entity linking, Chinese entity linking requires more care due to the lack of spacing and capitalization in text sequences and the ambiguity of characters and words, which is more evident in certain scenarios. In Chinese domains such as industry, the generated candidate entities are usually long, heavily nested strings. In addition, the meanings of the words that make up industrial entities are sometimes ambiguous: their semantic space is a subspace of the general word embedding space, so each entity word needs to be resolved to its exact meaning. We therefore propose two schemes to achieve better Chinese entity linking. First, we implement an n-gram based candidate entity generation method to increase the recall rate and reduce nesting noise. Then, we enhance the corresponding candidate entity ranking mechanism by introducing sense embedding. To reconcile the ambiguity of word vectors with the single-sense nature of the industrial domain, we design a sense embedding model based on graph clustering, which performs unsupervised word sense induction and learns sense representations in conjunction with context. We test the embedding quality of our approach on classical datasets and demonstrate its disambiguation ability in general scenarios. Experiments confirm that our method better captures the underlying regularities of candidate entities in the industrial domain and achieves better entity linking performance.
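Because Chinese text has no word spacing, candidate generation must enumerate substrings rather than tokens. The n-gram scheme the abstract outlines can be sketched as follows (a toy illustration with a hypothetical knowledge base, not the paper's implementation): enumerate character n-grams of the mention string and keep those that appear in the knowledge base, so nested entities like 泵 inside 离心泵 are all recovered as candidates.

```python
def ngram_candidates(text, entity_dict, max_n=6):
    """N-gram candidate entity generation over characters:
    every substring of length 1..max_n that exists in the
    knowledge base becomes a candidate entity."""
    found = set()
    for n in range(1, min(max_n, len(text)) + 1):
        for i in range(len(text) - n + 1):
            gram = text[i:i + n]
            if gram in entity_dict:
                found.add(gram)
    return found

kb = {"离心泵", "泵", "轴承"}          # toy knowledge base (hypothetical)
print(ngram_candidates("离心泵轴承", kb))  # nested candidates: 泵, 离心泵, 轴承
```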

Regular Paper Issue
Discovering Cohesive Temporal Subgraphs with Temporal Density Aware Exploration
Journal of Computer Science and Technology 2022, 37(5): 1068-1085
Published: 30 September 2022
Abstract

Real-world networks, such as social networks, cryptocurrency networks, and e-commerce networks, record the occurrence time of interactions between nodes. Such networks are typically modeled as temporal graphs. Mining cohesive subgraphs from temporal graphs is practical and essential in numerous data mining applications, since it provides insights into the time-varying nature of temporal graphs. However, existing studies on mining cohesive subgraphs, such as Densest-Exact and k-truss, are mainly tailored to static graphs (whose edges carry no temporal information), so those cohesive subgraph models cannot capture both the temporal and the structural characteristics of subgraphs. To this end, we explore a model of cohesive temporal subgraphs that incorporates both the evolving and the structural characteristics of temporal subgraphs. Unfortunately, the number of time intervals in a temporal network is quadratic in the number of timestamps, so the time complexity of mining temporal cohesive subgraphs is high. To address the problem efficiently, we first mine the temporal density distribution of the temporal graph. Guided by this distribution, we can safely prune many unqualified time intervals at linear time cost. The remaining time intervals, in which cohesive temporal subgraphs fall, are then examined with a greedy search. Experiments on nine real-world temporal graphs show that our model outperforms state-of-the-art solutions in both efficiency and quality. Specifically, our model takes less than two minutes on a million-vertex DBLP graph and achieves the highest overall average ranking on the EDB and TC metrics.
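The quadratic blow-up and what pruning buys can be seen in a toy form. This sketch (illustrative only; the paper avoids this brute-force enumeration via the learned density distribution) enumerates every time interval, computes its temporal density as edges per time unit, and keeps only intervals above a threshold, which is exactly the set the pruning step tries to reach without touching the unqualified intervals.

```python
def dense_intervals(temporal_edges, t_max, min_density):
    """Brute-force version of density-aware interval search:
    O(t_max^2) intervals, each scored by edges-per-time-unit.
    The paper's contribution is pruning most of these cheaply."""
    kept = []
    for s in range(t_max + 1):
        for e in range(s, t_max + 1):
            edges = [uv for (uv, t) in temporal_edges if s <= t <= e]
            density = len(edges) / (e - s + 1)
            if density >= min_density:
                kept.append((s, e))
    return kept

# Edges as ((u, v), timestamp): a burst at t=0..1, one late edge at t=5
edges = [(("a", "b"), 0), (("b", "c"), 0), (("a", "c"), 1), (("c", "d"), 5)]
print(dense_intervals(edges, 5, 2.0))  # only the burst interval survives
```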

Regular Paper Issue
Toward High-Performance Delta-Based Iterative Processing with a Group-Based Approach
Journal of Computer Science and Technology 2022, 37(4): 797-813
Published: 25 July 2022
Abstract

Many systems employ the delta-based iterative execution model to support iterative algorithms on distributed platforms, exploiting the sparse computational dependencies between data items either synchronously or asynchronously. However, for large-scale iterative algorithms, existing synchronous solutions suffer from slow convergence and load imbalance because of the strict barrier between iterations, while existing asynchronous approaches incur excessive redundant communication and computation because they are barrier-free. In view of the performance trade-off between these two approaches, this paper designs an efficient execution manager, called Aiter-R, which can be integrated into existing delta-based iterative processing systems to efficiently execute delta-based iterative algorithms using our proposed group-based iterative execution approach. It efficiently and correctly explores the middle ground between the two extremes. A heuristic scheduling algorithm is further proposed to let an iterative algorithm adaptively choose its trade-off point for maximum efficiency. Experimental results show that Aiter-R strikes a good balance between the synchronous and asynchronous policies and outperforms state-of-the-art solutions, reducing execution time by up to 54.1% and 84.6% compared with existing asynchronous and synchronous models, respectively.
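The delta-based model itself can be shown in miniature. This sketch (a plain sequential version for illustration; Aiter-R's contribution is how such updates are grouped and scheduled across machines) runs delta-based PageRank: instead of recomputing every rank each round, only the change (delta) at a vertex is pushed to its neighbors, exploiting the sparse dependencies the abstract mentions.

```python
def delta_pagerank(adj, n, d=0.85, eps=1e-10):
    """Delta-based PageRank: a vertex is (re)activated only when its
    accumulated residual crosses eps, and it pushes d * delta / deg
    to each out-neighbor instead of recomputing all ranks."""
    rank = [0.0] * n
    delta = [1 - d] * n            # initial residual at every vertex
    active = list(range(n))
    while active:
        u = active.pop()
        if delta[u] < eps:
            continue
        rank[u] += delta[u]
        push = d * delta[u] / max(len(adj[u]), 1)
        delta[u] = 0.0
        for v in adj[u]:
            if delta[v] < eps <= delta[v] + push:
                active.append(v)   # v just crossed the threshold
            delta[v] += push
    return rank

adj = [[1], [2], [0]]              # a 3-cycle: all ranks converge to 1.0
ranks = delta_pagerank(adj, 3)
print(round(ranks[0], 6))
```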

Regular Paper Issue
FDGLib: A Communication Library for Efficient Large-Scale Graph Processing in FPGA-Accelerated Data Centers
Journal of Computer Science and Technology 2021, 36(5): 1051-1070
Published: 30 September 2021
Abstract

With the rapid growth of real-world graphs, whose size can easily exceed the on-chip (board) storage capacity of an accelerator, processing large-scale graphs on a single Field Programmable Gate Array (FPGA) becomes difficult, and multi-FPGA acceleration becomes necessary and important. Many cloud providers (e.g., Amazon, Microsoft, and Baidu) now expose FPGAs to users in their data centers, providing opportunities to accelerate large-scale graph processing. In this paper, we present a communication library, called FDGLib, which can easily scale out any existing single-FPGA graph accelerator to a distributed version in a data center with minimal hardware engineering effort. FDGLib provides six APIs that can be easily used and integrated into any FPGA-based graph accelerator with only a few lines of code modification. Considering the torus-based FPGA interconnection in data centers, FDGLib also improves communication efficiency using simple yet effective torus-friendly graph partition and placement schemes. We integrate FDGLib with AccuGraph, a state-of-the-art graph accelerator. Our results on a 32-node Microsoft Catapult-like data center show that the distributed AccuGraph can be 2.32x and 4.77x faster than ForeGraph, a state-of-the-art distributed FPGA-based graph accelerator, and Gemini, a distributed CPU-based graph system, with better scalability.
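The torus-friendly placement idea rests on a simple metric: on a torus interconnect, hop distance wraps around, so a partitioner wants heavily-communicating graph partitions mapped to nodes a small wraparound distance apart. The helper below is a hypothetical illustration of that metric only, not an FDGLib API.

```python
def torus_hops(a, b, dims):
    """Wraparound (torus) hop distance between two node coordinates:
    per dimension, take the shorter of the direct and the wrapped
    path. Placement schemes minimize this for chatty partition pairs."""
    return sum(min(abs(x - y), d - abs(x - y))
               for x, y, d in zip(a, b, dims))

# On a 4x4 torus, corners (0,0) and (3,3) are only 2 hops apart
# thanks to wraparound links, not 6 as on a plain mesh.
print(torus_hops((0, 0), (3, 3), (4, 4)))  # 2
```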

Survey Issue
A Survey of Non-Volatile Main Memory Technologies: State-of-the-Arts, Practices, and Future Directions
Journal of Computer Science and Technology 2021, 36(1): 4-32
Published: 05 January 2021
Abstract

Non-Volatile Main Memories (NVMMs) have recently emerged as a promising technology for future memory systems. Generally, NVMMs offer many desirable properties such as high density, byte-addressability, non-volatility, low cost, and energy efficiency, at the expense of high write latency, high write power consumption, and limited write endurance. NVMMs have become a competitive alternative to Dynamic Random Access Memory (DRAM) and will fundamentally change the landscape of memory systems. They bring many research opportunities as well as challenges for system architectural designs, memory management in operating systems (OSes), and programming models for hybrid memory systems. In this article, we first revisit the landscape of emerging NVMM technologies and then survey the state-of-the-art studies of them, classifying those studies with a taxonomy along dimensions such as memory architectures, data persistence, performance improvement, energy saving, and wear leveling. Second, to demonstrate best practices in building NVMM systems, we introduce our recent work on hybrid memory system designs from the dimensions of architectures, systems, and applications. Finally, we present our vision of future research directions for NVMMs and shed some light on design challenges and opportunities.

Survey Issue
A Survey on Graph Processing Accelerators: Challenges and Opportunities
Journal of Computer Science and Technology 2019, 34(2): 339-371
Published: 22 March 2019
Abstract

The graph is a well-known data structure for representing associated relationships in a variety of applications, e.g., data science and machine learning. Despite a wealth of existing efforts on graph processing systems that improve performance and/or energy efficiency on traditional architectures, dedicated hardware solutions, also referred to as graph processing accelerators, are essential and emerging to provide benefits significantly beyond what pure software solutions can offer. In this paper, we conduct a systematic survey of the design and implementation of graph processing accelerators. Specifically, we review the relevant techniques in three core components of a graph processing accelerator: preprocessing, parallel graph computation, and runtime scheduling. We also examine the benchmarks and results used in existing studies to evaluate graph processing accelerators. Interestingly, we find that there is no absolute winner across all three aspects of graph acceleration, owing to the diverse characteristics of graph processing and the complexity of hardware configurations. We finally present and discuss several challenges in detail and explore opportunities for future research.
