Regular Paper
Understanding and Detecting Inefficient Image Displaying Issues in Android Apps
Journal of Computer Science and Technology 2024, 39 (2): 434-459
Published: 30 March 2024
Abstract

Mobile applications (apps for short) often need to display images. However, inefficient image displaying (IID) issues are pervasive in mobile apps and can severely degrade app performance and user experience. This paper first establishes a descriptive framework for the image displaying procedures involved in IID issues. Based on this framework, we conduct an empirical study of 216 real-world IID issues collected from 243 popular open-source Android apps to validate the presence and severity of IID issues, and to shed light on their characteristics in support of research on effective issue detection. Guided by the findings of this study, we propose TAPIR, a static IID issue detection tool, and evaluate it on 243 real-world Android apps. Encouragingly, 49 and 64 previously unknown IID issues reported by TAPIR in two different versions of 16 apps are manually confirmed as true positives, respectively; 16 of the reported issues have been confirmed by developers and 13 have been fixed. We further evaluate the performance impact of these detected IID issues and the performance improvement obtained by fixing them. The results demonstrate that the IID issues detected by TAPIR indeed cause significant performance degradation, further confirming the effectiveness and efficiency of TAPIR.
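To illustrate the kind of inefficiency the abstract describes, consider a classic Android anti-pattern: decoding a full-resolution bitmap only to display it in a much smaller view. The sketch below is a hypothetical helper (not part of TAPIR) that mirrors Android's `BitmapFactory.Options.inSampleSize` convention for choosing a power-of-two downsampling factor:

```python
def compute_sample_size(img_w, img_h, view_w, view_h):
    """Largest power-of-two factor such that the decoded image is still
    at least as large as the target view, following Android's
    BitmapFactory.Options.inSampleSize convention."""
    sample = 1
    # Doubling the factor halves each decoded dimension; stop before
    # the decoded image would become smaller than the view.
    while (img_w // (sample * 2) >= view_w
           and img_h // (sample * 2) >= view_h):
        sample *= 2
    return sample
```

For example, a 4000×3000 photo shown in a 500×375 thumbnail yields a factor of 8, reducing decoded-pixel memory by a factor of 64; decoding the image at full resolution instead is exactly the kind of IID issue described above.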

Regular Paper
TOAST: Automated Testing of Object Transformers in Dynamic Software Updates
Journal of Computer Science and Technology 2022, 37 (1): 50-66
Published: 31 January 2022
Abstract

Dynamic software update (DSU) patches programs on the fly. It often involves the critical task of object transformation, which converts live objects of the old-version program into their semantically consistent counterparts under the new-version program. This task is accomplished by invoking an object transformer on each stale object; a defective transformer that fails to maintain consistency can cause errors or even crash the program. We propose TOAST (Test Object trAnSformaTion), an automated approach to detecting potential inconsistency caused by object transformers. TOAST first analyzes an update to identify multiple target methods, and then adopts a fuzzer with specially designed inconsistency guidance to randomly generate object states that drive the two versions of a target method. This produces two corresponding execution traces and a pair of old and new objects. TOAST finally performs object transformation to create a transformed object and detects inconsistency between it and the corresponding new object produced from scratch by the new program. Moreover, TOAST checks behavioral inconsistency by comparing the return values and exceptions of the two executions. Experimental evaluation on 130 updates with default transformers shows that TOAST is promising: it achieved 96.0% precision and 85.7% recall in state inconsistency detection, and 81.4% precision and 94.6% recall in behavior inconsistency detection. The inconsistency guidance improved fuzzing efficiency by 14.1% for state inconsistency detection and 40.5% for behavior inconsistency detection.
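The core consistency check described above can be sketched in a few lines. The classes `PointV1`/`PointV2` and the transformer below are invented for illustration (TOAST itself fuzzes real Java updates); the sketch only shows the final step of comparing a transformed old object against a new object built from scratch:

```python
import math

class PointV1:
    """Old program version: Cartesian coordinates."""
    def __init__(self, x, y):
        self.x, self.y = x, y

class PointV2:
    """New program version: polar coordinates."""
    def __init__(self, r, theta):
        self.r, self.theta = r, theta

def transformer(old):
    """Object transformer: convert a live V1 object to its V2 counterpart."""
    return PointV2(math.hypot(old.x, old.y), math.atan2(old.y, old.x))

def state_inconsistent(old_obj, new_obj, eps=1e-9):
    """Transform the old object and compare it field-by-field against the
    new object produced from scratch by the new program version."""
    t = transformer(old_obj)
    return any(abs(getattr(t, f) - getattr(new_obj, f)) > eps
               for f in vars(new_obj))
```

A correct transformer makes the check pass (e.g., `PointV1(3, 4)` versus a from-scratch `PointV2(5.0, math.atan2(4, 3))`), while a transformer that, say, dropped the angle would be flagged as a state inconsistency.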

Regular Paper
Predicted Robustness as QoS for Deep Neural Network Models
Journal of Computer Science and Technology 2020, 35 (5): 999-1015
Published: 30 September 2020
Abstract

The adoption of deep neural network (DNN) models as integral parts of real-world software systems necessitates explicit consideration of their quality of service (QoS). It is well known that DNN models are prone to adversarial attacks, so it is vitally important to know how robust a model’s prediction is for a given input instance. A fragile prediction, even one made with high confidence, is not trustworthy in light of possible adversarial attacks. We propose that DNN models should produce a robustness value, alongside the confidence value, as an additional QoS indicator for each prediction they make. Existing approaches compute robustness through adversarial searching, which is usually too expensive to be exercised in real time. In this paper, we propose to predict, rather than compute, the robustness measure for each input instance. Specifically, our approach inspects the outputs of the neurons of the target model and trains another DNN model to predict the robustness. We focus on convolutional neural network (CNN) models in the current research. Experiments show that our approach is accurate, with only 10%–34% additional errors compared with the offline heavy-weight robustness analysis, and that it significantly outperforms some alternative methods. We further validate the effectiveness of the approach when it is applied to detect adversarial attacks and out-of-distribution inputs. Our approach demonstrates performance better than, or at least comparable to, that of state-of-the-art techniques.

Editorial Notes
Preface
Journal of Computer Science and Technology 2019, 34 (5): 939-941
Published: 06 September 2019