Regular Paper Issue
Multi-Feature Fusion Based Structural Deep Neural Network for Predicting Answer Time on Stack Overflow
Journal of Computer Science and Technology 2023, 38 (3): 582-599
Published: 30 May 2023
Abstract

Stack Overflow provides a platform for developers to seek suitable solutions by asking questions and receiving answers on various topics. However, many questions are not answered quickly enough. Since questioners are eager to know when their questions will be answered, it is important for Stack Overflow to provide feedback on the expected answer time of a question. To address this issue, we propose a model for predicting the answer time of questions, named the Predicting Answer Time (PAT) model, which consists of two parts: a feature acquisition and fusion model, and a deep neural network model. The framework uses a variety of features mined from questions on Stack Overflow, including the question description, question title, question tags, the creation time of the question, and other temporal features. These features are fused and fed into the deep neural network to predict the answer time of the question. As a case study, post data from Stack Overflow are used to assess the model. We use traditional regression algorithms as the baselines, such as Linear Regression, K-Nearest Neighbors Regression, Support Vector Regression, Multilayer Perceptron Regression, and Random Forest Regression. Experimental results show that the PAT model predicts the answer time of questions more accurately than the traditional regression algorithms, reducing the prediction error by nearly 10 hours.
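The abstract describes fusing several per-question feature vectors (title, description, tags, temporal features) and feeding the result to a deep neural network that regresses answer time. The paper does not give the architecture; the following is a minimal numpy sketch of the fusion-then-regression idea, in which all dimensions, weights, and the single-hidden-layer network are hypothetical placeholders, not the PAT model itself.

```python
import numpy as np

rng = np.random.default_rng(0)

def fuse_features(title_vec, body_vec, tag_vec, temporal_vec):
    """Concatenate per-question feature vectors into one fused input vector."""
    return np.concatenate([title_vec, body_vec, tag_vec, temporal_vec])

def mlp_predict(x, w1, b1, w2, b2):
    """Toy single-hidden-layer regressor: ReLU hidden layer, linear output."""
    h = np.maximum(0.0, x @ w1 + b1)   # hidden-layer activations
    return float(h @ w2 + b2)          # predicted answer time (e.g., in hours)

# Hypothetical feature dimensions: 8-d title, 16-d description, 4-d tags,
# 3-d temporal (e.g., hour of day, day of week, question age).
x = fuse_features(rng.normal(size=8), rng.normal(size=16),
                  rng.normal(size=4), rng.normal(size=3))
w1 = rng.normal(scale=0.1, size=(31, 12)); b1 = np.zeros(12)
w2 = rng.normal(scale=0.1, size=12);       b2 = 0.0
print(mlp_predict(x, w1, b1, w2, b2))
```

In a trained model the weights would of course be learned from historical question/answer pairs; the sketch only shows how heterogeneous features collapse into one vector before the network.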

Regular Paper Issue
Optimization of Web Service Testing Task Assignment in Crowdtesting Environment
Journal of Computer Science and Technology 2023, 38 (2): 455-470
Published: 30 March 2023
Abstract

Crowdtesting has emerged as an attractive and economical testing paradigm that features testers from different countries, with various backgrounds and working conditions. Recent developments in crowdtesting suggest that it is feasible to manage test populations and processes, but these often fall outside the scope of standard testing theory. This paper explores how to allocate service-testing tasks to suitable testers in an ever-changing crowdsourcing environment. We formalize this as an optimization problem whose objective is to ensure the testing quality of the crowd while accounting for influencing factors such as testers' knowledge capability, rewards, network connections, geography, and the skills required. To solve the proposed problem, we design a task assignment algorithm based on the Differential Evolution (DE) algorithm. Extensive experiments are conducted to evaluate the efficiency and effectiveness of the proposed algorithm on real and synthetic data, and the results show better performance than other heuristic-based algorithms.
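The abstract casts task assignment as an optimization problem solved with Differential Evolution. The paper's encoding and objective are not given here; the sketch below is a standard DE/rand/1/bin loop over a hypothetical setup in which a continuous vector is decoded into one tester per task and the objective is a made-up per-(task, tester) suitability score standing in for factors like skill match, reward, and location.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical suitability matrix: score of tester j on task i.
n_tasks, n_testers = 6, 4
suitability = rng.random((n_tasks, n_testers))

def decode(v):
    """Map a continuous DE vector to one tester index per task."""
    return np.clip(v, 0, n_testers - 1e-9).astype(int)

def fitness(v):
    a = decode(v)
    return suitability[np.arange(n_tasks), a].sum()

def de_assign(pop_size=20, gens=100, F=0.5, CR=0.9):
    """Maximize total suitability with DE/rand/1/bin."""
    pop = rng.uniform(0, n_testers, (pop_size, n_tasks))
    fit = np.array([fitness(p) for p in pop])
    for _ in range(gens):
        for i in range(pop_size):
            idx = [j for j in range(pop_size) if j != i]
            a, b, c = pop[rng.choice(idx, 3, replace=False)]
            mutant = a + F * (b - c)                 # DE/rand/1 mutation
            cross = rng.random(n_tasks) < CR         # binomial crossover mask
            cross[rng.integers(n_tasks)] = True      # keep >= 1 mutant gene
            trial = np.where(cross, mutant, pop[i])
            if fitness(trial) >= fit[i]:             # greedy selection
                pop[i], fit[i] = trial, fitness(trial)
    best = fit.argmax()
    return decode(pop[best]), fit[best]

assignment, score = de_assign()
print(assignment, score)
```

A real crowdtesting objective would add constraints (tester capacity, budget, required skills), typically handled by penalty terms in the fitness function rather than by changing the DE loop itself.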
