Regular Paper

Optimization of Web Service Testing Task Assignment in Crowdtesting Environment

Information Science and Technology College, Dalian Maritime University, Dalian 116026, China

Abstract

Crowdtesting has emerged as an attractive and economical testing paradigm that draws on testers from different countries, with diverse backgrounds and working conditions. Recent developments in crowdsourced testing suggest that it is feasible to manage test populations and processes, but these often fall outside the scope of standard testing theory. This paper explores how to allocate service-testing tasks to suitable testers in an ever-changing crowdsourcing environment. We formalize the allocation as an optimization problem whose objective is to ensure the testing quality of the crowd, while considering influencing factors such as knowledge capability, rewards, network connections, geography, and the required skills. To solve the proposed problem, we design a task assignment algorithm based on the Differential Evolution (DE) algorithm. Extensive experiments on real and synthetic data evaluate the efficiency and effectiveness of the proposed algorithm, and the results show better performance than other heuristic-based algorithms.
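
To illustrate the general idea of DE-based task assignment, the sketch below evolves a population of real-valued vectors with one gene per task and decodes each gene to a worker index; its fitness is simply the sum of a synthetic suitability score per (task, worker) pair. This is a minimal sketch under stated assumptions, not the paper's formulation: the score matrix is random placeholder data, and the population size, generations, and F/CR parameters are illustrative defaults; the actual model additionally accounts for rewards, network connections, geography, and skills.

```python
import numpy as np

rng = np.random.default_rng(seed=0)
n_tasks, n_workers = 20, 8
# Placeholder suitability scores; score[t, w] stands in for how well worker w
# matches task t (knowledge, reward, location, and skills would feed this in practice).
score = rng.random((n_tasks, n_workers))

def decode(x):
    """Map each real-valued gene in [0, n_workers) to a discrete worker index."""
    return np.clip(x, 0, n_workers - 1e-9).astype(int)

def fitness(x):
    """Total suitability of the assignment encoded by vector x."""
    return score[np.arange(n_tasks), decode(x)].sum()

def de_assign(pop_size=40, generations=200, F=0.5, CR=0.9):
    pop = rng.uniform(0, n_workers, size=(pop_size, n_tasks))
    fit = np.array([fitness(ind) for ind in pop])
    for _ in range(generations):
        for i in range(pop_size):
            # DE/rand/1 mutation from three distinct individuals other than i.
            a, b, c = rng.choice([j for j in range(pop_size) if j != i], 3, replace=False)
            mutant = pop[a] + F * (pop[b] - pop[c])
            # Binomial crossover; force at least one gene from the mutant.
            mask = rng.random(n_tasks) < CR
            mask[rng.integers(n_tasks)] = True
            trial = np.clip(np.where(mask, mutant, pop[i]), 0, n_workers - 1e-9)
            # Greedy selection keeps the better of parent and trial.
            f_trial = fitness(trial)
            if f_trial >= fit[i]:
                pop[i], fit[i] = trial, f_trial
    best = int(np.argmax(fit))
    return decode(pop[best]), fit[best]

assignment, total = de_assign()
print("task -> worker:", assignment)
print("total suitability:", round(float(total), 3))
```

Encoding assignments as continuous vectors and decoding them per evaluation is one common way to apply DE to a discrete assignment problem; a constraint-aware decoder or repair step would be needed if tasks had capacity or skill constraints.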

Electronic Supplementary Material

Download File(s)
JCST-2007-10824-Highlights.pdf (147 KB)

Journal of Computer Science and Technology
Pages 455-470
Cite this article:
Tang W-J, Chen R, Zhang J-L, et al. Optimization of Web Service Testing Task Assignment in Crowdtesting Environment. Journal of Computer Science and Technology, 2023, 38(2): 455-470. https://doi.org/10.1007/s11390-022-0824-7

Received: 02 July 2020
Accepted: 23 December 2022
Published: 30 March 2023
© Institute of Computing Technology, Chinese Academy of Sciences 2023