Regular Paper

CA-DTS: A Distributed and Collaborative Task Scheduling Algorithm for Edge Computing Enabled Intelligent Road Network

Key Laboratory of Water Big Data Technology of Ministry of Water Resources, Hohai University, Nanjing 210098, China
School of Computer and Information, Hohai University, Nanjing 210098, China
School of Information Science and Technology, Southwest Jiaotong University, Chengdu 611756, China
School of Artificial Intelligence and Computer Science, Jiangnan University, Wuxi 214122, China
Department of Computer and Information Sciences, University of Delaware, Newark, DE 19716, U.S.A.
National Key Laboratory for Novel Software Technology, Nanjing University, Nanjing 210093, China
An erratum to this article is available online.

Abstract

Edge computing enabled Intelligent Road Network (EC-IRN) provides powerful and convenient computing services for vehicles and roadside sensing devices. The continuous emergence of transportation applications places a heavy burden on the roadside units (RSUs) equipped with edge servers in the Intelligent Road Network (IRN). Collaborative task scheduling among RSUs is an effective way to relieve this burden, but achieving such collaboration in a completely decentralized environment is challenging. In this paper, we first model the interactions involved in task scheduling among distributed RSUs as a Markov game. Since multi-agent deep reinforcement learning (MADRL) is a promising approach to decision optimization in Markov games, we propose CA-DTS, an MADRL-based collaborative task scheduling algorithm for EC-IRN that aims to minimize the long-term average task delay. To reduce the training cost incurred by trial-and-error exploration, CA-DTS employs a specially designed reward function and adopts the distributed deployment and collective training architecture of the counterfactual multi-agent policy gradient (COMA) method. To improve performance stability in large-scale environments, CA-DTS leverages the action semantics network (ASN) to facilitate cooperation among multiple RSUs. Evaluation results from both a testbed and simulations demonstrate the effectiveness of the proposed algorithm: compared with the baselines, CA-DTS converges about 35% faster and achieves an average task delay that is lower by approximately 9.4%, 9.8%, and 6.7% in scenarios with varying numbers of RSUs, service types, and task arrival rates, respectively.
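
The abstract names two building blocks, COMA and ASN, without unpacking them. As a reading aid, the following minimal sketch illustrates the core idea behind COMA, the counterfactual baseline, for a single agent (e.g., one RSU choosing among candidate scheduling decisions). It is an illustrative reconstruction in plain NumPy, not the authors' implementation; the function name, array shapes, and toy numbers are assumptions made purely for exposition.

import numpy as np

def counterfactual_advantage(q_values, policy, chosen_action):
    # q_values:      critic estimates Q(s, (u_-a, u'_a)) for each action u'_a
    #                this agent could take, the other agents' actions held fixed.
    # policy:        the agent's current policy pi_a(u'_a | tau_a) over those actions.
    # chosen_action: index of the action the agent actually took.
    #
    # Counterfactual baseline: the expected Q-value obtained by marginalising
    # out only this agent's own action under its current policy.
    baseline = float(np.dot(policy, q_values))
    # Advantage of the action actually taken, relative to that baseline.
    return float(q_values[chosen_action]) - baseline

# Toy usage: one RSU choosing among 3 candidate scheduling decisions.
q = np.array([1.2, 0.4, 0.9])    # centralized critic's Q-estimates
pi = np.array([0.5, 0.2, 0.3])   # the RSU's policy over its own actions
print(counterfactual_advantage(q, pi, chosen_action=0))  # ~0.25

Because the baseline varies only the agent's own action, the resulting advantage credits each RSU for its individual contribution to the joint return, which is what allows many RSUs to be trained collectively from a shared critic while remaining deployed in a distributed fashion.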

Electronic Supplementary Material

JCST-2209-12839-Highlights.pdf (1.2 MB)


Journal of Computer Science and Technology
Pages 1113-1131
Cite this article:
Hu S-H, Luo Q-Y, Li G-H, et al. CA-DTS: A Distributed and Collaborative Task Scheduling Algorithm for Edge Computing Enabled Intelligent Road Network. Journal of Computer Science and Technology, 2023, 38(5): 1113-1131. https://doi.org/10.1007/s11390-023-2839-0


Received: 19 September 2022
Accepted: 25 May 2023
Published: 30 September 2023
© Institute of Computing Technology, Chinese Academy of Sciences 2023