Open Access

Reinforcement Learning-Driven Intelligent Truck Dispatching Algorithms for Freeway Logistics

China Telecom Research Institute, Beijing 102209, China
Department of Automation, Beijing National Research Center for Information Science and Technology, Tsinghua University, Beijing 100084, China
School of Civil Engineering and Transportation, South China University of Technology, Guangzhou 510640, China
Faculty of Transportation Engineering, Kunming University of Science and Technology, Kunming 650500, China

Abstract

Freeway logistics plays a pivotal role in economic development. Although rapid advances in big data and artificial intelligence are moving long-haul freeway logistics toward informatization and intellectualization, the transportation of bulk commodities still faces serious challenges arising from dispersed freight demands and a lack of coordination among operators. This study therefore proposes intelligent truck dispatching algorithms for freeway logistics. Specifically, our contributions include mathematical models for the full-truckload (FTL) and less-than-truckload (LTL) transportation modes, and reinforcement learning with deep Q-networks tailored to each mode to improve decision-making in order acceptance and truck repositioning. Simulation experiments based on real-world freeway logistics data collected in Guiyang, China show that our algorithms improve operational profitability substantially, with revenue increases of 76% and 30% for the FTL and LTL modes, respectively, compared with single-stage optimization. These results demonstrate the potential of reinforcement learning to transform freeway logistics and lay a foundation for future research on intelligent logistics systems.
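To illustrate the order-acceptance decision the abstract describes, the toy sketch below trains a Q-learning agent on a hypothetical three-node freeway corridor where each order's profit is an invented number. The paper itself uses deep Q-networks; a tabular Q function is substituted here purely to keep the sketch self-contained and dependency-free, so this shows the decision structure (accept vs. reject an offered haul), not the authors' actual model.

```python
import random

# Hypothetical stand-in for the truck-dispatching MDP (all numbers invented).
# States: the truck's current node on a 3-node freeway corridor.
# Actions: 0 = reject the offered order (idle), 1 = accept and haul it.

NODES = 3
PROFIT = {(0, 1): 5.0, (1, 2): 4.0, (2, 0): 6.0}  # revenue per accepted haul
IDLE_COST = 1.0   # cost of waiting empty after a rejection
ALPHA, GAMMA, EPS = 0.1, 0.9, 0.1

def step(node, action):
    """One transition: accepting moves the truck and pays the haul profit."""
    dest = (node + 1) % NODES          # in this toy, orders go to the next node
    if action == 1:
        return dest, PROFIT[(node, dest)]
    return node, -IDLE_COST

def train(episodes=2000, seed=0):
    rng = random.Random(seed)
    q = [[0.0, 0.0] for _ in range(NODES)]  # q[state][action]
    for _ in range(episodes):
        node = rng.randrange(NODES)
        for _ in range(10):  # short episode horizon
            # epsilon-greedy action selection
            a = rng.randrange(2) if rng.random() < EPS else int(q[node][1] > q[node][0])
            nxt, r = step(node, a)
            # Q-learning update toward the target r + gamma * max_a' Q(s', a')
            q[node][a] += ALPHA * (r + GAMMA * max(q[nxt]) - q[node][a])
            node = nxt
    return q

q = train()
policy = [int(qa[1] > qa[0]) for qa in q]
print(policy)  # every order in this toy is profitable, so the policy accepts at each node
```

In the paper's setting the state would also carry order attributes and fleet status, and the tabular `q` would be replaced by a deep Q-network; the update rule, however, has the same shape.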

Complex System Modeling and Simulation
Pages 368-386
Cite this article:
Jing X, Pei X, Xu P, et al. Reinforcement Learning-Driven Intelligent Truck Dispatching Algorithms for Freeway Logistics. Complex System Modeling and Simulation, 2024, 4(4): 368-386. https://doi.org/10.23919/CSMS.2024.0016


Received: 04 February 2024
Revised: 27 May 2024
Accepted: 21 June 2024
Published: 30 December 2024
© The author(s) 2024.

The articles published in this open access journal are distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/).
