The construction industry requires complex interactions among multiple agents (e.g., workers and robots) for efficient task execution. In this paper, we present a multiagent reinforcement learning (RL) framework for robot control in construction tasks. The framework builds on proximal policy optimization (PPO) and extends it to a multiagent variant that enables robots to acquire sophisticated control policies. We evaluated the effectiveness of the framework on four collaborative construction tasks. The results revealed an efficient collaboration mechanism between agents and demonstrated that our approach enables multiple robots to learn and adapt their behaviors in complex and dynamic construction tasks while effectively avoiding collisions. The results also showed the advantage of combining RL with inverse kinematics (IK) to achieve precise installation. These findings contribute to the advancement of multiagent RL in construction robotics: by enabling robots to collaborate effectively, they pave the way for more efficient, flexible, and intelligent construction processes.
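To make the algorithmic setting concrete, the listing below gives a minimal sketch of the kind of multiagent PPO update summarized above: each robot keeps its own actor network while a centralized critic evaluates the joint observation, and every actor is trained with the standard clipped surrogate objective. This sketch is illustrative only and is not the authors' implementation; the network sizes, clip ratio, learning rate, and the dummy dimensions N_AGENTS, OBS_DIM, and ACT_DIM are assumptions introduced here, and the IK-based installation step reported in the paper is not shown.

import torch
import torch.nn as nn

N_AGENTS, OBS_DIM, ACT_DIM, CLIP_EPS = 2, 12, 6, 0.2  # illustrative values, not from the paper

def mlp(in_dim, out_dim):
    # Small two-layer network used for both the actors and the critic in this sketch.
    return nn.Sequential(nn.Linear(in_dim, 64), nn.Tanh(), nn.Linear(64, out_dim))

actors = [mlp(OBS_DIM, ACT_DIM) for _ in range(N_AGENTS)]  # one policy per robot
critic = mlp(N_AGENTS * OBS_DIM, 1)                        # centralized value function over the joint observation
log_std = torch.zeros(ACT_DIM, requires_grad=True)         # shared exploration noise for the Gaussian policies
params = [p for a in actors for p in a.parameters()] + list(critic.parameters()) + [log_std]
optimizer = torch.optim.Adam(params, lr=3e-4)

def ppo_update(obs, acts, old_logp, advantages, returns):
    # One clipped-surrogate update over a batch of joint transitions.
    # obs: (batch, n_agents, obs_dim), acts: (batch, n_agents, act_dim),
    # old_logp and advantages: (batch, n_agents), returns: (batch,).
    policy_loss = 0.0
    for i, actor in enumerate(actors):
        dist = torch.distributions.Normal(actor(obs[:, i]), log_std.exp())
        logp = dist.log_prob(acts[:, i]).sum(-1)
        ratio = torch.exp(logp - old_logp[:, i])            # importance ratio pi_new / pi_old
        clipped = torch.clamp(ratio, 1 - CLIP_EPS, 1 + CLIP_EPS)
        policy_loss += -torch.min(ratio * advantages[:, i],
                                  clipped * advantages[:, i]).mean()
    value = critic(obs.flatten(1)).squeeze(-1)              # critic sees all agents' observations
    value_loss = ((value - returns) ** 2).mean()
    loss = policy_loss + 0.5 * value_loss
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()

# Dummy batch illustrating the expected shapes.
B = 32
ppo_update(torch.randn(B, N_AGENTS, OBS_DIM), torch.randn(B, N_AGENTS, ACT_DIM),
           torch.randn(B, N_AGENTS), torch.randn(B, N_AGENTS), torch.randn(B))

In a full pipeline, old_logp, advantages, and returns would be computed from trajectories collected in the construction simulator (e.g., with generalized advantage estimation), and, as noted above, an IK solver can be combined with the learned policies for precise installation; both components are outside this sketch.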