Robotic assembly is widely utilized in large-scale manufacturing due to its high production efficiency, and peg-in-hole assembly is a typical operation. While reinforcement learning (RL) methods have achieved strong performance on single peg-in-hole tasks, multiple peg-in-hole assembly remains challenging due to complex geometric and physical constraints. To address this, we introduce a control policy workflow for multiple peg-in-hole assembly that divides the task into three primitive sub-tasks (picking, alignment, and insertion) to modularize the long-horizon task and improve sample efficiency. A sequential control policy (SeqPolicy), comprising three control policies, executes all sub-tasks step by step. This approach introduces human knowledge to manage intermediate states, such as lifting height and alignment direction, thereby enabling flexible deployment across various scenarios. SeqPolicy demonstrated higher training efficiency, with faster convergence and a higher success rate, than a single monolithic control policy. Its adaptability is confirmed through generalization experiments involving objects with varying geometries. Recognizing the importance of object pose for the control policies, we propose a low-cost and adaptable method that extracts a visual representation containing object pose information from RGB images and uses it to estimate object poses directly in the robot base frame in working scenarios. The representation is extracted by a Siamese-CNN network trained with self-supervised contrastive learning. Using this representation, the alignment sub-task is successfully executed. These experiments validate the solution's reusability and adaptability in multiple peg-in-hole scenarios.
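The SeqPolicy workflow described above can be sketched as a fixed pipeline of three sub-task policies that pass intermediate state (such as lifting height) from stage to stage. The class and interface below are illustrative assumptions for exposition, not the authors' implementation; the placeholder lambdas stand in for trained RL controllers.

```python
# Hypothetical sketch of the SeqPolicy workflow: the long-horizon task is
# split into three primitive sub-tasks (picking, alignment, insertion),
# each handled by its own control policy. Names and interfaces here are
# illustrative assumptions only.
from typing import Callable, Dict, List


class SeqPolicy:
    """Runs three sub-task policies step by step, threading intermediate
    state (e.g., lifting height, alignment direction) between stages."""

    ORDER: List[str] = ["pick", "align", "insert"]

    def __init__(self, policies: Dict[str, Callable[[dict], dict]]):
        self.policies = policies

    def run(self, state: dict) -> dict:
        for name in self.ORDER:
            # Each sub-policy consumes the current state and returns an
            # updated state, allowing human-specified intermediate goals
            # to be injected between stages.
            state = self.policies[name](state)
        return state


# Placeholder sub-policies standing in for trained RL controllers.
policies = {
    "pick": lambda s: {**s, "lifting_height": 0.10},
    "align": lambda s: {**s, "aligned": True},
    "insert": lambda s: {**s, "inserted": True},
}

final_state = SeqPolicy(policies).run({})
print(final_state)  # → {'lifting_height': 0.1, 'aligned': True, 'inserted': True}
```

Modularizing the task this way mirrors the sample-efficiency argument in the abstract: each sub-policy is trained on a short-horizon sub-task, and the fixed execution order encodes human knowledge about the assembly sequence.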
The articles published in this open access journal are distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/).