Full Length Article | Open Access

Feature-aided pose estimation approach based on variational auto-encoder structure for spacecrafts

Yanfang LIU a,b, Rui ZHOU a, Desong DU a, Shuqing CAO c,d, Naiming QI a,b
a Department of Aerospace Engineering, Harbin Institute of Technology, Harbin 150001, China
b Suzhou Research Institute of HIT, Suzhou 215104, China
c Shanghai Institute of Spaceflight Control Technology, Shanghai 201109, China
d Shanghai Key Laboratory of Aerospace Intelligent Control Technology, Shanghai 201109, China

Peer review under responsibility of Editorial Committee of CJA.

Abstract

Real-time 6 Degree-of-Freedom (DoF) pose estimation is of paramount importance for various on-orbit tasks. Benefiting from the development of deep learning, Convolutional Neural Networks (CNNs) have yielded impressive feature-extraction results for spacecraft pose estimation. To improve the robustness and interpretability of CNNs, this paper proposes a Pose Estimation approach based on a Variational Auto-Encoder structure (PE-VAE) and a Feature-Aided pose estimation approach based on a Variational Auto-Encoder structure (FA-VAE), both of which aim to accurately estimate the 6 DoF pose of a target spacecraft. Both methods treat the pose vector as latent variables and employ an encoder-decoder network with a Variational Auto-Encoder (VAE) structure. To enhance the precision of pose estimation, PE-VAE uses the VAE structure to introduce a reconstruction mechanism over the whole image. FA-VAE goes further and enforces feature shape constraints by reconstructing only the segment of the target spacecraft with the desired shape. Comparative evaluation against leading methods on public datasets reveals comparable accuracy with a threefold improvement in processing speed, showcasing the significant contribution of VAE structures to accuracy enhancement and the additional benefit of incorporating global shape prior features.
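To make the encoder-decoder idea in the abstract concrete, below is a minimal PyTorch sketch of a VAE-style pose estimator in the spirit of PE-VAE and FA-VAE. All names (PoseVAE, loss_fn), layer sizes, and loss weights are illustrative assumptions rather than the authors' implementation; the only elements taken from the abstract are that the pose vector acts as the latent variables and that the decoder reconstructs either the whole image (PE-VAE) or only the segmented spacecraft shape (FA-VAE).

```python
# Hypothetical sketch, not the authors' code: a CNN encoder regresses a
# 7-dimensional pose latent (3 translation + 4 quaternion), and a decoder
# reconstructs the input from that latent, as in a standard VAE.
import torch
import torch.nn as nn
import torch.nn.functional as F


class PoseVAE(nn.Module):
    def __init__(self, img_size=128):
        super().__init__()
        self.encoder = nn.Sequential(      # grayscale image -> feature vector
            nn.Conv2d(1, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 4, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(64, 128, 4, stride=2, padding=1), nn.ReLU(),
            nn.Flatten(),
        )
        feat = 128 * (img_size // 8) ** 2
        self.mu = nn.Linear(feat, 7)       # pose mean: [t_x, t_y, t_z, q_w, q_x, q_y, q_z]
        self.logvar = nn.Linear(feat, 7)   # pose log-variance
        self.decoder = nn.Sequential(      # pose latent -> reconstructed image
            nn.Linear(7, feat), nn.ReLU(),
            nn.Unflatten(1, (128, img_size // 8, img_size // 8)),
            nn.ConvTranspose2d(128, 64, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(32, 1, 4, stride=2, padding=1), nn.Sigmoid(),
        )

    def forward(self, x):
        h = self.encoder(x)
        mu, logvar = self.mu(h), self.logvar(h)
        z = mu + torch.exp(0.5 * logvar) * torch.randn_like(mu)  # reparameterization trick
        recon = self.decoder(z)
        # Re-normalize the quaternion half of the pose estimate.
        pose = torch.cat([mu[:, :3], F.normalize(mu[:, 3:], dim=1)], dim=1)
        return pose, recon, mu, logvar


def loss_fn(pose, recon, target, gt_pose, mu, logvar, beta=1e-3):
    """Pose regression + reconstruction + KL regularization. For PE-VAE,
    `target` is the whole input image; for FA-VAE it would instead be the
    segmented spacecraft region, enforcing the shape constraint."""
    pose_loss = F.mse_loss(pose, gt_pose)
    recon_loss = F.mse_loss(recon, target)
    kl = -0.5 * torch.mean(1 + logvar - mu.pow(2) - logvar.exp())
    return pose_loss + recon_loss + beta * kl
```

In this sketch the reconstruction branch acts only as a training-time regularizer; at test time the encoder alone maps an image to a pose estimate.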

Cite this article:
LIU Y, ZHOU R, DU D, et al. Feature-aided pose estimation approach based on variational auto-encoder structure for spacecrafts. Chinese Journal of Aeronautics, 2024, 37(8): 329-341. https://doi.org/10.1016/j.cja.2024.03.017

Received: 02 September 2023
Revised: 07 October 2023
Accepted: 07 January 2024
Published: 20 March 2024
© 2024 Chinese Society of Aeronautics and Astronautics.

This is an open access article under the CC BY-NC-ND license (http://creativecommons.org/licenses/by-nc-nd/4.0/).
