In this study, a novel approach based on the U-Net deep neural network for image segmentation is leveraged for real-time extraction of tracklets from optical acquisitions. As in most machine learning (ML) applications, a working pipeline requires a series of steps: dataset creation, preprocessing, training, testing, and post-processing to refine the trained network output. Because ready-to-use datasets are rarely available online, an in-house application artificially generates 360 labeled images. Specifically, this software tool produces synthetic night-sky shots of objects transiting over a specified location, together with the corresponding labels: dual-tone pictures with black backgrounds and white tracklets. Both images and labels are then downscaled in resolution and normalized to accelerate the training phase. To assess the network performance, a set of both synthetic and real images was used as input. After the preprocessing phase, real images were further corrected for vignetting and background-brightness non-uniformity, and down-converted to eight bits. Once the network outputs a label, post-processing identifies the right ascension and declination of the object's centroid. The average processing time per real image is less than 1.2 s; bright tracklets are easily detected, with a mean centroid angular error of 0.25 deg in 75% of test cases with a 2 deg field-of-view telescope. These results show that an ML-based method is a valid choice for trail reconstruction, achieving acceptable accuracy with a fast image-processing pipeline.
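The preprocessing and post-processing steps described above can be sketched as follows. This is a minimal illustration, not the authors' implementation: the target resolution, the block-average downscaling, and the small-angle pixel-to-degree mapping (which ignores gnomonic projection effects and the conversion from frame-relative offsets to absolute right ascension/declination) are all assumptions made for the example.

```python
import numpy as np

def preprocess(image, target_shape=(256, 256)):
    """Downscale a grayscale frame by block averaging, down-convert to
    8 bits, and normalize to [0, 1]. The target shape is an assumption;
    the paper does not state the network input size here."""
    h, w = image.shape
    th, tw = target_shape
    # Crop to a multiple of the target shape, then block-average.
    img = image[: (h // th) * th, : (w // tw) * tw].astype(np.float64)
    img = img.reshape(th, h // th, tw, w // tw).mean(axis=(1, 3))
    # Down-convert to 8 bits (guard against an all-zero frame).
    peak = img.max() if img.max() > 0 else 1.0
    img8 = np.clip(img / peak * 255.0, 0, 255).astype(np.uint8)
    return img8.astype(np.float32) / 255.0

def tracklet_centroid(label, fov_deg=2.0):
    """Pixel centroid of the white tracklet pixels in a dual-tone label,
    mapped to an angular offset (deg) from the frame centre under a
    square-pixel, small-field approximation."""
    ys, xs = np.nonzero(label > 0.5)
    if xs.size == 0:
        return None  # no tracklet detected in this label
    cy, cx = ys.mean(), xs.mean()
    h, w = label.shape
    scale = fov_deg / w  # degrees per pixel
    return ((cx - w / 2) * scale, (cy - h / 2) * scale)
```

In a real pipeline the angular offset returned by `tracklet_centroid` would still have to be composed with the telescope pointing (and a proper plate solution) to obtain absolute right ascension and declination.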
Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made.
The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder.
To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/