Open Access

Few-Shot Object Detection via Dual-Domain Feature Fusion and Patch-Level Attention

State Key Laboratory of Multimodal Artificial Intelligence Systems, Institute of Automation, Chinese Academy of Sciences, Beijing 100190, China, and also with School of Artificial Intelligence, University of Chinese Academy of Sciences, Beijing 100049, China
Department of Advanced Manufacturing and Robotics, Peking University, Beijing 100871, China

Abstract

Few-shot object detection has received much attention for its ability to detect novel class objects using limited annotated data. Transfer learning-based solutions have become popular due to their simple training and good accuracy; however, it remains challenging to enrich feature diversity during training, and fine-grained features are insufficient for novel class detection. To address these problems, this paper proposes a novel few-shot object detection method based on dual-domain feature fusion and patch-level attention. On top of the original base domain, an elementary domain with more category-agnostic features is superposed to construct a two-stream backbone, which helps enrich feature diversity. To better integrate the various features, a dual-domain feature fusion is designed, in which feature pairs of the same size are complementarily fused to extract more discriminative features. Moreover, a patch-wise feature refinement, termed patch-level attention, is presented to mine the internal relations among patches, which enhances adaptability to novel classes. In addition, a weighted classification loss is introduced to assist fine-tuning of the classifier by combining extra features from the feature pyramid network (FPN) of the base training model. In this way, the few-shot detection quality for novel class objects is improved. Experiments on the PASCAL VOC and MS COCO datasets verify the effectiveness of the method.
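
To make the patch-level attention idea concrete, below is a minimal PyTorch sketch, not the paper's released code: a feature map is split into non-overlapping patches, self-attention is computed across the patches to mine their internal relations, and the refined patches are folded back into the map. The module name, patch size, and single-head design are illustrative assumptions.

```python
# A minimal sketch of patch-level attention as described in the abstract,
# assuming a PyTorch setting. Module name, patch size, and single-head
# design are illustrative assumptions, not the paper's exact implementation.
import torch
import torch.nn as nn


class PatchLevelAttention(nn.Module):
    def __init__(self, channels: int, patch_size: int = 4):
        super().__init__()
        self.patch_size = patch_size
        dim = channels * patch_size * patch_size  # flattened patch dimension
        self.qkv = nn.Linear(dim, dim * 3)        # joint query/key/value projection
        self.proj = nn.Linear(dim, dim)           # output projection

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b, c, h, w = x.shape
        p = self.patch_size
        # Split the (B, C, H, W) map into non-overlapping patches:
        # (B, N, C*p*p) with N = (H/p) * (W/p).
        patches = x.unfold(2, p, p).unfold(3, p, p)   # (B, C, H/p, W/p, p, p)
        patches = patches.permute(0, 2, 3, 1, 4, 5).reshape(b, -1, c * p * p)
        # Self-attention across patches mines their internal relations.
        q, k, v = self.qkv(patches).chunk(3, dim=-1)
        attn = torch.softmax(q @ k.transpose(1, 2) / q.shape[-1] ** 0.5, dim=-1)
        out = self.proj(attn @ v)                     # refined patches
        # Fold the patches back to (B, C, H, W) and add a residual connection.
        out = out.reshape(b, h // p, w // p, c, p, p).permute(0, 3, 1, 4, 2, 5)
        return out.reshape(b, c, h, w) + x


# Usage: same-shape refinement of a backbone feature map.
pla = PatchLevelAttention(channels=256, patch_size=4)
refined = pla(torch.randn(2, 256, 32, 32))  # -> (2, 256, 32, 32)
```

The residual connection lets the module act as a refinement step on top of existing backbone features, so the refined map can be dropped into a detection pipeline wherever the original feature map was consumed.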

Tsinghua Science and Technology
Pages 1237-1250
Cite this article:
Ren G, Liu J, Wang M, et al. Few-Shot Object Detection via Dual-Domain Feature Fusion and Patch-Level Attention. Tsinghua Science and Technology, 2025, 30(3): 1237-1250. https://doi.org/10.26599/TST.2024.9010031

Received: 13 November 2023
Revised: 23 January 2024
Accepted: 01 February 2024
Published: 30 December 2024
© The Author(s) 2025.

The articles published in this open access journal are distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/).
