



Detection and Diagnosis of Small Target Breast Masses Based on Convolutional Neural Networks

Ling Tan1, Ying Liang1, Jingming Xia2 (corresponding author), Hui Wu1, Jining Zhu1
1 School of Computer Science, Nanjing University of Information Science & Technology, Nanjing 210044, China
2 School of Artificial Intelligence, Nanjing University of Information Science & Technology, Nanjing 210044, China

Abstract

Breast mass identification is of great significance for the early screening of breast cancer, but existing detection methods suffer from high missed-detection and misdiagnosis rates for small masses. We propose a small-target breast mass detection network named Residual asymmetric dilated convolution-Cross layer attention-Mean standard deviation adaptive selection-You Only Look Once (RCM-YOLO). It improves the identifiability of small masses by increasing the resolution of the feature maps, adopts residual asymmetric dilated convolution to expand the receptive field while keeping the parameter count low, and introduces a cross-layer attention mechanism that transfers deep semantic information to the shallow layers as auxiliary information for locating key features. For training, we propose an adaptive positive sample selection algorithm that selects positive samples automatically based on the statistics of the intersection-over-union sets, ensuring the validity of the training samples and the detection accuracy of the model. To verify the performance of our model, we carried out experiments on public datasets. The results show that the mean Average Precision (mAP) of RCM-YOLO reaches 90.34%; compared with YOLOv5, the missed detection rate for small masses is reduced to 11% and the single-image detection time to 28 ms. Detection accuracy and speed are effectively improved by strengthening the feature expression of small masses and the relationships between features. Our method can assist doctors in the batch screening of breast images, significantly improving the detection rate of small masses and reducing misdiagnosis.

Keywords: deep learning, mammography diagnosis, mass detection, cross-layer attention, adaptive positive sample selection
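
To make the adaptive positive sample selection step more concrete, the following is a minimal NumPy sketch of the mean-plus-standard-deviation thresholding idea described in the abstract: for each ground-truth mass, the IoU threshold that separates positive from negative anchors is derived from the statistics of its candidate IoU set. The function name, the pre-selected candidate indices, and the toy numbers are illustrative assumptions, not the authors' implementation.

import numpy as np

def adaptive_positive_selection(ious, candidate_idx):
    # Sketch of mean-standard-deviation adaptive selection: the IoU threshold
    # for one ground-truth box is the mean plus one standard deviation of the
    # IoUs of its candidate anchors, so the cut-off adapts to each box.
    candidate_ious = ious[candidate_idx]
    threshold = candidate_ious.mean() + candidate_ious.std()
    # Anchors whose IoU reaches the adaptive threshold become positive samples.
    return candidate_idx[candidate_ious >= threshold], threshold

# Toy usage: IoUs of 8 anchors with one mass box; 5 anchors were pre-selected
# as candidates (hypothetical values, for illustration only).
ious = np.array([0.05, 0.62, 0.48, 0.10, 0.71, 0.33, 0.55, 0.02])
candidates = np.array([1, 2, 4, 5, 6])
positives, thr = adaptive_positive_selection(ious, candidates)
print(f"adaptive IoU threshold = {thr:.3f}, positive anchor indices = {positives}")

Tying the threshold to the mean and spread of the candidate IoUs, rather than fixing it globally, tends to retain enough positive samples for small masses whose anchors rarely reach a high fixed IoU.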


Publication history

Received: 11 April 2023
Revised: 26 July 2023
Accepted: 17 October 2023
Published: 02 May 2024
Issue date: October 2024

Copyright

© The Author(s) 2024.

Acknowledgements

This work was supported by the National Natural Science Foundation of China (No. 62271264), the National Key Research and Development Program of China (No. 2021ZD0102100), and the Industry University Research Foundation of Jiangsu Province (No. BY2022459).

Rights and permissions

The articles published in this open access journal are distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/).
