Review | Open Access

Explainable artificial intelligence in breast cancer detection and risk prediction: A systematic scoping review

Amirehsan Ghasemi 1,2, Soheil Hashtarkhani 1, David L. Schwartz 3, Arash Shaban‐Nejad 1,2 (corresponding author)
1 Department of Pediatrics, Center for Biomedical Informatics, College of Medicine, University of Tennessee Health Science Center, Memphis, Tennessee, USA
2 The Bredesen Center for Interdisciplinary Research and Graduate Education, University of Tennessee, Knoxville, Tennessee, USA
3 Department of Radiation Oncology, College of Medicine, University of Tennessee Health Science Center, Memphis, Tennessee, USA

Abstract

With advances in artificial intelligence (AI), data‐driven algorithms are becoming increasingly popular in the medical domain. However, because many of these algorithms behave in nonlinear and complex ways, clinicians cannot readily verify their decisions, and the process is regarded as a black box. The scientific community has therefore introduced explainable artificial intelligence (XAI) to remedy this problem. This systematic scoping review investigates the application of XAI to breast cancer detection and risk prediction. We conducted a comprehensive search of Scopus, IEEE Xplore, PubMed, and Google Scholar (first 50 citations) using a systematic search strategy. The search covered January 2017 to July 2023 and focused on peer‐reviewed studies implementing XAI methods on breast cancer datasets. Thirty studies met our inclusion criteria and were included in the analysis. The results revealed that SHapley Additive exPlanations (SHAP) is the most widely used model‐agnostic XAI technique in breast cancer research, applied to explain model predictions, the diagnosis and classification of biomarkers, and prognosis and survival analysis. Moreover, SHAP was primarily used to explain tree‐based ensemble machine learning models. The most common reason is that SHAP is model agnostic: it can explain the predictions of any model, which makes it both popular and broadly useful. It is also relatively easy to implement effectively and pairs well with high‐performing models such as tree‐based ensembles. Explainable AI improves the transparency, interpretability, fairness, and trustworthiness of AI‐enabled health systems and medical devices and, ultimately, the quality of care and outcomes.
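To make the SHAP-plus-tree-ensemble workflow described above concrete, the sketch below applies the shap Python library's TreeExplainer to a gradient-boosted ensemble trained on the public Breast Cancer Wisconsin (Diagnostic) dataset bundled with scikit-learn. This is a minimal illustration, not a reconstruction of any reviewed study; the dataset choice, model, and hyperparameters are assumptions made for demonstration.

```python
# Minimal sketch: explaining a tree-based ensemble with SHAP.
# Assumes `pip install scikit-learn shap`; all modeling choices here
# are illustrative, not taken from any study in this review.
import shap
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split

# Public diagnostic dataset: 569 tumors, 30 cell-nucleus features.
X, y = load_breast_cancer(return_X_y=True, as_frame=True)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=0, stratify=y
)

# A tree-based ensemble of the kind SHAP most often explained in the
# reviewed studies (hyperparameters left at arbitrary defaults).
model = GradientBoostingClassifier(random_state=0).fit(X_train, y_train)

# TreeExplainer computes exact Shapley values efficiently for tree models,
# attributing each prediction to individual input features.
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X_test)  # shape: (n_samples, n_features)

# Global summary: which features push predictions toward malignant/benign.
shap.summary_plot(shap_values, X_test)
```

Per-prediction views (e.g., shap.force_plot) follow the same pattern, which illustrates why the model-agnostic, low-overhead combination of SHAP and tree ensembles recurs throughout the reviewed literature.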


Cancer Innovation
Article number: e136
Cite this article:
Ghasemi A, Hashtarkhani S, Schwartz DL, et al. Explainable artificial intelligence in breast cancer detection and risk prediction: A systematic scoping review. Cancer Innovation, 2024, 3(5): e136. https://doi.org/10.1002/cai2.136