
Original Article | Open Access

Exploring the feasibility of integrating ultra‐high field magnetic resonance imaging neuroimaging with multimodal artificial intelligence for clinical diagnostics

Yifan Yuan1,2, Kaitao Chen3, Youjia Zhu4, Yang Yu5, Mintao Hu3, Ying‐Hua Chu6, Yi‐Cheng Hsu6, Jie Hu1,2, Qi Yue1,2 (corresponding author), Mianxin Liu3 (corresponding author)
1 Department of Neurosurgery, Huashan Hospital, Fudan University, Neurosurgical Institute of Fudan University, Shanghai, China
2 National Center for Neurological Disorders, Shanghai, China
3 Shanghai Artificial Intelligence Laboratory, Shanghai, China
4 Department of Intensive Care Medicine, Huashan Hospital, Fudan University, Shanghai, China
5 Department of Radiology, Shanghai United Family Hospital, Shanghai, China
6 MR Research Collaboration Team, Siemens Healthineers Ltd, Shanghai, China

Yifan Yuan, Kaitao Chen and Youjia Zhu contributed equally to this work and shared the co‐first authorship.

Graphical Abstract
Abstract

Background

The integration of 7 Tesla (7T) magnetic resonance imaging (MRI) with advanced multimodal artificial intelligence (AI) models represents a promising frontier in neuroimaging. The superior spatial resolution of 7T MRI provides detailed visualizations of brain structures, which are crucial for understanding complex central nervous system diseases and tumors. Concurrently, applying multimodal AI to medical images enables interactive imaging‐based diagnostic conversation.

Methods

In this paper, we systematically investigate the capacity and feasibility of applying an existing advanced multimodal AI model, ChatGPT‐4V, to 7T MRI in the context of brain tumors. First, we test whether ChatGPT‐4V has knowledge about 7T MRI and whether it can differentiate 7T MRI from 3T MRI. In addition, we explore whether ChatGPT‐4V can recognize different 7T MRI modalities and whether it can offer correct diagnoses of tumors based on single‐ or multiple‐modality 7T MRI.
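The evaluation described above amounts to sending image-plus-question prompts to a vision-capable chat model. As a minimal sketch only (the helper name, model identifier, and payload schema are assumptions based on the publicly documented OpenAI vision chat format, not code from this study), one such query could be assembled like this:

```python
import base64

def build_vision_query(image_bytes: bytes, question: str) -> dict:
    """Pair one MRI slice (as PNG bytes) with a diagnostic question
    in an OpenAI-style chat-completion request payload."""
    b64 = base64.b64encode(image_bytes).decode("utf-8")
    return {
        "model": "gpt-4-vision-preview",  # hypothetical model id
        "messages": [{
            "role": "user",
            "content": [
                {"type": "text", "text": question},
                {"type": "image_url",
                 "image_url": {"url": f"data:image/png;base64,{b64}"}},
            ],
        }],
    }

# Example: a field-strength differentiation prompt
payload = build_vision_query(
    b"\x89PNG...",  # placeholder bytes; a real slice would be loaded from file
    "Is this brain MRI acquired at 3T or 7T?",
)
```

The same payload shape would serve the modality-recognition and diagnosis questions, with only the prompt text and image changing between tasks.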

Results

ChatGPT‐4V exhibited an accuracy of 84.4% in 3T‐vs‐7T differentiation and 78.9% in 7T modality recognition. Meanwhile, in a human evaluation by three clinical experts, ChatGPT‐4V obtained average scores of 9.27/20 for single‐modality‐based diagnosis and 21.25/25 for multiple‐modality‐based diagnosis. Our study indicates that single‐modality diagnosis and the interpretability of diagnostic decisions should be enhanced before ChatGPT‐4V is applied to 7T data in clinical practice.
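The reported percentages are answer-level accuracies over the test questions. As a trivial illustration (not the authors' code), such a figure can be computed as the fraction of model answers matching the reference labels:

```python
def accuracy(predictions, labels):
    """Fraction of model answers that match the reference labels."""
    if len(predictions) != len(labels):
        raise ValueError("prediction/label counts must match")
    return sum(p == l for p, l in zip(predictions, labels)) / len(labels)

# E.g., 3 of 4 field-strength answers correct -> 0.75
print(accuracy(["7T", "3T", "7T", "7T"], ["7T", "3T", "3T", "7T"]))  # 0.75
```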

Conclusions

In general, our analysis suggests that such integration holds promise as a tool to improve diagnostic workflows in neurology, with a potentially transformative impact on medical image analysis and patient management.


iRADIOLOGY
Pages 498-509
Cite this article:
Yuan Y, Chen K, Zhu Y, et al. Exploring the feasibility of integrating ultra‐high field magnetic resonance imaging neuroimaging with multimodal artificial intelligence for clinical diagnostics. iRADIOLOGY, 2024, 2(5): 498-509. https://doi.org/10.1002/ird3.102