Automated diagnosis of chest X-rays is pivotal in radiology, aiming to alleviate the workload of radiologists. Traditional methods rely primarily on visual features or label dependence, which limits their ability to detect subtle or rare lesions. To address this, we present KEXNet, a knowledge-enhanced X-ray lesion detection model. KEXNet adopts a strategy akin to that of expert radiologists, integrating a knowledge graph built from expert annotations with an interpretable graph learning approach. It combines object detection with a graph neural network to enable precise local lesion detection. For global lesion detection, KEXNet fuses the knowledge-enhanced local features with global image features, improving diagnostic accuracy. Evaluations on three benchmark datasets show that KEXNet outperforms existing models, particularly in identifying small or infrequent lesions. Notably, on the Chest ImaGenome dataset, KEXNet's AUC for local lesion detection exceeds that of the state-of-the-art method AnaXNet by 8.9%, demonstrating its potential for automated chest X-ray diagnostics.
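To make the two-stage design described in the abstract more concrete, the following is a minimal PyTorch sketch, not the authors' implementation: region features from an off-the-shelf detector are refined by a graph neural network whose adjacency is assumed to come from an expert knowledge graph, and the pooled, knowledge-enhanced local features are concatenated with a global image embedding for image-level prediction. All module names, dimensions, and the concatenation-based fusion step are illustrative assumptions.

```python
import torch
import torch.nn as nn


class KnowledgeGNNLayer(nn.Module):
    """One message-passing step over anatomical-region features.

    `adj` is assumed to be a normalized adjacency matrix derived from an
    expert knowledge graph; the paper's exact graph learning scheme may differ.
    """

    def __init__(self, dim):
        super().__init__()
        self.linear = nn.Linear(dim, dim)

    def forward(self, region_feats, adj):
        # region_feats: (batch, num_regions, dim); adj: (num_regions, num_regions)
        messages = torch.einsum("rs,bsd->brd", adj, region_feats)
        return torch.relu(self.linear(messages))


class TwoStageLesionClassifier(nn.Module):
    """Illustrative sketch of local and global lesion prediction heads."""

    def __init__(self, dim=256, num_lesions=9):
        super().__init__()
        self.gnn = KnowledgeGNNLayer(dim)
        self.local_head = nn.Linear(dim, num_lesions)        # per-region lesion logits
        self.global_head = nn.Linear(2 * dim, num_lesions)   # fused image-level logits

    def forward(self, region_feats, adj, global_feat):
        # 1) Knowledge-enhanced local features via the graph layer.
        enhanced = self.gnn(region_feats, adj)
        local_logits = self.local_head(enhanced)              # (batch, regions, lesions)
        # 2) Fuse pooled local features with the global image feature.
        pooled = enhanced.mean(dim=1)                         # (batch, dim)
        fused = torch.cat([pooled, global_feat], dim=-1)
        global_logits = self.global_head(fused)               # (batch, lesions)
        return local_logits, global_logits
```

In this sketch the region features would come from a detector such as Faster R-CNN over anatomical regions, and the global feature from an image-level backbone; the actual feature extractors, graph construction, and fusion used by KEXNet should be taken from the paper itself.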