Open Access

KEXNet: A Knowledge-Enhanced Model for Improved Chest X-Ray Lesion Detection

School of Computer Science and Engineering and Hunan Provincial Key Lab on Bioinformatics, Central South University, Changsha 410083, China

Abstract

Automated diagnosis of chest X-rays is pivotal in radiology, aiming to alleviate the workload of radiologists. Traditional methods rely primarily on visual features or label dependence, which limits their ability to detect nuanced or rare lesions. To address this, we present KEXNet, a pioneering knowledge-enhanced X-ray lesion detection model. KEXNet employs a strategy akin to that of expert radiologists, integrating a knowledge graph built from expert annotations with an interpretable graph learning approach. This method combines object detection with a graph neural network, enabling precise local lesion detection. For global lesion detection, KEXNet fuses the knowledge-enhanced local features with global image features, further improving diagnostic accuracy. Our evaluations on three benchmark datasets demonstrate that KEXNet outperforms existing models, particularly in identifying small or infrequent lesions. Notably, on the Chest ImaGenome dataset, KEXNet's AUC for local lesion detection surpasses that of the state-of-the-art method AnaXNet by 8.9%, showcasing its potential to advance automated chest X-ray diagnostics.
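
To make the pipeline described in the abstract concrete, the following is a minimal, illustrative sketch in PyTorch of a KEXNet-style architecture, not the authors' implementation. It assumes an object detector supplies per-region features for anatomical regions, a small graph neural network propagates information over a knowledge-graph adjacency matrix (here a placeholder identity matrix) to produce knowledge-enhanced local features for per-region lesion prediction, and these are fused with a global image feature for image-level prediction. All module names, feature dimensions, and the number of findings below are assumptions for illustration.

# Hedged sketch of a knowledge-enhanced lesion detection head.
# Not the authors' code; dimensions, names, and the adjacency are placeholders.
import torch
import torch.nn as nn
import torch.nn.functional as F


class SimpleGraphLayer(nn.Module):
    """One graph-convolution step: mix each region's feature with its
    knowledge-graph neighbours (degree-normalized adjacency), then project."""
    def __init__(self, in_dim, out_dim):
        super().__init__()
        self.linear = nn.Linear(in_dim, out_dim)

    def forward(self, x, adj):
        # x: (num_regions, in_dim); adj: (num_regions, num_regions)
        deg = adj.sum(dim=1, keepdim=True).clamp(min=1.0)
        h = self.linear(adj @ x / deg)  # neighbourhood aggregation + projection
        return F.relu(h)


class KnowledgeEnhancedDetector(nn.Module):
    """Hypothetical KEXNet-style head: GNN-refined local features for
    per-region (local) lesion detection, fused with a global image feature
    for image-level (global) lesion detection."""
    def __init__(self, region_dim=1024, global_dim=2048, hidden=512, num_findings=9):
        super().__init__()
        self.gnn1 = SimpleGraphLayer(region_dim, hidden)
        self.gnn2 = SimpleGraphLayer(hidden, hidden)
        self.local_head = nn.Linear(hidden, num_findings)
        self.fusion = nn.Linear(hidden + global_dim, hidden)
        self.global_head = nn.Linear(hidden, num_findings)

    def forward(self, region_feats, global_feat, adj):
        # region_feats: (num_regions, region_dim) from an object detector
        # global_feat:  (global_dim,) from an image-level CNN backbone
        # adj:          (num_regions, num_regions) knowledge-graph adjacency
        h = self.gnn2(self.gnn1(region_feats, adj), adj)
        local_logits = self.local_head(h)            # per-region finding scores
        pooled = h.max(dim=0).values                 # aggregate local evidence
        fused = F.relu(self.fusion(torch.cat([pooled, global_feat])))
        global_logits = self.global_head(fused)      # image-level finding scores
        return local_logits, global_logits


if __name__ == "__main__":
    num_regions = 18                     # e.g., anatomical regions per radiograph
    adj = torch.eye(num_regions)         # placeholder knowledge-graph adjacency
    region_feats = torch.randn(num_regions, 1024)
    global_feat = torch.randn(2048)
    model = KnowledgeEnhancedDetector()
    local_logits, global_logits = model(region_feats, global_feat, adj)
    print(local_logits.shape, global_logits.shape)   # (18, 9) and (9,)

References [33] and [34] suggest Detectron2 and the Deep Graph Library as the likely detection and graph-learning tooling; the simple mean-aggregation layer above merely stands in for a full relational graph convolution such as R-GCN [6].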

References

[1]
X. Wang, Y. Peng, L. Lu, Z. Lu, M. Bagheri, and R. M. Summers, ChestX-ray8: Hospital-scale chest X-ray database and benchmarks on weakly-supervised classification and localization of common thorax diseases, in Proc. 2017 IEEE Conference on Computer Vision and Pattern Recognition, Honolulu, HI, USA, 2017, pp. 3462–3471.
[2]
A. E. W. Johnson, T. J. Pollard, S. J. Berkowitz, N. R. Greenbaum, M. P. Lungren, C. Y. Deng, R. G. Mark, and S. Horng, MIMIC-CXR, a de-identified publicly available database of chest radiographs with free-text reports, Scientific Data, vol. 6, no. 1, p. 317, 2019.
[3]
O. Russakovsky, J. Deng, H. Su, J. Krause, S. Satheesh, S. Ma, Z. Huang, A. Karpathy, A. Khosla, M. S. Bernstein, A. C. Berg, and F. F. Li, ImageNet large scale visual recognition challenge, International Journal of Computer Vision, vol. 115, no. 3, pp. 211–252, 2015.
[4]
J. T. Wu, N. Agu, I. Lourentzou, A. Sharma, J. A. Paguio, J. S. Yao, E. C. Dee, W. Mitchell, S. Kashyap, A. Giovannini, et al., Chest ImaGenome dataset for clinical reasoning, arXiv preprint arXiv:2108.00316, 2021.
[5]
S. Ren, K. He, R. B. Girshick, and J. Sun, Faster R-CNN: Towards real-time object detection with region proposal networks, IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 39, no. 6, pp. 1137–1149, 2017.
[6]
M. Schlichtkrull, T. Kipf, P. Bloem, R. van den Berg, I. Titov, and M. Welling, Modeling relational data with graph convolutional networks, in Proc. The Semantic Web — 15th International Conference, Heraklion, Greece, 2018, pp. 593–607.
[7]
J. Rubin, S. Parvaneh, A. Rahman, B. R. Conroy, and S. Babaeizadeh, Densely connected convolutional networks and signal quality analysis to detect atrial fibrillation using short single-lead ECG recordings, in Proc. Computing in Cardiology, Rennes, France, 2017, pp. 1–4.
[8]
H. Q. Nguyen, K. Lam, L. T. Le, H. Pham, D. Q. Tran, D. B. Nguyen, D. D. Le, C. M. Pham, H. Tong, D. H. Dinh, et al., VinDr-CXR: An open dataset of chest X-rays with radiologist's annotations, Scientific Data, vol. 9, no. 1, p. 429, 2022.
[9]
K. He, X. Zhang, S. Ren, and J. Sun, Deep residual learning for image recognition, in Proc. 2016 IEEE Conference on Computer Vision and Pattern Recognition, Las Vegas, NV, USA, 2016, pp. 770–778.
[10]
P. Rajpurkar, J. A. Irvin, K. Zhu, B. Yang, H. Mehta, T. Duan, D. Y. Ding, A. Bagul, C. Langlotz, K. S. Shpanskaya, et al., CheXNet: Radiologist-level pneumonia detection on chest X-rays with deep learning, arXiv preprint arXiv:1711.05225, 2017.
[11]
L. K. Singh, M. Khanna, D. Mansukhani, S. Thawkar, and R. Singh, Features fusion based novel approach for efficient blood vessel segmentation from fundus images, Multimedia Tools and Applications, vol. 83, pp. 55109–55145, 2024.
[12]
L. K. Singh, M. Khanna, S. Thawkar, and R. Singh, Deep-learning based system for effective and automatic blood vessel segmentation from retinal fundus images, Multimedia Tools and Applications, vol. 83, no. 2, pp. 6005–6049, 2024.
[13]
M. Khanna, L. K. Singh, S. Thawkar, and M. Goyal, PlaNet: A robust deep convolutional neural network model for plant leaves disease recognition, Multimedia Tools and Applications, vol. 83, no. 2, pp. 4465–4517, 2024.
[14]
X. Wang, Y. Peng, L. Lu, Z. Lu, and R. M. Summers, TieNet: Text-image embedding network for common thorax disease classification and reporting in chest X-rays, in Proc. 2018 IEEE Conference on Computer Vision and Pattern Recognition, Salt Lake City, UT, USA, 2018, pp. 9049–9058.
[15]
K. Xu, J. Ba, R. Kiros, K. Cho, A. C. Courville, R. Salakhutdinov, R. S. Zemel, and Y. Bengio, Show, attend and tell: Neural image caption generation with visual attention, in Proc. 32nd International Conference on Machine Learning, Lille, France, 2015, pp. 2048–2057.
[16]
B. Chen, J. Li, X. Guo, and G. Lu, DualCheXNet: Dual asymmetric feature learning for thoracic disease classification in chest X-rays, Biomedical Signal Processing and Control, vol. 53, p. 101554, 2019.
[17]
B. Chen, J. Li, G. Lu, H. Yu, and D. Zhang, Label co-occurrence learning with graph convolutional networks for multi-label chest X-ray image classification, IEEE Journal of Biomedical and Health Informatics, vol. 24, no. 8, pp. 2292–2302, 2020.
[18]
N. N. Agu, J. T. Wu, H. Chao, I. Lourentzou, A. Sharma, M. Moradi, P. Yan, and J. A. Hendler, AnaXNet: Anatomy aware multi-label finding classification in chest X-ray, in Proc. Medical Image Computing and Computer Assisted Intervention — MICCAI 2021, 24th International Conference, Strasbourg, France, 2021, pp. 804–813.
[19]
C. Wu, X. Zhang, Y. Zhang, Y. Wang, and W. Xie, MedKLIP: Medical knowledge enhanced language-image pre-training in radiology, arXiv preprint arXiv:2301.02228, 2023.
[20]
X. Zhang, C. Wu, Y. Zhang, Y. Wang, and W. Xie, Knowledge enhanced pre-training for auto-diagnosis of chest radiology images, arXiv preprint arXiv:2302.14042, 2023.
[21]
X. Xie, J. Niu, X. Liu, Z. Chen, S. Tang, and S. Yu, A survey on incorporating domain knowledge into deep learning for medical image analysis, Medical Image Analysis, vol. 69, p. 101985, 2021.
[22]
Q. Liao, Y. Ding, Z. L. Jiang, X. Wang, C. Zhang, and Q. Zhang, Multi-task deep convolutional neural network for cancer diagnosis, Neurocomputing, vol. 348, pp. 66–73, 2019.
[23]
T. Majtner, S. Yildirim-Yildirim, and J. Y. Hardeberg, Combining deep learning and hand-crafted features for skin lesion classification, in Proc. Sixth International Conference on Image Processing Theory, Tools and Applications, Oulu, Finland, 2016, pp. 1–6.
[24]
L. Li, M. Xu, X. Wang, L. Jiang, and H. Liu, Attention based glaucoma detection: A large-scale database and CNN model, in Proc. IEEE Conference on Computer Vision and Pattern Recognition, Long Beach, CA, USA, 2019, pp. 10563–10572.
[25]
H. Y. Zhou, X. Chen, Y. Zhang, R. Luo, L. Wang, and Y. Yu, Generalized radiograph representation learning via cross-supervision between images and free-text radiology reports, Nature Machine Intelligence, vol. 4, no. 1, pp. 32–40, 2022.
[26]
Z. Wang, J. Zhang, J. Feng, and Z. Chen, Knowledge graph and text jointly embedding, in Proc. 2014 Conference on Empirical Methods in Natural Language Processing, Doha, Qatar, 2014, pp. 1591–1601.
[27]
Y. Zhang, X. Wang, Z. Xu, Q. Yu, A. L. Yuille, and D. Xu, When radiology report generation meets knowledge graph, in Proc. Thirty-Fourth AAAI Conference on Artificial Intelligence, New York, NY, USA, 2020, pp. 12910–12917.
[28]
F. Liu, X. Wu, S. Ge, W. Fan, and Y. Zou, Exploring and distilling posterior and prior knowledge for radiology report generation, in Proc. IEEE Conference on Computer Vision and Pattern Recognition, Virtual Event, 2021, pp. 13748–13757.
[29]
S. Wang, L. Tang, M. Lin, G. L. Shih, Y. Ding, and Y. Peng, Prior knowledge enhances radiology report generation, in Proc. AMIA Annual Symposium, Virtual Event, 2022, pp. 486–495.
[30]
D. Hong, B. Zhang, H. Li, Y. Li, J. Yao, C. Li, and X. X. Zhu, Cross-city matters: A multimodal remote sensing benchmark dataset for cross-city semantic segmentation using high-resolution domain adaptation networks, Remote Sensing of Environment, vol. 299, p. 113856, 2023.
[31]
D. Hong, B. Zhang, X. Li, Y. Li, C. Li, J. Yao, and J. Chanussot, SpectralGPT: Spectral remote sensing foundation model, IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 46, no. 8, pp. 5227–5244, 2024.
[32]
D. Hong et al., Multimodal artificial intelligence foundation models: Unleashing the power of remote sensing big data in Earth observation, Innovation, vol. 2, no. 1, p. 100055, 2024.
[33]
Y. Wu, A. Kirillov, F. Massa, W. Lo, and R. Girshick, Detectron2, https://github.com/facebookresearch/detectron2, 2019.
[34]
M. Wang, D. Zheng, Z. Ye, Q. Gan, M. Li, X. Song, J. Zhou, C. Ma, L. Yu, Y. Gai, et al., Deep graph library: A graph-centric, highly-performant package for graph neural networks, arXiv preprint arXiv:1909.01315, 2019.
Big Data Mining and Analytics
Pages 1187-1198
Cite this article:
Yan Q, Duan J, Wang J. KEXNet: A Knowledge-Enhanced Model for Improved Chest X-Ray Lesion Detection. Big Data Mining and Analytics, 2024, 7(4): 1187-1198. https://doi.org/10.26599/BDMA.2024.9020045

Received: 26 January 2024
Revised: 20 May 2024
Accepted: 03 June 2024
Published: 04 December 2024
© The author(s) 2024.

The articles published in this open access journal are distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/).
