Automated diagnosis of chest X-rays is pivotal in radiology, aiming to alleviate the workload of radiologists. Traditional methods rely primarily on visual features or label dependence, which limits their ability to detect nuanced or rare lesions. To address this, we present KEXNet, a knowledge-enhanced X-ray lesion detection model. Mirroring the workflow of expert radiologists, KEXNet integrates a knowledge graph built from expert annotations with an interpretable graph learning approach, combining object detection with a graph neural network to enable precise local lesion detection. For global lesion detection, KEXNet fuses knowledge-enhanced local features with global image features, improving diagnostic accuracy. Evaluations on three benchmark datasets show that KEXNet outperforms existing models, particularly in identifying small or infrequent lesions. Notably, on the Chest ImaGenome dataset, KEXNet's AUC for local lesion detection exceeds that of the state-of-the-art method AnaXNet by 8.9%, demonstrating its potential for automated chest X-ray diagnostics.
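The local-global fusion described above can be illustrated with a minimal sketch: region features for anatomical areas are propagated along knowledge-graph edges (a simple GCN-style step standing in for KEXNet's graph neural network), pooled, and concatenated with a global image feature. All function names, shapes, and the single-layer propagation are illustrative assumptions, not the paper's actual implementation.

```python
import numpy as np

def gcn_propagate(region_feats, adj):
    # One GCN-style propagation step over anatomical regions (assumed):
    # add self-loops, row-normalize the adjacency, mix neighbor
    # features, and apply a ReLU nonlinearity.
    a = adj + np.eye(adj.shape[0])
    d_inv = 1.0 / a.sum(axis=1, keepdims=True)
    return np.maximum(d_inv * (a @ region_feats), 0.0)

def fuse_local_global(region_feats, adj, global_feat):
    # Knowledge-enhanced local features: propagate region features
    # along knowledge-graph edges, mean-pool over regions, then
    # concatenate with the global image feature for global detection.
    local = gcn_propagate(region_feats, adj).mean(axis=0)
    return np.concatenate([local, global_feat])
```

In a real pipeline the region features would come from an object detector's region crops and the classifier head would sit on top of the fused vector; this sketch only shows the feature-fusion step.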


Medical knowledge graphs (MKGs) are the basis of intelligent health care and have been used in a variety of intelligent medical applications. Understanding the research and application development of MKGs is therefore crucial for future work in the biomedical field. To this end, we offer an in-depth review of MKGs. We begin by examining four types of medical information sources, methodologies for knowledge graph construction, and six major themes in MKG development. We then discuss three popular reasoning models from the viewpoint of knowledge reasoning and propose a reasoning implementation path (RIP) to express reasoning procedures over MKGs. In addition, we explore intelligent medical applications based on RIP and MKGs and classify them into nine major types. Finally, we summarize the current state of MKG research based on more than 130 publications and outline future challenges and opportunities.

In general, physicians make a preliminary diagnosis based on patients' admission narratives and admission conditions, relying largely on their experience and professional knowledge. An automatic and accurate tentative diagnosis based on clinical narratives would be of great value to physicians, particularly where medical resources are scarce. Despite this value, little work has addressed this diagnostic task. In this study, we therefore propose a fusion model that integrates the semantic and symptom features contained in clinical text. The semantic features of the input text are first captured by an attention-based Bidirectional Long Short-Term Memory (BiLSTM) network. The symptom concepts recognized in the input text are then vectorized with the term frequency-inverse document frequency (TF-IDF) method, based on the relations between symptoms and diseases. Finally, two fusion strategies are used to recommend the most likely International Classification of Diseases (ICD) code. Model training and evaluation are performed on a public clinical dataset. Both fusion strategies achieve promising performance, with the best obtaining a top-3 accuracy of 0.7412.
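The symptom branch and the two fusion strategies can be sketched as follows. The TF-IDF vectorization, the early fusion (concatenating semantic and symptom features before a linear scorer) and the late fusion (averaging per-branch probability distributions) are generic versions of the techniques the abstract names; the specific fusion strategies in the paper, as well as every function name and weighting here, are assumptions. The BiLSTM semantic encoder is not reproduced — its output is taken as a given feature vector.

```python
import numpy as np

def tfidf_vectors(symptom_docs):
    # Tiny TF-IDF over recognized symptom terms (a sketch of the
    # symptom-feature branch; a real system would use a medical
    # vocabulary and symptom-disease relations).
    vocab = sorted({t for doc in symptom_docs for t in doc})
    idx = {t: i for i, t in enumerate(vocab)}
    tf = np.zeros((len(symptom_docs), len(vocab)))
    for r, doc in enumerate(symptom_docs):
        for t in doc:
            tf[r, idx[t]] += 1.0
        tf[r] /= max(len(doc), 1)
    df = (tf > 0).sum(axis=0)
    idf = np.log(len(symptom_docs) / df) + 1.0
    return tf * idf, vocab

def softmax(z):
    e = np.exp(z - z.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def early_fusion(sem_feat, sym_feat, w):
    # Fusion strategy 1 (assumed form): concatenate semantic and
    # symptom features, then score ICD candidates with one linear layer.
    return softmax(np.concatenate([sem_feat, sym_feat]) @ w)

def late_fusion(p_sem, p_sym, alpha=0.5):
    # Fusion strategy 2 (assumed form): convex combination of the
    # per-branch ICD probability distributions.
    return alpha * p_sem + (1.0 - alpha) * p_sym

def top_k_codes(probs, k=3):
    # Top-k recommendation, matching the top-3 accuracy metric.
    return list(np.argsort(probs)[::-1][:k])
```

Top-3 accuracy then counts a prediction as correct whenever the true ICD code appears among the three highest-probability candidates.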