Publications
Open Access Article
ADGAN: Adaptive Domain Medical Image Synthesis Based on Generative Adversarial Networks
CAAI Artificial Intelligence Research 2024, 3: 9150035
Published: 12 June 2024
Abstract

Multimodal medical imaging of human pathological tissues provides comprehensive information to assist clinical diagnosis. However, due to the high cost of imaging, physiological incompatibility, and the harmfulness of radioactive tracers, multimodal medical image data remains scarce. Cross-modal medical image synthesis methods can generate images of a desired modality from images of an existing modality, but most existing methods are limited to specific domains. To address this issue, this paper proposes an Adaptive Domain medical image synthesis method based on Generative Adversarial Networks (ADGAN). ADGAN achieves multidirectional medical image synthesis and ensures pathological consistency by constructing a single generator that learns the latent shared representation of multiple domains. The generator employs dense connections in its shallow layers to preserve edge details and incorporates auxiliary information in its deep layers to retain pathological features. Additionally, spectral normalization is introduced into the discriminator to control its discriminative capacity and thereby indirectly enhance the generator's synthesis ability. It is proved theoretically that the proposed method can be trained quickly and that spectral normalization contributes to adaptive, multidirectional synthesis. In practice, compared with recent state-of-the-art methods, ADGAN achieves average improvements of 4.7% in SSIM, 6.7% in MSIM, 7.3% in PSNR, and 9.2% in VIF.
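Spectral normalization is a standard stabilization technique for GAN discriminators; the following minimal PyTorch sketch shows how it is typically applied, purely to illustrate the idea. The layer widths, kernel sizes, and patch-style output are illustrative assumptions, not the discriminator configuration described in the paper.

import torch
import torch.nn as nn
from torch.nn.utils import spectral_norm

class SNDiscriminator(nn.Module):
    # Toy patch-style discriminator (NOT the ADGAN architecture). spectral_norm
    # wraps each conv so its spectral norm (largest singular value) stays close
    # to 1, bounding the discriminator's Lipschitz constant and keeping its
    # gradients well behaved during adversarial training.
    def __init__(self, in_channels=1, base_width=64):
        super().__init__()
        self.net = nn.Sequential(
            spectral_norm(nn.Conv2d(in_channels, base_width, 4, stride=2, padding=1)),
            nn.LeakyReLU(0.2, inplace=True),
            spectral_norm(nn.Conv2d(base_width, base_width * 2, 4, stride=2, padding=1)),
            nn.LeakyReLU(0.2, inplace=True),
            spectral_norm(nn.Conv2d(base_width * 2, base_width * 4, 4, stride=2, padding=1)),
            nn.LeakyReLU(0.2, inplace=True),
            spectral_norm(nn.Conv2d(base_width * 4, 1, 4, stride=1, padding=1)),  # per-patch real/fake scores
        )

    def forward(self, x):
        return self.net(x)

# Example: score a batch of two 256x256 single-channel slices.
d = SNDiscriminator()
scores = d(torch.randn(2, 1, 256, 256))
print(scores.shape)  # torch.Size([2, 1, 31, 31])

Constraining each layer's spectral norm limits how sharply the discriminator can separate real from synthetic images, which is the sense in which the abstract describes spectral normalization as controlling discriminative performance to indirectly strengthen the generator.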
