Open Access

A Disentangled Representation-Based Multimodal Fusion Framework Integrating Pathomics and Radiomics for KRAS Mutation Detection in Colorectal Cancer

School of Computer Science and Technology, Xidian University, Xi’an 710071, China
School of Biomedical Engineering, University of Science and Technology of China, Hefei 230026, China
Department of General Surgery, Beijing Chaoyang Hospital, Capital Medical University, Beijing 100020, China
School of Medical Technology, Beijing Institute of Technology, Beijing 100081, China
Department of Pathology, Beijing Chaoyang Hospital, Capital Medical University, Beijing 100020, China

Abstract

Kirsten rat sarcoma viral oncogene homolog (KRAS) is a key biomarker for prognostic analysis and targeted therapy of colorectal cancer. Recently, advances in machine learning, especially deep learning, have greatly promoted the development of KRAS mutation detection from tumor phenotype data, such as pathology slides and radiology images. However, two major problems remain in existing studies: inadequate single-modal feature learning and a lack of multimodal phenotypic feature fusion. In this paper, we propose a Disentangled Representation-based Multimodal Fusion framework integrating Pathomics and Radiomics (DRMF-PaRa) for KRAS mutation detection. Specifically, the DRMF-PaRa model consists of three parts: (1) the pathomics learning module, which introduces a tissue-guided Transformer model to extract more comprehensive and targeted pathological features; (2) the radiomics learning module, which captures both generic hand-crafted radiomics features and task-specific deep radiomics features; and (3) the disentangled representation-based multimodal fusion module, which learns factorized subspaces for each modality and provides a holistic view of the two heterogeneous phenotypic features. The proposed model is developed and evaluated on a multimodal dataset of 111 colorectal cancer patients with whole-slide images and contrast-enhanced CT. The experimental results demonstrate the superiority of the proposed DRMF-PaRa model, which achieves an accuracy of 0.876 and an AUC of 0.865 for KRAS mutation detection.
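To make the fusion idea concrete, below is a minimal PyTorch sketch of disentangled representation-based fusion as the abstract describes it: each modality's feature vector is factorized into a modality-shared and a modality-specific subspace, with a similarity loss aligning the shared parts and an orthogonality loss separating shared from specific factors before joint classification. All module names, dimensions, and loss weights here are illustrative assumptions, not the authors' implementation.

```python
# A hedged sketch of disentangled multimodal fusion, assuming simple linear
# projections into shared/private subspaces. Dimensions (768 for pathomics,
# 256 for radiomics) are hypothetical placeholders.
import torch
import torch.nn as nn
import torch.nn.functional as F

class DisentangledFusion(nn.Module):
    def __init__(self, path_dim=768, rad_dim=256, sub_dim=128, n_classes=2):
        super().__init__()
        # Shared encoders map both modalities into a common subspace;
        # private encoders retain modality-specific factors.
        self.shared_path = nn.Linear(path_dim, sub_dim)
        self.shared_rad = nn.Linear(rad_dim, sub_dim)
        self.private_path = nn.Linear(path_dim, sub_dim)
        self.private_rad = nn.Linear(rad_dim, sub_dim)
        self.classifier = nn.Sequential(
            nn.Linear(4 * sub_dim, sub_dim), nn.ReLU(),
            nn.Linear(sub_dim, n_classes))

    def forward(self, path_feat, rad_feat):
        s_p = self.shared_path(path_feat)    # shared pathomics factor
        s_r = self.shared_rad(rad_feat)      # shared radiomics factor
        p_p = self.private_path(path_feat)   # pathomics-specific factor
        p_r = self.private_rad(rad_feat)     # radiomics-specific factor
        logits = self.classifier(torch.cat([s_p, s_r, p_p, p_r], dim=-1))
        # Similarity loss pulls the two shared projections together.
        sim_loss = F.mse_loss(s_p, s_r)
        # Orthogonality loss penalizes overlap between shared and
        # private factors of the same modality (squared dot product).
        orth_loss = (s_p * p_p).sum(-1).pow(2).mean() + \
                    (s_r * p_r).sum(-1).pow(2).mean()
        return logits, sim_loss, orth_loss

# Illustrative training step with assumed loss weights of 0.1:
model = DisentangledFusion()
logits, sim, orth = model(torch.randn(4, 768), torch.randn(4, 256))
loss = F.cross_entropy(logits, torch.randint(0, 2, (4,))) + 0.1 * sim + 0.1 * orth
```

In this sketch the classifier sees the concatenation of both shared and both private factors, so the holistic view the abstract mentions comes from combining the aligned common subspace with each modality's complementary information.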


Big Data Mining and Analytics
Pages 590-602
Cite this article:
Lv Z, Yan R, Lin Y, et al. A Disentangled Representation-Based Multimodal Fusion Framework Integrating Pathomics and Radiomics for KRAS Mutation Detection in Colorectal Cancer. Big Data Mining and Analytics, 2024, 7(3): 590-602. https://doi.org/10.26599/BDMA.2024.9020012


Received: 08 January 2024
Revised: 27 January 2024
Accepted: 28 February 2024
Published: 16 April 2024
© The author(s) 2024.

The articles published in this open access journal are distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/).
