Research Article | Open Access

FCDFusion: A fast, low color deviation method for fusing visible and infrared image pairs

School of Computer Science and Technology, Beijing Institute of Technology, Beijing 100081, China

Abstract

Visible and infrared image fusion (VIF) aims to combine information from visible and infrared images into a single fused image. Previous VIF methods usually employ a color space transformation to preserve the hue and saturation of the original visible image. However, for fast VIF methods, this transformation accounts for the majority of the computation and is the bottleneck preventing faster processing. In this paper, we propose a fast fusion method, FCDFusion, with little color deviation. It preserves color information without color space transformations by operating directly in RGB color space. It incorporates gamma correction at little extra cost, allowing color and contrast to be improved rapidly. We regard the fusion process as a scaling operation on 3D color vectors, which greatly simplifies the calculations. Theoretical analysis and experiments show that our method can achieve satisfactory results in only 7 FLOPs per pixel. Compared to state-of-the-art fast, color-preserving methods using HSV color space, our method provides higher contrast at only half the computational cost. We further propose a new metric, color deviation, to measure the ability of a VIF method to preserve color. It is specifically designed for VIF tasks with color visible-light images and overcomes the deficiencies of existing VIF metrics used for this purpose. Our code is available at https://github.com/HeasonLee/FCDFusion.
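
To make the abstract's two ideas concrete, the sketch below illustrates (i) fusion as a scaling operation on 3D color vectors and (ii) an angle-based color deviation measure. It is a hypothetical illustration, not the paper's implementation: the luminance weights, the blending rule, the gamma value, and both function names are assumptions made here; the actual FCDFusion formula, its gamma-correction step, its 7-FLOP budget, and the metric's exact definition are given in the full text.

```python
import numpy as np

def scale_fuse(vis_rgb: np.ndarray, ir: np.ndarray, gamma: float = 0.7) -> np.ndarray:
    """Toy RGB-scaling fusion (illustrative only; not the FCDFusion formula).

    vis_rgb: H x W x 3 visible image, floats in [0, 1].
    ir:      H x W     infrared image, floats in [0, 1].
    """
    eps = 1e-6
    # Luminance of the visible image (Rec. 601 weights; an assumption here).
    y = vis_rgb @ np.array([0.299, 0.587, 0.114])
    # Blend visible luminance with IR intensity, then gamma-correct the
    # scalar target; operating on one scalar per pixel keeps the cost low.
    target = (0.5 * (y + ir)) ** gamma
    # Multiplying each 3D color vector by a scalar leaves its channel
    # ratios unchanged, so hue and HSV saturation are preserved without
    # any color space transformation.
    return np.clip(vis_rgb * (target / (y + eps))[..., None], 0.0, 1.0)

def color_deviation(vis_rgb: np.ndarray, fused_rgb: np.ndarray) -> float:
    """Mean angle (radians) between corresponding RGB color vectors.

    An illustrative stand-in for the paper's color deviation metric:
    zero whenever every fused pixel is a scaled copy of the visible one.
    """
    eps = 1e-6
    dot = np.sum(vis_rgb * fused_rgb, axis=-1)
    norms = np.linalg.norm(vis_rgb, axis=-1) * np.linalg.norm(fused_rgb, axis=-1)
    return float(np.mean(np.arccos(np.clip(dot / (norms + eps), -1.0, 1.0))))
```

Because scale_fuse only rescales each color vector, color_deviation is near zero on its output except at pixels where clipping to 1.0 alters the channel ratios.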

Cite this article:
Li H, Fu Y. FCDFusion: A fast, low color deviation method for fusing visible and infrared image pairs. Computational Visual Media, 2025, 11(1): 195-211. https://doi.org/10.26599/CVM.2025.9450330