Article | Open Access

GasUpper: Gas-Aware Upsampling for Enhanced Gas Segmentation

Yuting Lu 1,2, Xiaoyu Wang 1,2, Jingyi Cui 1,2, Le Yang 3, Shunzhou Wang 4, Yongqiang Zhao 2, Binglu Wang 1,2 (corresponding author)
1. School of Astronautics, Northwestern Polytechnical University, Xi’an 710072, China
2. School of Automation, Northwestern Polytechnical University, Xi’an 710072, China
3. School of Electronics and Control Engineering, Chang’an University, Xi’an 710064, China
4. Electronic and Computer Engineering, Peking University Shenzhen Graduate School, Shenzhen 518055, China

Abstract

Segmenting greenhouse gases from hyperspectral images provides detailed information about their spatial distribution, which is significant for greenhouse gas monitoring. However, accurate segmentation of greenhouse gases is challenging for two main reasons: (1) diversity: greenhouse gases vary in concentration, size, and texture; and (2) camouflage: the boundaries between greenhouse gases and the surrounding background are blurred. Existing methods focus primarily on designing new modules to address these challenges, often neglecting the design of the upsampling method within the model, which is crucial for accurate segmentation. In this work, we propose Gas-Aware Upsampling (GasUpper), a novel and efficient upsampling method tailored for greenhouse gas segmentation. Specifically, we first generate a coarse segmentation mask during the upsampling process. Based on the roughly segmented gas and background, we then extract the global features of the gas and combine them with the original features to obtain a de-camouflaged feature map that includes both the global characteristics of the gas and the local details of the image. This de-camouflaged feature map serves as the foundation for subsequent point sampling. Finally, we utilize the de-camouflaged feature map to generate upsampling coordinate offsets, enabling the model to adaptively adjust the sampling regions based on content during sampling. We conduct comprehensive evaluations on two hyperspectral datasets by replacing the upsampling method in various segmentation approaches with GasUpper. The results indicate that GasUpper consistently and significantly enhances performance across all segmentation models (0.08%–9.44% in Intersection over Union (IoU), 0.47%–6.26% in accuracy), outperforming other upsampling methods.
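The three-stage pipeline described above (coarse mask, global-feature de-camouflaging, offset-guided point sampling) can be sketched in miniature as follows. This is only an illustrative toy, not the paper's implementation: it works on a single-channel feature map with plain Python lists, the threshold-based mask, the mean-pooled gas descriptor, and the zero-offset default are all simplifying assumptions, and every function name (`coarse_mask`, `gas_global`, `decamouflage`, `upsample2x`) is hypothetical.

```python
def coarse_mask(feat, thresh=0.5):
    """Step 1: coarse gas/background segmentation of the low-res features
    (here a simple threshold stands in for the learned mask predictor)."""
    return [[1 if v > thresh else 0 for v in row] for row in feat]

def gas_global(feat, mask):
    """Step 2a: global gas descriptor -- mean response over gas pixels."""
    vals = [v for row_f, row_m in zip(feat, mask)
            for v, m in zip(row_f, row_m) if m]
    return sum(vals) / len(vals) if vals else 0.0

def decamouflage(feat, mask, alpha=0.5):
    """Step 2b: blend the global gas cue back into the local features at
    gas pixels, leaving background pixels untouched."""
    g = gas_global(feat, mask)
    return [[(1 - alpha) * v + alpha * g if m else v
             for v, m in zip(row_f, row_m)]
            for row_f, row_m in zip(feat, mask)]

def upsample2x(feat, offsets=None):
    """Step 3: 2x nearest upsampling with per-output-pixel integer
    coordinate offsets (a learned predictor would supply them; zero here)."""
    h, w = len(feat), len(feat[0])
    out = [[0.0] * (2 * w) for _ in range(2 * h)]
    for i in range(2 * h):
        for j in range(2 * w):
            di, dj = offsets[i][j] if offsets else (0, 0)
            si = min(max(i // 2 + di, 0), h - 1)  # clamp to valid rows
            sj = min(max(j // 2 + dj, 0), w - 1)  # clamp to valid cols
            out[i][j] = feat[si][sj]
    return out

feat = [[0.9, 0.2],
        [0.8, 0.1]]
mask = coarse_mask(feat)        # left column classified as gas
dec = decamouflage(feat, mask)  # gas pixels pulled toward the gas mean
up = upsample2x(dec)            # 4x4 de-camouflaged, upsampled map
```

The key design point the toy mirrors is the ordering: the offsets in step 3 are computed from the de-camouflaged map rather than the raw features, so the sampler sees gas regions whose responses have already been made more coherent.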


CAAI Artificial Intelligence Research
Article number: 9150046
Cite this article:
Lu Y, Wang X, Cui J, et al. GasUpper: Gas-Aware Upsampling for Enhanced Gas Segmentation. CAAI Artificial Intelligence Research, 2025, 4: 9150046. https://doi.org/10.26599/AIR.2025.9150046