Poor illumination severely degrades the quality of captured images. This paper proposes DEANet, a novel convolutional neural network based on Retinex theory for low-light image enhancement. DEANet combines the frequency and content information of images and is organized into three subnetworks: a decomposition network that decomposes the input image, an enhancement network that handles denoising, contrast enhancement, and detail preservation, and an adjustment network that performs image adjustment and generates the final result. The model is trained on the public LOL dataset, and experimental results show that it outperforms existing state-of-the-art methods in both visual effect and image quality.
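The abstract does not give implementation details, but the decomposition-enhancement-adjustment structure can be illustrated with a minimal Retinex-style sketch. The layer widths, the reflectance/illumination split, and the class names (DecomNet, EnhanceNet, AdjustNet) below are assumptions chosen for the example, not DEANet's actual architecture.

```python
# Minimal sketch of a Retinex-style three-stage pipeline (PyTorch). Layer widths,
# the reflectance/illumination split, and the subnetwork names are assumptions,
# not the DEANet paper's implementation.
import torch
import torch.nn as nn


def conv_block(in_ch, out_ch):
    return nn.Sequential(nn.Conv2d(in_ch, out_ch, 3, padding=1), nn.ReLU(inplace=True))


class DecomNet(nn.Module):
    """Decomposition: split an RGB image into reflectance (3 ch) and illumination (1 ch)."""
    def __init__(self, width=32):
        super().__init__()
        self.body = nn.Sequential(conv_block(3, width), conv_block(width, width),
                                  nn.Conv2d(width, 4, 3, padding=1))

    def forward(self, x):
        out = torch.sigmoid(self.body(x))
        return out[:, :3], out[:, 3:]          # reflectance, illumination


class EnhanceNet(nn.Module):
    """Enhancement: denoise the reflectance while preserving contrast and detail."""
    def __init__(self, width=32):
        super().__init__()
        self.body = nn.Sequential(conv_block(4, width), conv_block(width, width),
                                  nn.Conv2d(width, 3, 3, padding=1))

    def forward(self, reflectance, illumination):
        inp = torch.cat([reflectance, illumination], dim=1)   # condition on illumination
        return torch.sigmoid(self.body(inp))


class AdjustNet(nn.Module):
    """Adjustment: correct the illumination map, then recompose the output image."""
    def __init__(self, width=32):
        super().__init__()
        self.body = nn.Sequential(conv_block(1, width), conv_block(width, width),
                                  nn.Conv2d(width, 1, 3, padding=1))

    def forward(self, illumination, reflectance):
        adjusted = torch.sigmoid(self.body(illumination))
        return reflectance * adjusted          # Retinex recomposition: I = R * L


if __name__ == "__main__":
    low = torch.rand(1, 3, 128, 128)           # dummy low-light input
    decom, enhance, adjust = DecomNet(), EnhanceNet(), AdjustNet()
    r, l = decom(low)
    result = adjust(l, enhance(r, l))
    print(result.shape)                        # torch.Size([1, 3, 128, 128])
```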
In the field of single remote sensing image Super-Resolution (SR), deep Convolutional Neural Networks (CNNs) have achieved top performance. To further improve the performance of convolutional modules on remote sensing images, we construct an efficient residual feature calibration block to generate expressive features. After extracting residual features, we first split them into two parts along the channel dimension. One part flows into the Self-Calibrated Convolution (SCC) for further refinement, while the other is rescaled by the proposed Two-Path Channel Attention (TPCA) mechanism. SCC corrects local features according to their responses under a deep receptive field, so the features are refined without increasing the computational cost. TPCA uses the means and variances of feature maps to obtain accurate channel attention vectors. Moreover, a region-level nonlocal operation is introduced to capture long-range spatial contextual information by exploiting pixel dependencies at the region level. Extensive experiments demonstrate that the proposed residual feature calibration network outperforms other SR methods in terms of quantitative metrics and visual quality.
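The split-and-calibrate idea described above can be sketched as follows. The reduction ratio, the simplified gating used to stand in for SCC, the way the mean and variance paths are fused, and the omission of the region-level nonlocal operation are all assumptions made for illustration; this is not the paper's implementation.

```python
# Illustrative sketch of channel splitting with a simplified self-calibrated branch
# and a mean/variance two-path channel attention (PyTorch). All hyperparameters and
# the exact layer layout are assumptions, not the proposed network.
import torch
import torch.nn as nn
import torch.nn.functional as F


class TwoPathChannelAttention(nn.Module):
    """Channel attention driven by both the mean and the variance of each feature map."""
    def __init__(self, channels, reduction=4):
        super().__init__()
        self.mean_fc = nn.Sequential(nn.Linear(channels, channels // reduction),
                                     nn.ReLU(inplace=True),
                                     nn.Linear(channels // reduction, channels))
        self.var_fc = nn.Sequential(nn.Linear(channels, channels // reduction),
                                    nn.ReLU(inplace=True),
                                    nn.Linear(channels // reduction, channels))

    def forward(self, x):
        mean = x.mean(dim=(2, 3))                      # per-channel mean, shape (N, C)
        var = x.var(dim=(2, 3), unbiased=False)        # per-channel variance, shape (N, C)
        attn = torch.sigmoid(self.mean_fc(mean) + self.var_fc(var))
        return x * attn.unsqueeze(-1).unsqueeze(-1)    # rescale each channel


class SelfCalibratedConv(nn.Module):
    """Simplified self-calibration: a downsampled branch gates the full-resolution branch."""
    def __init__(self, channels):
        super().__init__()
        self.calib = nn.Conv2d(channels, channels, 3, padding=1)   # runs on pooled features
        self.conv = nn.Conv2d(channels, channels, 3, padding=1)

    def forward(self, x):
        gate = F.interpolate(self.calib(F.avg_pool2d(x, 4)), size=x.shape[2:],
                             mode="bilinear", align_corners=False)
        return self.conv(x * torch.sigmoid(gate))


class ResidualFeatureCalibrationBlock(nn.Module):
    """Split residual features along the channel axis: one half -> SCC, the other -> TPCA."""
    def __init__(self, channels=64):
        super().__init__()
        self.extract = nn.Conv2d(channels, channels, 3, padding=1)
        self.scc = SelfCalibratedConv(channels // 2)
        self.tpca = TwoPathChannelAttention(channels // 2)
        self.fuse = nn.Conv2d(channels, channels, 1)

    def forward(self, x):
        feat = F.relu(self.extract(x))
        a, b = torch.chunk(feat, 2, dim=1)             # split along the channel dimension
        out = self.fuse(torch.cat([self.scc(a), self.tpca(b)], dim=1))
        return x + out                                 # residual connection


if __name__ == "__main__":
    block = ResidualFeatureCalibrationBlock(64)
    print(block(torch.rand(1, 64, 48, 48)).shape)      # torch.Size([1, 64, 48, 48])
```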