Light field (LF) cameras record multiple perspectives via sparse sampling of real scenes, and these perspectives provide complementary information that is beneficial to LF super-resolution (LFSR). Compared with traditional single-image super-resolution, LFSR can exploit the parallax structure and perspective correlation among different LF views. However, the performance of existing methods is limited because they fail to deeply explore the complementary information across LF views. In this paper, we propose a novel network, called the light field complementary-view feature attention network (LF-CFANet), to improve LFSR by dynamically learning the complementary information in LF views. Specifically, we design a residual complementary-view spatial and channel attention module (RCSCAM) to effectively exchange complementary information between complementary views. Moreover, RCSCAM captures the relationships between different channels, generating informative features for reconstructing LF images while discarding redundant information. Then, a maximum-difference information supplementary branch (MDISB) is used to supplement information from the maximum-difference angular positions based on the geometric structure of LF images. This branch can also guide the reconstruction process. Experimental results on both synthetic and real-world datasets demonstrate the superiority of our method. The proposed LF-CFANet achieves better reconstruction performance, recovering more faithful details with higher SR accuracy than state-of-the-art methods.
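To make the general idea of fusing complementary views with spatial and channel attention more concrete, the following is a minimal sketch, not the authors' RCSCAM implementation; the module name, channel counts, gating layers, and fusion strategy are assumptions for illustration only.

```python
# Hypothetical sketch of a residual spatial-and-channel attention block for a
# pair of complementary light-field view features. NOT the paper's RCSCAM;
# layer choices and hyperparameters are illustrative assumptions.
import torch
import torch.nn as nn


class ComplementaryViewAttention(nn.Module):
    """Fuses two view features with channel and spatial attention plus a residual path."""

    def __init__(self, channels: int = 64, reduction: int = 4):
        super().__init__()
        # Channel attention: squeeze spatial dims, then excite per-channel weights.
        self.channel_gate = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),
            nn.Conv2d(2 * channels, 2 * channels // reduction, kernel_size=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(2 * channels // reduction, 2 * channels, kernel_size=1),
            nn.Sigmoid(),
        )
        # Spatial attention: a single-channel map highlighting informative positions.
        self.spatial_gate = nn.Sequential(
            nn.Conv2d(2 * channels, 1, kernel_size=7, padding=3),
            nn.Sigmoid(),
        )
        # Project the fused features back to the per-view channel count.
        self.fuse = nn.Conv2d(2 * channels, channels, kernel_size=3, padding=1)

    def forward(self, view_a: torch.Tensor, view_b: torch.Tensor) -> torch.Tensor:
        x = torch.cat([view_a, view_b], dim=1)   # (B, 2C, H, W)
        x = x * self.channel_gate(x)             # re-weight channels
        x = x * self.spatial_gate(x)             # re-weight spatial positions
        return view_a + self.fuse(x)             # residual connection to the reference view


if __name__ == "__main__":
    a = torch.randn(1, 64, 32, 32)
    b = torch.randn(1, 64, 32, 32)
    out = ComplementaryViewAttention()(a, b)
    print(out.shape)  # torch.Size([1, 64, 32, 32])
```

In this sketch the residual connection preserves the reference view's features while the gated, fused branch injects only the information judged useful from the complementary view, mirroring the abstract's goal of exploiting complementary information while ignoring redundancy.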
Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made.
The images or other third party material in this article are included in the article’s Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article’s Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder.
To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.