Open Access Research Article
Flow-aware synthesis: A generic motion model for video frame interpolation
Computational Visual Media 2021, 7(3): 393-405
Published: 17 March 2021
Abstract

A popular and challenging task in video research, frame interpolation aims to increase the frame rate of a video. Most existing methods employ a fixed motion model, e.g., linear, quadratic, or cubic, to estimate the intermediate warping field. However, such fixed motion models cannot adequately represent the complicated non-linear motions of the real world or of rendered animations. Instead, we present an adaptive flow prediction module to better approximate the complex motions in video. Furthermore, interpolating just one intermediate frame between consecutive input frames may be insufficient for complicated non-linear motions. To enable multi-frame interpolation, we introduce time as a control variable when interpolating frames between the original ones in our generic adaptive flow prediction module. Qualitative and quantitative experimental results show that our method produces high-quality results and outperforms existing state-of-the-art methods on popular public datasets.
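The fixed motion models the abstract contrasts with its adaptive module can be sketched concretely. Below is an illustrative numpy sketch (not the paper's method): a linear model assumes constant velocity from the forward flow alone, while a quadratic model fits a constant-acceleration trajectory from the backward flow (frame 0 to frame -1) and the forward flow (frame 0 to frame 1).

```python
import numpy as np

def linear_flow(f01, t):
    # Fixed linear motion model: constant velocity, so the flow to
    # intermediate time t is just a fraction of the full flow 0->1.
    return t * f01

def quadratic_flow(f0m1, f01, t):
    # Fixed quadratic motion model: constant acceleration fitted from
    # the backward flow f(0->-1) and forward flow f(0->1).
    # With displacement p(t) = v*t + 0.5*a*t^2, solving p(1)=f01 and
    # p(-1)=f0m1 gives:
    accel = f01 + f0m1           # a = f(1) + f(-1)
    veloc = (f01 - f0m1) / 2.0   # v = (f(1) - f(-1)) / 2
    return veloc * t + 0.5 * accel * t ** 2
```

When the two flows are symmetric (constant velocity), the quadratic model reduces to the linear one; when they are not, the quadratic term captures the curvature that the linear model misses.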

Regular Paper
DEMC: A Deep Dual-Encoder Network for Denoising Monte Carlo Rendering
Journal of Computer Science and Technology 2019, 34(5): 1123-1135
Published: 06 September 2019
Abstract

In this paper, we present DEMC, a deep dual-encoder network that removes Monte Carlo noise efficiently while preserving details. Denoising Monte Carlo rendering differs from natural image denoising in that inexpensive by-products (feature buffers) can be extracted during the rendering stage. Most of these are noise-free and provide sufficient detail for image reconstruction; however, the feature buffers also contain redundant information. Hence, the main challenge is how to extract the useful information and reconstruct clean images. To address this problem, we propose a novel network structure, a dual-encoder network with a feature-fusion sub-network, which first fuses the feature buffers, then encodes the fused buffers and the noisy image simultaneously, and finally reconstructs a clean image with a decoder network. Compared with state-of-the-art methods, our model is more robust on a wide range of scenes and generates satisfactory results significantly faster.
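The data flow the abstract describes (fuse the feature buffers, encode the fused buffers and the noisy image in parallel, then decode) can be sketched shape-wise. This is a minimal numpy sketch under assumed buffer contents (albedo, normal, depth) and channel widths; random projections stand in for the learned convolutional layers of the actual network.

```python
import numpy as np

rng = np.random.default_rng(0)

def project(x, out_ch):
    # Stand-in for a learned 1x1 convolution: a random linear map over
    # channels, applied at every pixel. Real DEMC layers are learned CNNs.
    w = rng.standard_normal((out_ch, x.shape[0]))
    return np.tensordot(w, x, axes=1)

noisy = rng.standard_normal((3, 64, 64))    # noisy Monte Carlo render (RGB)
buffers = rng.standard_normal((7, 64, 64))  # assumed buffers: albedo(3) + normal(3) + depth(1)

fused = project(buffers, 3)                  # feature-fusion sub-network
enc_img = project(noisy, 16)                 # encoder branch 1: noisy image
enc_buf = project(fused, 16)                 # encoder branch 2: fused feature buffers
latent = np.concatenate([enc_img, enc_buf])  # joint representation from both encoders
clean = project(latent, 3)                   # decoder reconstructs a clean RGB image
```

The point of the dual-encoder split is that the noisy radiance and the (mostly noise-free) auxiliary buffers have different statistics, so each gets its own encoder before the joint representation is decoded.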
