Open Access Research Article
Reference-guided structure-aware deep sketch colorization for cartoons
Computational Visual Media 2022, 8 (1): 135-148
Published: 27 October 2021
Abstract PDF (2.7 MB)

Digital cartoon production requires extensive manual labor to colorize sketches with visually pleasing color composition and shading. During colorization, the artist usually takes an existing cartoon image as color guidance, particularly when colorizing related characters or an animation sequence. Reference-guided colorization is more intuitive than colorization with other hints, such as color points, scribbles, or text. Unfortunately, reference-guided colorization is challenging, since the style of the colorized image should match that of the reference image in terms of both global color composition and local color shading. In this paper, we propose a novel learning-based framework which colorizes a sketch based on a color style feature extracted from a reference color image. Our framework contains a color style extractor to extract the color feature from a color image, a colorization network to generate multi-scale output images by combining a sketch and a color feature, and a multi-scale discriminator to improve the realism of the output image. Extensive qualitative and quantitative evaluations show that our method outperforms existing methods, providing both superior visual quality and style-reference consistency in the task of reference-based colorization.
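A common way to inject an extracted style feature into a generator is adaptive instance normalization (AdaIN): content features are renormalized so that their statistics match those of the style feature. The abstract does not state the paper's exact conditioning mechanism, so the sketch below is a generic, hypothetical illustration of the idea, not the authors' implementation.

```python
import math

def adain(content, style_mean, style_std, eps=1e-5):
    """Adaptive instance normalization on a 1-D feature vector:
    shift/scale the content features so their mean and standard
    deviation match the given style statistics."""
    n = len(content)
    mean = sum(content) / n
    var = sum((x - mean) ** 2 for x in content) / n
    std = math.sqrt(var + eps)
    return [style_std * (x - mean) / std + style_mean for x in content]

# Example: restyle a small feature vector to mean 1.0, std 2.0.
features = [0.2, 0.8, 0.5, 0.1]
styled = adain(features, style_mean=1.0, style_std=2.0)
```

After the call, `styled` has (approximately) the requested style statistics, which is how a single style vector can steer the color statistics of every feature map in a colorization network.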

Regular Paper
Automatic Video Segmentation Based on Information Centroid and Optimized SaliencyCut
Journal of Computer Science and Technology 2020, 35 (3): 564-575
Published: 29 May 2020
Abstract

We propose an automatic video segmentation method based on an optimized SaliencyCut equipped with information centroid (IC) detection, inspired by the level-balance principle in physics. Unlike existing methods, the IC provides an additional dimension of image information to enhance video segmentation accuracy. Specifically, our IC is computed from the information-level balance of the image, and serves as an information pivot that aggregates all of the image's information at a single point. To effectively enhance the saliency of the target object and suppress the background, we combine the color and coordinate information of the image when calculating its local and global ICs. Saliency maps for all frames in the video are then calculated based on the detected ICs. By applying IC smoothing to the optimized saliency detection, we can further correct unsatisfactory saliency maps in complex videos where sharp variations in color or motion occur. Finally, we obtain segmentation results from the IC-based saliency maps and the optimized SaliencyCut. Our method is evaluated on the DAVIS dataset, which consists of a variety of challenging videos, and compared against state-of-the-art methods. Convincing visual results and statistical comparisons demonstrate its advantages and robustness for automatic video segmentation.
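The level-balance idea behind the information centroid is analogous to a center of mass: treating each pixel's information content as a "mass", the IC is the point at which the image balances. The sketch below is a simplified, single-channel version of that idea (the paper additionally combines color and coordinate cues, and computes both local and global ICs).

```python
def information_centroid(image):
    """Center-of-mass style 'information centroid' of a grayscale
    image given as a 2-D list: each pixel's intensity acts as a
    weight on its (x, y) position. Returns None for an all-zero image."""
    total = 0.0
    cx = cy = 0.0
    for y, row in enumerate(image):
        for x, w in enumerate(row):
            total += w
            cx += w * x
            cy += w * y
    if total == 0:
        return None
    return (cx / total, cy / total)

# A bright region in the lower-right pulls the centroid toward it.
img = [[0, 0, 0],
       [0, 0, 0],
       [0, 0, 9]]
ic = information_centroid(img)  # → (2.0, 2.0)
```

Because salient objects typically concentrate image information, the IC tends to sit on or near the target object, which is why it can be used to boost the saliency of the object and suppress the background.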

Open Access Research Article
Automatic texture exemplar extraction based on global and local textureness measures
Computational Visual Media 2018, 4 (2): 173-184
Published: 17 March 2018
Abstract PDF (31.5 MB)

Texture synthesis is widely used for modeling the appearance of virtual objects. However, traditional texture synthesis techniques emphasize creating optimal target textures and pay insufficient attention to choosing suitable input texture exemplars. Currently, obtaining texture exemplars from natural images is a labor-intensive task for artists, requiring careful photography and significant post-processing. In this paper, we present an automatic texture exemplar extraction method based on global and local textureness measures. To improve the efficiency of dominant texture identification, we first perform Poisson disk sampling to randomly and uniformly crop patches from a natural image. For global textureness assessment, we use a GIST descriptor in conjunction with SVM prediction to distinguish textured patches from non-textured ones. To identify true texture exemplars consisting solely of the dominant texture, we further measure the local textureness of a patch by extracting and matching local structure (using binary Gabor patterns (BGP)) and dominant color features (using color histograms) between a patch and its sub-regions. Finally, we obtain optimal texture exemplars by scoring and ranking the extracted patches using these global and local textureness measures. We evaluate our method on a variety of images with different kinds of textures. A convincing visual comparison with textures manually selected by an artist, together with a statistical study, demonstrates its effectiveness.
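Poisson disk sampling spreads candidate patch centers uniformly while guaranteeing a minimum spacing, so the cropped patches do not pile up on one region of the image. The sketch below uses naive dart throwing to illustrate the property; it is a simplified stand-in (production code would typically use Bridson's grid-accelerated algorithm), and the parameters are illustrative, not the paper's settings.

```python
import random

def poisson_disk_samples(width, height, radius, attempts=2000, seed=0):
    """Naive dart-throwing Poisson disk sampling: accept a random
    point only if it lies at least `radius` away from every point
    accepted so far. Returns a list of (x, y) sample positions."""
    rng = random.Random(seed)
    pts = []
    r2 = radius * radius
    for _ in range(attempts):
        x, y = rng.uniform(0, width), rng.uniform(0, height)
        if all((x - px) ** 2 + (y - py) ** 2 >= r2 for px, py in pts):
            pts.append((x, y))
    return pts

# Candidate patch centers over a 256x256 image, spaced >= 32 px apart.
centers = poisson_disk_samples(256, 256, radius=32)
```

Each accepted center would then anchor a cropped patch to be scored by the global (GIST + SVM) and local (BGP + color histogram) textureness measures.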
