Open Access Research Article
Towards natural object-based image recoloring
Computational Visual Media 2022, 8(2): 317-328
Published: 06 December 2021
Abstract

Existing color editing algorithms enable users to edit the colors in an image according to their own aesthetics. Unlike artists, who have an accurate grasp of color, ordinary users are inexperienced in color selection and matching, so allowing them to edit colors arbitrarily may produce unrealistic results. To address this issue, we introduce a palette-based approach for realistic object-level image recoloring. Our data-driven approach consists of an offline learning part that learns the color distributions of different objects in the real world, and an online recoloring part that first recognizes the object category and then recommends appropriate realistic candidate colors, learned in the offline step, for that category. We also provide an intuitive user interface for efficient color manipulation. After color selection, image matting is performed to ensure smoothness of the object boundary. Comprehensive evaluation on a variety of color editing examples demonstrates that our approach outperforms existing state-of-the-art color editing algorithms.
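The abstract's offline/online split can be pictured with a minimal sketch: offline, cluster the pixel colors observed for each object category into a small palette; online, once the category is recognized, surface that palette as realistic candidate colors. The dataset loader, category names, and palette size below are assumptions for illustration, not the authors' implementation.

```python
# Minimal sketch of the offline palette learning and online color
# recommendation described above. The data, category names, and palette
# size K are assumed for illustration; this is not the paper's code.
import numpy as np
from sklearn.cluster import KMeans

K = 8  # number of candidate colors learned per object category (assumed)

def learn_palettes(samples_by_category, k=K):
    """Offline step: cluster observed object pixels into a per-category palette.

    samples_by_category: dict mapping a category name (e.g. "car") to an
    (N, 3) array of RGB pixels gathered from real-world images of that object.
    """
    palettes = {}
    for category, pixels in samples_by_category.items():
        km = KMeans(n_clusters=k, n_init=10, random_state=0).fit(pixels)
        # Cluster centers approximate the realistic color distribution
        # for this object category.
        palettes[category] = km.cluster_centers_.astype(np.uint8)
    return palettes

def recommend_colors(palettes, category):
    """Online step: after the object category is recognized, return the
    learned realistic candidate colors for the user to choose from."""
    return palettes.get(category, np.empty((0, 3), dtype=np.uint8))

# Usage with made-up data:
# palettes = learn_palettes({"car": np.random.randint(0, 256, (5000, 3))})
# candidates = recommend_colors(palettes, "car")
```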

Open Access Research Article
A three-stage real-time detector for traffic signs in large panoramas
Computational Visual Media 2019, 5(4): 403-416
Published: 04 September 2019
Abstract

Traffic sign detection is one of the key components of autonomous driving. Advanced autonomous vehicles equipped with high-quality sensors capture high-definition images for further analysis. Detecting traffic signs, moving vehicles, and lanes is important for localization and decision making. Traffic signs, especially those far from the camera, are small and therefore challenging for traditional object detection methods. In this work, to reduce computational cost and improve detection performance, we split the large input images into small blocks and then recognize traffic signs within the blocks using a separate detection module. Accordingly, this paper proposes a three-stage traffic sign detector that connects a BlockNet with an RPN-RCNN detection network. BlockNet, composed of a set of CNN layers, performs block-level foreground detection, making inferences in less than 1 ms. The RPN-RCNN two-stage detector then identifies traffic sign objects in each block; it is trained on a derived dataset named TT100KPatch. Experiments show that our framework achieves both state-of-the-art accuracy and recall; its fastest detection speed is 102 fps.
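The three-stage flow (tile the panorama, filter blocks cheaply, detect within kept blocks) can be sketched as below. The functions `block_foreground_score` and `detect_signs_in_block` merely stand in for the paper's BlockNet and RPN-RCNN stages, and the block size, stride, and threshold are assumed values, not the authors' configuration.

```python
# Minimal sketch of the block-splitting pipeline described above.
# `block_foreground_score` and `detect_signs_in_block` are placeholders
# for the BlockNet and RPN-RCNN stages; all numeric settings are assumptions.

BLOCK = 512      # block side length in pixels (assumed)
STRIDE = 512     # non-overlapping grid (assumed)
THRESH = 0.5     # foreground score needed to keep a block (assumed)

def split_into_blocks(image, block=BLOCK, stride=STRIDE):
    """Stage 1: tile the large panorama into small blocks.
    (Edge remainders are ignored in this sketch.)"""
    h, w = image.shape[:2]
    for y in range(0, h - block + 1, stride):
        for x in range(0, w - block + 1, stride):
            yield (x, y), image[y:y + block, x:x + block]

def detect(image, block_foreground_score, detect_signs_in_block):
    """Run the pipeline: tile, filter blocks, then detect per kept block."""
    detections = []
    for (x, y), patch in split_into_blocks(image):
        # Stage 2: cheap block-level foreground check (BlockNet's role),
        # discarding background blocks before the expensive detector runs.
        if block_foreground_score(patch) < THRESH:
            continue
        # Stage 3: two-stage detection inside the kept block, mapping
        # box coordinates back into panorama coordinates.
        for (bx, by, bw, bh, label, score) in detect_signs_in_block(patch):
            detections.append((x + bx, y + by, bw, bh, label, score))
    return detections
```

The point of the early block filter is that most of a large panorama contains no signs at all, so the expensive per-block detector only runs on a small fraction of the image.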
