3D object tracking from a monocular RGB image is a challenging task. Although popular color- and edge-based methods have been well studied, they are applicable only to certain cases, and new solutions to the challenges of real environments must be developed. In this paper, we propose a robust 3D object tracking method with adaptively weighted local bundles, called the AWLB tracker, to handle more complicated cases. Each bundle represents a local region containing a set of local features. To alleviate the negative effect of features in low-confidence regions, the bundles are adaptively weighted using a spatially-variant weighting function based on the confidence values of the involved energy terms. Therefore, in each frame, the weights of the energy terms in each bundle adapt to different situations and to different regions of the same frame. Experiments show that the proposed method improves overall accuracy in challenging cases. We verify the effectiveness of the proposed confidence-based adaptive weighting through ablation studies and show that the proposed method outperforms both existing single-feature methods and multi-feature methods without adaptive weighting.
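The confidence-based weighting described above can be sketched roughly as follows. This is a minimal illustration, not the paper's actual formulation: the array shapes, the per-bundle normalization, and the function name `weighted_bundle_energy` are all assumptions made for the example.

```python
import numpy as np

def weighted_bundle_energy(energies, confidences):
    """Combine per-bundle energy terms using confidence-based weights.

    energies:    (n_bundles, n_terms) array of energy-term values (e.g.
                 color and edge energies) evaluated in each local bundle.
    confidences: (n_bundles, n_terms) array of confidence values for the
                 corresponding terms. Higher confidence -> larger weight.
    Returns the total weighted energy as a scalar.
    """
    # Normalize confidences within each bundle so the term weights
    # sum to 1; low-confidence regions thus contribute less.
    weights = confidences / confidences.sum(axis=1, keepdims=True)
    # Weight each energy term by its confidence and sum over all bundles.
    return float((weights * energies).sum())
```

In this sketch the weights vary per bundle, mirroring the spatially-variant weighting idea: two bundles in the same frame can assign different relative importance to the same energy terms.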
Indoor visual localization, i.e., 6-degree-of-freedom (6-DoF) camera pose estimation for a query image with respect to a known scene, is gaining increased attention, driven by the rapid progress of applications such as robotics and augmented reality. However, drastic visual discrepancies between an onsite query image and prerecorded indoor images pose a significant challenge for visual localization. In this paper, based on the key observation that planar surfaces such as floors and walls are consistently present in indoor scenes, we propose a novel system that incorporates geometric information to address the issues of relying on pixel-based images alone. Through the system implementation, we contribute a hierarchical structure consisting of pre-scanned images and a point cloud, as well as a distilled representation of the planar-element layout extracted from the original dataset. A view-synthesis procedure is designed to generate synthetic images that complement a sparsely sampled dataset. Moreover, a global image descriptor based on image statistics, called block mean, variance, and color (BMVC), is employed to speed up candidate pose identification, in combination with a traditional convolutional neural network (CNN) descriptor. Experimental results on a popular benchmark demonstrate that the proposed method outperforms state-of-the-art approaches in terms of visual localization validity and accuracy.
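A descriptor of the "block mean, variance, and color" kind could be sketched as below. This is only an illustrative guess at the general idea: the grid size, the intensity channel, and the exact statistics per block are assumptions for the example, not the paper's specification.

```python
import numpy as np

def bmvc_descriptor(image, grid=4):
    """Sketch of a block mean, variance, and color (BMVC) global descriptor.

    image: (H, W, 3) RGB array. The image is divided into a grid x grid
    set of blocks; for each block we record the intensity mean, the
    intensity variance, and the mean RGB color, then concatenate all
    block statistics into one vector.
    """
    h, w, _ = image.shape
    gray = image.mean(axis=2)  # simple intensity channel (assumption)
    bh, bw = h // grid, w // grid
    feats = []
    for i in range(grid):
        for j in range(grid):
            block = gray[i * bh:(i + 1) * bh, j * bw:(j + 1) * bw]
            color = image[i * bh:(i + 1) * bh, j * bw:(j + 1) * bw]
            feats.append(block.mean())             # block mean
            feats.append(block.var())              # block variance
            feats.extend(color.mean(axis=(0, 1)))  # mean RGB color
    return np.asarray(feats)
```

Such a statistics-based vector is cheap to compute and compare, which is consistent with its stated role of speeding up candidate pose identification before the heavier CNN descriptor is applied.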