Regular Paper

Local Homography Estimation on User-Specified Textureless Regions

Department of Computer Science and Technology, Tsinghua University, Beijing 100084, China

Abstract

This paper presents VideoInNet, a novel deep neural network for designated point tracking (DPT) in a monocular RGB video. Concretely, the aim is to track four designated points, correlated by a local homography, on a textureless planar region in the scene. DPT can be applied to augmented reality and video editing, especially in the field of video advertising. Existing methods predict the locations of the four designated points without appropriately considering their correlation. To solve this problem, VideoInNet predicts the motion of the four homography-correlated designated points within a heatmap prediction framework. Our network refines the heatmaps of the designated points in two stages. In the first stage, we introduce a context-aware and location-aware structure to learn a local homography for the designated plane in a supervised way. In the second stage, we introduce an iterative heatmap refinement module to improve the tracking accuracy. We also propose ScanDPT, a dataset focusing on textureless planar regions, for training and evaluation. We show that the error rate of VideoInNet is about 29% lower than that of the state-of-the-art approach on the first 120 frames of the test videos in ScanDPT.
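To make the point correlation concrete: four point correspondences on a plane determine a 3x3 homography up to scale. The sketch below is not the paper's network; it is a minimal NumPy illustration of that underlying geometric constraint, recovering a homography from four designated points via the direct linear transform (DLT). The function names and example coordinates are illustrative, not taken from the paper.

```python
import numpy as np

def homography_from_points(src, dst):
    """Estimate the 3x3 homography H with dst ~ H @ src (homogeneous,
    up to scale) from four or more 2D correspondences, via the DLT."""
    A = []
    for (x, y), (u, v) in zip(src, dst):
        A.append([x, y, 1, 0, 0, 0, -u * x, -u * y, -u])
        A.append([0, 0, 0, x, y, 1, -v * x, -v * y, -v])
    # H is the null vector of A: the right singular vector with the
    # smallest singular value.
    _, _, vt = np.linalg.svd(np.asarray(A, dtype=np.float64))
    H = vt[-1].reshape(3, 3)
    return H / H[2, 2]  # fix the scale (assumes H[2, 2] != 0)

def apply_homography(H, pts):
    """Map an (N, 2) array of points through H in homogeneous coordinates."""
    pts_h = np.hstack([pts, np.ones((len(pts), 1))])
    mapped = pts_h @ H.T
    return mapped[:, :2] / mapped[:, 2:3]

# Four designated points in frame t and their locations in frame t+1
# (hypothetical coordinates for illustration only).
src = np.array([[0.0, 0.0], [100.0, 0.0], [100.0, 100.0], [0.0, 100.0]])
dst = np.array([[12.0, 8.0], [108.0, 15.0], [102.0, 118.0], [5.0, 110.0]])
H = homography_from_points(src, dst)
assert np.allclose(apply_homography(H, src), dst)
```

With exactly four correspondences the DLT system is minimal and the fit is exact; the difficulty the paper addresses is predicting such correlated point motion reliably on textureless regions, where descriptor-based matching breaks down.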

Electronic Supplementary Material

jcst-37-3-615-Highlights.pdf (523 KB)

Journal of Computer Science and Technology
Pages 615-625
Cite this article:
Chen Z, Fang X-N, Zhang S-H. Local Homography Estimation on User-Specified Textureless Regions. Journal of Computer Science and Technology, 2022, 37(3): 615-625. https://doi.org/10.1007/s11390-022-2185-7


Received: 25 January 2022
Accepted: 25 April 2022
Published: 31 May 2022
©Institute of Computing Technology, Chinese Academy of Sciences 2022