Research Article | Open Access

Recurrent 3D attentional networks for end-to-end active object recognition

School of Computer, National University of Defense Technology, Changsha 410073, China.
Department of Computer Science and Electrical & Computer Engineering, University of Maryland, College Park, MD 20742, USA.
Visual Computing Research Center, Shenzhen University, Shenzhen 518060, China.

Abstract

Active vision is inherently attention-driven: an agent actively selects views to attend to in order to perform a vision task rapidly while improving its internal representation of the scene being observed. Inspired by the recent success of attention-based models in 2D vision tasks using single RGB images, we address multi-view depth-based active object recognition with an attention mechanism, using an end-to-end recurrent 3D attentional network. The architecture exploits a recurrent neural network to store and update an internal representation. Trained on 3D shape datasets, our model iteratively attends to the best views of a target object in order to recognize it. To realize 3D view selection, we derive a 3D spatial transformer network. It is differentiable, allowing training with backpropagation and thus achieving much faster convergence than the reinforcement learning employed by most existing attention-based models. Experiments show that our method, using only depth input, achieves state-of-the-art next-best-view performance in terms of both time taken and recognition accuracy.
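To make the described pipeline concrete, below is a minimal sketch of such a recurrent attentional loop in PyTorch, written for illustration only: it is not the authors' implementation. The encoder sizes, the two-parameter (azimuth, elevation) view encoding, the three-step view budget, the 40-class output, and the `render_fn` renderer are all assumptions; in the paper, differentiable view selection is performed by the proposed 3D spatial transformer network.

```python
# Illustrative sketch (NOT the authors' code): a CNN encodes each depth view,
# a GRU cell maintains the internal representation across views, and a small
# head regresses the next viewing direction. All layer sizes are assumptions.
import torch
import torch.nn as nn

class RecurrentViewAttention(nn.Module):
    def __init__(self, num_classes=40, hidden=256):
        super().__init__()
        # Encoder for one single-channel depth image.
        self.encoder = nn.Sequential(
            nn.Conv2d(1, 32, kernel_size=5, stride=2, padding=2), nn.ReLU(),
            nn.Conv2d(32, 64, kernel_size=5, stride=2, padding=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(4),
            nn.Flatten(),
            nn.Linear(64 * 4 * 4, hidden),
        )
        # Recurrent cell: stores and updates the internal representation.
        self.rnn = nn.GRUCell(hidden, hidden)
        self.classifier = nn.Linear(hidden, num_classes)
        # Hypothetical view head: (azimuth, elevation) offset for the next view.
        self.next_view = nn.Linear(hidden, 2)

    def forward(self, render_fn, batch_size, steps=3):
        # render_fn(view) -> depth images of shape (B, 1, H, W). For
        # end-to-end backpropagation it must be differentiable w.r.t. `view`,
        # the role played by the paper's 3D spatial transformer.
        h = torch.zeros(batch_size, self.rnn.hidden_size)
        view = torch.zeros(batch_size, 2)
        for _ in range(steps):
            feat = self.encoder(render_fn(view))
            h = self.rnn(feat, h)                        # update internal state
            view = view + torch.tanh(self.next_view(h))  # attend to a new view
        return self.classifier(h)

# Smoke test with a stand-in renderer that ignores the view parameters.
model = RecurrentViewAttention()
dummy_render = lambda v: torch.zeros(v.size(0), 1, 64, 64)
logits = model(dummy_render, batch_size=8)  # -> shape (8, 40)
```

The key design point the sketch highlights is that the whole loop is a single differentiable computation graph, so view selection can be trained with ordinary backpropagation; this is what distinguishes the approach from attention models trained with reinforcement learning.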

Computational Visual Media, Vol. 5, No. 1, Pages 91-104
Cite this article:
Liu M, Shi Y, Zheng L, et al. Recurrent 3D attentional networks for end-to-end active object recognition. Computational Visual Media, 2019, 5(1): 91-104. https://doi.org/10.1007/s41095-019-0135-2

Revised: 25 December 2018
Accepted: 28 January 2019
Published: 08 April 2019
© The Author(s) 2019

This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made.

The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder.

To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.

Other papers from this open access journal are available free of charge from http://www.springer.com/journal/41095. To submit a manuscript, please go to https://www.editorialmanager.com/cvmj.
