Open Access

Web3D Learning Framework for 3D Shape Retrieval Based on Hybrid Convolutional Neural Networks

School of Computer and Information, Anhui Normal University, Wuhu 241002, China.
School of Software Engineering, Tongji University, Shanghai 201804, China.
College of Electronics and Information Engineering, Tongji University, Shanghai 201804, China.
School of Engineering and Computer Science, University of Hull, Hull, HU6 7RX, UK.

Abstract

With the rapid development of Web3D technologies, sketch-based model retrieval has become an increasingly important challenge, while the application of Virtual Reality and 3D technologies has made shape retrieval of furniture over a web browser feasible. In this paper, we propose a learning framework for shape retrieval based on two Siamese VGG-16 Convolutional Neural Networks (CNNs), together with a CNN-based hybrid learning algorithm that selects the best view for a shape. In this algorithm, the AlexNet and VGG-16 architectures are used to perform classification and to extract features, respectively. In addition, a feature fusion method is used to measure the similarity between the output features of the two Siamese networks. The proposed framework provides a new alternative for furniture retrieval in the Web3D environment. Its primary innovation lies in employing deep learning both to obtain the best view of a 3D furniture model and to address the cross-domain feature learning problem. We conduct experiments to verify the feasibility of the framework, and the results show that our approach outperforms many mainstream state-of-the-art approaches.
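
To make the cross-domain retrieval step concrete, the sketch below outlines a two-branch Siamese arrangement of the kind described in the abstract: one VGG-16 branch embeds the query sketch, the other embeds the selected best view of a candidate 3D shape, and similarity between the normalized embeddings ranks the candidates. This is a minimal illustrative sketch assuming a PyTorch/torchvision implementation, not the authors' code; the class and variable names (SiameseVGG, sketch_branch, view_branch, embed_dim) are hypothetical, and the paper's exact feature fusion, loss, and AlexNet-based best-view selection are not reproduced here.

```python
# Minimal illustrative sketch (assumed PyTorch/torchvision), not the authors' implementation.
import torch
import torch.nn.functional as F
from torchvision import models


class SiameseVGG(torch.nn.Module):
    """Two VGG-16 branches: one for hand-drawn sketches, one for rendered shape views."""

    def __init__(self, embed_dim: int = 256):
        super().__init__()
        self.sketch_branch = models.vgg16()  # pretrained weights could be loaded in practice
        self.view_branch = models.vgg16()
        # Replace the 1000-way ImageNet head with a shared-size embedding layer.
        self.sketch_branch.classifier[6] = torch.nn.Linear(4096, embed_dim)
        self.view_branch.classifier[6] = torch.nn.Linear(4096, embed_dim)

    def forward(self, sketches: torch.Tensor, views: torch.Tensor):
        # Each branch maps its inputs to an L2-normalized embedding in a shared space.
        f_sketch = F.normalize(self.sketch_branch(sketches), dim=1)
        f_view = F.normalize(self.view_branch(views), dim=1)
        return f_sketch, f_view


if __name__ == "__main__":
    model = SiameseVGG().eval()
    sketch = torch.randn(1, 3, 224, 224)  # one query sketch, replicated to 3 channels
    views = torch.randn(8, 3, 224, 224)   # best views of 8 candidate furniture models
    with torch.no_grad():
        f_sketch, f_views = model(sketch, views)
    # Since embeddings are L2-normalized, the dot product equals cosine similarity.
    scores = (f_views @ f_sketch.T).squeeze(1)   # shape: (8,)
    ranking = scores.argsort(descending=True)    # most similar shapes first
    print(ranking.tolist())
```

In a full system, the two branches would typically be trained with a contrastive-style objective so that sketch and view embeddings of matching furniture are pulled together, while the view fed to the second branch would be chosen beforehand by the classification-driven best-view algorithm described in the abstract.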

Tsinghua Science and Technology
Pages 93-102
Cite this article:
Zhou W, Jia J, Huang C, et al. Web3D Learning Framework for 3D Shape Retrieval Based on Hybrid Convolutional Neural Networks. Tsinghua Science and Technology, 2020, 25(1): 93-102. https://doi.org/10.26599/TST.2018.9010113

Views: 546 | Downloads: 69 | Crossref: 14 | Web of Science: N/A | Scopus: 18 | CSCD: 1

Revised: 02 July 2018
Accepted: 20 July 2018
Published: 22 July 2019
© The author(s) 2020

The articles published in this open access journal are distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/).
