Research Article | Open Access

3D computational modeling and perceptual analysis of kinetic depth effects

Meng-Yao Cui¹, Shao-Ping Lu¹ (✉), Miao Wang², Yong-Liang Yang³, Yu-Kun Lai⁴, Paul L. Rosin⁴
¹ TKLNDST, CS, Nankai University, Tianjin, China
² State Key Laboratory of Virtual Reality Technology and Systems, Beihang University, Beijing, China
³ Department of Computer Science, University of Bath, UK
⁴ School of Computer Science and Informatics, Cardiff University, Wales, UK

Abstract

Humans have the ability to perceive kinetic depth effects, i.e., to perceive 3D shapes from 2D projections of rotating 3D objects. This process relies on a variety of visual cues such as lighting and shading effects. However, when such cues are weak or missing, perception can become faulty, as demonstrated by the famous silhouette illusion of the spinning dancer. Inspired by this, we establish objective and subjective evaluation models of rotated 3D objects by taking their projected 2D images as input. We investigate five different cues: ambient luminance, shading, rotation speed, perspective, and color difference between the objects and the background. In the objective evaluation model, we first apply 3D reconstruction algorithms to obtain an objective reconstruction quality metric, and then use quadratic stepwise regression analysis to determine the weights of the depth cues that best represent the reconstruction quality. In the subjective evaluation model, we use a comprehensive user study to reveal how reaction time and accuracy correlate with rotation speed and perspective. The two evaluation models are generally consistent, and are potentially of benefit to interdisciplinary research into visual perception and 3D reconstruction.
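To make the objective pipeline concrete, below is a minimal sketch of the quadratic stepwise regression step: given per-stimulus settings of the five cues and an objective reconstruction-quality score for each rendering, it greedily adds linear and quadratic cue terms while doing so improves the model. This is not the authors' released code; the variable names, the AIC stopping criterion, the statsmodels dependency, and the synthetic data are illustrative assumptions (the abstract does not specify the selection criterion).

```python
# Hypothetical sketch of the "quadratic stepwise regression" step: fit
# reconstruction quality as a function of five depth cues, greedily adding
# linear and quadratic terms while they lower the model's AIC.
import itertools
import numpy as np
import statsmodels.api as sm

CUES = ["luminance", "shading", "speed", "perspective", "color_diff"]

def quadratic_terms(X):
    """Expand an (n, 5) cue matrix with squares and pairwise products."""
    cols, names = [X[:, i] for i in range(X.shape[1])], list(CUES)
    for i, j in itertools.combinations_with_replacement(range(X.shape[1]), 2):
        cols.append(X[:, i] * X[:, j])
        names.append(f"{CUES[i]}*{CUES[j]}")
    return np.column_stack(cols), names

def forward_stepwise(X, y):
    """Greedy forward selection: repeatedly add the term that most lowers AIC."""
    Xq, names = quadratic_terms(X)
    chosen, remaining = [], list(range(Xq.shape[1]))
    best_aic = sm.OLS(y, np.ones((len(y), 1))).fit().aic  # intercept-only baseline
    while remaining:
        trials = []
        for k in remaining:
            design = sm.add_constant(Xq[:, chosen + [k]])
            trials.append((sm.OLS(y, design).fit().aic, k))
        aic, k = min(trials)
        if aic >= best_aic:  # no candidate term improves the fit; stop
            break
        best_aic = aic
        chosen.append(k)
        remaining.remove(k)
    model = sm.OLS(y, sm.add_constant(Xq[:, chosen])).fit()
    return model, [names[k] for k in chosen]

# Usage with synthetic data; in the paper's setting, X would hold the cue
# settings per stimulus and y the objective reconstruction-quality score
# obtained by comparing each reconstruction against the ground-truth model.
rng = np.random.default_rng(0)
X = rng.uniform(0, 1, size=(60, 5))
y = 0.8 * X[:, 1] + 0.5 * X[:, 2] ** 2 + rng.normal(0, 0.05, 60)
model, terms = forward_stepwise(X, y)
print(terms, model.params.round(2))
```

The fitted coefficients of the selected terms play the role of the cue weights: cues whose terms survive selection contribute measurably to reconstruction quality, while cues that are never selected do not.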

Computational Visual Media
Pages 265-277
Cite this article:
Cui M-Y, Lu S-P, Wang M, et al. 3D computational modeling and perceptual analysis of kinetic depth effects. Computational Visual Media, 2020, 6(3): 265-277. https://doi.org/10.1007/s41095-020-0180-x


Received: 01 April 2020
Accepted: 08 May 2020
Published: 13 August 2020
© The Author(s) 2020

This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made.

The images or other third party material in this article are included in the article’s Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article’s Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder.

To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.

Other papers from this open access journal are available free of charge from http://www.springer.com/journal/41095. To submit a manuscript, please go to https://www.editorialmanager.com/cvmj.
