Research Article | Open Access

AdaPIP: Adaptive picture-in-picture guidance for 360° film watching

Department of Computer Science and Technology, Tsinghua University, Beijing, China, and Beijing National Research Center for Information Science and Technology (BNRist), Tsinghua University, Beijing, China
Key Laboratory of Space Utilization, Technology and Engineering Center for Space Utilization, Chinese Academy of Sciences, Beijing, China
Victoria University of Wellington, Wellington, New Zealand

Abstract

360° videos allow viewers to look freely in any direction, but this freedom inevitably prevents them from perceiving all of the helpful information at once. To mitigate this problem, picture-in-picture (PIP) guidance was proposed, which uses preview windows to show regions of interest (ROIs) outside the current view range. We identify several drawbacks of this representation and propose a new method for 360° film watching called AdaPIP. AdaPIP enhances traditional PIP by adaptively arranging preview windows with changeable view ranges and sizes. In addition, AdaPIP incorporates the advantage of arrow-based guidance by presenting circular windows with attached arrows, helping users locate the corresponding ROIs more efficiently. We also adapted AdaPIP and Outside-In to HMD-based immersive virtual reality environments to demonstrate the usability of PIP-guided approaches beyond 2D screens. Comprehensive user experiments on 2D screens as well as in VR environments indicate that AdaPIP is superior to alternative methods in terms of visual experience while maintaining a comparable degree of immersion.

Electronic Supplementary Material

Video: 41095_0347_ESM.mp4

Computational Visual Media, Pages 487–503
Cite this article:
Li Y-X, Luo G, Xu Y-K, et al. AdaPIP: Adaptive picture-in-picture guidance for 360° film watching. Computational Visual Media, 2024, 10(3): 487-503. https://doi.org/10.1007/s41095-023-0347-3


Received: 18 February 2023
Accepted: 31 March 2023
Published: 02 May 2024
© The Author(s) 2024.

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made.

The images or other third party material in this article are included in the article’s Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article’s Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder.

To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.

