Research Article | Open Access

Fuzzy-based indoor scene modeling with differentiated examples

School of Digital Media and Design Arts, Beijing University of Posts and Telecommunications, Beijing, China
School of Creative Media, City University of Hong Kong, Hong Kong, China
Department of Computer Science, University of Houston, TX, USA
Abstract

Well-designed indoor scenes incorporate interior design knowledge, which has been an essential prior for most indoor scene modeling methods. However, the layout quality of indoor scene datasets is often uneven, and most existing data-driven methods do not differentiate indoor scene examples in terms of quality. In this work, we explore an approach that leverages datasets with differentiated indoor scene examples for indoor scene modeling. Our solution conducts subjective evaluations on lightweight datasets with various room configurations and furniture layouts, via pairwise comparisons based on fuzzy set theory. We also develop a system that uses such examples to guide indoor scene modeling with user-specified objects. Specifically, we focus on object groups associated with certain human activities, and define room features to encode the relations between the position and direction of an object group and the room configuration. To perform indoor scene modeling, given an empty room, our system first assesses it in terms of the user-specified object groups, and then places the associated objects in the room guided by the assessment results. A series of experimental results and comparisons to state-of-the-art indoor scene synthesis methods validate the usefulness and effectiveness of our approach.
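To illustrate the pairwise-comparison idea in the abstract, the sketch below ranks scene examples from subjective pairwise votes using a fuzzy preference relation. This is a minimal, hypothetical illustration of the general technique (fuzzy preference relations over pairwise comparisons), not the paper's exact formulation; all function names and the row-mean aggregation are assumptions.

```python
# Hypothetical sketch: scoring scene examples from pairwise comparisons
# via a fuzzy preference relation (not the paper's exact method).

def fuzzy_preference(votes_ij, votes_ji):
    """Membership degree in [0, 1] that scene i is preferred to scene j,
    estimated as the fraction of raters preferring i over j."""
    total = votes_ij + votes_ji
    return 0.5 if total == 0 else votes_ij / total

def rank_scenes(vote_matrix):
    """vote_matrix[i][j] = number of raters preferring scene i over scene j.
    Returns one quality score per scene: its mean fuzzy preference
    over all other scenes."""
    n = len(vote_matrix)
    scores = []
    for i in range(n):
        prefs = [fuzzy_preference(vote_matrix[i][j], vote_matrix[j][i])
                 for j in range(n) if j != i]
        scores.append(sum(prefs) / len(prefs))
    return scores

# Toy example: 3 scenes, entry [i][j] counts votes for i over j.
votes = [
    [0, 8, 6],
    [2, 0, 5],
    [4, 5, 0],
]
scores = rank_scenes(votes)
print(scores)  # scene 0 ranks highest in this toy data
```

Scores like these could then serve as per-example quality weights when selecting or weighting training examples for scene modeling.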

Computational Visual Media
Pages 717-732
Cite this article:
Fu Q, He S, Fu H, et al. Fuzzy-based indoor scene modeling with differentiated examples. Computational Visual Media, 2023, 9(4): 717-732. https://doi.org/10.1007/s41095-022-0299-z


Received: 17 March 2022
Accepted: 15 June 2022
Published: 23 May 2023
© The Author(s) 2023.

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made.

The images or other third party material in this article are included in the article’s Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article’s Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder.

To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.

Other papers from this open access journal are available free of charge from http://www.springer.com/journal/41095. To submit a manuscript, please go to https://www.editorialmanager.com/cvmj.
