Research Article | Open Access

GPU based techniques for deep image merging

School of Science, RMIT University, Melbourne, 3000, Australia.

Abstract

Deep images store multiple fragments per pixel, each of which includes colour and depth, unlike traditional 2D flat images, which store only a single colour value and possibly a depth value. Recently, deep images have found use in an increasing number of applications, including ones involving transparency and compositing. A step in compositing deep images requires merging per-pixel fragment lists in depth order; little work has so far been presented on fast approaches.
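To make the data structure concrete, the following is a minimal CPU-side sketch (not code from the paper; the Fragment fields and function names are illustrative assumptions) of a per-pixel fragment record and a depth-ordered merge of two already-sorted fragment lists:

    #include <algorithm>
    #include <iterator>
    #include <vector>

    // Hypothetical fragment record: one colour/depth sample of a deep pixel.
    struct Fragment {
        float r, g, b, a;   // colour (e.g. premultiplied alpha)
        float depth;        // distance from the camera
    };

    // Merge two per-pixel fragment lists, each already sorted front to back,
    // into a single depth-ordered list -- the core step when merging two deep
    // images at one pixel.
    std::vector<Fragment> mergeDepthOrdered(const std::vector<Fragment>& a,
                                            const std::vector<Fragment>& b) {
        std::vector<Fragment> out;
        out.reserve(a.size() + b.size());
        std::merge(a.begin(), a.end(), b.begin(), b.end(),
                   std::back_inserter(out),
                   [](const Fragment& x, const Fragment& y) {
                       return x.depth < y.depth;
                   });
        return out;
    }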

This paper explores GPU based merging of deep images using different memory layouts for fragment lists: linked lists, linearised arrays, and interleaved arrays. We also report performance improvements from techniques which leverage the GPU memory hierarchy by processing blocks of fragment data in fast registers, following approaches previously used to speed up transparency rendering. We report results both for compositing directly from two deep images and for saving the resulting deep image before compositing, as well as for an iterated pairwise merge of multiple deep images. Our results show a 2 to 6 fold improvement from combining an efficient memory layout with fast register based merging.
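For intuition about how the memory layouts differ, here is a small indexing sketch (an illustrative assumption, not code from the paper; the function names and capacity parameter are hypothetical) contrasting linearised and interleaved storage of per-pixel fragment lists in a single flat buffer:

    #include <cstddef>

    // Linearised layout: each pixel's fragments are stored contiguously.
    // Good locality when one thread walks its own whole list.
    inline std::size_t linearisedIndex(std::size_t pixel, std::size_t frag,
                                       std::size_t maxFragsPerPixel) {
        return pixel * maxFragsPerPixel + frag;
    }

    // Interleaved layout: fragment i of every pixel is stored contiguously,
    // so neighbouring GPU threads (one per pixel) reading fragment i touch
    // neighbouring addresses, which favours coalesced memory access.
    inline std::size_t interleavedIndex(std::size_t pixel, std::size_t frag,
                                        std::size_t numPixels) {
        return frag * numPixels + pixel;
    }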

Computational Visual Media
Pages 277–285
Cite this article:
Archer J, Leach G, van Schyndel R. GPU based techniques for deep image merging. Computational Visual Media, 2018, 4(3): 277–285. https://doi.org/10.1007/s41095-018-0118-8

Revised: 23 December 2017
Accepted: 02 May 2018
Published: 04 August 2018
© The Author(s) 2018

This article is published with open access at Springerlink.com

The articles published in this journal are distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.

Other papers from this open access journal are available free of charge from http://www.springer.com/journal/41095. To submit a manuscript, please go to https://www.editorialmanager.com/cvmj.
