Open Access

A Survey on Multiview Video Synthesis and Editing

Department of Electronics and Informatics (ETRO), Vrije Universiteit Brussel (VUB), B-1050 Brussels, Belgium.
TNList, Tsinghua University, Beijing 100084, China.

Abstract

Multiview video provides a more immersive perception than traditional single-view 2-D video. It enables both interactive free-navigation applications and high-end autostereoscopic displays on which multiple users can perceive genuine 3-D content without glasses. The multiview format also contains far more visual information than classical 2-D or stereo 3-D content, which makes a variety of interesting editing operations possible at both the pixel level and the object level. This survey provides a comprehensive review of existing multiview video synthesis and editing algorithms and applications. For each topic, we first review the related technologies in classical 2-D image and video processing, and then discuss recent advances in virtual view synthesis for multiview video and in various interactive editing applications. Given the ongoing progress in multiview video synthesis and editing, we foresee that more and more immersive 3-D video applications will appear in the future.
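The virtual view synthesis discussed in this survey is commonly built on depth-image-based rendering (DIBR): pixels of a reference view are forward-warped into a virtual camera position according to their depth, and the resulting holes (disocclusions) are filled by inpainting. The following is a minimal sketch of the warping step for a rectified horizontal camera shift, not any specific algorithm from the survey; the function name `dibr_warp` and the parameters are illustrative assumptions.

```python
import numpy as np

def dibr_warp(ref_img, depth, baseline, focal):
    """Forward-warp a grayscale reference view to a horizontally
    shifted virtual view (rectified setup, illustrative sketch).

    Each pixel moves left by disparity = focal * baseline / depth.
    Target pixels that receive no source pixel stay at -1: these are
    the disocclusion holes that inpainting must fill afterwards.
    """
    h, w = depth.shape
    out = -np.ones_like(ref_img, dtype=float)   # -1 marks holes
    zbuf = np.full((h, w), np.inf)              # z-buffer: keep nearest surface
    for y in range(h):
        for x in range(w):
            d = focal * baseline / depth[y, x]  # disparity in pixels
            xt = int(round(x - d))              # target column
            if 0 <= xt < w and depth[y, x] < zbuf[y, xt]:
                zbuf[y, xt] = depth[y, x]
                out[y, xt] = ref_img[y, x]
    return out

# A 1x4 toy image at constant depth 2.0 with focal*baseline = 2
# shifts every pixel one column left, leaving a hole on the right.
img = np.array([[10.0, 20.0, 30.0, 40.0]])
depth = np.full((1, 4), 2.0)
warped = dibr_warp(img, depth, baseline=1.0, focal=2.0)
```

The z-buffer resolves the case where two source pixels land on the same target column: the nearer surface wins, which is what produces correct occlusion ordering in the virtual view.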

Tsinghua Science and Technology
Pages 678-695
Cite this article:
Lu S, Mu T, Zhang S. A Survey on Multiview Video Synthesis and Editing. Tsinghua Science and Technology, 2016, 21(6): 678-695. https://doi.org/10.1109/TST.2016.7787010

Received: 14 October 2016
Accepted: 21 October 2016
Published: 19 December 2016
© The author(s) 2016