Author Pesquet-Popescu, Béatrice
Title Emerging Technologies for 3D Video : Creation, Coding, Transmission and Rendering
Published Somerset : John Wiley & Sons, Incorporated, 2013
©2013
ISBN 9781118583586 (electronic bk.)
9781118355114
Edition 1st ed.
Description 1 online resource (520 pages)
Content type: text (txt, rdacontent)
Media type: computer (c, rdamedia)
Carrier type: online resource (cr, rdacarrier)
Note Emerging Technologies for 3D Video: Creation, Coding, Transmission and Rendering -- Contents -- Preface -- List of Contributors -- Acknowledgements -- Part I: Content Creation -- 1 Consumer Depth Cameras and Applications -- 1.1 Introduction -- 1.2 Time-of-Flight Depth Camera -- 1.2.1 Principle -- 1.2.2 Quality of the Measured Distance -- 1.3 Structured Light Depth Camera -- 1.3.1 Principle -- 1.4 Specular and Transparent Depth -- 1.5 Depth Camera Applications -- 1.5.1 Interaction -- 1.5.2 Three-Dimensional Reconstruction -- References -- 2 SFTI: Space-from-Time Imaging -- 2.1 Introduction -- 2.2 Background and Related Work -- 2.2.1 Light Fields, Reflectance Distribution Functions, and Optical Image Formation -- 2.2.2 Time-of-Flight Methods for Estimating Scene Structure -- 2.2.3 Synthetic Aperture Radar for Estimating Scene Reflectance -- 2.3 Sampled Response of One Source-Sensor Pair -- 2.3.1 Scene, Illumination, and Sensor Abstractions -- 2.3.2 Scene Response Derivation -- 2.3.3 Inversion -- 2.4 Diffuse Imaging: SFTI for Estimating Scene Reflectance -- 2.4.1 Response Modeling -- 2.4.2 Image Recovery using Linear Backprojection -- 2.5 Compressive Depth Acquisition: SFTI for Estimating Scene Structure -- 2.5.1 Single-Plane Response to Omnidirectional Illumination -- 2.5.2 Spatially-Patterned Measurement -- 2.5.3 Algorithms for Depth Map Reconstruction -- 2.6 Discussion and Future Work -- Acknowledgments -- References -- 3 2D-to-3D Video Conversion: Overview and Perspectives -- 3.1 Introduction -- 3.2 The 2D-to-3D Conversion Problem -- 3.2.1 General Conversion Approach -- 3.2.2 Depth Cues in Monoscopic Video -- 3.3 Definition of Depth Structure of the Scene -- 3.3.1 Depth Creation Methods -- 3.3.2 Depth Recovery Methods -- 3.4 Generation of the Second Video Stream -- 3.4.1 Depth to Disparity Mapping -- 3.4.2 View Synthesis and Rendering Techniques
3.4.3 Post-Processing for Hole-Filling -- 3.5 Quality of Experience of 2D-to-3D Conversion -- 3.6 Conclusions -- References -- 4 Spatial Plasticity: Dual-Camera Configurations and Variable Interaxial -- 4.1 Stereoscopic Capture -- 4.2 Dual-Camera Arrangements in the 1950s -- 4.3 Classic "Beam-Splitter" Technology -- 4.4 The Dual-Camera Form Factor and Camera Mobility -- 4.5 Reduced 3D Form Factor of the Digital CCD Sensor -- 4.6 Handheld Shooting with Variable Interaxial -- 4.7 Single-Body Camera Solutions for Stereoscopic Cinematography -- 4.8 A Modular 3D Rig -- 4.9 Human Factors of Variable Interaxial -- References -- Part II: Representation, Coding and Transmission -- 5 Disparity Estimation Techniques -- 5.1 Introduction -- 5.2 Geometrical Models for Stereoscopic Imaging -- 5.2.1 The Pinhole Camera Model -- 5.2.2 Stereoscopic Imaging Systems -- 5.3 Stereo Matching Process -- 5.3.1 Disparity Information -- 5.3.2 Difficulties in the Stereo Matching Process -- 5.3.3 Stereo Matching Constraints -- 5.3.4 Fundamental Steps Involved in Stereo Matching Algorithms -- 5.4 Overview of Disparity Estimation Methods -- 5.4.1 Local Methods -- 5.4.2 Global Methods -- 5.5 Conclusion -- References -- 6 3D Video Representation and Formats -- 6.1 Introduction -- 6.2 Three-Dimensional Video Representation -- 6.2.1 Stereoscopic 3D (S3D) Video -- 6.2.2 Multiview Video (MVV) -- 6.2.3 Video-Plus-Depth -- 6.2.4 Multiview Video-Plus-Depth (MVD) -- 6.2.5 Layered Depth Video (LDV) -- 6.3 Three-Dimensional Video Formats -- 6.3.1 Simulcast -- 6.3.2 Frame-Compatible Stereo Interleaving -- 6.3.3 MPEG-4 Multiple Auxiliary Components (MAC) -- 6.3.4 MPEG-C Part 3 -- 6.3.5 MPEG-2 Multiview Profile (MVP) -- 6.3.6 Multiview Video Coding (MVC) -- 6.4 Perspectives -- Acknowledgments -- References -- 7 Depth Video Coding Technologies -- 7.1 Introduction
7.2 Depth Map Analysis and Characteristics -- 7.3 Depth Map Coding Tools -- 7.3.1 Tools that Exploit the Inherent Characteristics of Depth Maps -- 7.3.2 Tools that Exploit the Correlations with the Associated Texture -- 7.3.3 Tools that Optimize Depth Map Coding for the Quality of the Synthesis -- 7.4 Application Example: Depth Map Coding Using "Don't Care" Regions -- 7.4.1 Derivation of "Don't Care" Regions -- 7.4.2 Transform Domain Sparsification Using "Don't Care" Regions -- 7.4.3 Using "Don't Care" Regions in a Hybrid Video Codec -- 7.5 Concluding Remarks -- Acknowledgments -- References -- 8 Depth-Based 3D Video Formats and Coding Technology -- 8.1 Introduction -- 8.1.1 Existing Stereo/Multiview Formats -- 8.1.2 Requirements for Depth-Based Format -- 8.1.3 Chapter Organization -- 8.2 Depth Representation and Rendering -- 8.2.1 Depth Format and Representation -- 8.2.2 Depth-Image-Based Rendering -- 8.3 Coding Architectures -- 8.3.1 AVC-Based Architecture -- 8.3.2 HEVC-Based Architecture -- 8.3.3 Hybrid -- 8.4 Compression Technology -- 8.4.1 Inter-View Prediction -- 8.4.2 View Synthesis Prediction -- 8.4.3 Depth Resampling and Filtering -- 8.4.4 Inter-Component Parameter Prediction -- 8.4.5 Depth Modelling -- 8.4.6 Bit Allocation -- 8.5 Experimental Evaluation -- 8.5.1 Evaluation Framework -- 8.5.2 AVC-Based 3DV Coding Results -- 8.5.3 HEVC-Based 3DV Coding Results -- 8.5.4 General Observations -- 8.6 Concluding Remarks -- References -- 9 Coding for Interactive Navigation in High-Dimensional Media Data -- 9.1 Introduction -- 9.2 Challenges and Approaches of Interactive Media Streaming -- 9.2.1 Challenges: Coding Efficiency and Navigation Flexibility -- 9.2.2 Approaches to Interactive Media Streaming -- 9.3 Example Solutions -- 9.3.1 Region-of-Interest (RoI) Image Browsing -- 9.3.2 Light-Field Streaming -- 9.3.3 Volumetric Image Random Access
9.3.4 Video Browsing -- 9.3.5 Reversible Video Playback -- 9.3.6 Region-of-Interest (RoI) Video Streaming -- 9.4 Interactive Multiview Video Streaming -- 9.4.1 Interactive Multiview Video Streaming (IMVS) -- 9.4.2 IMVS with Free Viewpoint Navigation -- 9.4.3 IMVS with Fixed Round-Trip Delay -- 9.5 Conclusion -- References -- 10 Adaptive Streaming of Multiview Video Over P2P Networks -- 10.1 Introduction -- 10.2 P2P Overlay Networks -- 10.2.1 Overlay Topology -- 10.2.2 Sender-Driven versus Receiver-Driven P2P Video Streaming -- 10.2.3 Layered versus Cross-Layer Architecture -- 10.2.4 When P2P is Useful: Regions of Operation -- 10.2.5 BitTorrent: A Platform for File Sharing -- 10.3 Monocular Video Streaming Over P2P Networks -- 10.3.1 Video Coding -- 10.3.2 Variable-Size Chunk Generation -- 10.3.3 Time-Sensitive Chunk Scheduling Using Windowing -- 10.3.4 Buffer-Driven Rate Adaptation -- 10.3.5 Adaptive Window Size and Scheduling Restrictions -- 10.3.6 Multiple Requests from Multiple Peers of a Single Chunk -- 10.4 Stereoscopic Video Streaming over P2P Networks -- 10.4.1 Stereoscopic Video over Digital TV -- 10.4.2 Rate Adaptation in Stereo Streaming: Asymmetric Coding -- 10.4.3 Use Cases: Stereoscopic Video Streaming over P2P Network -- 10.5 MVV Streaming over P2P Networks -- 10.5.1 MVV Streaming over IP -- 10.5.2 Rate Adaptation for MVV: View Scaling -- 10.5.3 Use Cases: MVV Streaming over P2P Network -- References -- Part III: Rendering and Synthesis -- 11 Image Domain Warping for Stereoscopic 3D Applications -- 11.1 Introduction -- 11.2 Background -- 11.3 Image Domain Warping -- 11.4 Stereo Mapping -- 11.4.1 Problems in Stereoscopic Viewing -- 11.4.2 Disparity Range -- 11.4.3 Disparity Sensitivity -- 11.4.4 Disparity Velocity -- 11.4.5 Summary -- 11.4.6 Disparity Mapping Operators -- 11.4.7 Linear Operator -- 11.4.8 Nonlinear Operator
11.4.9 Temporal Operator -- 11.5 Warp-Based Disparity Mapping -- 11.5.1 Data Extraction -- 11.5.2 Warp Calculation -- 11.5.3 Applications -- 11.6 Automatic Stereo to Multiview Conversion -- 11.6.1 Automatic Stereo to Multiview Conversion -- 11.6.2 Position Constraints -- 11.6.3 Warp Interpolation and Extrapolation -- 11.6.4 Three-Dimensional Video Transmission Systems for Multiview Displays -- 11.7 IDW for User-Driven 2D-3D Conversion -- 11.7.1 Technical Challenges of 2D-3D Conversion -- 11.8 Multi-Perspective Stereoscopy from Light Fields -- 11.9 Conclusions and Outlook -- Acknowledgments -- References -- 12 Image-Based Rendering and the Sampling of the Plenoptic Function -- 12.1 Introduction -- 12.2 Parameterization of the Plenoptic Function -- 12.2.1 Light Field and Surface Light Field Parameterization -- 12.2.2 Epipolar Plane Image -- 12.3 Uniform Sampling in a Fourier Framework -- 12.3.1 Spectral Analysis of the Plenoptic Function -- 12.3.2 The Plenoptic Spectrum under Realistic Conditions -- 12.4 Adaptive Plenoptic Sampling -- 12.4.1 Adaptive Sampling Based on Plenoptic Spectral Analysis -- 12.5 Summary -- 12.5.1 Outlook -- References -- 13 A Framework for Image-Based Stereoscopic View Synthesis from Asynchronous Multiview Data -- 13.1 The Virtual Video Camera -- 13.1.1 Navigation Space Embedding -- 13.1.2 Space-Time Tetrahedralization -- 13.1.3 Processing Pipeline -- 13.1.4 Rendering -- 13.1.5 Application -- 13.1.6 Limitations -- 13.2 Estimating Dense Image Correspondences -- 13.2.1 Belief Propagation for Image Correspondences -- 13.2.2 A Symmetric Extension -- 13.2.3 SIFT Descriptor Downsampling -- 13.2.4 Construction of Message-Passing Graph -- 13.2.5 Data Term Compression -- 13.2.6 Occlusion Removal -- 13.2.7 Upsampling and Refinement -- 13.2.8 Limitations -- 13.3 High-Quality Correspondence Edit -- 13.3.1 Editing Operations
13.3.2 Applications
With the expectation of a greatly enhanced user experience, 3D video is widely perceived as the next major advancement in video technology. To fulfil this expectation, 3D video calls for new technologies addressing efficient content creation, representation/coding, transmission and display. Emerging Technologies for 3D Video deals with all aspects involved in 3D video systems and services, including content acquisition and creation, data representation and coding, transmission, view synthesis, rendering, display technologies, human perception of depth, and quality assessment. Key features: offers an overview of key existing technologies for 3D video; provides a discussion of advanced research topics and future technologies; reviews relevant standardization efforts; addresses applications and implementation issues; includes contributions from leading researchers. The book is a comprehensive guide to 3D video systems and services, suitable for all those involved in this field, including engineers, practitioners, researchers, as well as professors, graduate and undergraduate students, and managers making technological decisions about 3D video
Description based on publisher-supplied metadata and other sources
Electronic reproduction. Ann Arbor, Michigan : ProQuest Ebook Central, 2020. Available via World Wide Web. Access may be limited to ProQuest Ebook Central affiliated libraries
Link Print version: Pesquet-Popescu, Béatrice. Emerging Technologies for 3D Video : Creation, Coding, Transmission and Rendering. Somerset : John Wiley & Sons, Incorporated, c2013. 9781118355114
Subject 3-D video -- Standards; Digital video -- Standards
Electronic books
Alt Author Cagnazzo, Marco
Pesquet-Popescu, Béatrice
Dufaux, Frédéric