DS07 - Société de l'information et de la communication

Perceptual Levels of Detail for Interactive and Immersive Remote Visualization of Complex 3D Scenes – PISCo

Submission summary

Three-dimensional (3D) graphics are commonplace in many applications such as digital entertainment, cultural heritage, architecture and scientific simulation. These data are increasingly rich and detailed: a complex 3D scene may contain millions of geometric primitives, enriched with various appearance attributes such as texture maps designed to produce a realistic material appearance, as well as animation data.
The way this 3D content is consumed and visualized is now evolving from standard screens to Virtual and Mixed Reality (VR/MR). However, visualizing and interacting with six degrees of freedom in large and complex 3D scenes remains an unresolved issue in such immersive environments, especially when the scene is stored on a remote server. Two distinct bottlenecks exist: (1) the complexity of a 3D scene that can be displayed to the user on a VR/MR head-mounted display is substantially smaller than on a standard screen, because the GPU must generate about 4 times more images (two images per frame, at a frame rate high enough to prevent motion sickness); (2) since an increasing number of VR/MR applications use 3D data stored on remote servers, strong latency problems may be encountered, caused by streaming the scene to the display device.
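The factor of 4 in bottleneck (1) can be made concrete with a back-of-envelope calculation: a stereo headset renders two views per frame, at roughly twice the frame rate of a comfortable desktop application. The numbers below (45 Hz desktop, 90 Hz headset) are illustrative assumptions, not figures from the project.

```python
# Illustrative rendering-budget arithmetic: a stereo VR headset must produce
# roughly 4x more images per second than a standard screen, leaving about
# a quarter of the per-image time budget. Frame rates are example values.

def images_per_second(views_per_frame: int, frames_per_second: int) -> int:
    """Total images the GPU must produce each second."""
    return views_per_frame * frames_per_second

desktop = images_per_second(views_per_frame=1, frames_per_second=45)  # single view
vr_hmd = images_per_second(views_per_frame=2, frames_per_second=90)   # stereo, high rate

print(vr_hmd / desktop)  # 4.0 -> ~4x more images, so ~1/4 the budget per image
```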
The objective of this proposal is to devise novel algorithms and tools allowing interactive visualization, in these constrained contexts (virtual and mixed reality, with local or remote 3D content), with a very high quality of user experience. Since 3D scenes are visualized through a viewport, we seek to optimize the display in this viewport by proposing (1) tools for generating and compressing high-quality levels of detail, (2) visual quality metrics capable of predicting the quality of these levels of detail and driving their generation, and (3) visual attention models capable of predicting where the observer is looking, and thus selecting and filtering the primitives and levels of detail. A distinctive property of the project lies in the fact that we will consider rich 3D data, including not only geometric information but also animation and complex physically based materials represented by texture maps (color, metalness, roughness, normals).
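To fix ideas on point (3), level-of-detail selection can be as simple as picking a precomputed mesh based on viewing distance. The sketch below shows this classic, generic mechanism; the class, thresholds and mesh names are hypothetical, and the project itself targets richer, perceptually driven selection criteria (quality metrics, visual attention) rather than raw distance.

```python
# Minimal, generic sketch of distance-based level-of-detail (LOD) selection.
# Levels are sorted from most to least detailed; each one covers a distance
# range. All names and thresholds here are illustrative examples.

from dataclasses import dataclass

@dataclass
class LodLevel:
    mesh_id: str         # identifier of a simplified version of the mesh
    max_distance: float  # use this level while the object is closer than this

def select_lod(levels: list[LodLevel], distance: float) -> LodLevel:
    """Return the most detailed level whose range covers the distance."""
    for level in levels:
        if distance <= level.max_distance:
            return level
    return levels[-1]  # beyond every range: fall back to the coarsest level

levels = [
    LodLevel("statue_high", 5.0),
    LodLevel("statue_mid", 20.0),
    LodLevel("statue_low", float("inf")),
]

print(select_lod(levels, 2.0).mesh_id)   # statue_high
print(select_lod(levels, 12.0).mesh_id)  # statue_mid
```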
The proposed tools will address both the transmission latency problems encountered with remote 3D content and the rendering constraints of virtual and mixed reality. We plan to implement two prototypes: a virtual reality prototype on the HTC Vive device and a mixed reality prototype on the HoloLens device from Microsoft.

Project coordination

Guillaume Lavoué (Institut National des Sciences Appliquées de Lyon - Laboratoire d'Informatique en Images et Systèmes d'Information)

The author of this summary is the project coordinator, who is responsible for its content. The ANR declines all responsibility for its content.

Partner

INRIA Institut National de Recherche en Informatique et en Automatique
LS2N Université de Nantes - Laboratoire des Sciences du Numérique de Nantes (ex-IRCCyN)
INSA LYON - LIRIS Institut National des Sciences Appliquées de Lyon - Laboratoire d'Informatique en Images et Systèmes d'Information

ANR grant: 549,093 euros
Beginning and duration of the scientific project: - 48 months
