CE33 - Interaction, robotics

Rendering procedural textures for huge digital worlds – ReProcTex

The management of huge amounts of 3D data is a serious issue in graphics applications, such as media production. In a classical production pipeline, rendering of the 3D scene is handled separately from the generation of the textures that represent its visual details. Our project treats procedural texture synthesis and photo-realistic rendering as one tightly coupled entity.

Developing new procedural techniques for rendering

Textures are very commonly obtained from procedural models, which are well suited to creating stochastic textures, e.g. to mimic natural phenomena. Procedural texturing is a generative approach in which textures are compactly represented by a set of functions and procedures that are evaluated to produce the final texture. In this project we build on the Procedural Texture Graph (PTG) model, which represents the generative process as a graph whose source nodes are mathematical functions, inner nodes are pixel processing operations, and sink nodes are the final output textures.

In a typical production pipeline, such textures are either computed upfront, which is extremely storage demanding in today's production environments, or evaluated on the fly during texture accesses, which results in many redundant calculations. We plan to treat procedural texture synthesis and photo-realistic rendering as one tightly coupled entity: textures are synthesized on demand, redundant calculations are reduced by novel caching schemes, and a new texture filtering theory handles prefiltering and antialiasing of textures created from a PTG.
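To make the PTG model concrete, here is a minimal sketch in Python. The class names and the toy stripe/gradient sources are assumptions made for this summary, not the project's actual API; a real PTG would use noise functions and much richer pixel operations.

```python
# Minimal PTG sketch (illustrative only: these class names and toy sources
# are assumptions for this summary, not the project's actual API).
import math

class SourceNode:
    """Source node: a mathematical function of texture coordinates (u, v)."""
    def __init__(self, fn):
        self.fn = fn
    def eval(self, u, v):
        return self.fn(u, v)

class OpNode:
    """Inner node: a pixel processing operation applied to its inputs."""
    def __init__(self, op, inputs):
        self.op, self.inputs = op, inputs
    def eval(self, u, v):
        return self.op(*(n.eval(u, v) for n in self.inputs))

# A tiny graph: two sources blended by an inner node, which is also the sink.
stripes = SourceNode(lambda u, v: 0.5 + 0.5 * math.sin(40.0 * u))
gradient = SourceNode(lambda u, v: v)
sink = OpNode(lambda a, b: 0.5 * (a + b), [stripes, gradient])

# Evaluating the sink on demand yields one texel of the final texture.
print(sink.eval(0.25, 0.75))
```

Evaluated naively per texel, the same intermediate node values are recomputed over and over; this redundancy is exactly what the caching schemes mentioned above are meant to eliminate.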

During the first part of this project, we decided to represent textured objects with microfacet models. These models describe the interaction between objects and light rays, so they are well suited to rendering with ray tracing. The idea is then to directly generate the output of the texture graph (PTG) at different prefiltered levels. We used sparse dictionaries to approximate these prefiltered components, thus preserving procedural generation.
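As a rough illustration of the sparse-dictionary idea, the sketch below approximates one flattened prefiltered component as a sparse combination of dictionary atoms using plain matching pursuit. The random dictionary, its size, and the sparsity budget are assumptions made for this sketch, not the published method.

```python
# Sparse approximation of a prefiltered component via matching pursuit
# (illustrative sketch; the dictionary here is random, not learned).
import numpy as np

def matching_pursuit(signal, D, n_atoms=8):
    """Greedily select atoms (columns of D, assumed unit norm)."""
    residual = signal.astype(float)
    coeffs = np.zeros(D.shape[1])
    for _ in range(n_atoms):
        scores = D.T @ residual             # correlation with every atom
        k = int(np.argmax(np.abs(scores)))  # best-matching atom
        coeffs[k] += scores[k]
        residual = residual - scores[k] * D[:, k]
    return coeffs, residual

rng = np.random.default_rng(0)
D = rng.standard_normal((64, 256))
D /= np.linalg.norm(D, axis=0)            # unit-norm atoms
component = rng.standard_normal(64)       # one flattened prefiltered level
coeffs, res = matching_pursuit(component, D)
print(np.count_nonzero(coeffs), float(np.linalg.norm(res)))
```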

We have published algorithms and code for generation and rendering. In a first paper [CGF-2020] we proposed a new generative model for glittering phenomena (microscopic light interactions) which is procedural, real-time, and physically based. In a second paper [I3DG-2021] we combined this technique with advanced rendering algorithms. In a third publication [EGSR-2021] we combined our models with ray-tracing algorithms. For all these works we published the source code as part of the ASTex software library (open source, developed by the French partner).
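For intuition on the sampling strategy in [EGSR-2021]: importance sampling a finite mixture usually means choosing a lobe in proportion to its weight, sampling that lobe, and evaluating the pdf of the full mixture. The 1D Gaussian lobes below are a toy stand-in for the paper's glint lobes, not its actual model.

```python
# Toy importance sampling of a finite mixture of 1D Gaussians
# (a stand-in illustration, not the BSDF model from [EGSR-2021]).
import math, random

weights = [0.5, 0.3, 0.2]                          # mixture weights, sum to 1
lobes = [(0.0, 0.10), (0.4, 0.05), (-0.3, 0.20)]   # (mean, stddev) per lobe

def mixture_pdf(x):
    """Density of the full mixture at x."""
    return sum(w * math.exp(-0.5 * ((x - m) / s) ** 2) / (s * math.sqrt(2 * math.pi))
               for w, (m, s) in zip(weights, lobes))

def sample_mixture(rng):
    """Pick a lobe by weight, sample it, return the sample and its pdf."""
    i = rng.choices(range(len(lobes)), weights=weights)[0]
    m, s = lobes[i]
    x = rng.gauss(m, s)
    return x, mixture_pdf(x)

rng = random.Random(1)
x, pdf = sample_mixture(rng)
print(x, pdf)
```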

During the second stage of the project we will continue work on WP3 (PhD thesis of Vincent Schuessler) in order to develop new caching techniques on the GPU. We are also investigating a new model for texture graphs (PhD thesis of Charline Grenier). We plan two submissions in the first half of 2022. As the project moves forward, we foresee that WP5 is very ambitious, and we may need to reduce its objectives.

[CGF-2020] X. Chermain, B. Sauvage, J.-M. Dischler, C. Dachsbacher, "Procedural Physically Based BRDF for Real-Time Rendering of Glints", Computer Graphics Forum, Wiley-Blackwell, vol. 39, no. 7, pages 243-253, November 2020. doi:10.1111/cgf.14141

[I3DG-2021] X. Chermain, S. Lucas, B. Sauvage, J.-M. Dischler, C. Dachsbacher, "Real-Time Geometric Glint Anti-Aliasing with Normal Map Filtering", ACM SIGGRAPH Symposium on Interactive 3D Graphics and Games, vol. 4, no. 1, 2021. doi:10.1145/3451257

[EGSR-2021] X. Chermain, B. Sauvage, J.-M. Dischler, C. Dachsbacher, "Importance Sampling of Glittering BSDFs Based on Finite Mixture Distributions", Eurographics Symposium on Rendering, 2021. doi:10.2312/sr.20211289

Submission summary

The ever-increasing demands on realism and detail in virtual 3D scenes lead to a tremendous amount of data in graphics applications. One major driving force is textures, which are widely used to represent fine visual details, such as the variation of material parameters across surfaces or displacements. It is very common that these textures are obtained from procedural models, which are well suited to create stochastic textures, e.g. to mimic natural phenomena. Procedural texturing is a generative approach where textures are compactly represented by a set of functions and procedures which are evaluated to produce the final texture. In this project we build on the Procedural Texture Graph (PTG) model, which represents the generative process as a graph where source nodes are mathematical functions, inner nodes are pixel processing operations and sink nodes are the final output textures.

In a typical production pipeline, these textures are either computed upfront, which becomes extremely storage demanding in real production environments today, or they are evaluated on the fly during texture accesses, resulting in many redundant calculations. Our project is concerned with this quandary. We plan to treat procedural texture synthesis and photo-realistic rendering as one tightly coupled entity, to make the rendering of highly detailed scenes feasible using texture synthesis on demand, and to reduce the redundant calculations by novel caching schemes accounting for all aspects across the pipeline, ranging from texture evaluation to the needs of high-quality rendering. In particular, the latter requires texture filtering, i.e. computing the weighted average of texels in a sub-region of the texture determined by ray differentials (intuitively, the part of the texture that contributes to the area of a pixel). We plan to address the many challenges that come with this approach by first deriving a novel texture filtering theory for prefiltering and antialiasing of textures created from a PTG (storing color as well as normals/displacements). The key to making the evaluation feasible in a renderer will be newly developed caching algorithms which exploit the additional knowledge that a PTG provides, namely how the texture evolves from individual basis functions at the source nodes through operations on the way to the sink nodes.
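For concreteness, this filtering step can be written as a normalized weighted average over the pixel footprint; this is the standard formulation, not a result specific to the project:

```latex
% Filtered texture value at (u_0, v_0): a normalized weighted average of
% texels T(u, v) over the footprint P(u_0, v_0) estimated from ray
% differentials, with filter weights w.
\bar{T}(u_0, v_0) = \sum_{(u,v) \in P(u_0, v_0)} w(u,v) \, T(u,v),
\qquad \sum_{(u,v) \in P(u_0, v_0)} w(u,v) = 1
```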

The second challenge with procedural texture synthesis is that, although the model is powerful and flexible, creating textures with a desired look is difficult: constructing an appropriate graph is a tedious task for artists and requires in-depth knowledge of the underlying operations in addition to artistic skills. Our new approach requires adding technical metadata to the PTG, which would make the construction task even more difficult. To overcome this, we want to facilitate the production of the graph with a semi-automatic approach based on input exemplars, which are often available for a production rendering. We will develop new algorithms to extract a set of elementary functions and combination operators from the exemplars, so as to ultimately obtain images with similar statistics and thus a similar appearance. A fully automatic tool would be neither realistic nor relevant, because artists need to control the result. To this end, we aim at a feedback-loop approach: the artist fixes some constraints while the algorithms solve sub-problems.
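One hedged way to phrase this by-example construction as an optimization (a sketch, not the project's committed formulation): find graph parameters that reproduce the exemplar's statistics while respecting the artist's constraints.

```latex
% G_theta: texture produced by a PTG with parameters theta; E: exemplar;
% S: statistics (e.g. histograms or power spectra); C: artist constraints.
\theta^{\star} = \operatorname*{arg\,min}_{\theta \in C}
\left\lVert S(G_{\theta}) - S(E) \right\rVert^{2}
```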

Both challenges are tightly intertwined and require the expertise of both partners: texture synthesis in Strasbourg and rendering in Karlsruhe.

Project coordination

Basile Sauvage (Laboratoire des sciences de l'Ingénieur, de l'Informatique et de l'Imagerie (UMR 7357))

The author of this summary is the project coordinator, who is responsible for the content of this summary. The ANR declines any responsibility for its contents.

Partners

KIT Karlsruhe Institute of Technology / Institute for Visualization and Data Analysis
ICube Laboratoire des sciences de l'Ingénieur, de l'Informatique et de l'Imagerie (UMR 7357)

ANR grant: 131,684 euros
Beginning and duration of the scientific project: December 2019 - 36 months
