MDCO - Masse de données Connaissances Ambiantes

3D Models And Dynamic models Representation And Segmentation – MADRAS

Submission summary

With recent technological developments concerning three-dimensional images (3D scanners, 3D graphics hardware, Web3D, and so on), the creation and storage of three-dimensional models have become a reality. Three-dimensional models are used in a wide range of domains: cultural heritage, medical and surgical simulation, CAD design, video games, multimedia applications, etc. In recent years, sequences of 3D meshes varying over time, often called dynamic meshes, have also become increasingly popular in these application domains.
As a consequence of this growing use of static and dynamic three-dimensional mesh models, the scientific community has a strong interest in processing 3D-model data for various computer graphics applications such as modeling, indexing, watermarking or compression of 3D models.
Three-dimensional models are generally represented as meshes of polygons (usually triangles). This representation has the advantage of being perfectly suited to 3D display on modern graphics hardware, but its main drawback is the lack of a structure or hierarchical description that would be very useful for the applications cited above. Hence, automatic segmentation of 3D mesh models is very often a necessary pre-processing step for these applications.
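For concreteness, here is a minimal sketch of this representation; the tetrahedron below is a toy example of ours, not a model from the project collection. A mesh is simply an array of vertex coordinates plus an array of triangles indexing into it.

```python
import numpy as np

# Vertex positions (x, y, z), one row per vertex.
vertices = np.array([
    [0.0, 0.0, 0.0],
    [1.0, 0.0, 0.0],
    [0.0, 1.0, 0.0],
    [0.0, 0.0, 1.0],
])

# Triangles as triples of vertex indices: a tetrahedron has four faces.
faces = np.array([
    [0, 1, 2],
    [0, 1, 3],
    [0, 2, 3],
    [1, 2, 3],
])

# A dynamic mesh ("3D+t") is commonly stored as the same connectivity with
# one vertex array per frame, e.g. a list of (num_vertices, 3) arrays.
```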
Mesh segmentation consists in subdividing a polygonal surface into patches with uniform properties, either from a strictly geometric point of view or from a perceptual/semantic point of view.
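As an illustration of the purely geometric case, the following sketch grows patches of triangles whose face normals stay close to that of the patch seed. It is only a simplistic stand-in for the methods discussed here, and all names (segment_by_normals, the angle threshold, the adjacency structure) are our own assumptions rather than anything defined by the project.

```python
import numpy as np

def face_normals(vertices, faces):
    """Unit normal of each triangle (faces are index triples into vertices)."""
    v0, v1, v2 = (vertices[faces[:, i]] for i in range(3))
    n = np.cross(v1 - v0, v2 - v0)
    return n / np.linalg.norm(n, axis=1, keepdims=True)

def segment_by_normals(vertices, faces, adjacency, angle_deg=30.0):
    """Greedy region growing: a face joins a patch when its normal deviates
    from the seed face's normal by less than angle_deg degrees.
    adjacency[f] lists the faces sharing an edge with face f."""
    normals = face_normals(vertices, faces)
    cos_thresh = np.cos(np.radians(angle_deg))
    labels = -np.ones(len(faces), dtype=int)   # -1 means "not yet assigned"
    patch = 0
    for seed in range(len(faces)):
        if labels[seed] != -1:
            continue
        labels[seed] = patch
        stack = [seed]
        while stack:
            f = stack.pop()
            for g in adjacency[f]:
                if labels[g] == -1 and normals[g] @ normals[seed] >= cos_thresh:
                    labels[g] = patch
                    stack.append(g)
        patch += 1
    return labels  # one patch id per face
```

Real methods rely on much richer criteria (curvature, geodesic distances, learned or perceptual features), but the output format, one patch label per face, is the kind of labelling that ground-truth segmentations and the comparisons described below operate on.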
Many systems have been, and are still being, developed to address this problem for two-dimensional data (images or videos). However, these solutions are not really effective for, nor easily adaptable to, intrinsically three-dimensional data. Moreover, contrary to the 2D domain, there is neither a protocol nor a standard data collection for comparing and evaluating 3D segmentation methods.
In this context, the MADRAS project has the following three objectives:
1. Building a collection of 3D and 3D+t mesh models – that is to say, static and dynamic 3D models – with a ground truth. The ground truth will consist of one or more segmentations for each 3D model, obtained both from manual segmentation and from automatic or semi-automatic methods.
2. Exploiting the human factor to improve the design and evaluation of segmentation algorithms through subjective experiments. The subjective and perceptual aspects will be used to build a reference toolkit allowing a fully automatic comparison of existing and future segmentation methods (a sketch of such a comparison is given after this list).
3. Designing new segmentation algorithms for static and dynamic 3D mesh models that exploit the human factor through machine learning techniques.
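To make the automatic comparison mentioned in objective 2 concrete, here is a hedged sketch, not the project's actual toolkit: one common way to score a candidate segmentation against a ground-truth one is a pairwise agreement measure such as the Rand index over per-face labels.

```python
import numpy as np
from itertools import combinations

def rand_index(labels_a, labels_b):
    """Fraction of face pairs on which two segmentations agree: both place the
    pair in the same patch, or both place it in different patches.
    (Quadratic in the number of faces; fine for an illustration only.)"""
    labels_a, labels_b = np.asarray(labels_a), np.asarray(labels_b)
    agreements, total = 0, 0
    for i, j in combinations(range(len(labels_a)), 2):
        same_a = labels_a[i] == labels_a[j]
        same_b = labels_b[i] == labels_b[j]
        agreements += int(same_a == same_b)
        total += 1
    return agreements / total

# Toy example: two labelings of six faces.
ground_truth = [0, 0, 0, 1, 1, 1]
candidate    = [0, 0, 1, 1, 1, 1]
print(rand_index(ground_truth, candidate))  # ~0.667
```

A full benchmark would aggregate such scores over the whole model collection and over the several ground-truth segmentations available for each model.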
With this triple goal, the MADRAS project aims at helping the scientific communities involved in 3D-model segmentation. Such a benchmarking tool will allow researchers to evaluate and compare existing and new segmentation methods. Moreover, introducing the human factor, with its subjective and perceptual aspects, into segmentation methods is a first attempt in the 3D domain.
The project consortium is composed of three French academic partners (LIRIS, USTL/LIFL and INRIA Rhône-Alpes). Each partner has strong, internationally recognized experience in 3D-model analysis, segmentation and exploitation.

Project coordination

Florent DUPONT (Research organisation)

The author of this summary is the project coordinator, who is responsible for its content. The ANR declines any responsibility for its content.

Partner

ANR grant: 330,000 euros
Beginning and duration of the scientific project: 36 months
