CE46 - Numerical models, simulation, applications

Improving Predictability of Numerical Computations – ImPreNum

Improving predictability and accuracy of numerical computing

Current computing systems are based on a very limited set of machine precisions (32 or 64 bits). Languages, compilers and libraries offer little precision control. As a consequence, accuracy is often an after-the-fact consideration. Computations may be either wrong or, on the contrary, much too accurate; in both cases they are wasteful.

Accuracy as a first-class concern in numerical computing

The objective of this project is to add accuracy considerations to the cost/performance trade-offs, at all the levels of a computing system (from hardware to languages, compilers, and numerical libraries).

The project faces two challenges:
1/ enabling precision control at the lower levels of the computing stack;
2/ understanding and formalizing the accuracy requirements that will allow an algorithm to exploit this precision control, from the application down to the hardware operators.
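One way to make the second challenge concrete: an accuracy requirement (a target relative error for a computation of known size) can be translated into a working precision using a first-order rounding-error bound. The sketch below is illustrative only; the function name, the linear error model and the number of guard digits are our assumptions, not the project's.

```python
import math

def digits_needed(target_rel_error, n_ops, guard=2):
    """Decimal digits of working precision so that n_ops roundings,
    each of at most 0.5 ulp, keep the accumulated relative error
    below target_rel_error (first-order bound: errors add linearly).
    """
    # unit roundoff for d significant decimal digits is u = 0.5 * 10**(1 - d);
    # requiring n_ops * u <= target_rel_error and solving for d gives:
    d = 1 + math.ceil(math.log10(0.5 * n_ops / target_rel_error))
    return d + guard  # a couple of guard digits for safety

# e.g. 10^-6 relative accuracy over ten thousand operations
# needs fewer digits than double precision provides:
print(digits_needed(1e-6, 10_000))
```

Bounds of this kind are what lets the upper layers request "just enough" precision from the lower ones instead of defaulting to double.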

The project delivers a demonstrator based on a modified RISC-V processor with a variable-precision accelerator, together with extensions to the C language and matching compiler support.

Case studies and publications

Automate precision control, distinguish between the specification of a problem on real numbers and its finite-precision implementation, and analyse problems and programs to separate compile-time from run-time precision control.

11 articles, 1 patent

Most computations on real numbers manipulate them as floating-point numbers. State-of-the-art processor architectures offer functional units supporting the half, single or double precision formats of the IEEE-754 standard [30]. These formats, of respectively 16, 32 and 64 bits, offer the equivalent of 3, 7 and 15 decimal digits. The reason for the two larger formats is not that programmers need that many digits in the output. Rather, they protect the programmer from the accumulation and amplification of rounding errors in intermediate computations. However, the programmer has to choose between these few precisions, and the chosen precision is unlikely to exactly match the needs of the application. At best it will be overkill, wasting time, memory and power computing useless bits. At worst it will be insufficient, producing numerically wrong results, with possibly catastrophic consequences in a world where embedded computing systems interact more and more with our lives.
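The effect of precision choice on accumulated rounding error can be seen with Python's standard decimal module, whose adjustable context precision serves here as a stand-in for a hardware precision choice. The 7- and 16-digit settings roughly mirror the digit counts of single and double precision; the example itself is ours, not from the project.

```python
from decimal import Decimal, getcontext

def accumulate(n_terms, precision):
    """Sum 1/7 repeatedly with `precision` significant decimal digits.

    Every operation in the decimal context rounds to `precision` digits,
    so rounding errors accumulate as they would in fixed-width hardware.
    """
    getcontext().prec = precision
    term = Decimal(1) / Decimal(7)   # already rounded to `precision` digits
    total = Decimal(0)
    for _ in range(n_terms):
        total += term                # each addition rounds again
    return total

low = accumulate(10_000, 7)    # ~ the 7 digits of single precision
high = accumulate(10_000, 16)  # ~ the 15-16 digits of double precision

getcontext().prec = 28         # evaluate both errors at a higher precision
exact = Decimal(10_000) / Decimal(7)
err_low = abs(low - exact) / exact
err_high = abs(high - exact) / exact
print(f"7-digit relative error:  {err_low:.2e}")
print(f"16-digit relative error: {err_high:.2e}")
```

The 7-digit run drifts visibly from the exact value while the 16-digit run stays close, which is exactly the trade-off the paragraph above describes: the wide format buys protection, at the cost of computing many bits the output never shows.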

Considering this, the main claim of this project is the following:
accuracy should become a first-class concern in computing ecosystems that are currently focused mainly on the cost-performance trade-off. This will lead to better-quality numerical software and better trust in its results, but also to better performance and power consumption when the accuracy needs are limited.

The objective of this project is therefore to add accuracy considerations to cost/performance trade-offs, at all the levels of a computing system:
1. at the hardware level, with better support for lower-than-standard and higher-than-standard precisions, and with hardware support for adaptive precision;
2. at the level of run-time support software, in particular answering the memory management challenges entailed by adaptive precision;
3. at the lower level of mathematical libraries (for instance BLAS for linear algebra), enhancing well established libraries with precision and accuracy control;
4. at the higher level of mathematical libraries (which includes linear solvers such as LAPACK, ad hoc steppers for ordinary differential equations, triangulation problems in computational geometry, etc.). This level is characterized by iterative methods, where the accuracy and precision control of the lower levels will enable higher-level properties such as convergence and stability;
5. at the compiler level, enhancing optimising compilers with novel optimisations related to precision and accuracy;
6. at the language level, embedding accuracy specification and control in existing languages, and possibly defining domain-specific languages with accuracy-aware semantics for some classes of applications.
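The interplay between precision control and iterative methods (level 4 above) can be sketched with a toy example: Newton's iteration roughly doubles the number of correct digits per step, so a solver can raise the working precision as it converges instead of paying for full precision from the first iteration. The sketch below again uses Python's decimal module as a stand-in for variable-precision hardware; the precision schedule and names are illustrative assumptions, not the project's design.

```python
from decimal import Decimal, getcontext

def adaptive_sqrt(a, target_digits):
    """sqrt(a) by Newton's method with a growing working precision:
    early steps run cheaply at low precision, and after each precision
    doubling a couple of Newton steps restore full accuracy.
    """
    prec = 8
    getcontext().prec = prec
    x = Decimal(1)                         # crude starting point
    for _ in range(6):                     # converge at the cheap precision
        x = (x + Decimal(a) / x) / 2
    while prec < target_digits + 4:        # +4 guard digits
        prec = min(2 * prec, target_digits + 4)
        getcontext().prec = prec
        x = (x + Decimal(a) / x) / 2       # error roughly squares here...
        x = (x + Decimal(a) / x) / 2       # ...a second step adds margin
    getcontext().prec = target_digits
    return +x                              # unary plus rounds to the target

print(adaptive_sqrt(2, 50))
```

Only the last few iterations run at (near-)full precision, so the total cost is dominated by a handful of wide operations rather than by every step of the iteration.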

To achieve this goal, the project will focus on specific use cases in the domains of linear algebra, computational geometry, and machine learning.

The main challenge to address in the lower levels is to offer precision control at an acceptable overhead. For this, the project can build upon the expertise of the project coordinator in hardware and software "computing just right", on the expertise in processor integration at LETI, and on the compilation expertise at ENS. On the higher levels, the main challenge is to understand and formalize the accuracy requirements of a computation at each level. There is also a pervasive challenge of designing the relevant interfaces at each level for accuracy and precision control. Defining where the precision can be decided at compile time, and where it has to be decided at run time, is also difficult. We claim that we can address this very difficult challenge for the considered use cases, thanks to the complementary application-domain experience of the project members.

The project will develop a demonstrator based on a RISC-V system enhanced with variable-precision hardware, and an accuracy-aware software stack that covers all the levels above.

Project coordination


The author of this summary is the project coordinator, who is responsible for its content. The ANR declines all responsibility for its contents.


CEA LETI CEA Laboratoire d’Electronique et de Technologie de l’Information
TIMA Techniques de l'Informatique et de la Microélectronique pour l'Architecture des systèmes intégrés
DI ENS Département d'Informatique de l'Ecole Normale Supérieure

ANR grant: 594,702 euros
Beginning and duration of the scientific project: September 2018 - 48 months
