TSIA - Circuits - Thématiques Spécifiques en Intelligence Artificielle (Artificial Intelligence and circuit design)

Self-organizing representation for continual learning on adaptive hardware neural architectures – SORLAHNA

Submission summary

The preprocessing, categorization and visualization of data play an increasingly essential role as the amount of digital data collected and stored in all fields grows exponentially. While the currently booming field of deep learning (DL) offers many ways to meet some of these needs, unsupervised learning is increasingly put forward to overcome some of its limits. Indeed, DL relies on fitting a complex parametric model to a huge dataset provided during a training phase. Once trained, the model is deployed in real applications under the assumption that the statistics of the data remain the same as those seen during learning. However, some contexts provide non-stationary data whose statistics gradually drift over time; a parametric model of such data must be able to drift with them. Models supporting continual or incremental learning must therefore be favoured to dynamically process such non-stationary data, which is encountered in particular by many embedded systems (Internet of Things - IoT, edge computing).

Among the possible models, we are interested in models based on topographic vector quantization (self-organizing maps, incremental networks). The algorithmic simplicity and the distributed nature of the computations of such models make a hardware implementation feasible, which is particularly relevant in the context of embedded systems.

The project we propose therefore aims to combine complementary skills in computer science and electronics to co-design modern topographic vector quantization algorithms, so that they integrate from the design stage the dual requirement of suitability for online learning of non-stationary data and compatibility with a feasible and efficient hardware implementation, in particular on reconfigurable circuits, whose flexibility is out of reach of ASICs. This co-design approach will lead to generic hardware architectures based on innovative, highly configurable and scalable neural processing units (NPUs), which will help reduce the high dimensionality of the permanent data streams generated by IoT infrastructures, or even help build optimized layers for hybrid neural models aimed at continual learning.
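
The summary does not detail a specific algorithm; as an illustration of the kind of online topographic vector quantization it refers to, the sketch below shows a single self-organizing map (SOM) update step in Python/NumPy. The grid size, learning rate and neighbourhood width are illustrative assumptions, not values taken from the project.

    import numpy as np

    # Minimal sketch of an online self-organizing map (SOM) update step.
    # Grid size and hyper-parameters are illustrative assumptions only.

    rng = np.random.default_rng(0)

    grid_h, grid_w, dim = 8, 8, 3              # 8x8 map of 3-dimensional prototypes
    weights = rng.random((grid_h, grid_w, dim))

    # Grid coordinates of each neuron, used for topographic distances.
    coords = np.stack(np.meshgrid(np.arange(grid_h), np.arange(grid_w),
                                  indexing="ij"), axis=-1).astype(float)

    def som_update(x, weights, lr=0.1, sigma=1.5):
        """One online update: find the best-matching unit (BMU) for input x,
        then pull every prototype towards x, weighted by a Gaussian
        neighbourhood centred on the BMU in grid space."""
        dists = np.linalg.norm(weights - x, axis=-1)        # prototype-to-input distances
        bmu = np.unravel_index(np.argmin(dists), dists.shape)
        grid_dist2 = np.sum((coords - coords[bmu]) ** 2, axis=-1)
        h = np.exp(-grid_dist2 / (2.0 * sigma ** 2))        # neighbourhood kernel
        weights += lr * h[..., None] * (x - weights)        # local, distributed rule
        return weights

    # Process a (possibly non-stationary) stream sample by sample.
    for t in range(1000):
        x = rng.random(dim)          # stand-in for a streamed input vector
        weights = som_update(x, weights)

Each update only needs the distance of every prototype to the current input and its grid distance to the best-matching unit; this local, distributed computation is what makes such models good candidates for the hardware neural processing units targeted by the project.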

Project coordination

Bernard Girau (Laboratoire lorrain de recherche en informatique et ses applications)

The author of this summary is the project coordinator, who is responsible for its content. The ANR declines any responsibility for its contents.

Partners

IJL Institut Jean Lamour
LORIA Laboratoire lorrain de recherche en informatique et ses applications

ANR funding: 431,255 euros
Beginning and duration of the scientific project: September 2023 - 48 months
