Physical and Intrinsic Security of Embedded Neural Networks – PICTURE
Embedded Machine Learning models have a critical attack surface combining numerous algorithmic and physical threats (e.g., side-channel and fault injection analysis). By analyzing and evaluating these threats and by developing new defense schemes, PICTURE aims to disseminate good security practices within the design and development processes of AI-based embedded systems.
Protect the integrity, confidentiality and availability of Machine Learning models in the context of large-scale deployment.
Today, a major challenge in Machine Learning (ML) is the deployment of models at the edge, and more particularly of neural networks on a large variety of embedded platforms, with strong interest in porting deep neural networks (DNN) for inference purposes. Many reasons explain the direct use of neural networks on edge devices rather than sending data to high-end computing infrastructures, such as privacy, power or latency concerns. This massive deployment brings a host of new security challenges: ML models are expected to be embedded in numerous devices that can be attacked through an extensive overall attack surface, because of critical flaws intrinsically related to the ML algorithms as much as to their implementation in physically accessible devices.

The security of embedded ML models can therefore be seen as two sides of the same coin. On one side, an impressive body of publications raises algorithmic threats that could have disastrous impacts on the development and durability of ML models by targeting their integrity, confidentiality or availability. On the other side, physical attacks (more particularly side-channel analysis, SCA, and fault injection analysis, FIA) against embedded ML models are a relatively new topic, with a handful of works that have the strong merit of paving the way to extensive experimentation in more realistic and complex settings.

Unfortunately, as widely acknowledged by the ML and security communities, these two sides are still handled too separately. The main hypothesis of PICTURE is the exploitation of a joint attack surface, i.e. the combination of algorithmic and physical attacks, in order to optimally analyze the upcoming threats targeting critical embedded ML systems and to design efficient defense schemes.

Our first objective is to demonstrate the criticality of combined algorithmic and physical attacks against realistic embedded neural network models. Bridging the two attack surfaces will be one of the most important results of PICTURE. More particularly, FIA could take advantage of optimized perturbations crafted by advanced algorithmic integrity-based attacks (i.e. adversarial examples), while confidentiality-based attacks (e.g. model inversion or membership inference) may be combined with SCA to exploit critical leakages about both the model and the training data. Our second objective is to propose sound protections by evaluating the efficiency of physical countermeasures combined with state-of-the-art defenses against algorithmic attacks. Last but not least, considering the widespread deployment of ML models across a large variety of domains and devices, we aim to disseminate good practices for embedded ML practitioners that could form the basis of future standardization and certification schemes.
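As an illustration of the algorithmic side of this attack surface, the sketch below crafts an adversarial example with the Fast Gradient Sign Method (FGSM). It is a minimal, hedged example: the model, inputs and labels are hypothetical placeholders and not artifacts of the project.

```python
# Minimal FGSM sketch (PyTorch); `model`, `x` and `y` are hypothetical placeholders.
import torch
import torch.nn.functional as F

def fgsm_example(model: torch.nn.Module, x: torch.Tensor, y: torch.Tensor,
                 epsilon: float = 0.03) -> torch.Tensor:
    """Craft an adversarial input by moving each pixel one small step in the
    direction that increases the classification loss."""
    x_adv = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x_adv), y)
    loss.backward()
    x_adv = x_adv + epsilon * x_adv.grad.sign()   # bounded perturbation of the input
    return x_adv.clamp(0.0, 1.0).detach()         # keep the image in a valid range
```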
PICTURE is structured around three research axes. First, the PICTURE consortium aims to provide an in-depth state of the art of the threats targeting embedded neural network models, with a particular focus on software implementations. This prerequisite work enables the definition of precise threat models based on the adversary's knowledge (white-box/black-box settings) and capabilities (e.g., access to a clone device). Simultaneously, a set of use cases is defined, gathering public benchmarks and a set of hardware platforms (mainly 32-bit microcontrollers). A particular focus is placed on face recognition systems, since they raise major integrity and confidentiality concerns.
Second, we aim to characterize the critical properties of attacks using both algorithmic and physical means, such as side-channel or fault injection analysis. For these physical attacks, cutting-edge platforms from the Centre de Microélectronique de Provence are used. Several attacks are of particular interest to the consortium. The integrity of a model at inference time may be threatened by adversarial perturbations of the model inputs, but also by faults on the parameters stored in memory or on critical instructions of the inference program. Model extraction (reverse engineering) is also a major concern for the deployment of ML models: side-channel analysis may significantly help an adversary extract or recover information about a model and steal its intellectual property or replicate its performance.
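To give an intuition of why faults on stored parameters are so harmful, the toy snippet below shows the effect of a single bit-flip on the most significant bit of an 8-bit quantized weight; the value and bit position are purely illustrative.

```python
# Toy illustration of a bit-flip fault on a stored int8 weight (values are illustrative).
import numpy as np

w = np.array([23], dtype=np.int8)                                  # weight in memory: 0b00010111
w_faulted = (w.view(np.uint8) ^ np.uint8(1 << 7)).view(np.int8)    # fault on the MSB / sign bit
print(int(w[0]), "->", int(w_faulted[0]))                          # 23 -> -105: one flip, large deviation
```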
Finally, jointly with the attack analysis, we develop tools and methodologies to properly evaluate the robustness of models, as well as new defense approaches for embedded models. This work encompasses the evaluation of well-known countermeasures already proposed in other contexts (e.g., the protection of cryptographic modules), whose relevance and efficiency for neural network models need to be assessed.
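As a sketch of the kind of evaluation tooling this involves, one basic robustness metric is the accuracy measured on attacked inputs. The helper below is a minimal, hypothetical example that takes any attack routine (e.g., an FGSM-like function) as a parameter; it is not the project's evaluation methodology.

```python
# Minimal robustness-evaluation sketch (PyTorch); `model`, `loader` and `attack`
# are hypothetical placeholders, not components of the project.
import torch

def accuracy_under_attack(model, loader, attack, **attack_kwargs) -> float:
    """Fraction of samples still correctly classified after applying `attack`."""
    correct, total = 0, 0
    for x, y in loader:
        x_adv = attack(model, x, y, **attack_kwargs)    # craft adversarial inputs
        with torch.no_grad():
            pred = model(x_adv).argmax(dim=1)
        correct += (pred == y).sum().item()
        total += y.numel()
    return correct / total
```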
Ongoing project...
Today, security is an important topic in the ML community, with several dedicated sessions, workshops and special issues in major conferences and journals. At the same time, security conferences propose sessions focused on AI and particularly on adversarial ML (e.g., USENIX Security Symposium, ACM CCS, IEEE S&P). However, in both the industrial and academic worlds, this attention remains limited, and AI is still driven by pure performance requirements, with security treated as an option. A major objective of PICTURE is to facilitate a shift in the way ML models are considered, by putting security at the core of the development and deployment strategy and by anticipating and influencing future standardization and certification strategies. From a societal point of view, the consortium built PICTURE as a way to increase trust in AI applications among a large audience, by analyzing the threats transparently and by demonstrating how to counteract them efficiently.
PICTURE plans several publications in international conferences, as well as the release of public benchmarks to help the scientific community carry out further analyses and studies.
Rémi Bernhard, Pierre-Alain Moellic, Jean-Max Dutertre, Luring Transferable Adversarial Perturbations for Deep Neural Networks, In International Joint Conference on Neural Networks, IJCNN 2021.
Mathieu Dumont, Pierre-Alain Moellic, Raphael Viera, Jean-Max Dutertre, Rémi Bernhard, An Overview of Laser Injection against Embedded Neural Network Models, In 7th World Forum on Internet of Things (WF-IoT), 2021.
Raphael Joud, Pierre-Alain Moellic, Rémi Bernhard, Jean-Baptiste Rigaud, A Review of Confidentiality Threats Against Embedded Neural Network Models, In 7th World Forum on Internet of Things (WF-IoT), 2021.
Kevin Hector, Pierre-Alain Moellic, Mathieu Dumont, Jean-Max Dutertre, A Closer Look at Evaluating the Bit-Flip Attack Against Deep Neural Networks, To appear in 28th IEEE International Symposium on On-Line Testing and Robust System Design, IOLTS 2022, 12-14 September, Torino.
A major trend in Artificial Intelligence is the deployment of Machine Learning models even on highly constrained platforms such as low-power 32-bit microcontrollers. However, the security of embedded Machine Learning systems is one of the most important issues raised by this massive deployment, more particularly for deep neural network-based systems. The difficulty comes from a complex, twofold attack surface. First, an impressive body of work demonstrates algorithmic flaws targeting the model's integrity (e.g., adversarial examples) or the confidentiality and privacy of data and models (e.g., membership inference, model inversion). However, few works take into consideration the specificities of embedded models (e.g., quantization, pruning). Second, physical attacks (side-channel and fault injection analysis) represent upcoming and highly critical threats. Today, these two types of threats are considered separately. For the first time, the PICTURE project proposes to jointly analyze the algorithmic and physical threats in order to develop protection schemes bridging these two worlds and to promote a set of good practices enabling the design, development and deployment of more robust models.
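To make the "specificities of embedded models" concrete, the sketch below shows a symmetric post-training 8-bit quantization of a weight tensor, the kind of transformation applied before deploying a model on a microcontroller. Function name, shapes and values are illustrative assumptions, not the project's toolchain.

```python
# Illustrative symmetric int8 weight quantization (names and shapes are arbitrary).
import numpy as np

def quantize_int8(weights: np.ndarray):
    """Map float32 weights to int8 with a single per-tensor scale."""
    scale = np.abs(weights).max() / 127.0
    q = np.clip(np.round(weights / scale), -128, 127).astype(np.int8)
    return q, scale

w = np.random.randn(4, 4).astype(np.float32)
q, scale = quantize_int8(w)
w_dequant = q.astype(np.float32) * scale   # what the embedded inference effectively computes
```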
PICTURE gathers CEA Tech (LETI) and Ecole des Mines de Saint-Etienne (MSE, Centre de Microélectronique de Provence) as academic partners, and IDEMIA and STMicroelectronics as industrial partners, who bring real, complete and critical use cases with a particular focus on facial recognition.
To achieve its objectives, the PICTURE consortium will precisely describe the different threat models targeting the integrity and confidentiality of software implementations of neural network models on hardware targets ranging from 32-bit microcontrollers (Cortex-M) and dual Cortex-M/Cortex-A architectures to GPU platforms dedicated to embedded systems. Then, PICTURE aims to demonstrate and analyze, for the first time, complex attacks combining algorithmic and physical attacks. On the one hand, integrity-based threats (i.e. fooling the prediction of a model) will be studied by combining the principles of adversarial example attacks with fault injection approaches. On the other hand, the project will study how the exploitation of side-channel leakages (side-channel analysis), or even fault injection analysis, can be associated with theoretical approaches to reverse engineer a model (model inversion) or to extract information on training data (membership inference attacks). The development of new protection schemes will build on an analysis of the relevance of state-of-the-art countermeasures against physical attacks (an analysis that has never been carried out at this scale). PICTURE will propose protections acting at different positions within the traditional Machine Learning pipeline, in particular training-based approaches that yield more robust models. Finally, PICTURE will present new evaluation methods and promote its results to academic and industrial actors. PICTURE aims to facilitate a shift in the way ML models are considered, by putting security at the core of the development and deployment strategy and by anticipating and influencing future certification strategies.
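To make the membership inference threat mentioned above more concrete, the following is a minimal sketch of a simple loss-threshold test: samples on which the target model has an unusually low loss are guessed to belong to its training set. The threshold and all names are illustrative assumptions, not the project's attack.

```python
# Minimal loss-threshold membership inference sketch (PyTorch); all names are illustrative.
import torch
import torch.nn.functional as F

def membership_scores(model, x, y) -> torch.Tensor:
    """Higher score = more likely that (x, y) was part of the training set."""
    with torch.no_grad():
        loss = F.cross_entropy(model(x), y, reduction="none")
    return -loss

def infer_membership(model, x, y, loss_threshold: float = 0.5) -> torch.Tensor:
    """Guess 'member' when the per-sample loss is below the chosen threshold."""
    return membership_scores(model, x, y) > -loss_threshold
```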
Project coordination
Pierre-Alain Moellic (Direction de la recherche technologique)
The author of this summary is the project coordinator, who is responsible for its content. The ANR declines any responsibility for its contents.
Partners
IDEMIA IDENTITY & SECURITY FRANCE
CMP Centre de Microélectronique de Provence
STMicroelectronics (Rousset) SAS
IDEMIA FRANCE
DRT Direction de la recherche technologique
ANR grant: 675,560 euros
Beginning and duration of the scientific project:
- 42 Months