Billions of connected devices are expected by 2020. This could trigger a major revolution in ambient intelligence, provided we can design appropriate architectures to process the massive data that these devices will produce or require.
In recent years, the question of the most suitable architecture for intelligent systems based on connected devices has been studied intensively. Most of the proposed solutions rely on centralized models, in which connected devices send data to a central platform (typically a cloud) where the services that process the data are implemented. Once the processing is done, the results are sent back, if needed, to the relevant connected device. There are several applications in which centralized models have been applied successfully. Despite these successes, Internet of Things experts (Gartner, IBM, etc.) argue that such solutions are not sustainable: network contention, confidentiality, and real-time processing are hard to manage in centralized architectures. One of the most promising alternatives consists in having the processing required by a set of connected devices performed by the devices themselves. The challenge is then to coordinate the devices in an environment so that they carry out the required computations. In doing so, we build what we call a cloud of things. Such systems are now feasible thanks to the increasing computing power of connected devices. In addition, clouds of things offer interesting advantages in terms of confidentiality, reactivity, and energy consumption.
The goal of the GRECO project is to develop a reference resource manager for clouds of things. The manager should operate at the IaaS, PaaS, and SaaS layers of the cloud. One of the principal challenges will be handling the execution context of the environment in which the cloud of things operates. Indeed, unlike classical resource managers, connected devices require taking into account new types of networks, execution platforms, and sensors, as well as new constraints such as human interactions. The high mobility and variability of these contexts complicate the modeling of quality of service. To address this challenge, we intend to innovate by designing scheduling and data management systems that use machine learning techniques to automatically adapt their behavior to the execution context. Adaptation here requires modeling recurrent cloud-of-things usage patterns, as well as the physical cloud architecture and its dynamics.
The GRECO project is built upon a collaboration between a company (Qarnot Computing) and two French research institutes: the Laboratoire d'Informatique de Grenoble (LIG) and the Institut National de Recherche en Informatique et en Automatique (Inria). In the project, the LIG will bring its expertise in the design of schedulers for large-scale systems. Inria will contribute to the design of a data management system for massive data. Qarnot will provide the expertise it has gained in designing a resource manager for its network of digital heaters. This network will also be used to validate the project. The proposed solutions should nevertheless be interoperable: they must be usable to build other systems, such as edge and extreme-edge computing systems.
Mr. Paul BENOIT (QARNOT COMPUTING)
The author of this summary is the project coordinator, who is responsible for its content. The ANR declines all responsibility for its contents.
Grenoble INP / LIG (Institut Polytechnique de Grenoble)
Inria Rennes - Bretagne Atlantique research centre
ANR grant: 522,251 euros
Beginning and duration of the scientific project: January 2017 - 42 months