CE25 - Réseaux de communication multi-usages, infrastructures de hautes performances, Sciences et technologies logicielles

ARTificial Intelligence-based Cloud network control – ARTIC


In recent years, computer networks have become more complex with an increasing diversity of devices and increased traffic dynamics. One solution to deal with this situation is Knowledge Defined Networking (KDN), where machine learning (ML) and artificial intelligence (AI) are combined with SDN / NFV and network monitoring. According to this paradigm, we aim to develop an AI framework capable of learning new efficient network control algorithms.

We aim to design an AI-based framework able to learn new efficient network control algorithms.

Communication networks are being shaped by three major trends: (i) cloudification, with cloud IP traffic expected to account for most Internet traffic in the coming years; (ii) the consolidation of IP video as the dominant application, driven by the growth of live streaming and virtual/augmented reality; and (iii) a rising contribution of mobile data traffic. These trends are making traffic characteristics more complex, with (i) an increasing diversity of devices and connection types and (ii) an evolution towards more dynamic traffic patterns, busy-hour traffic growing faster than average traffic due to video growth. These traffic changes challenge network control schemes, making a case for the exploration of more flexible and autonomous control paradigms.

In traditional networks, control was based on dedicated hardware and distributed algorithms. Network adaptability depended on distributed heuristic algorithms, each solving a part of the overall optimal control problem, e.g. TCP for congestion control and OSPF or RIP for routing. By definition, these algorithms yield suboptimal configurations, since they cannot access a global view of the network. Given the traffic requirements of the time, however, these configurations were good enough.

In recent years, Network Function Virtualization (NFV) and Software Defined Networking (SDN) have enabled the shift to the cloud network paradigm, where control is based on general-purpose hardware and centralized algorithms. Network functions are virtualized, and SDN-based control is logically centralized at the so-called SDN controllers, which instruct the programmable networking devices with their control decisions. Since SDN controllers have access to a broad view of the network state, the resulting configurations are optimal. If the network state changes, the programmable devices are reconfigured with the new optimal solution.
Nevertheless, this approach assumes (i) that the overall control problem can be solved in a reasonable time, and (ii) that a complete optimization model for network control is available.

Unfortunately, the new network challenges mentioned above weaken the validity of these assumptions. To address this situation, the so-called Knowledge Defined Networking (KDN) paradigm was proposed, in which Machine Learning (ML) and Artificial Intelligence (AI) approaches, such as Deep Learning (DL), are combined with SDN and network telemetry to “gather knowledge about the network”. In the KDN paradigm, a Knowledge Plane (KP) is reintroduced: it is responsible for processing the data collected by network monitoring, transforming them into knowledge via ML, and using this knowledge to take decisions (either automatically or through human operators). This project intends to propose an AI framework able to learn new efficient network control algorithms.
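The KDN loop described above (monitor, transform into knowledge, decide) can be sketched in a few lines. This is an illustrative toy, not the project's implementation: the class, link names, and the 0.8 utilisation threshold are all hypothetical, and a simple running mean stands in for the ML step.

```python
from dataclasses import dataclass, field
from statistics import mean

@dataclass
class KnowledgePlane:
    # link id -> list of utilisation samples in [0, 1], fed by network monitoring
    history: dict = field(default_factory=dict)

    def ingest(self, telemetry: dict) -> None:
        """Collect monitoring data (link -> current utilisation)."""
        for link, util in telemetry.items():
            self.history.setdefault(link, []).append(util)

    def knowledge(self) -> dict:
        """Transform raw samples into knowledge: mean utilisation per link."""
        return {link: mean(samples) for link, samples in self.history.items()}

    def decide(self, threshold: float = 0.8) -> list:
        """Decision step: flag links whose estimated load exceeds a threshold."""
        return [link for link, util in self.knowledge().items() if util > threshold]

kp = KnowledgePlane()
kp.ingest({"a-b": 0.90, "b-c": 0.30})
kp.ingest({"a-b": 0.95, "b-c": 0.40})
print(kp.decide())  # overloaded links the control plane should act on
```

In a real KDN deployment, `decide()` would push its output to an SDN controller rather than print it, and the ML step would fit a proper model instead of averaging.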

Two AI tools are promising for achieving the project's objectives: (i) Deep Learning (DL) and (ii) Reinforcement Learning (RL). Deep Learning is a type of representation learning that finds data representations fitted to the task by hierarchically building more abstract data representations from less abstract ones. DL is based on Artificial Neural Network (ANN) architectures composed of “many” layers with only certain connections (non-null weights) between them. Reinforcement Learning is a type of machine learning used to learn optimal control by interacting with the environment (in our case, the network and its users). RL interacts with the environment by taking actions (control decisions) that generate a reward (or penalty) from the network and a transition from the current network state to the next. These interactions can be used to guide the learning of the ANN weights. The so-trained ANN will (i) transform the original data representation of the network state into a representation “suited” to the control problem, and (ii) find the optimal control decisions (actions) from this “suited” representation in a more tractable way (since the network state representation is tailored to the control task). In fact, the trained ANN itself constitutes a tailored heuristic algorithm for the targeted control problem. This Deep Reinforcement Learning (DRL) strategy has three main advantages: (i) it can learn from both models and data (the environment used in the RL part can be either a network model or a real operational network); (ii) a single framework is used to solve the overall control problem (instead of the separate data-driven and model-driven steps of a traditional ML-based control); and (iii) scalability issues can be better addressed, since the optimization problem is solved over a task-fitted (and potentially lower-dimensional) vector concentrating most of the information relevant to the control problem.
Therefore, the proposed DRL methodology constitutes a promising approach to solving any network control problem, regardless of its level of complexity (e.g. NP-hardness) or of the uncertainty about the optimization model.
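The action-reward-update loop described above can be illustrated with a toy example, here with a tabular Q-function standing in for the deep ANN (the project uses deep networks; this sketch only shows the RL interaction). The environment is a single hypothetical routing decision: action 0 is a congested path with low reward, action 1 a free path with high reward; all numbers are illustrative.

```python
import random

random.seed(0)
REWARDS = {0: 0.1, 1: 1.0}   # reward (e.g. inverse latency) returned by the environment
q = {0: 0.0, 1: 0.0}         # Q-value estimate per action (single-state problem)
alpha, epsilon = 0.1, 0.2    # learning rate and exploration probability

for episode in range(500):
    # epsilon-greedy action selection: explore occasionally, exploit otherwise
    if random.random() < epsilon:
        action = random.choice([0, 1])
    else:
        action = max(q, key=q.get)
    # take the action, observe the reward, and move the estimate toward it
    q[action] += alpha * (REWARDS[action] - q[action])

best = max(q, key=q.get)
print(best)  # the learned control decision: the uncongested path
```

In the DRL setting of the project, the table `q` is replaced by a deep ANN mapping the (embedded) network state to action values, and the reward comes from a network model or an operational network.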

The expected results of the project are machine-learning-based algorithms, typically based on Deep Reinforcement Learning, devised to tackle resource allocation problems in network control. These algorithms will be accompanied by frameworks for computer network simulation or emulation, which are required to validate the algorithms' operation in close-to-real scenarios. The algorithms and frameworks are still under development.

The initial focus of the project was the dynamic allocation of service chains of virtual network functions, namely the allocation of video delivery service chains. Later, the project's focus could be broadened to other relevant scenarios, such as the integration of wireless and wired access in so-called fog networks, or multi-agent networking in multi-domain networks.

Submissions to international conferences and journals are in preparation.

By 2021, cloud IP traffic will account for most of an Internet traffic whose characteristics are becoming more complex, with an increasing diversity of devices and more dynamic traffic patterns. A cloud-oriented proposal to face this situation is Knowledge Defined Networking (KDN), where Machine Learning (ML) and Artificial Intelligence (AI) are combined with SDN/NFV and network monitoring to collect data, transform them into knowledge (e.g. models) via ML, and take decisions based on this knowledge. Under this paradigm, we aim to design a unified AI-based framework able to learn new efficient cloud network control algorithms. This framework will seamlessly integrate data-driven control (based on ML tools) and model-driven control (based on optimization models), addressing the scalability and optimality issues of cloud control. To do so, we intend to apply two promising AI tools: Deep Learning (DL) and Reinforcement Learning (RL).

In the project, a Deep Learning Artificial Neural Network (ANN) will be used to transform the original input data representations (in our case, the cloud network state) into a low-dimensional space where the network's structural information and properties are maximally preserved, and to use them to solve the optimal control problem in a more tractable way. RL will be applied to learn the optimal control by interacting with the environment (in our case, the cloud network). These interactions can be used to guide the learning of the weights of the deep ANN. As a result, the RL algorithm (acting as a control loop) will solve the control problem more easily, using as input the more compact, lower-dimensional representations found by the deep neural network. The main novelty of our approach is our claim that, for network control problems, the deep ANN should not be implemented with the deep layer architectures used in computer vision (the so-called convolutional layers), but with a different kind (the novel graph embedding architectures) better fitted to the graph nature of network problems.
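The core idea behind graph embedding layers is message passing: each node's representation is repeatedly updated by aggregating its neighbours' representations, so the result reflects the graph structure of the network state. A minimal sketch, in pure Python with fixed mixing weights instead of learned ones (graph, features, and the 0.5 coefficients are all illustrative):

```python
def message_passing(adj, features, rounds=2):
    """adj: node -> list of neighbours; features: node -> scalar feature.
    Returns an embedding per node after a few aggregation rounds."""
    h = dict(features)
    for _ in range(rounds):
        h = {
            # combine each node's own value with the mean of its neighbours'
            v: 0.5 * h[v] + 0.5 * sum(h[u] for u in adj[v]) / len(adj[v])
            for v in adj
        }
    return h

# 3-node path graph a - b - c; the feature could be, e.g., local link load
adj = {"a": ["b"], "b": ["a", "c"], "c": ["b"]}
emb = message_passing(adj, {"a": 1.0, "b": 0.0, "c": 0.0})
print(emb)  # after 2 rounds, node "a"'s information has reached node "c"
```

In an actual graph embedding layer the fixed 0.5 coefficients become trainable weight matrices and the features are vectors, but the structural idea, aggregation along graph edges rather than over a pixel grid, is the same.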

We therefore propose to use graph embedding layers as deep layers to solve cloud network control problems, namely the dynamic allocation of service chains composed of virtualised network functions. Starting from the case where the network service is unicast, we will later move to the multicast case, since video delivery, the classical multicast service, is the Internet's killer application. Finally, we will implement a KDN proof-of-concept testbed in which our Deep Reinforcement Learning control will send its control decisions via the northbound interface to an SDN controller, which, in turn, will instruct an emulated SDN network.
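To make the service-chain allocation problem concrete, here is a toy greedy baseline: an ordered chain of virtual network functions (VNFs) is placed on servers, each VNF going to the server with the most remaining capacity. The function, chain, and capacities are hypothetical; the project's aim is precisely to replace such hand-written heuristics with learned DRL policies.

```python
def place_chain(chain, capacity):
    """chain: ordered list of (vnf_name, cpu_demand);
    capacity: server -> free CPU units. Returns vnf -> server, or None."""
    placement = {}
    free = dict(capacity)
    for vnf, demand in chain:
        # greedy rule: pick the feasible server with the most free capacity
        candidates = [s for s, c in free.items() if c >= demand]
        if not candidates:
            return None  # chain rejected: no feasible placement found
        server = max(candidates, key=free.get)
        placement[vnf] = server
        free[server] -= demand
    return placement

# a hypothetical video-delivery chain and two servers
chain = [("firewall", 2), ("transcoder", 4), ("cache", 1)]
placement = place_chain(chain, {"s1": 4, "s2": 6})
print(placement)
```

A learned policy would instead map the embedded network state (servers, links, current load) to a placement action, and be rewarded, for instance, for accepted chains and low resource usage.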

Project coordination

Ramon Aparicio Pardo (Laboratoire informatique, signaux systèmes de Sophia Antipolis)

The author of this summary is the project coordinator, who is responsible for its content. The ANR declines any responsibility as to its content.

Partner

I3S Laboratoire informatique, signaux systèmes de Sophia Antipolis

ANR grant: 221,794 euros
Beginning and duration of the scientific project: March 2020 - 42 months

Useful links

Explore our database of funded projects

ANR makes its datasets on funded projects publicly available.
