Augmented Reality and Tangible User Interface to Supervise and Interact with Robot Swarms – ARTUISIS
How can dozens of autonomous, self-organising robots be controlled?
This project focuses on the elementary behaviours that lead to the spatial self-organisation of a swarm of robots: coordinated movement, densification, expansion and disordered movement. It aims to enable an operator to:
- understand the mechanisms of this self-organisation
- supervise the dynamics of the elementary behaviours
- act intuitively, ergonomically and effectively on this spatial self-organisation.
Augmented visualisation to aid understanding and intuitive tangible interactive control
The project's problem can be summed up by the following question: how can we effectively influence and control, in an informed way, a swarm of robots whose overall behaviour is complex? The scientific challenges linked to this question, in the field of Human-Swarm Interaction (HSI), are as follows:

1. The first challenge is to give the operator an understanding of the mechanisms of the swarm's basic behaviours, together with a real-time visualisation of their dynamics. The aim here is to determine what information should be transmitted to the operator, and how, to help them build a mental model of the swarm's behaviour.

2. The second challenge concerns the implementation of intuitive, ergonomic and effective means of interacting with the swarm of robots. This issue is closely linked to the complexity of the swarm's behaviour and its distributed nature: how can we influence, in a natural and intuitive way, a system whose algorithm has been designed to operate in a decentralised way?

This project proposes to address these challenges by identifying and making explicit, through visualisation, the mechanisms leading to the spatial self-organisation of the swarm's robots, and by designing specific interactors capable of representing and acting on the swarm's spatiality and dynamics. The originality of the ARTUISIS project lies in 1) the use of Augmented Reality to help observe and understand the mechanisms of swarm behaviour, and 2) the design and use of a specific tangible interactor to represent the spatial dynamics of the swarm and influence it intuitively. In this project, we address these questions in the context of fragmentation, a situation that is difficult for humans to grasp and which corresponds to a weakness in the swarm's autonomy: a loss of communication and coordination between the robots that make up the swarm, leading to its division.
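As described above, fragmentation corresponds to a loss of communication between robots that splits the swarm into disconnected groups. One minimal way to formalise this, sketched here under assumed names and a hypothetical communication radius rather than the project's actual model, is to build the communication graph from the robots' positions and count its connected components:

```python
# Sketch: detect swarm fragmentation as a disconnection of the
# communication graph. The function name, the communication radius
# and the union-find approach are illustrative assumptions, not the
# project's actual model.
from math import dist

def count_groups(positions, comm_radius):
    """Number of connected components of the communication graph.
    A value greater than 1 means the swarm is fragmented."""
    n = len(positions)
    parent = list(range(n))

    def find(i):
        # Find the representative of i's component, with path halving.
        while parent[i] != i:
            parent[i] = parent[parent[i]]
            i = parent[i]
        return i

    # Link every pair of robots that can communicate directly.
    for i in range(n):
        for j in range(i + 1, n):
            if dist(positions[i], positions[j]) <= comm_radius:
                parent[find(i)] = find(j)

    return len({find(i) for i in range(n)})

# Two tight clusters far apart: the swarm is fragmented into 2 groups.
robots = [(0, 0), (1, 0), (0, 1), (10, 10), (11, 10)]
print(count_groups(robots, comm_radius=2.0))  # 2
```

In practice the radius would come from the robots' actual communication range, and the check would run on positions sampled over time.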
The first step was to examine the existing collective behaviours of swarms: by identifying the mechanisms underlying the self-organisation found in models from the literature, similarities reveal "self-organisation methods", each with its own properties, that make it possible to generate certain collective behaviours. This analysis also highlighted information that can be transmitted to the operator to aid understanding, and enabled us to propose controls directly linked to these mechanisms in order to influence the behaviour of the swarm. At the end of this analysis, one method was selected that produced several different behaviours, as well as fragmentations with different properties, and it was used in two user studies.
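As an illustration of what such a "self-organisation method" can look like, many models in the literature combine a few virtual forces per robot, and reweighting those forces yields different elementary behaviours. The sketch below is a deliberately simplified, hypothetical example: the force terms, weights and names are assumptions, not the method actually selected by the project.

```python
# Sketch of a virtual-force swarm model: each robot's velocity is
# updated from a weighted sum of force terms. The terms and weights
# below are illustrative assumptions, not the project's actual model.
import numpy as np

def step(pos, vel, w_cohesion, w_alignment, w_noise, dt=0.1, rng=None):
    """One update of all robots' velocities and positions."""
    if rng is None:
        rng = np.random.default_rng(0)
    centre = pos.mean(axis=0)
    cohesion = centre - pos             # pull towards the group's centre
    alignment = vel.mean(axis=0) - vel  # match the group's mean heading
    noise = rng.normal(size=pos.shape)  # random perturbation
    vel = vel + dt * (w_cohesion * cohesion
                      + w_alignment * alignment
                      + w_noise * noise)
    return pos + dt * vel, vel

# Illustrative weightings for the four elementary behaviours:
#   densification:        strong positive cohesion
#   expansion:            negative cohesion (robots spread out)
#   coordinated movement: strong alignment
#   disordered movement:  noise dominates
pos = np.random.default_rng(1).uniform(0, 5, size=(20, 2))
vel = np.zeros((20, 2))
pos, vel = step(pos, vel, w_cohesion=1.0, w_alignment=0.5, w_noise=0.1)
```

The point of the sketch is that one mechanism with a few exposed parameters can generate several behaviours, which is also what makes those parameters natural candidates for operator controls.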
The first study sought to assess the limits of human perception when an operator observes a swarm of self-organising robots, in particular the observable factors of the swarm that influence this perception. Before proposing augmented visualisation of additional information, it was first necessary to identify what humans are naturally capable of perceiving and understanding about the swarm. In this study, participants had to determine whether or not the simulated swarms presented to them showed fragmentation.
The second study aimed to assess whether making certain aspects of the robots' behavioural model explicit could convey relevant information to users, with the aim of facilitating their understanding of the swarm's dynamics. Taking into account the results of the first study, which showed the difficulties humans encounter in perceiving certain fragmentations, we proposed three localised, augmented visualisations representing what each robot perceives, what drives its decision-making process, and the action it decides upon. These visualisations were evaluated to determine their impact on humans' ability to perceive and prevent the appearance of fragmentations, using Virtual Reality (VR) to immerse users in the role of operator.
To address the second challenge, the chosen approach follows elicitation techniques used in the field of Human-Computer Interaction to capture expert knowledge. Here, the experts are people trained in the problems of interacting with robot swarms, and the main objective is to determine which interactions, forms and object dynamics would be relevant for visualising (physical form) and, above all, controlling the swarm. The design and hardware/software development of a prototype will then make it possible to evaluate the performance of this device. The interactor will then be evaluated (usability, performance) with non-expert users.
The results of the first study show that humans correctly perceive swarm fragmentation (around 90% of the time) for the coordinated movement, densification and disordered movement behaviours. However, they have difficulty perceiving fragmentation when the swarm adopts the expansion behaviour, as there are no cues with which to estimate a loss of communication between the agents. Furthermore, the results show that it is difficult for humans to correctly anticipate fragmentation in all categories of behaviour. In addition, humans take longer to respond and have difficulty anticipating fragmentation when the swarm adopts the disordered movement behaviour, because the erratic movement of the agents makes it impossible to anticipate the onset of fragmentation. We also showed that the inter-/intra-group distance and the separation speed influence the chances of correctly identifying swarm fragmentation, suggesting that humans rely, among other cues, on the distance separating the groups and on the speed at which the groups' centres of mass move apart. These results highlight the situations in which it is most difficult for an operator to perceive and anticipate fragmentation.
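The two cues humans appear to rely on, inter-group distance and the speed at which the groups' centres of mass separate, can be made concrete with a small sketch. The function names and the two-group framing are illustrative assumptions for this example:

```python
# Sketch: compute two cues humans may use to perceive fragmentation.
# The names and the two-group framing are illustrative assumptions.
import numpy as np

def inter_group_distance(group_a, group_b):
    """Distance between the two groups' centres of mass.
    Each argument is an (n, 2) array of robot positions."""
    return float(np.linalg.norm(group_a.mean(axis=0) - group_b.mean(axis=0)))

def separation_speed(a_t0, b_t0, a_t1, b_t1, dt):
    """Rate of change of the centre-of-mass distance over one interval."""
    d0 = inter_group_distance(a_t0, b_t0)
    d1 = inter_group_distance(a_t1, b_t1)
    return (d1 - d0) / dt
```

A growing distance combined with a positive, sustained separation speed is the kind of signal the study suggests humans pick up on when they do perceive fragmentation.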
The results of the second study show that the localised, systematic display of the interactions between robots (as links), of the robots' direction (as arrows) and of their dominant force (as coloured arrows) has no significant influence on users' ability to anticipate fragmentation or to choose the right interaction to avoid it. Furthermore, humans anticipate and prevent fragmentation more easily in the densification behaviour than in the expansion, coordinated movement and disordered movement behaviours. These results show that there is a real need to help humans prevent swarm fragmentation, particularly for coordinated movement, expansion and disordered movement. As the proposed visualisations do not significantly improve users' ability to prevent fragmentation, other solutions need to be explored.
Swarms of robots are being considered for a wide range of applications in a variety of fields, such as agriculture, search and rescue, and exploration. To make such applications possible, research is focusing on improving these swarms. Future developments will make swarms more efficient, with larger numbers of robots capable of adjusting their behaviour to the situation in full autonomy, and adopting behaviours and dynamics that are increasingly difficult for humans to grasp. As robot swarms become more autonomous and efficient, it becomes crucial to step up research into human-swarm interaction, to ensure that humans remain capable of understanding and intervening in the operation of swarms when necessary. To enable humans to retain control of swarms, this project has contributed to research into human-swarm interaction by investigating how humans understand swarms and by identifying ways of helping them do so. This work has highlighted situations that are difficult for humans to grasp, and behaviours that can mislead humans as to the real state of the swarm, even though these behaviours are "building blocks" found in more complex compound behaviours. If we want to keep humans in the swarm control loop in the long term, it is essential to continue research in this area. The alternative, relying solely on the swarm's autonomy and having to trust it to carry out the desired tasks for lack of understanding of how it works, raises ethical and responsibility issues and cannot be considered a viable or responsible solution.
Scenarios involving several users simultaneously controlling a swarm could also be envisaged in the future. For example, a supervisor and an operator could respectively assign tasks to the swarm and ensure that it performs them correctly. A supervisor could also divide the swarm and give control of each group to different operators acting within a common environment and towards a common goal. These examples raise the issue of collaboration. Since the swarm is designed to operate autonomously, users can only influence it. In this context, can a user observing the swarm understand that it is being influenced by another user and not by its environment? Can they understand the intention behind this influence? Might they not intervene because the swarm's behaviour does not suit them, thereby thwarting the other user's strategy? Collaboration involving several users and one (or more) swarm(s) does not yet seem to have been addressed in the literature. It would nevertheless be worth investigating, as it would require defining methods for collaboration and tools for users to communicate their intentions to each other.
Due to their distributed and autonomous nature, robot swarms have very useful self-adaptation properties, which nevertheless make any supervision (visualisation and understanding) and control by a human operator very difficult. The two main scientific challenges are helping the operator to understand and visualise the complex behaviour of the swarm, which emerges from the interactions between robots, and finding ways to interact with the swarm efficiently. This project proposes 1) to use Augmented Reality to help visualise and understand the mechanisms of the swarm's behaviour, and 2) to design and use a specific tangible interface to represent the spatial dynamics of the swarm and influence it in a natural and intuitive way. An evaluation of the contribution of the AR-TUI device to users' understanding, and of the system's usability, within two experimental frameworks, is planned.
Project coordination
Jérémy Rivière (Laboratoire des Sciences et Techniques de l'Information, de la Communication et de la Connaissance)
The author of this summary is the project coordinator, who is responsible for its content. The ANR declines all responsibility for this content.
Partnership
LAB-STICC UBO – Laboratoire des Sciences et Techniques de l'Information, de la Communication et de la Connaissance
ANR grant: 148,524 euros
Beginning and duration of the scientific project:
September 2021
- 42 months