DS07 - Information and Communication Society

Event-based VIsual SERvoing – e-VISER

dextAIR, a new concept of aerial manipulator with elastic suspension


When cable-driven parallel robots meet aerial manipulators, or how to combine a large workspace, dexterity and low energy consumption

When a drone (or unmanned aerial vehicle) has to act on its environment, it is called an aerial manipulator. These robots embed a robotic arm or a gripper to grasp or transport objects. Aerial manipulators combine the large workspace of drones with the dexterity and accuracy of robotic manipulators. Nevertheless, they consume a lot of energy to compensate for gravity during flight, which considerably reduces their autonomy when operating on battery.

Cable-driven parallel robots have an end effector connected to a frame by cables whose lengths can be changed to control the position and orientation of the end effector. They have a smaller workspace due to the limited length of the cables that actuate them, but in return they consume much less energy, since the cables compensate for gravity.

The idea then emerged to suspend an aerial vehicle from an elastic cable, thus combining the advantages of both types of robots. The result is the dextAIR robot, an aerial manipulator with elastic suspension.

The dextAIR concept was extended to six degrees of freedom using an omnidirectional aerial manipulator, and the mechanical design was simplified to a single cable connecting the manipulator to an optional robotic carrier. In addition, a low-stiffness spring replaces the traditional cable-and-winch system of cable-driven robots, resulting in reduced power consumption, dynamic decoupling, and a mechanically simple assembly. The robot was designed with off-the-shelf technologies and a parsimonious engineering effort.
The manipulator was first suspended from a fixed carrier anchored to the ceiling, and then from a robotic carrier. The latter extends the robot's workspace, previously limited to the vicinity of its equilibrium point by the spring's restoring force, while reducing energy consumption, since the carrier moves the manipulator's equilibrium. Experimental prototypes have demonstrated millimeter accuracy and fast dynamics of the manipulator, as well as a significant reduction in energy consumption.
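To see why the elastic suspension saves energy, consider a minimal one-dimensional sketch (with made-up mass and stiffness values, not the actual dextAIR parameters): at equilibrium the spring carries the full weight, so hovering there costs no thrust, and holding an offset position requires thrust proportional only to the low spring stiffness, not to the robot's weight.

# Minimal 1-D model of an elastically suspended manipulator
# (illustrative values only, not the actual dextAIR parameters).
G = 9.81   # gravity [m/s^2]
m = 1.5    # manipulator mass [kg], assumed
k = 15.0   # spring stiffness [N/m], low by design, assumed

# At equilibrium the spring stretch s_eq satisfies k * s_eq = m * G,
# so the propellers produce zero thrust to hover there.
s_eq = m * G / k

def hover_thrust(dz):
    """Thrust needed to hold a vertical offset dz [m] from equilibrium
    (positive dz = above equilibrium, where the spring pulls less)."""
    spring_force = k * (s_eq - dz)   # upward spring force at the offset
    return m * G - spring_force      # thrust makes up the difference

print(hover_thrust(0.0))   # 0.0 N at the equilibrium point
print(hover_thrust(0.5))   # k * 0.5 = 7.5 N, far less than m*G = 14.7 N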

Nonlinear predictive control strategies have been developed and implemented for the different configurations. These strategies solve an optimization problem in real time while taking actuator or energy constraints into account. Compared to more classical control approaches, they have clearly helped to maximize the performance of the closed-loop system (in terms of speed, accuracy, workspace, offset-free tracking, constraint satisfaction...) and to drastically reduce energy consumption.
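As a rough illustration of the receding-horizon principle behind such strategies (a minimal sketch on a toy double-integrator model, not the project's actual controller or dynamics), each control step solves a constrained optimization over a short prediction horizon and applies only the first input:

import numpy as np
from scipy.optimize import minimize

# Toy double-integrator: x = [position, velocity], input u = acceleration.
DT, N = 0.05, 10   # sampling time [s] and prediction horizon (assumed)
U_MAX = 2.0        # actuator constraint (assumed)

def simulate(x0, u_seq):
    """Roll the model out over the horizon for a given input sequence."""
    x, traj = np.array(x0, dtype=float), []
    for u in u_seq:
        x = x + DT * np.array([x[1], u])
        traj.append(x.copy())
    return np.array(traj)

def cost(u_seq, x0, x_ref):
    """Tracking error plus a small energy penalty on the inputs."""
    traj = simulate(x0, u_seq)
    return np.sum((traj[:, 0] - x_ref) ** 2) + 1e-2 * np.sum(u_seq ** 2)

def mpc_step(x0, x_ref):
    """Solve the horizon problem and return only the first input."""
    res = minimize(cost, np.zeros(N), args=(x0, x_ref),
                   bounds=[(-U_MAX, U_MAX)] * N)
    return res.x[0]

# Closed loop: re-solve at every step from the newly measured state.
x = np.array([0.0, 0.0])
for _ in range(40):
    u = mpc_step(x, x_ref=1.0)
    x = x + DT * np.array([x[1], u])
print(x)  # position should approach the 1.0 m reference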

The development of the dextAIR robot, as well as the topics addressed during the project, will continue through three new ANR projects: drone navigation in dark environments (dark-NAV), street art by drone (STRAD) and urban micro-climatology (TIR4sTREEt).

The project led to the development of three prototypes of the dextAIR robot and two software packages for the low-level control of the robot, as well as the publication of two international journal articles and seven international conference papers.

Visual servoing techniques make it possible to guide a robot using cameras as the sole sensors. The principle consists in acquiring, processing and analyzing digital images, frame by frame, in order to extract features and produce data the robot can understand, so that it takes appropriate decisions for its motion. This field contributes significantly to the development of autonomous robots. However, new methods are needed, because classical techniques consume too many resources and barely exploit the dynamic characteristics of the perceived scene.
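For reference, the classical image-based scheme that the project seeks to improve upon typically recomputes a camera velocity from the feature error at every frame; a minimal sketch in its standard textbook form (with a hypothetical random interaction matrix standing in for a real calibrated one) is:

import numpy as np

LAMBDA = 0.5  # control gain (assumed)

def ibvs_velocity(s, s_star, L):
    """Classical image-based visual servoing law: the camera velocity
    v = -lambda * pinv(L) @ (s - s_star) drives the feature error to zero.
    s, s_star: current and desired image features; L: interaction matrix."""
    return -LAMBDA * np.linalg.pinv(L) @ (s - s_star)

# Hypothetical example: 4 point features (8 coordinates) and an 8x6
# interaction matrix relating feature motion to the 6-DOF camera velocity.
rng = np.random.default_rng(0)
L = rng.normal(size=(8, 6))
s, s_star = rng.normal(size=8), np.zeros(8)
print(ibvs_velocity(s, s_star, L))  # 6-DOF velocity command, every frame

Running this pipeline on every full frame, whether or not anything changed, is precisely the resource cost that the event-based approach below avoids.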

The e-VISER project aims to design an original visual servoing architecture, based on the event-based paradigm, that rethinks the whole data acquisition and processing chain from perception to decision. This novel concept redefines when to measure, communicate, compute, control or actuate: only when a significant change has occurred in the dynamics of the robot or of its environment, instead of at a constant, periodic update rate. More precisely, the idea is to combine innovative event-based elements (typically a vision sensor and a controller) and to develop new image processing and control strategies.

Among event-based sensors, the so-called dynamic vision sensor (DVS) represents an innovative approach. Whereas a standard camera periodically sends superfluous and redundant data, addressing all pixels of each image, a DVS only sends the pixel-level changes caused by movement in the scene, at the time they occur: each pixel produces an event whenever it perceives a change of intensity. A DVS therefore offers a drastic decrease in perception latency, but its spatial resolution remains low, far from the accuracy required for visual servoing. To overcome this weakness, it is proposed to associate i) a DVS, to rapidly detect a movement in the scene, with ii) a high-resolution camera, to extract features from the environment more accurately but with reduced image processing (applied only to the region of interest pinpointed by the DVS events, instead of the entire image). This framework involves developing advanced image processing methods that take into account the spatial and temporal data from the DVS.
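A minimal sketch of this sensor association (with a hypothetical event format, helper names and sensor resolutions; the project's actual pipeline is more elaborate): DVS events vote for a region of interest, and feature extraction on the high-resolution frame is restricted to that region:

import numpy as np

# Hypothetical DVS events: (x, y, timestamp, polarity), in DVS pixels.
events = np.array([(40, 60, 0.001, 1), (42, 61, 0.002, -1), (41, 59, 0.003, 1)])

def events_to_roi(events, scale, margin=20):
    """Bounding box of recent events, mapped to high-res pixel coordinates.
    `scale` is the resolution ratio between the two sensors (assumed known
    from calibration); `margin` pads the box to be safe."""
    xs, ys = events[:, 0] * scale, events[:, 1] * scale
    return (int(xs.min() - margin), int(ys.min() - margin),
            int(xs.max() + margin), int(ys.max() + margin))

def extract_features(frame, roi):
    """Run feature extraction on the ROI crop only, instead of the
    whole frame (placeholder: simply returns the crop here)."""
    x0, y0, x1, y1 = roi
    return frame[max(y0, 0):y1, max(x0, 0):x1]

frame = np.zeros((1080, 1920), dtype=np.uint8)   # high-resolution image
roi = events_to_roi(events, scale=1920 / 128)    # e.g. a 128-pixel-wide DVS
patch = extract_features(frame, roi)
print(roi, patch.shape)  # only this small patch is processed per update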

The event-based controller also relies on an innovative concept: control is neither computed nor updated unless it is required. Typically, triggering events occur either when a movement (or an obstacle) is detected in the scene perceived through the visual sensor, or when the control signal has to be updated for the robot to reach a given objective. This setup involves developing methods for image processing, control and communication in an unconventional way, while still guaranteeing the stability, quality of service and performance of the closed-loop system, even though the measurement and/or the control signal remain constant between two successive (non-periodic) events.
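A minimal sketch of the event-triggered idea (toy scalar plant and threshold chosen for illustration, not the project's actual triggering rule): the control input is recomputed only when the state has drifted enough since the last update, and is held constant otherwise:

import numpy as np

DT, THRESHOLD = 0.01, 0.05   # sampling time and trigger level (assumed)

def control(x):
    """State feedback, recomputed only on events."""
    return -2.0 * x

# Toy unstable scalar plant dx = (x + u) dt; control held between events.
x, u, x_at_event, n_updates = 1.0, 0.0, np.inf, 0
for k in range(1000):
    # Event condition: recompute u only if the state has drifted enough
    # since the last update (otherwise measurement and control stay constant).
    if abs(x - x_at_event) > THRESHOLD:
        u, x_at_event, n_updates = control(x), x, n_updates + 1
    x += DT * (x + u)

print(x, n_updates)  # stabilized near 0 with far fewer than 1000 updates

The design trade-off illustrated here is exactly the one stated above: the threshold buys a large reduction in computation and communication, at the price of a stability and performance analysis that must hold between events.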

To summarize, the new event-based visual servoing scheme will make it possible to i) reduce the computing and communication costs of image processing and control while keeping the same accuracy and closed-loop performance, ii) speed up the visual servoing and thus improve the robot's performance, and iii) save energy. The whole framework will be implemented to stabilize and fly at high speed a complex robotic system under resource constraints: an autonomous quadcopter that embeds the vision sensors and a computing platform for image processing and control on board. The ambitious experimental validation will consist of navigation and obstacle avoidance of the drone in an unstructured and unknown environment.

Project coordination

Sylvain DURAND (Laboratoire des sciences de l'Ingénieur, de l'Informatique et de l'Imagerie)

The author of this summary is the project coordinator, who is responsible for its content. The ANR declines any responsibility for its content.

Partner

ICube Laboratoire des sciences de l'Ingénieur, de l'Informatique et de l'Imagerie

ANR funding: 197,640 euros
Beginning and duration of the scientific project: - 36 months
