DS10 - Défi de tous les savoirs

Visual-Inertial aided navigation for Micro Aerial Drones – VIMAD

Perception based on visual and inertial sensing; visual-inertial sensor fusion; aerial navigation; state estimation; closed-form solution; nonlinear observability

Acquire a deep theoretical comprehension of the problem of fusing visual and inertial measurements. Starting from this knowledge, build a new robust and reliable perception system, based only on visual and inertial measurements, to enhance the navigation capabilities of fully autonomous micro aerial drones.

1. Automatic state initialization using only monocular vision and inertial data delivered during a short time interval.
2. Observability properties of the visual-inertial sensor fusion problem.

1. Use of the closed-form solution introduced in [1, 2] for the initialization of a filter that fuses visual and inertial measurements;
2. Use of the closed-form solution for outlier detection;
3. Determination of the range of validity of the method introduced in [3] and, if possible, derivation of the general solution of the Unknown Input Observability problem in the nonlinear case;
4. Use of the above method to infer the observability properties of the visual-inertial sensor fusion problem;
5. Investigation of new sensor suites that are more effective and better suited to micro aerial vehicles.

[1] A. Martinelli, Vision and IMU Data Fusion: Closed-Form Solutions for Attitude, Speed, Absolute Scale and Bias Determination, IEEE Transactions on Robotics, Vol. 28, No. 1, pp. 44–60, 2012.
[2] A. Martinelli, Closed-form solution of visual-inertial structure from motion, Int. J. of Computer Vision, Vol. 106, No. 2, pp. 138–152, 2014.
[3] A. Martinelli, Visual-inertial structure from motion: observability vs minimum number of sensors, International Conference on Robotics and Automation (ICRA), Hong Kong, China, 2014.

To obtain a method that performs automatic state initialization using only monocular vision and inertial data delivered during a short time interval, we carried out an exhaustive simulation-based analysis of the performance of the closed-form solution introduced in [1,2]. This analysis clearly showed that, while the closed-form solution is very robust against the accelerometer bias, its performance is severely degraded by a gyroscope bias. For this reason, we introduced a new method that simultaneously initializes the state and self-calibrates the gyroscope bias.
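
The core of such a closed-form initialization can be illustrated with a minimal Python sketch: assuming the rotations are already known from gyroscope integration and all quantities are expressed in a common frame, each camera pose yields one linear constraint relating the visual scale, the initial velocity and gravity. The function and variable names below are illustrative, not taken from [1,2]:

```python
# Hedged sketch of a closed-form visual-inertial initialization: with the
# scale lambda, initial velocity v0 and gravity g as unknowns, each camera
# pose at time t_i yields the linear constraint
#     lambda * u_i - v0 * t_i - 0.5 * g * t_i**2 = s_i
# where u_i is the up-to-scale camera translation from monocular vision and
# s_i the double integral of the (rotated) accelerometer signal.
import numpy as np

def closed_form_init(t, u, s):
    """t: (N,) times; u: (N,3) up-to-scale visual translations;
    s: (N,3) double-integrated accelerations; returns (scale, v0, g)."""
    N = len(t)
    A = np.zeros((3 * N, 7))
    for i in range(N):
        r = slice(3 * i, 3 * i + 3)
        A[r, 0] = u[i]                          # column for the scale lambda
        A[r, 1:4] = -t[i] * np.eye(3)           # columns for v0
        A[r, 4:7] = -0.5 * t[i]**2 * np.eye(3)  # columns for gravity g
    x, *_ = np.linalg.lstsq(A, s.reshape(-1), rcond=None)
    return x[0], x[1:4], x[4:7]
```

With at least three camera poses the seven unknowns are generically determined by least squares. A gyroscope bias corrupts the rotations used to build both u_i and s_i, which is precisely why its effect is so damaging and why the joint bias calibration mentioned above helps.
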
Regarding the investigation of the observability properties of the visual-inertial sensor fusion problem, we started by introducing a new analytic tool able to perform the observability analysis of nonlinear systems in the presence of disturbances. The problem of deriving an analytic tool that determines the observability properties in the presence of unknown inputs is known as the Unknown Input Observability (UIO) problem. This problem was introduced and first investigated by the automatic control community in the seventies [3,4]. Because of its complexity, only the linear case has been solved [3].
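
For background, the following Python sketch illustrates the standard observability rank condition of Hermann and Krener (1977) for systems with known inputs, which is the starting point that the new tool extends to the unknown-input case; the helper function and the toy system are illustrative assumptions, not taken from the cited works:

```python
# Standard observability rank condition: stack the gradients of repeated Lie
# derivatives of the outputs along the dynamics and check their rank.
import sympy as sp

def observability_rank(f, h, x, order=None):
    """f: drift vector field (list), h: outputs (list), x: state symbols."""
    order = order if order is not None else len(x) - 1
    rows, lie = [], list(h)
    for _ in range(order + 1):
        rows += [[sp.diff(l, xi) for xi in x] for l in lie]
        # next-order Lie derivative: L_f l = grad(l) . f
        lie = [sum(sp.diff(l, xi) * fi for xi, fi in zip(x, f)) for l in lie]
    return sp.Matrix(rows).rank()

# Toy example: a unit-speed unicycle observed through the squared range only.
x1, x2, th = sp.symbols('x1 x2 theta')
f = [sp.cos(th), sp.sin(th), 0]
print(observability_rank(f, [x1**2 + x2**2], [x1, x2, th]))
# prints 2 < 3: the state is not observable (rotating the whole configuration
# about the origin leaves the output unchanged)
```
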

[1] A. Martinelli, Vision and IMU Data Fusion: Closed-Form Solutions for Attitude, Speed, Absolute Scale and Bias Determination, IEEE Transactions on Robotics, Vol. 28, No. 1, pp. 44–60, 2012.
[2] A. Martinelli, Closed-form solution of visual-inertial structure from motion, Int. J. of Computer Vision, Vol. 106, No. 2, pp. 138–152, 2014.
[3] G. Basile and G. Marro, On the observability of linear, time-invariant systems with unknown inputs, J. Optimization Theory Appl., 3:410–415, 1969.
[4] R. Guidorzi and G. Marro, On Wonham stabilizability condition in the synthesis of observers for unknown-input systems, IEEE Transactions on Automatic Control, 16(5):499–500, Oct. 1971.

The main results obtained so far are the following three:
1. Introduction of a new method to detect outliers in data matching, based on the closed-form solution previously introduced [1];
2. Improvement of the performance of the closed-form solution by simultaneously calibrating the gyroscope bias [2];
3. A new analytic solution that extends the observability rank condition, introduced in 1977 in [7], to the case where the dynamics are affected by a single disturbance [3,4,5,6].

[1] C. Troiani, A. Martinelli, C. Laugier and D. Scaramuzza, Low computational-complexity algorithms for vision-aided inertial navigation of micro aerial vehicles, Robotics and Autonomous Systems, 69:80–97, 2015.
[2] J. Kaiser, A. Martinelli, F. Fontana and D. Scaramuzza, Simultaneous State Initialization and Gyroscope Bias Calibration in Visual Inertial Aided Navigation, IEEE Robotics and Automation Letters (RA-L), January 2016.
[3] A. Martinelli, Nonlinear Unknown Input Observability: Analytical expression of the observable codistribution in the case of a single unknown input, SIAM Conference on Control and its Applications (SIAM-CT15), Paris, France, July 2015.
[4] A. Martinelli, Extension of the Observability Rank Condition to Nonlinear Systems Driven by Unknown Inputs, MED'15, Torremolinos, Spain, June 2015.
[5] A. Martinelli, Minimalistic sensor design in visual-inertial structure from motion, IEEE International Conference on Robotics and Automation (ICRA), Seattle, USA, 2015.
[6] A. Martinelli, Nonlinear Unknown Input Observability: Extension of the observability rank condition and the case of a single unknown input, INRIA technical report, available online on the HAL archive: hal.inria.fr/hal-01071314v5.
[7] R. Hermann and A.J. Krener, Nonlinear Controllability and Observability, IEEE Transactions on Automatic Control, AC-22(5):728–740, 1977.

Future actions in the framework of WP1 will include the integration of this method into the perception system currently adopted by UZH for drone localization and navigation. In particular, this would allow aggressive take-off manoeuvres, such as hand-throwing the drone into the air, as already demonstrated in [1] with a range sensor. With our technique, the range sensor could be dispensed with.

Future work in the framework of WP2 will be devoted to extending the analytic tool for the observability analysis of nonlinear systems in the presence of disturbances to the case of multiple independent disturbances. We are currently working in this direction. We believe that a new abstract operation that extends the Lie bracket must be considered in the framework of Lie algebras.
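
For reference, the operation to be generalized is the standard Lie bracket of two vector fields, [f, g] = (∂g/∂x) f − (∂f/∂x) g; the following small sympy sketch (with illustrative fields, not from the project) simply fixes this notation:

```python
# Lie bracket [f, g] = Jg * f - Jf * g of two vector fields on R^n.
import sympy as sp

def lie_bracket(f, g, x):
    Jf = sp.Matrix(f).jacobian(sp.Matrix(x))
    Jg = sp.Matrix(g).jacobian(sp.Matrix(x))
    return sp.simplify(Jg * sp.Matrix(f) - Jf * sp.Matrix(g))

x1, x2 = sp.symbols('x1 x2')
print(lie_bracket([x2, 0], [0, x1], [x1, x2]))  # -> Matrix([[-x1], [x2]])
```
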

[1] M. Faessler, F. Fontana, C. Forster and D. Scaramuzza, Automatic re-initialization and failure recovery for aggressive flight with a monocular vision-based quadrotor, International Conference on Robotics and Automation (ICRA), 2015.

- C. Troiani, A. Martinelli, C. Laugier and D. Scaramuzza, Low computational-complexity algorithms for vision-aided inertial navigation of micro aerial vehicles, Robotics and Autonomous Systems, 69:80–97, 2015. Available online: hal.inria.fr/hal-01248800

- J. Kaiser, A. Martinelli, F. Fontana and D. Scaramuzza, Simultaneous State Initialization and Gyroscope Bias Calibration in Visual Inertial Aided Navigation, IEEE Robotics and Automation Letters (RA-L), January 2016. Available online: hal.inria.fr/hal-01305541

- A. Martinelli, Nonlinear Unknown Input Observability: Analytical expression of the observable codistribution in the case of a single unknown input, SIAM Conference on Control and its Applications (SIAM-CT15), Paris, France, July 2015. Available online: hal.inria.fr/hal-01248792

- A. Martinelli, Extension of the Observability Rank Condition to Nonlinear Systems Driven by Unknown Inputs, MED'15, Torremolinos, Spain, June 2015. Available online: hal.inria.fr/hal-01248783

- A. Martinelli, Minimalistic sensor design in visual-inertial structure from motion, IEEE International Conference on Robotics and Automation (ICRA), Seattle, USA, 2015. Available online: hal.inria.fr/hal-01248785

- A. Martinelli, Nonlinear Unknown Input Observability: Extension of the observability rank condition and the case of a single unknown input, INRIA technical report. Available online on the HAL archive: hal.inria.fr/hal-01071314v5

- A.P.P. deposit, 25/09/2015, IDDN.FR.001.400016.000.S.P.2015.000.10100: VI-SFM, Visual Inertial Sensor Fusion Method.

The VIMAD project has two main goals:
- (technological) build a robust and reliable perception system, based only on visual and inertial measurements, to enhance the navigation capabilities of fully autonomous micro aerial drones;
- (scientific) acquire a deep comprehension of the problem of fusing visual and inertial measurements (from now on the Visual-Inertial structure from motion, VISfM).

The perception system will be embedded on micro drones to enable them to navigate safely and autonomously in GPS-denied and unknown environments, and even to perform aggressive manoeuvres. In particular, by unknown environments we mean environments that are not equipped with motion-capture systems or any other external sensors. Perception is still the main problem for high-performance robotics: once the perception problem is assumed solved, for example by the use of external motion-capture systems, established control techniques allow for highly performing systems [19,28].
A perception system suitable for a micro aerial vehicle must satisfy severe constraints, due to the small size and, consequently, the low allowed payload. This imposes the employment of lightweight sensors and low computational-complexity algorithms. In this context, inertial sensors and monocular cameras, thanks to their complementary characteristics, low weight, low cost and widespread use, represent an interesting sensor suite. On the other hand, current technologies for navigation based only on visual and inertial sensors have the following strong limitations:
- The localization task is achieved via recursive algorithms, which need initialization. This means that they are not fully autonomous and, more importantly, not robust against unmodeled events (e.g. a system failure) that require the algorithm to be re-initialized;
- They are not precise enough to allow a micro aerial vehicle to undertake aggressive manoeuvres and, more generally, to accomplish sophisticated tasks.
To overcome these limitations our perception system will be developed by relying on the following three new paradigms:
- Use of the closed-form solution to the visual-inertial structure from motion problem introduced in [23,24];
- Exploitation of the information contained in the dynamics of the drones;
- Use of the observability tool developed in [22].

The first paradigm will allow the perception system to initialize (or re-initialize) the localization task without external support. In other words, it will make the localization task fully autonomous and robust against unmodeled events such as a kidnapping. Additionally, it can be used to introduce a low-cost data association method, as sketched below. The second paradigm will enhance the perception capabilities in terms of precision, which is important for accomplishing aggressive manoeuvres. Finally, the third paradigm will allow us both to acquire a deeper comprehension of the VISfM and, hopefully, to design new and more effective sensor arrangements. In our opinion this scientific topic deserves a deep theoretical investigation, since the perception system of most mammals relies precisely on visual and vestibular signals; a deep scientific comprehension of this problem could allow the robotics community to introduce new technologies for navigation. Specifically, we will approach this fundamental problem in two main steps. In the first, we will investigate an open problem in the framework of control theory, the Unknown Input Observability (UIO) problem, namely the observability analysis in the case of unknown inputs; in the second, we will use the results obtained for UIO to investigate the observability properties of the VISfM in the case of missing inertial inputs and, eventually, to design new sensor arrangements.
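
As an illustration of the low-cost data association mentioned above, the following Python sketch gates feature matches with the epipolar constraint once a motion prior (for example from the closed-form solution) is available; it is a hedged illustration of the idea, not the exact algorithm developed in the project:

```python
# Outlier gating with a known motion prior: given the relative rotation R and
# (unit) translation t, the epipolar residual |f2^T [t]x R f1| of each match
# can be thresholded directly, with no iterative model estimation.
import numpy as np

def skew(t):
    return np.array([[0, -t[2], t[1]],
                     [t[2], 0, -t[0]],
                     [-t[1], t[0], 0]])

def epipolar_inliers(f1, f2, R, t, thresh=1e-3):
    """f1, f2: (N,3) unit bearing vectors in the two camera frames."""
    E = skew(t) @ R                              # essential matrix from prior
    residuals = np.abs(np.einsum('ij,ij->i', f2, (E @ f1.T).T))
    return residuals < thresh                    # boolean inlier mask
```
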

Project coordinator

Monsieur Agostino Martinelli (INRIA CENTRE GRENOBLE RHÔNE-ALPES)

The author of this summary is the project coordinator, who is responsible for the content of this summary. The ANR declines any responsibility as to its contents.

Partner

INRIA INRIA CENTRE GRENOBLE RHÔNE-ALPES
UZH University of Zurich

ANR grant: 88,817 euros
Beginning and duration of the scientific project: September 2014, 36 months

Useful links

Explore our database of funded projects
