Vision-based automatic landing of a passenger aircraft – VISIOLAND
VISIOn-based aircraft LANDing techniques
Although its duration represents only 4% of an average flight, the landing phase is certainly one of the most critical. Most major airports are now equipped with dedicated ground facilities (ILS/GPS) that assist the pilot and thus increase safety. But these expensive devices are not always available at medium-sized airports. In addition, in a large majority of situations, emergency landings have to be performed without any ground support.
Is it possible to secure the landing phase by using a video camera?
In this context, the VISIOLAND project aims to contribute to increasing the overall level of safety and automation of the landing phase. The difficult case of unprepared runways will receive particular attention. The proposed approach is to use information from embedded visual sensors, which opens new control perspectives:
- Choose the most appropriate visual sensor,
- Select relevant visual information,
- Study image processing techniques and the associated inaccuracies,
- Develop new dedicated automatic control techniques:
  - estimation techniques,
  - control techniques,
  - integrity evaluation,
  - pilot-in-the-loop aspects.
Our current research focuses on control theory and image processing. Technical and scientific innovations will be implemented and evaluated on two platforms:
- A fixed-wing aircraft designed by the French company L'Avion Jaune and automated by ONERA. With this drone, the algorithms developed in the VISIOLAND project (dedicated to perception, guidance and control) will be tested in real conditions,
- A highly representative Airbus simulator. This platform will make it possible to further evaluate the algorithms and to study how they interact with a human pilot.
The landing of a civilian aircraft constitutes one of the most critical phases of a commercial flight, although it represents only 4% of the flight duration. While dedicated ground-based means (e.g. ILS/GPS) do assist the pilot and increase the safety of this phase, these expensive infrastructures are not installed at every airport around the world. Moreover, emergency landings are by definition made without such help.
In this context, the VISIOLAND project proposes to contribute to increasing the overall safety and automation level of this landing phase, with particular consideration of unprepared landing fields. The proposed approach consists in exploiting the information provided by embedded visual sensors while offering new solutions to the technical problems raised by the use of this new perception capability.
As a consequence, the following aspects will be studied:
- Definition of the relevant visual sensor,
- Choice of the most useful visual cues,
- Image processing techniques and the study of the related errors,
- Development of estimation, control and integrity diagnostic methods (a minimal illustrative sketch is given after this list),
- Crew interaction aspects.
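As a purely illustrative sketch of such a vision-based estimation step (this is not the VISIOLAND algorithm; the runway dimensions, camera intrinsics and pixel measurements below are assumed placeholders), the pose of the camera relative to the runway can be estimated from the image coordinates of the runway corners by solving a Perspective-n-Point problem, e.g. with OpenCV:

```python
# Minimal illustrative sketch (not the VISIOLAND algorithms): estimate the
# camera pose relative to the runway from the image positions of its four
# corners by solving a Perspective-n-Point (PnP) problem with OpenCV.
# All numbers below are hypothetical placeholders.
import numpy as np
import cv2

# Runway corner positions in a runway-fixed frame (metres): a 45 m wide,
# 3000 m long strip lying in the plane Z = 0, X along the runway axis.
runway_corners_3d = np.array([
    [0.0,    -22.5, 0.0],   # near-left threshold corner
    [0.0,     22.5, 0.0],   # near-right threshold corner
    [3000.0,  22.5, 0.0],   # far-right corner
    [3000.0, -22.5, 0.0],   # far-left corner
], dtype=np.float64)

# Matching pixel coordinates, e.g. produced by an image-processing front end
# that detects the runway outline (placeholder values).
corners_2d = np.array([
    [512.0, 600.0],
    [768.0, 600.0],
    [655.0, 420.0],
    [625.0, 420.0],
], dtype=np.float64)

# Assumed pinhole camera intrinsics (calibrated, no lens distortion).
K = np.array([[1000.0,    0.0, 640.0],
              [   0.0, 1000.0, 360.0],
              [   0.0,    0.0,   1.0]])
dist = np.zeros(5)

# solvePnP returns the rotation (as a Rodrigues vector) and translation that
# map runway-frame points into the camera frame.
ok, rvec, tvec = cv2.solvePnP(runway_corners_3d, corners_2d, K, dist)
if ok:
    R, _ = cv2.Rodrigues(rvec)
    # Camera position expressed in the runway frame: along-track distance,
    # lateral offset and vertical distance to the runway plane, i.e. the kind
    # of quantities an estimation and guidance loop could consume.
    cam_pos_runway = -R.T @ tvec
    print("camera position in runway frame [m]:", cam_pos_runway.ravel())
```

The recovered relative position (in particular the lateral offset and the height above the runway plane) is the type of visual measurement that estimation, guidance and integrity monitoring functions could then exploit.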
Increasing the automation level does not mean that the crew will become useless, but rather that its workload and training needs will be reduced. We therefore propose to take advantage of the visual cues to assist the crew and to let it validate the technical choices proposed in case of diagnostic problems.
The technical and scientific innovations thus developed will be implemented for evaluation purposes on two platforms:
- A fixed-wing aircraft designed by the company L'Avion Jaune and automated by ONERA: with this UAV, the perception, guidance and control algorithms developed within the project will be tested in real conditions.
- An Airbus simulation testbed highly representative of a civilian aircraft: this testbed will be used to test the crew interaction software developed within the project.
The consortium gathers five academic and industrial partners with recognized and complementary competencies in order to fulfil these ambitious objectives:
- INRIA (LAGADIC), whose research focuses on the application of image processing techniques to robotics;
- IRCCyN (CONTROL and ROBOTICS teams), whose expertise in control theory is well established, will build on this basis to develop methods using visual cues in the guidance and control loops, through nonlinear observers and control law synthesis exploiting this information;
- ONERA (DCSD) is a key Airbus partner for the development of control, guidance and decision-making methods. Moreover, DCSD has a UAV lab, from which the Avion Jaune will be used within the project;
- AIRBUS, one of the world leaders in civilian aircraft production, contributes through its experience and its knowledge of the operational and industrial context. It will also provide a commercial aircraft simulation testbed for use in the project, where the pilot and the new algorithms will be able to interact.
- SPIKENET TECHNOLOGY, an SME located in Toulouse, which develops and sells an image processing technology based on neuroscience.
Project coordinator
Mr Laurent BURLION (Onera TOULOUSE)
The author of this summary is the project coordinator, who is responsible for its content. The ANR declines any responsibility for its contents.
Partners
SNT SPIKENET TECHNOLOGY
IRCCyN Institut de Recherche en Communications et Cybernétique de Nantes
AIRBUS AIRBUS OPERATIONS SAS
Inria Rennes - Bretagne Atlantique Inria, Centre de recherche de Rennes - Bretagne Atlantique
Onera Onera TOULOUSE
ANR grant: 985,054 euros
Beginning and duration of the scientific project:
October 2013 - 48 months