CE10 - Industrie et usine du futur : Homme, organisation, technologies

Automatic visual inspection using machine learning: applications for Industry 4.0 – TEMIS

Automatic visual inspection of appearance defects, in real time and in-line, using machine learning: a contribution to Industry 4.0


Challenges and objectives

Visual inspection and defect detection of manufactured products on a production line must be performed in real time and at very high speeds (sub-second). These activities are an integral part of the overall product quality improvement strategy, helping to limit customer returns, and each is tailored to a specific type of defect. As part of this project, we will focus on the following inspection protocols:

- part presence control by component counting;
- product appearance inspection (burrs, misalignments, deformations, contours, etc.);
- surface appearance inspection for defects such as tears, dents, cracks, opacity, etc.

The TEMIS project aims to develop an automated and reconfigurable approach for the in-line inspection of manufactured products. The production requirements, which are essential contextual elements for TEMIS, are:

- high throughput, with a sub-second execution time for detection;
- incomplete knowledge of defect classes: we consider the defect typology to be incomplete, as it is potentially infinite; the aim is to detect unlisted defects, never observed or anticipated when the inspection system was developed;
- rarity of defects, whose number is much lower than that of compliant products: for an automated system to maximize the detection rate, we believe it is preferable by default to focus on modeling compliant products, which are far more numerous.

These requirements were established by consulting several manufacturing companies. Two of them caught our attention: AML Systems and Inteva Products, with production sites in Hirson (02) and Esson (14), respectively. These two manufacturers produce mechatronic components for the automotive industry at high rates (several million parts per year). AML Systems visually inspects its components in less than 1 second per product. Inteva Products performs visual inspections in situ, inside a production machine, also in less than one second. It therefore seems realistic to adopt this limit for the execution time of an inspection. These two companies have already provided us with actual products and acquisition data from their production lines for the validation of the approaches recommended in TEMIS (see Section 3, Methodology). The acquisition data for inspections are of different types; we will consider two in particular: 2D (black-and-white or color images, as well as videos if the inspection is filmed by inspection cameras) and 3D (scanner-type data from tomographs, structured-light scanners, or a laser combined with a CCD camera).

Significant experimental and validation efforts are required to, on the one hand, provide the scientific and industrial communities with a genuine experimental dataset derived from real-world production conditions and, on the other hand, use this dataset to carry out a comprehensive and detailed comparative study, in terms of accuracy and performance, of statistical learning approaches such as PCA (Principal Component Analysis), SVM (Support Vector Machine), Auto-Encoders, neural networks, etc.
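For illustration, a minimal sketch of one of these statistical baselines is given below: a PCA-based detector that is fitted only on compliant images and flags a part when its reconstruction error exceeds a threshold. The image size, the number of components, and the threshold rule are arbitrary illustrative assumptions, not the experimental protocol of TEMIS.

```python
# Minimal sketch (not the TEMIS implementation): a PCA reconstruction-error
# baseline for anomaly detection, trained only on compliant (defect-free) images.
import numpy as np
from sklearn.decomposition import PCA

def fit_pca_detector(compliant_images, n_components=50):
    """compliant_images: array of shape (n_samples, height*width), values in [0, 1]."""
    pca = PCA(n_components=n_components).fit(compliant_images)
    # Reconstruction error on the training set defines a detection threshold
    # (here: mean + 3 standard deviations, an arbitrary illustrative choice).
    recon = pca.inverse_transform(pca.transform(compliant_images))
    errors = np.mean((compliant_images - recon) ** 2, axis=1)
    threshold = errors.mean() + 3 * errors.std()
    return pca, threshold

def is_defective(pca, threshold, image):
    """Flag an image as defective if its reconstruction error exceeds the threshold."""
    x = image.reshape(1, -1)
    recon = pca.inverse_transform(pca.transform(x))
    return float(np.mean((x - recon) ** 2)) > threshold

# Example with synthetic data standing in for downsampled grayscale inspection images.
if __name__ == "__main__":
    rng = np.random.default_rng(0)
    train = rng.random((1000, 64 * 64))   # compliant samples only
    pca, thr = fit_pca_detector(train)
    print(is_defective(pca, thr, rng.random(64 * 64)))
```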

It is also reasonable to believe that the AE+GAN approach offers a certain flexibility, as it requires less expert input to characterize a defect and allows for the detection of a wide variety, or even any variety, of defects. However, this point remains to be confirmed through an experimental implementation, as mentioned above. Next, for the industrial deployment of unsupervised/supervised approaches, it is imperative to estimate the variation in accuracy and performance in a hostile industrial environment, with conditions such as temperature variations that change the appearance of a conforming surface, changes in lighting, the presence of dirt on vision systems, vibrations that can disrupt acquisition, etc. All of these elements must be integrated and considered during the experimental implementation; this will be possible via UTC's AMS platform (see WP1). Finally, regarding the genericity of the envisaged learning approaches, i.e., their similar application to other datasets, it is also of industrial interest to experiment with this potential via a generic implementation pipeline (process) that can be validated on data from AML Systems and Inteva Products.

From a mechanical and industrial engineering point of view, the key element to retain is the overall Auto-Encoder plus DCGAN (AE+DCGAN) architecture for defect detection. This is a so-called unsupervised approach. The pre-study was tested on a computer comprising 3 Nvidia GPUs (located at UTC, near the AMS platform). Training on the 1,000 compliant images of the 10 classes (image size 256 by 256) can last several hours; on our machine, between 5 and 6 hours were required. The observed inference time is less than 1 second (0.96 seconds). The best rates achieved by our pre-study are 92.48% average true-positive rate and 81.63% average true-negative rate. The work of (Zhao et al., 2018) reports a slightly higher average detection rate. Although supervised in the sense that the training dataset contains only real conforming images, which required prior labeling by an expert, the AE+DCGAN method has an unsupervised component, because no typology of defects needs to be defined beforehand by an expert.
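As an illustration of the autoencoder half of such an AE+DCGAN architecture, the sketch below trains a small convolutional autoencoder on compliant 256x256 grayscale images only and uses the reconstruction error as an anomaly score; in the full AE+DCGAN setup, a DCGAN discriminator would additionally contribute an adversarial or feature-matching loss. The network sizes and training loop are illustrative assumptions, not the pre-study code.

```python
# Minimal sketch (assumptions, not the TEMIS pre-study code): a convolutional
# autoencoder trained only on compliant 256x256 grayscale images; at inference,
# the reconstruction error serves as an anomaly score.
import torch
import torch.nn as nn

class ConvAE(nn.Module):
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(1, 32, 4, stride=2, padding=1), nn.ReLU(),    # 256 -> 128
            nn.Conv2d(32, 64, 4, stride=2, padding=1), nn.ReLU(),   # 128 -> 64
            nn.Conv2d(64, 128, 4, stride=2, padding=1), nn.ReLU(),  # 64 -> 32
        )
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(128, 64, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(32, 1, 4, stride=2, padding=1), nn.Sigmoid(),
        )

    def forward(self, x):
        return self.decoder(self.encoder(x))

def anomaly_score(model, image):
    """Mean squared reconstruction error of a single (1, 256, 256) image."""
    model.eval()
    with torch.no_grad():
        x = image.unsqueeze(0)
        return torch.mean((model(x) - x) ** 2).item()

if __name__ == "__main__":
    model = ConvAE()
    opt = torch.optim.Adam(model.parameters(), lr=1e-3)
    batch = torch.rand(8, 1, 256, 256)   # stand-in for compliant training images
    loss = torch.mean((model(batch) - batch) ** 2)
    opt.zero_grad(); loss.backward(); opt.step()
    print(anomaly_score(model, torch.rand(1, 256, 256)))
```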

As planned, and following the research work carried out during the first period, a collaboration with Renault Group was undertaken to create the very first dataset acquired under operational conditions. The images acquired by Renault Group at the Douai site made it possible to offer the research community a dataset called AutoVI, integrating real and realistic classes from production sites. Such a dataset will help to reduce the gap that exists between current methods in the literature and their transposition to other datasets. To demonstrate the impact of such a dataset, a large comparative study of the best-known methods was carried out on AutoVI. These methods, usually very efficient on perfect datasets (no blur, no light variation, no noise, etc.), obtain much lower results. This study, dedicated to unsupervised methods, shows that they bring real value for the detection of structural defects (textures) and logical defects (presence/absence of parts or reasoning on wiring). In addition, their capacity has proven transposable while minimizing the data-labeling effort. AutoVI has more than 1,400 downloads on Zenodo to date and should, in the coming years, become a reference dataset for the development of unsupervised algorithms for industry and research (Autovi.utc.fr).

During this period, from an IT point of view, a prototype (named PowerEyes) was also set up to take these different aspects into account, while placing this work in a perspective of further development for the project and subsequent industrialization. This architecture is based on Microsoft .NET technologies (for the HMI, files, integration, etc.), coupled with native C++ libraries (DLL) for mathematical processing and performance, and finally on the OpenCL library for generic GPU support (for vector/matrix algorithms and, more broadly, the Artificial Intelligence and Deep Learning part). The doctoral students' work enabled the identification and testing of components. This was done in continuation of the implementation of the acquisition systems, with the AMS cell enhanced with the new elements necessary to build the test bench for the visual inspection of the ESL. The inspection bench is thus made up of a KUKA robot arm to manipulate the part, an Inspector VSPI-4F2111 camera from SICK, and an inspection box with light insulation. In addition, the results of the bibliographic research work carried out in the first period (which gave rise to a vast state-of-the-art study of existing datasets, the publications referring to them, and the algorithms/approaches used) were confirmed.
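To make the evaluation metrics used in such comparative studies concrete, the short sketch below computes true-positive and true-negative rates from anomaly scores and ground-truth labels; the scores, labels, and threshold are toy values, and this is not the AutoVI benchmark code.

```python
# Minimal sketch (illustrative toy values, not the AutoVI benchmark code):
# computing true-positive and true-negative rates for an anomaly detector,
# where a "positive" is a defective part.
from typing import Sequence

def tpr_tnr(scores: Sequence[float], labels: Sequence[int], threshold: float):
    """labels: 1 = defective, 0 = compliant; a part is flagged if score > threshold."""
    tp = sum(1 for s, y in zip(scores, labels) if y == 1 and s > threshold)
    tn = sum(1 for s, y in zip(scores, labels) if y == 0 and s <= threshold)
    p = sum(labels)
    n = len(labels) - p
    return tp / p if p else float("nan"), tn / n if n else float("nan")

if __name__ == "__main__":
    # Toy scores: defective parts should receive higher anomaly scores.
    scores = [0.1, 0.2, 0.15, 0.8, 0.9]
    labels = [0, 0, 0, 1, 1]
    tpr, tnr = tpr_tnr(scores, labels, threshold=0.5)
    print(f"TPR={tpr:.2f}, TNR={tnr:.2f}")
```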

The perspectives opened up by the project are oriented towards the consolidation of unsupervised methods. They could indeed constitute a valuable aid for pre-labeling data (ok / not ok) and thus further limit the labeling effort needed to implement a trusted solution in production. To this end, a research avenue exploiting language-based methods would make it possible to describe the expected and unexpected elements of a distribution.

Carvalho, P.; Durupt, A.; Grandvalet, Y. A Review of Benchmarks for Visual Defect Detection in the Manufacturing Industry. International Joint Conference on Mechanics, Design Engineering & Advanced Manufacturing (JCM 2022), Jun 2022, Ischia, Italy. 1527-1538.

Carvalho, P.; Durupt, A.; Grandvalet, Y. A Survey of Machine Learning Approaches for Visual Inspection on the DAGM Dataset. 19th International Conference on Manufacturing Research (ICMR2022), Sep 2022, Derby, United Kingdom. 255-260.

Carvalho, P.; Lafou, M.; Durupt, A.; Grandvalet, Y.; Leblanc, A. The Automotive Visual Inspection Dataset (AutoVI): A Genuine Industrial Production Dataset for Unsupervised Anomaly Detection. 2024.

Visual inspection and detection of defects in manufactured products on a production line must be carried out in real time and at very high rates (sub-second). This activity is an integral part of the overall product quality improvement strategy, as it helps limit costly customer returns. Approaches to automate visual inspections already exist, but they often need to be reprogrammed and recalibrated whenever the product or the production line changes. All products, even the most standard, may evolve (e.g., shapes, appearance, functions) to meet the specific needs of each customer. To remain competitive, companies need automatic visual inspection that adapts quickly to changes in product configuration (high product variability) and performs very well on the production line. This means that, in the context of customized products, new and unknown defects may appear and must therefore be detectable at low cost. The TEMIS project therefore aims to develop an automated and reconfigurable approach for the in-line inspection of manufactured products, while respecting strict production requirements (granularity and inference time < 1 second).

The scientific and industrial objective of TEMIS is to experiment with, and then recommend, existing state-of-the-art solutions based on Machine Learning (ML) or Deep Learning (DL) to meet the need for in-line defect identification. The research carried out in this ANR program, focused on Industrial Engineering, is a complete and detailed experimental study of visual inspection performed in real conditions via an industrial experimental platform (AMS Agile Manufacturing platform - LabCom DIMEXP) located at UTC, for which the data used as a validation dataset, coming from real production conditions, will be made available as open source.

One of the innovative features of the project is the development of an experimental study establishing fine-grained comparisons (of granularity and precision) between approaches from the literature. This study will aim to verify that supervised and unsupervised statistical learning approaches (in ML and DL) can offer agility in inspection. It will produce recommendations on the use of ML and DL algorithms with respect to production-requirement criteria, as well as generic pipelines (chains of approaches) to reach the desired detection levels.
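As a hypothetical illustration of what a generic pipeline (chain of approaches) could look like, the sketch below chains standardization, PCA, and a one-class SVM with scikit-learn, so that the same processing chain can be re-fitted on data from different production lines; the choice of stages and parameters is purely illustrative and not a TEMIS recommendation.

```python
# Minimal sketch (an assumption about what a "generic pipeline" could look like,
# not the TEMIS pipeline): chaining standardization, dimensionality reduction,
# and a one-class model, fitted on compliant samples only.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import OneClassSVM

def build_pipeline(n_components=20):
    # Each stage is swappable: another detector (e.g. IsolationForest) can be
    # dropped in without changing the surrounding code.
    return make_pipeline(
        StandardScaler(),
        PCA(n_components=n_components),
        OneClassSVM(kernel="rbf", nu=0.05),
    )

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    compliant = rng.normal(size=(500, 128))     # stand-in for compliant feature vectors
    pipeline = build_pipeline().fit(compliant)  # trained on compliant samples only
    predictions = pipeline.predict(rng.normal(size=(5, 128)))  # +1 = inlier, -1 = outlier
    print(predictions)
```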

Project coordination

Harvey ROWSON (DELTACAD)

The author of this summary is the project coordinator, who is responsible for its content. The ANR declines all responsibility for its contents.

Partnership

ROBERVAL Laboratoire Roberval. Research unit in mechanics, acoustics and materials.
HEUDIASYC Heuristics and diagnosis of complex systems
DeltaCAD DELTACAD

ANR grant: 444,825 euros
Beginning and duration of the scientific project: - 42 Months

