DEFALS - Challenge DEtection de FALSifications dans des images et vidéos (detection of falsifications in images and videos)

Image Signatures: Reconstruction of the history of a digital image from the signatures of its processing, with application to automatic anomaly detection

We want to develop methods that can be executed online and that meet the standards of reproducible research applied in the journal Image Processing On Line (www.ipol.im). We will interact with the other members of the consortium to modify, abandon or adapt the proposed algorithms after they have been subjected to objections and criticism. We will offer our help to put online all the algorithms produced by us or by others.

The challenges of reverse engineering from a single image

Our proposal aims to develop algorithms applicable to any digital image. These algorithms will reverse engineer a complete image history and produce visualizations revealing potential flaws. The construction of this history will make it possible to detect anomalies that are improbable given this history. We will thus complement human expertise with error probabilities or numbers of false alarms, allowing quantitative decisions free of subjectivity. Our proposal is therefore in three phases, as it is necessary to:

- Establish the digital history of the image, namely the parameters of all operations carried out on the raw image, in particular: correction of chromatic and optical aberrations, denoising, demosaicking, gamma correction, white balance, deblurring, interpolation, cropping, type and parameters of the compression (in particular the type of transform and quantization), and possible double compression.

- Categorize, implement and master image falsification algorithms aiming at the local addition or deletion of information, possibly followed by the camouflage of these operations, in particular inpainting and Poisson editing algorithms.

- Finally, design automatic detection algorithms which, having learned the alleged history of the image, detect any anomaly in this history, as well as algorithms for the detection and analysis of repetitions.

Most of the tools detailed in the proposal are new, but they will build on the extensive image processing expertise of the CMLA team and on the forensic expertise of the Scientific and Technical Police (DGPN-DCPJ-PTS, Ministry of the Interior).
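To fix ideas, here is a toy sketch of the kind of raw-to-display processing chain whose parameters the first phase aims to recover. It is purely illustrative, not the project's software: it assumes an RGGB Bayer pattern, raw values normalized to [0, 1], naive averaging demosaicking and hypothetical white balance gains, and it omits denoising, aberration correction and JPEG compression.

    import numpy as np
    from scipy.ndimage import uniform_filter

    def toy_processing_chain(raw_bayer, wb_gains=(2.0, 1.0, 1.5), gamma=2.2):
        """Toy raw-to-RGB chain: demosaicking, white balance, gamma correction.

        Illustration only: assumes an RGGB Bayer pattern and raw values in [0, 1];
        the white balance gains and gamma are hypothetical."""
        H, W = raw_bayer.shape
        rgb = np.zeros((H, W, 3))
        mask = np.zeros((H, W, 3))
        # Scatter the CFA samples into their color planes (RGGB layout).
        rgb[0::2, 0::2, 0] = raw_bayer[0::2, 0::2]; mask[0::2, 0::2, 0] = 1  # R
        rgb[0::2, 1::2, 1] = raw_bayer[0::2, 1::2]; mask[0::2, 1::2, 1] = 1  # G
        rgb[1::2, 0::2, 1] = raw_bayer[1::2, 0::2]; mask[1::2, 0::2, 1] = 1  # G
        rgb[1::2, 1::2, 2] = raw_bayer[1::2, 1::2]; mask[1::2, 1::2, 2] = 1  # B
        # Naive demosaicking: fill missing samples with the average of the known
        # samples in a 3x3 neighborhood.
        for c in range(3):
            num = uniform_filter(rgb[:, :, c], size=3)
            den = uniform_filter(mask[:, :, c], size=3)
            filled = num / np.maximum(den, 1e-9)
            rgb[:, :, c] = np.where(mask[:, :, c] > 0, rgb[:, :, c], filled)
        rgb = rgb * np.array(wb_gains)                  # white balance
        rgb = np.clip(rgb, 0.0, 1.0) ** (1.0 / gamma)   # gamma correction
        return rgb

Every block of such a chain leaves a statistical trace in the final image (CFA correlations, a gamma-dependent noise curve, DCT quantization artifacts), and these traces are precisely what the reconstruction of the image history estimates and cross-checks.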

The first overall objective is the establishment of a set of automatic algorithms and interactive visualization processes constituting what we call the «image clinic», reconstructing the history of the image.
The production of an image history is fundamental to the detection of falsification, but it has many other uses. Indeed, the image industry very quickly loses track of the actual operations applied to images, which are often hidden by the manufacturers. Our project therefore also aims at producing a generic industrial tool.
The second objective is to review the image manipulation algorithms usable by falsifiers, and to simulate and evaluate the anomalies they leave in the image chain.

The third objective of the project is to develop a general method for the detection and analysis of internal repetitions in images, which gestaltists have identified as one of the most powerful factors at work in the perception of shapes in an image.
The fourth objective is to develop state-of-the-art forensic algorithms (see our bibliographic analysis), basing them on the one hand on the image clinic to facilitate automatic use, and on the other hand supplementing them with stochastic modeling to compute a probability of false alarm. The forensic expertise developed will pass directly to at least one user, the central laboratory of the scientific and technical police, which has agreed to be a partner in the project.

We have developed a theory of automatic anomaly detection applicable to any image [3, 6, 7]. We have designed algorithms detecting, on the one hand, the JPEG grid [2, 10] and, on the other hand, the Bayer pattern of the CFA grid, with a controlled number of false alarms in both cases. We have explored in detail the state of the art in demosaicking [8], which enabled us to develop a new method for detecting demosaicking alterations [1].
The fact that the number of false alarms is controlled a priori is the highlight, as explained in the recent publication [9]. We have developed an internal copy-paste detector applicable to any image [4]. This work is already useful: AFP has shown interest in the algorithm [2] for inclusion in their fake-news analysis web service; it detects whether an image has been cropped or re-cut after a first JPEG compression. The internal copy-paste detector is an effective tool that has been verified on images from recent falsification cases (see the «Significant facts and results» section). We analyze the mechanism of falsification methods based on inpainting and Poisson editing in [5].
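To illustrate the a contrario principle behind [2, 10], here is a minimal sketch (not the published algorithm): JPEG quantization zeroes out many DCT coefficients in the 8x8 blocks aligned with the true compression grid, so the grid origin can be estimated as the offset maximizing the number of near-zero coefficients, and its significance can be bounded by a binomial tail corrected for the 64 tested origins. The threshold eps and the independence assumptions below are simplifications.

    import numpy as np
    from scipy.fft import dctn
    from scipy.stats import binom

    def jpeg_grid_nfa(gray, eps=0.5):
        """Estimate the JPEG grid origin and a crude number of false alarms (NFA).

        Simplified illustration of the DCT-zeros idea of [2, 10], not the
        published implementation."""
        H, W = gray.shape
        zeros = np.zeros((8, 8))
        for oy in range(8):
            for ox in range(8):
                count = 0
                for y in range(oy, H - 7, 8):
                    for x in range(ox, W - 7, 8):
                        block = gray[y:y + 8, x:x + 8].astype(float)
                        count += np.sum(np.abs(dctn(block, norm='ortho')) < eps)
                zeros[oy, ox] = count
        oy, ox = np.unravel_index(np.argmax(zeros), zeros.shape)
        # Under the null hypothesis (no JPEG grid), no offset should concentrate
        # much more than its 1/64 share of the zeros; a binomial tail, multiplied
        # by the 64 tested origins, gives a rough NFA.
        total = int(zeros.sum())
        nfa = 64 * binom.sf(int(zeros[oy, ox]) - 1, total, 1.0 / 64)
        return (oy, ox), nfa

A region whose locally estimated grid origin disagrees with the global one is then a candidate forgery, which is the basis of the localized detection in [10].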

We are currently working on:
- analyzing JPEG quantization, estimating the quantization matrix from the image itself and detecting any significant, and therefore suspect, deviation (a minimal sketch of such an estimator follows this list);
- developing an a contrario evaluation of the number of false alarms for our copy-paste detector [4];
- developing a noise analysis method to reconstruct the gamma correction curve and detect any significant deviation from the noise model;
- putting these analyzers online and applying them to all cases of interest, notably those submitted by the AFP.
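As a sketch of the first item above (an illustration under strong assumptions, not the method under development), the quantization step of each DCT frequency can be estimated from the fact that quantized coefficients cluster near integer multiples of that step. The grid origin (0, 0), the level shift, the tolerance and the candidate range below are hypothetical choices.

    import numpy as np
    from scipy.fft import dctn

    def estimate_quant_step(gray, u, v, q_max=64, tol=0.5):
        """Estimate the JPEG quantization step of DCT frequency (u, v).

        Illustration only: assumes the JPEG grid starts at (0, 0) and picks the
        candidate step whose integer multiples capture the largest fraction of
        the observed coefficients."""
        H, W = gray.shape
        coeffs = []
        for y in range(0, H - 7, 8):
            for x in range(0, W - 7, 8):
                block = gray[y:y + 8, x:x + 8].astype(float) - 128.0  # JPEG level shift
                coeffs.append(dctn(block, norm='ortho')[u, v])
        coeffs = np.abs(np.array(coeffs))
        best_q, best_score = None, -1.0
        for q in range(2, q_max + 1):
            residues = np.minimum(coeffs % q, q - coeffs % q)
            score = np.mean(residues < tol)   # fraction of coefficients near a multiple of q
            if score > best_score:
                best_q, best_score = q, score
        return best_q, best_score

Comparing the matrix estimated this way with the one declared in the JPEG header, or finding regions that obey a different matrix, is what turns the estimate into a suspect-deviation detector.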

[1] Bammey, Q., Grompone von Gioi, R., & Morel, J. M. (2018). Automatic detection of demosaicing image artifacts and its use in tampering detection. In 2018 IEEE Conference on Multimedia Information Processing and Retrieval (MIPR) (pp. 424-429). IEEE.

[2] Nikoukhah, T., Grompone von Gioi, R., Colom, M., & Morel, J. M. (2018). Automatic JPEG grid detection with controlled false alarms, and its image forensic applications. In 2018 IEEE Conference on Multimedia Information Processing and Retrieval (MIPR) (pp. 378-383). IEEE.

[3] Davy, A., Ehret, T., Morel, J. M., & Delbracio, M. (2018). Reducing anomaly detection in images to detection in noise. In 2018 25th IEEE International Conference on Image Processing (ICIP) (pp. 1058-1062). IEEE.

[4] Ehret, T. (2018). Automatic detection of internal copy-move forgeries in images. Image Processing On Line, 8, 167-191.

[5] Di Martino, M., & Facciolo, G. (2018). An analysis and implementation of multigrid Poisson solvers with verified linear complexity. Image Processing On Line, 8, 192-218.

[6] Ehret, T., Davy, A., Morel, J. M., & Delbracio, M. (2019). Image anomalies: A review and synthesis of detection methods. Journal of Mathematical Imaging and Vision, 61(5), 710-743.

[7] Ehret, T., Davy, A., Delbracio, M., & Morel, J. M. (2019). How to reduce anomaly detection in images to anomaly detection in noise. Image Processing On Line, 9, 391-412.

[8] Ehret, T., & Facciolo, G. (2019). A study of two CNN demosaicking algorithms. Image Processing On Line, 9, 220-230.

[9] Morel, J. M. (2019). Reverse engineering: What can we learn from a digital image about its own history? In Proceedings of the ACM Workshop on Information Hiding and Multimedia Security (pp. 1-1). ACM.

[10] Nikoukhah, T., Anger, J., Ehret, T., Colom, M., Morel, J. M., & Grompone von Gioi, R. (2019). JPEG grid detection based on the number of DCT zeros and its application to automatic and localized forgery detection. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops (pp. 110-118).

The problem of image forgery detection involves much broader lines of evidence than tools from automatic image analysis alone. These include the declared source of the image; the political, economic and legal conditions in which the image was produced; the expert's assumption of malicious or intentional falsification; and information on the image source such as its EXIF metadata or the device that produced it. Finally, detection requires a semantic analysis of the photograph and of the plausibility of the scene. The expert's eye is often crucial to detect inconsistencies in a scene.
Hence the forensic analysis of an image cannot rely solely on automatic image processing: algorithms cannot weigh the contextual cues that often make a forgery evident to an expert.

Yet image processing and analysis can detect abnormalities that escape the human eye and, in addition, can associate with them a quantitative certainty measure. Indeed, falsifying an image alters the parameters of its formation and processing model. These alterations are detectable as soon as one can estimate said parameters from the image itself.
Nevertheless, we shall try to show that a complete cover-up of a falsification is possible, though complicated. Suppose the forgers work directly on the raw image, as follows. First they perform the copy-paste operations, involving a separate raw image with almost the same characteristics. Then the forged raw image is denoised, and new noise is simulated on it according to a standard Poisson model. Finally the raw image undergoes a standard image processing chain with noise reduction, demosaicking, gamma correction, white balance, and JPEG compression.
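A minimal sketch of the noise re-injection step of this hypothetical anti-forensic chain, under a simple Poisson shot-noise model with a hypothetical camera gain (read noise, offsets and sensor clipping are ignored):

    import numpy as np

    def reinject_poisson_noise(denoised_raw, gain=2.0, rng=None):
        """Simulate shot noise on a denoised raw image (illustration only).

        Assumes pixel value = gain * number of photo-electrons, so the electron
        count is Poisson distributed with mean denoised_raw / gain."""
        rng = np.random.default_rng() if rng is None else rng
        electrons = np.clip(denoised_raw.astype(float), 0.0, None) / gain
        return (gain * rng.poisson(electrons)).astype(denoised_raw.dtype)

For the cover-up to succeed, this re-simulated noise and the processing chain applied afterwards must reproduce the joint statistics of a genuine camera (noise curve, CFA correlations, compression traces), which is hard to achieve consistently.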

After this anti-forensic process, an expert could hardly detect any anomaly in the picture's apparent processing and compression: they would have been applied to a plausible raw image. Nor would any repetition of texture or shape be detected.
Fortunately, the literature on forgery detection assumes skilled, but not quite so meticulous, counterfeiters. It assumes that the falsification leaves traces behind despite rough anti-forensic measures. Thus, to detect a forgery by image processing means, we must first reconstruct the image formation model simulated by the forger, and then detect any anomaly with respect to this alleged model.

Our proposal is thus to develop algorithms applicable to any digital image. These algorithms will produce, by reverse engineering, a complete history of the image. Adequate visualization tools will also reveal potential anomalies in the image processing chain. We will develop statistical tools associating with these anomalies error probabilities or numbers of false alarms, thus enabling quantitative decisions free of subjectivity.

The proposal therefore proceeds in three phases, since it is necessary to:

- Establish a digital history of the image, i.e. the parameters of all operations performed on it starting from the raw image, in particular chromatic and optical aberration correction, denoising, demosaicking, gamma correction, white balance, deblurring, interpolation, cropping, compression type and parameters, and possibly a double compression.

- Categorize, implement and master the most promising image falsification algorithms for the addition or removal of local information (inpainting, Poisson editing, ...), and do the same for the anti-forensic operations.

- Finally, develop automatic detection algorithms which, based on the alleged history of the image, detect any anomaly in this history, together with algorithms detecting and analyzing suspicious internal repetitions in the image.

Most tools detailed in the proposal are new, but they will rely on the image processing expertise of the CMLA team and on the forensic expertise of the National Scientific and Technical Police.

Project coordination

Jean-Michel Morel (Centre de Mathématiques et Leurs Applications)

The project coordinator is the author of this summary and is responsible for its content. The ANR declines any responsibility for its contents.

Partner

DGPN-DCPJ-PTS Laboratoire d'Analyse et de Traitement de Signal à la sous-direction de la police technique et scientifique de la direction centrale de la police judiciaire
CMLA Centre de Mathématiques et Leurs Applications

ANR grant: 398,358 euros
Start date and duration of the scientific project: February 2017, 42 months
