Agoracom Blog

Building a Lie Detector for Images – SPONSOR: Datametrex AI Limited $DM.ca

Posted by AGORACOM-JC at 2:59 PM on Wednesday, January 22nd, 2020

SPONSOR: Datametrex AI Limited (TSX-V: DM) A revenue generating small cap A.I. company that NATO and Canadian Defence are using to fight fake news & social media threats. The company announced three $1M contracts in Q3-2019. Click here for more info.

Building a Lie Detector for Images

  • A new paper from UC Berkeley and Adobe researchers declares war on fake images
  • Leveraging a custom dataset and fresh evaluation metric, the research team introduces a general image forensics approach that achieves high average precision in the detection of CNN-generated imagery

By: Synced

The Internet is full of fun fake images, from flying sharks and cows on cars to a dizzying variety of celebrity mashups. Hyperrealistic image and video fakes generated by convolutional neural networks (CNNs), however, are no laughing matter; in fact, they can be downright dangerous. Deepfake porn reared its ugly head in 2018, fake political speeches by world leaders have cast doubt on news sources, and during the recent Australian bushfires manipulated images misled people about the location and size of the fires. Fake images and videos are giving AI a black eye, but how can the machine learning community fight back?


Spotting such generated images may seem a relatively simple task: just train a classifier on fake versus real images. In practice, the challenge is far more complicated, for several reasons. Fake images in the wild are likely to be generated by models trained on different datasets, each carrying its own biases, and fake features are harder to detect when the detector's training data differs from the data used to generate the fake. Network architectures and loss functions can also evolve quickly, outpacing the abilities of a fake image detection model. Finally, images may be pre-processed or post-processed, which makes it harder to identify features common to a set of fake images.

To address these and other issues, the researchers built a dataset spanning CNN-based generation models with a variety of architectures, training datasets and loss functions. Real images were pre-processed, and an equal number of fake images was generated from each model, from GANs to deepfakes. Because of this high variety, the resulting dataset minimizes biases from any single training dataset or model architecture.
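For illustration, here is a minimal Python sketch of how such a balanced real-versus-fake dataset might be assembled. The directory layout and the list of generator names are hypothetical assumptions for the example, not the researchers' actual data organization.

```python
import random
from pathlib import Path

# Hypothetical subset of generator models; the actual dataset spans many
# more architectures, training datasets and loss functions.
GENERATORS = ["progan", "stylegan", "cyclegan", "deepfake"]

def build_balanced_dataset(root: str, seed: int = 0):
    """Collect (path, label) pairs with an equal number of real (0)
    and fake (1) images per generator, then shuffle them together."""
    rng = random.Random(seed)
    samples = []
    for gen in GENERATORS:
        fakes = sorted(Path(root, gen, "fake").glob("*.png"))
        reals = sorted(Path(root, gen, "real").glob("*.png"))
        n = min(len(fakes), len(reals))  # enforce a 50/50 class balance
        samples += [(p, 1) for p in rng.sample(fakes, n)]
        samples += [(p, 0) for p in rng.sample(reals, n)]
    rng.shuffle(samples)
    return samples
```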

The fake image detection model was trained to distinguish real images from fakes produced by ProGAN, an unconditional GAN that generates random images with a simple CNN-based structure, and was then evaluated on the new dataset. Tested against a variety of CNN image generation methods, the model's average precision was significantly higher than that of the control groups.
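As a rough sketch of what such a detector looks like, the snippet below trains a binary real-versus-fake classifier and scores it with average precision, the ranking-based metric the article mentions. The ResNet-50 backbone and all hyperparameters here are illustrative assumptions, not the authors' exact recipe.

```python
import torch
import torch.nn as nn
from torchvision import models
from sklearn.metrics import average_precision_score

# Assumed backbone: an ImageNet-pretrained ResNet-50 with a single
# logit output for the real (0) vs. fake (1) decision.
model = models.resnet50(weights="IMAGENET1K_V1")
model.fc = nn.Linear(model.fc.in_features, 1)

criterion = nn.BCEWithLogitsLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)

def train_step(images, labels):
    """One optimization step on a batch of images with 0/1 labels."""
    model.train()
    optimizer.zero_grad()
    logits = model(images).squeeze(1)
    loss = criterion(logits, labels.float())
    loss.backward()
    optimizer.step()
    return loss.item()

@torch.no_grad()
def evaluate_ap(loader):
    """Average precision over a held-out set of (images, labels) batches."""
    model.eval()
    scores, labels = [], []
    for images, y in loader:
        scores += torch.sigmoid(model(images).squeeze(1)).tolist()
        labels += y.tolist()
    return average_precision_score(labels, scores)
```

Because average precision ranks the detector's scores rather than thresholding them, it summarizes performance across all possible operating points, which suits a study comparing generalization across many generators.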

Data augmentation is another approach the researchers used to improve detection of fake images that had been post-processed after generation. The training images (fake and real) underwent several additional augmentation variants, from Gaussian blur to JPEG compression. The researchers found that including these augmentations during training significantly increased the model's robustness, especially on post-processed images.
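As a rough illustration, the blur and JPEG augmentations described above might look like the following. The probabilities and parameter ranges are assumptions for the sketch, not the paper's exact settings.

```python
import io
import random
from PIL import Image, ImageFilter

def augment(img: Image.Image, p_blur: float = 0.5, p_jpeg: float = 0.5) -> Image.Image:
    """Randomly apply Gaussian blur and/or JPEG re-compression to an image.
    Probabilities and ranges are illustrative, not the paper's values."""
    if random.random() < p_blur:
        # Gaussian blur with a randomly chosen sigma
        sigma = random.uniform(0.0, 3.0)
        img = img.filter(ImageFilter.GaussianBlur(radius=sigma))
    if random.random() < p_jpeg:
        # Simulate post-processing by round-tripping through JPEG compression
        quality = random.randint(30, 95)
        buf = io.BytesIO()
        img.convert("RGB").save(buf, format="JPEG", quality=quality)
        buf.seek(0)
        img = Image.open(buf).copy()
    return img
```

The point of these perturbations is that a detector which has only ever seen pristine generator output tends to latch onto fragile high-frequency cues; training on blurred and re-compressed copies forces it to rely on cues that survive common post-processing.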

Researchers find the “fingerprint” of CNN-generated images.

The researchers note, however, that even the best detector faces a trade-off between its true detection rate and its false-positive rate, and a malicious user could very likely handpick a fake image that slips past the detection threshold. Another concern is that post-processing effects added to fake images may make detection harder, since they can distort the fingerprints the detector relies on. There are also many fake images that were never generated by a CNN but were instead manually edited, for example in Photoshop, and the detector will not work on images produced through such shallow methods.
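To make that trade-off concrete, here is a small sketch of choosing an operating threshold under a false-positive budget using scikit-learn's ROC utilities. The function and its inputs are hypothetical; `labels` and `scores` are assumed to come from a validation set scored by the detector.

```python
import numpy as np
from sklearn.metrics import roc_curve

def threshold_at_fpr(labels, scores, max_fpr=0.01):
    """Return the threshold giving the most true detections while keeping
    the false-positive rate at or below max_fpr (illustrative helper)."""
    fpr, tpr, thresholds = roc_curve(labels, scores)
    ok = fpr <= max_fpr                  # operating points within budget
    best = np.argmax(tpr[ok])            # highest detection rate among them
    return thresholds[ok][best], tpr[ok][best], fpr[ok][best]
```

Tightening `max_fpr` flags fewer real images as fake but lets more fakes through, which is exactly the gap an adversary can exploit by cherry-picking borderline images.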

The new study does a fine job of identifying the fingerprint of images doctored with various CNN-based image synthesis methods. The researchers, however, caution that this is only one battle: the war on fake images has just begun.

Source: https://syncedreview.com/2020/01/15/building-a-lie-detector-for-images/

