Agoracom Blog

How Swiss scientists are trying to spot #deepfakes – SPONSOR: Datametrex AI Limited

Posted by AGORACOM-JC at 4:13 PM on Friday, March 13th, 2020

SPONSOR: Datametrex AI Limited (TSX-V: DM) A revenue-generating small-cap A.I. company that NATO and Canadian Defence are using to fight fake news & social media threats. The company announced three $1M contracts in Q3-2019. Click here for more info.

How Swiss scientists are trying to spot deepfakes

By Geraldine Wong Sak Hoi

As videos faked using artificial intelligence grow increasingly sophisticated, experts in Switzerland are re-evaluating the risks their malicious use poses to society – and finding innovative ways to stop the perpetrators.

In a computer lab on the vast campus of the Swiss Federal Institute of Technology Lausanne (EPFL), a small team of engineers is contemplating the image of a smiling, bespectacled man boasting a rosy complexion and dark curls.

“Yes, that’s a good one,” says lead researcher Touradj Ebrahimi, who bears a passing resemblance to the man on the screen. The team has expertly manipulated Ebrahimi’s head shot with an online image of Tesla founder Elon Musk to create a deepfake – a digital image or video fabricated through artificial intelligence.

It’s one of many fake illustrations – some more realistic than others – that Ebrahimi’s team has created as they develop software, together with cyber-security firm Quantum Integrity (QI), which can detect doctored images, including deepfakes.

Using machine learning, the same process behind the creation of deepfakes, the software is learning to tell the difference between the genuine and the forged: a “creator” generates fake images, which a “detector” then tries to spot.

“With lots of training, machines can help to detect forgery the same way a human would,” explains Ebrahimi. “The more it’s used, the better it becomes.”
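The creator/detector loop described above can be illustrated with a toy sketch. The example below is an assumption-laden simplification, not the EPFL/QI software: real and fake “images” are just random vectors with slightly different statistics, the “creator” is a fixed sampler, and the “detector” is a logistic-regression classifier trained by gradient descent. It only demonstrates the general idea that a detector improves as it sees more forged examples.

```python
import numpy as np

# Toy sketch of the creator/detector training idea (hypothetical,
# not the actual EPFL/QI system). Real "images" are vectors from one
# distribution; the "creator" produces fakes with subtly shifted
# statistics; a logistic-regression "detector" learns to tell them apart.
rng = np.random.default_rng(0)
DIM = 16

def real_images(n):
    return rng.normal(loc=0.0, scale=1.0, size=(n, DIM))

def creator(n):
    # Fakes: same shape and variance, slightly shifted mean.
    return rng.normal(loc=0.6, scale=1.0, size=(n, DIM))

w, b = np.zeros(DIM), 0.0  # detector parameters

def predict(x):
    # Probability that each image is fake.
    return 1.0 / (1.0 + np.exp(-(x @ w + b)))

for step in range(500):
    x = np.vstack([real_images(32), creator(32)])
    y = np.concatenate([np.zeros(32), np.ones(32)])  # 1 = fake
    p = predict(x)
    # Gradient of the logistic (cross-entropy) loss.
    w -= 0.5 * (x.T @ (p - y)) / len(y)
    b -= 0.5 * np.mean(p - y)

# Evaluate on fresh samples: accuracy rises with more training.
x_test = np.vstack([real_images(200), creator(200)])
y_test = np.concatenate([np.zeros(200), np.ones(200)])
acc = np.mean((predict(x_test) > 0.5) == y_test)
print(f"detector accuracy: {acc:.2f}")
```

In a real deepfake detector the creator would itself be a learning network and the inputs would be pixel data, but the training loop follows the same pattern: generate forgeries, score them, update the detector.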

Forged photos and videos have existed since the advent of multimedia. But AI techniques have only recently allowed forgers to alter faces in a video or make it appear the person is saying something they never did. Over the last few years, deepfake technology has spread faster than most experts anticipated.

The team at EPFL has created the image in the centre by using deep-learning techniques to alter the headshot of Ebrahimi (right) and a low-resolution profile image of Elon Musk found on the internet. (EPFL/MMSPG/swissinfo)

The fabrication of deepfake videos has become “exponentially quicker, easier and cheaper” thanks to the distribution of user-friendly software tools and paid-for services online, according to the International Risk Governance Center (IRGC) at EPFL.

“Precisely because it is moving so fast, we need to map where this could go – what sectors, groups and countries might be affected,” says its deputy director, Aengus Collins.

Although much of the problem with malign deepfakes involves their use in pornography, there is growing urgency to prepare for cases in which the same techniques are used to manipulate public opinion.

A fast-moving field

When Ebrahimi first began working with QI on detection software three years ago, deepfakes were not on the radar of most researchers. At the time, QI’s clients were concerned about doctored pictures of accidents used in fraudulent car and home insurance claims. By 2019, however, deepfakes had developed such a level of sophistication that the team decided to dedicate much more time to the issue.

“I am surprised, as I didn’t think [the technology] would move so fast,” says Anthony Sahakian, QI chief executive.

Sahakian has seen firsthand just how far deepfake techniques have come in producing realistic results, most recently a face swap on a passport photo that manages to leave all of the document’s seals intact.
