Agoracom Blog

Deepfakes and deep media: A new security battleground – SPONSOR: Datametrex AI Limited $DM.ca

Posted by AGORACOM-JC at 1:00 PM on Friday, February 14th, 2020

SPONSOR: Datametrex AI Limited (TSX-V: DM) A revenue-generating small-cap A.I. company that NATO and Canadian Defence are using to fight fake news & social media threats. The company announced three $1M contracts in Q3-2019. Click here for more info.

Deepfakes and deep media: A new security battleground

  • In anticipation of this new reality, a coalition of academic institutions, tech firms, and nonprofits is developing ways to spot misleading AI-generated media
  • Their work suggests that detection tools are a viable short-term solution but that the deepfake arms race is just beginning

Kyle Wiggers

Deepfakes — media that takes a person in an existing image, audio recording, or video and replaces them with someone else’s likeness using AI — are multiplying quickly. That’s troubling not only because these fakes might be used to sway opinions during an election or implicate a person in a crime, but because they’ve already been abused to generate pornographic material of actors and defraud a major energy producer.

In anticipation of this new reality, a coalition of academic institutions, tech firms, and nonprofits is developing ways to spot misleading AI-generated media. Their work suggests that detection tools are a viable short-term solution but that the deepfake arms race is just beginning.

Deepfake text

The best AI-produced prose used to be closer to Mad Libs than The Grapes of Wrath, but cutting-edge language models can now write with humanlike pith and cogency. San Francisco research firm OpenAI’s GPT-2 takes seconds to craft passages in the style of a New Yorker article or brainstorm game scenarios. Of greater concern, researchers at Middlebury Institute of International Studies’ Center on Terrorism, Extremism, and Counterterrorism (CTEC) hypothesize that GPT-2 and others like it could be tuned to propagate white supremacy, jihadist Islamism, and other threatening ideologies.

Above: The frontend for GPT-2, AI research firm OpenAI's trained language model. Image Credit: OpenAI
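To give a sense of how accessible this kind of generation has become, here is a minimal sketch that prompts a publicly released GPT-2 checkpoint through the Hugging Face transformers library; the model size, prompt, and sampling settings are illustrative choices, not the configuration behind OpenAI's own demo.

```python
# A minimal sketch of prompting a publicly released GPT-2 checkpoint with the
# Hugging Face transformers library. The "gpt2" model name, prompt, and
# sampling settings are illustrative placeholders.
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")

prompt = "In a shocking finding, scientists discovered"
outputs = generator(prompt, max_length=80, num_return_sequences=1, do_sample=True)

print(outputs[0]["generated_text"])
```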

In pursuit of a system that can detect synthetic content, researchers at the University of Washington’s Paul G. Allen School of Computer Science and Engineering and the Allen Institute for Artificial Intelligence developed Grover, an algorithm they claim was able to pick out 92% of deepfake-written works on a test set compiled from the open source Common Crawl corpus. The team attributes its success to Grover’s copywriting approach, which they say helped familiarize it with the artifacts and quirks of AI-originated language.
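Grover itself is a large neural generator turned discriminator, but the underlying framing is simple binary classification over labeled articles. The toy sketch below illustrates only that framing with a bag-of-words classifier; the example texts, labels, and model choice are placeholders and bear no resemblance to Grover's architecture.

```python
# A toy real-vs-generated text classifier, offered only to illustrate the
# binary-classification framing; Grover itself is a neural generator used as
# its own discriminator, not a bag-of-words model. Texts and labels are
# placeholders.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

texts = [
    "Officials confirmed the bridge will reopen next spring after repairs.",   # human-written
    "The bridge, which is a bridge, will be opened by the bridge officials.",  # machine-generated
]
labels = [0, 1]  # 0 = human-written, 1 = AI-generated

clf = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
clf.fit(texts, labels)

print(clf.predict(["The committee, which is a committee, met with the committee."]))
```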

A team of scientists hailing from Harvard and the MIT-IBM Watson AI Lab separately released The Giant Language Model Test Room, a web environment that seeks to determine whether text was written by an AI model. Given a semantic context, it predicts which words are most likely to appear in a sentence, essentially writing its own text. If words in a sample being evaluated match the top 10, 100, or 1,000 predicted words, an indicator turns green, yellow, or red, respectively. In effect, it uses its own predictive text as a benchmark for spotting artificially generated content.
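A rough approximation of that idea can be reproduced with any autoregressive language model: score each token in a passage by its rank under the model's next-token distribution, then bucket the ranks the way the tool colors words. The sketch below assumes the public "gpt2" checkpoint as a stand-in for the models the test room actually uses.

```python
# A rough sketch of the top-k ranking idea: rank each observed token under a
# language model's next-token distribution and bucket the ranks (top 10 /
# 100 / 1,000) the way the tool colors words green, yellow, or red.
# The "gpt2" checkpoint is an assumed stand-in.
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

text = "The quick brown fox jumps over the lazy dog."
ids = tokenizer(text, return_tensors="pt").input_ids

with torch.no_grad():
    logits = model(ids).logits  # shape: (1, seq_len, vocab_size)

buckets = []
for pos in range(1, ids.shape[1]):
    # Rank of the actual token among the model's predictions for this position.
    scores = logits[0, pos - 1]
    rank = int((scores > scores[ids[0, pos]]).sum().item()) + 1
    buckets.append("green" if rank <= 10 else "yellow" if rank <= 100
                   else "red" if rank <= 1000 else "other")

print(list(zip(tokenizer.convert_ids_to_tokens(ids[0].tolist())[1:], buckets)))
```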

Deepfake videos

State-of-the-art video-generating AI is just as capable (and dangerous) as its natural language counterpart, if not more so. An academic paper published by Hong Kong-based startup SenseTime, the Nanyang Technological University, and the Chinese Academy of Sciences’ Institute of Automation details a framework that edits footage by using audio to synthesize realistic videos. And researchers at Seoul-based Hyperconnect recently developed a tool — MarioNETte — that can manipulate the facial features of a historical figure, politician, or CEO by synthesizing a reenacted face animated by the movements of another person.

Even the most realistic deepfakes contain artifacts that give them away, however. “Deepfakes [produced by] generative [systems] learn a data set of actual images in videos, to which you add new images and then generate a new video with the new images,” Ishai Rosenberg, head of the deep learning group at cybersecurity company Deep Instinct, told VentureBeat via email. “The result is that the output video has subtle differences because there are changes in the distribution of the data that is generated artificially by the deepfake and the distribution of the data in the original source video. These differences, which can be referred to as ‘glimpses in the matrix,’ are what the deepfake detectors are able to distinguish.”

Above: Two deepfake videos produced using state-of-the-art methods. Image Credit: SenseTime
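One common baseline built on that observation is a frame-level classifier trained to separate the pixel statistics of real footage from those of generated footage. The sketch below illustrates only that generic setup; it is not Deep Instinct's or any vendor's detector, and the frames/ directory layout and labels are assumptions.

```python
# A generic frame-level baseline: learn to separate the pixel statistics of
# real frames from generated ones. Illustrative only; the "frames/" layout
# (frames/real/*.png and frames/fake/*.png, extracted beforehand) is assumed.
import torch
import torch.nn as nn
from torchvision import datasets, models, transforms

transform = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
])

train_set = datasets.ImageFolder("frames", transform=transform)
loader = torch.utils.data.DataLoader(train_set, batch_size=32, shuffle=True)

model = models.resnet18(weights="DEFAULT")          # ImageNet-pretrained backbone
model.fc = nn.Linear(model.fc.in_features, 2)       # two classes: real vs. fake

optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
criterion = nn.CrossEntropyLoss()

model.train()
for images, labels in loader:
    optimizer.zero_grad()
    loss = criterion(model(images), labels)
    loss.backward()
    optimizer.step()
```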

Last summer, a team from the University of California, Berkeley and the University of Southern California trained a model to look for precise “facial action units” — data points of people’s facial movements, tics, and expressions, including when they raise their upper lips and how their heads rotate when they frown — to identify manipulated videos with greater than 90% accuracy. Similarly, in August 2018 members of the Media Forensics program at the U.S. Defense Advanced Research Projects Agency (DARPA) tested systems that could detect AI-generated videos from cues like unnatural blinking, strange head movements, odd eye color, and more.
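In simplified form, that approach boils down to describing each clip with a vector of facial-action-unit and head-pose statistics (extracted beforehand with a tool such as OpenFace) and training a classifier to separate a person's genuine mannerisms from impersonations. The feature files, labels, and choice of an SVM below are illustrative assumptions, not the Berkeley/USC model.

```python
# A simplified sketch of the facial-action-unit idea: per-clip feature vectors
# (e.g., mean/std of AU intensities plus head-rotation statistics) are assumed
# to be pre-extracted and saved to the .npy files named below.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

X = np.load("au_features.npy")   # shape (n_clips, n_features), assumed file
y = np.load("au_labels.npy")     # shape (n_clips,), 1 = genuine, 0 = manipulated

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)

clf = SVC(kernel="rbf", probability=True)
clf.fit(X_train, y_train)

print("held-out accuracy:", clf.score(X_test, y_test))
```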

Several startups are in the process of commercializing comparable deepfake video detection tools. Amsterdam-based Deeptrace Labs offers a suite of monitoring products that purport to classify deepfakes uploaded on social media, video hosting platforms, and disinformation networks. Dessa has proposed techniques for improving deepfake detectors trained on data sets of manipulated videos. And Truepic raised an $8 million funding round in July 2018 for its video and photo deepfake detection services. In December 2018, the company acquired another deepfake “detection-as-a-service” startup — Fourandsix — whose fake image detector was licensed by DARPA.

Above: Deepfake images generated by an AI system.

Beyond developing fully trained systems, a number of companies have published corpora in the hopes that the research community will pioneer new detection methods. To accelerate such efforts, Facebook — along with Amazon Web Services (AWS), the Partnership on AI, and academics from a number of universities — is spearheading the Deepfake Detection Challenge. The Challenge includes a data set of video samples labeled to indicate which were manipulated with AI. In September 2019, Google released a collection of visual deepfakes as part of the FaceForensics benchmark, which was cocreated by the Technical University of Munich and the University Federico II of Naples. More recently, researchers from SenseTime partnered with Nanyang Technological University in Singapore to design DeeperForensics-1.0, a data set for face forgery detection that they claim is the largest of its kind.
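For researchers building on these corpora, the practical first step is usually just wiring the published labels into a training pipeline. The sketch below follows the filename-to-label metadata layout distributed with the Deepfake Detection Challenge, though the exact keys and folder name are assumptions to check against the downloaded data.

```python
# A small sketch of reading a labeled deepfake corpus into a training pipeline.
# The metadata.json format (filename -> {"label": "REAL"/"FAKE"}) and the
# folder name are assumptions based on the Deepfake Detection Challenge layout.
import json
from pathlib import Path

data_dir = Path("dfdc_train_part_0")  # assumed local folder of .mp4 files
with open(data_dir / "metadata.json") as f:
    metadata = json.load(f)

videos, labels = [], []
for filename, info in metadata.items():
    videos.append(data_dir / filename)
    labels.append(1 if info["label"] == "FAKE" else 0)

print(f"{sum(labels)} fake / {len(labels) - sum(labels)} real clips")
```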

Deepfake audio

AI and machine learning aren’t suited just to video and text synthesis — they can clone voices, too. Countless studies have demonstrated that a small data set is all that’s required to recreate the prosody of a person’s speech. Commercial systems like those of Resemble and Lyrebird need only minutes of audio samples, while sophisticated models like Baidu’s latest Deep Voice implementation can copy a voice from a 3.7-second sample.

Deepfake audio detection tools are not yet abundant, but solutions are beginning to emerge.

Several months ago, the Resemble team released an open source tool dubbed Resemblyzer, which uses AI and machine learning to detect deepfakes by deriving high-level representations of voice samples and predicting whether they’re real or generated. Given an audio file of speech, it creates a mathematical representation summarizing the characteristics of the recorded voice. This enables developers to compare the similarity of two voices or suss out who’s speaking at any given moment.
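Because Resemblyzer is open source, a basic comparison between a known voice and a suspect clip takes only a few lines; the file paths and decision threshold below are placeholders.

```python
# A minimal sketch of comparing two voice samples with the open source
# Resemblyzer package (pip install resemblyzer). File paths and the 0.75
# threshold are placeholders.
import numpy as np
from resemblyzer import VoiceEncoder, preprocess_wav

encoder = VoiceEncoder()

reference = encoder.embed_utterance(preprocess_wav("known_speaker.wav"))
candidate = encoder.embed_utterance(preprocess_wav("suspect_clip.wav"))

# Embeddings are L2-normalized, so a dot product gives their cosine similarity.
similarity = float(np.dot(reference, candidate))
print("similarity:", similarity)
print("likely same (or convincingly cloned) voice" if similarity > 0.75 else "voices differ")
```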

In January 2019, as part of its Google News Initiative, Google released a corpus of speech containing “thousands” of phrases spoken by the company’s text-to-speech models. The samples were drawn from English articles spoken by 68 different synthetic voices and covered a variety of regional accents. The corpus is available to all participants of ASVspoof 2019, a competition that aims to foster countermeasures against spoofed speech.
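Countermeasures in ASVspoof are typically reported with an equal error rate (EER), the operating point where false accepts equal false rejects. A minimal sketch, assuming per-utterance scores (higher meaning more likely genuine) and ground-truth labels are already available:

```python
# Equal error rate (EER) for a spoofing countermeasure: the operating point
# where the false-accept and false-reject rates cross. Scores and labels here
# are placeholder values.
import numpy as np
from sklearn.metrics import roc_curve

labels = np.array([1, 1, 0, 1, 0, 0, 1, 0])                     # 1 = bona fide, 0 = spoofed
scores = np.array([0.9, 0.8, 0.35, 0.6, 0.4, 0.1, 0.7, 0.55])   # detector outputs

fpr, tpr, _ = roc_curve(labels, scores)
fnr = 1 - tpr
eer = fpr[np.nanargmin(np.abs(fnr - fpr))]
print(f"EER ≈ {eer:.2%}")
```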

A lot to lose

No detector has achieved perfect accuracy, and researchers haven’t yet figured out how to determine deepfake authorship. Deep Instinct’s Rosenberg anticipates this is emboldening bad actors intent on distributing deepfakes. “Even if a malicious actor had their [deepfake] caught, only the [deepfake] itself holds the risk of being busted,” he said. “There is minimal risk to the actor of getting caught. Because the risk is low, there is little deterrence to creating deepfake[s].”

Rosenberg’s theory is supported by a report from Deeptrace, which found 14,698 deepfake videos online during its most recent tally in June and July 2019 — an 84% increase within a seven-month period. The vast majority of those (96%) consist of pornographic content featuring women.

Considering those numbers, Rosenberg argues that companies with “a lot to lose” from deepfakes should develop and incorporate deepfake detection technology — which he considers akin to antimalware and antivirus — into their products. There’s been movement on this front; Facebook announced in early January that it will use a combination of automated and manual systems to detect deepfake content, and Twitter recently proposed flagging deepfakes and removing those that threaten harm.

Of course, the technologies underlying deepfake generators are merely tools — and they have enormous potential for good. Michael Clauser, head of the data and trust practice at consultancy Access Partnership, points out that the technology has already been used to improve medical diagnoses and cancer detection, fill gaps in mapping the universe, and better train autonomous driving systems. He therefore cautions against blanket campaigns to block generative AI.

“As leaders begin to apply existing legal principles like slander and defamation to emerging deepfake use cases, it’s important not to throw out the baby with the bathwater,” Clauser told VentureBeat via email. “Ultimately, the case law and social norms around the use of this emerging technology [haven’t] matured sufficiently to create bright red lines on what constitutes fair use versus misuse.”

Source: https://venturebeat.com/2020/02/11/deepfake-media-and-detection-methods/
