Agoracom Blog

Disinformation is more than fake news SPONSOR: Datametrex AI Limited $DM.ca

Posted by AGORACOM-JC at 2:15 PM on Monday, February 10th, 2020

SPONSOR: Datametrex AI Limited (TSX-V: DM) A revenue generating small cap A.I. company that NATO and Canadian Defence are using to fight fake news & social media threats. The company announced three $1M contracts in Q3-2019. Click here for more info.

Disinformation is more than fake news

By Jared Cohen, for Jigsaw blog

Jigsaw’s work requires forecasting the most urgent threats facing the internet, and wherever we traveled these past years — from Macedonia to Eastern Ukraine to the Philippines to Kenya and the United States — we observed an evolution in how disinformation was being used to manipulate elections, wage war, and disrupt civil society. By disinformation we mean more than fake news. Disinformation today entails sophisticated, targeted influence campaigns, often launched by governments, with the goal of influencing societal, economic, and military events around the world. But as the tactics of disinformation were evolving, so too were the technologies used to detect and ultimately stop disinformation.

Using technology to detect manipulated images

Beginning in 2016, we worked with researchers and academics to develop new methods for using technology to detect certain aspects of disinformation campaigns. Together with Google Research and academic partners, we developed an experimental platform called Assembler to test how technology can help fact-checkers and journalists identify and analyze manipulated media.

Debunking images is a time-consuming and error-prone process for fact-checkers and journalists. To verify the authenticity of images, they rely on a number of different tools and methods. For example, Bellingcat, a group of researchers and investigative journalists dedicated to in-depth fact-checking, lists more than 25 different tools and services available to verify the authenticity of photos, videos, websites, and other media. Fact-checkers and journalists need a way to stay ahead of the latest manipulation techniques and make it easier to check the authenticity of images and other assets.

Assembler is an early-stage experimental platform advancing new detection technology to help fact-checkers and journalists identify manipulated media. In addition, the platform creates a space where we can collaborate with other researchers who are developing detection technology. We built it to help advance the field of science, and to help provide journalists and fact-checkers with strong signals that, combined with their expertise, can help them judge if and where an image has been manipulated. With the help of a small number of global news providers and fact-checking organizations including Agence France-Presse, Animal Politico, Code for Africa, Les Décodeurs du Monde, and Rappler, we’re testing how Assembler performs in real newsrooms and updating it based on its utility and tester feedback.

How Assembler Works

Assembler brings together multiple image manipulation detectors from various academics into one tool, each one designed to spot specific types of image manipulations. Individually, these detectors can identify very specific types of manipulation — such as copy-paste or manipulations to image brightness. Taken together, they begin to create a comprehensive assessment of whether an image has been manipulated in any way. Experts from the University of Maryland, University Federico II of Naples, and the University of California, Berkeley each contributed detection models. Assembler uses these models to show the probability of manipulation on an image.
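The detector-ensemble idea described above can be sketched in a few lines. This is a hypothetical illustration, not Assembler's actual code: the detector names, score semantics, and the `assess_image` helper are all assumptions made for the example.

```python
# Hypothetical sketch of running several specialized manipulation
# detectors over one image and aggregating their scores. Each detector
# returns a probability in [0, 1] that its specific manipulation type
# (e.g. copy-move, brightness edits) is present.

def assess_image(image, detectors):
    """Run every detector on the image and report per-detector scores
    plus the detector that produced the strongest signal."""
    scores = {name: detect(image) for name, detect in detectors.items()}
    strongest = max(scores, key=scores.get)
    return {"scores": scores, "strongest_signal": strongest}

# Toy stand-ins for real detection models (constant scores for the demo).
detectors = {
    "copy_move": lambda img: 0.12,
    "brightness": lambda img: 0.81,
}
result = assess_image(None, detectors)
```

In a real system each entry in `detectors` would be a trained model; the aggregation step is what lets an analyst see at a glance which manipulation type, if any, is most likely.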

Additionally, we built two new detectors to test on the platform. The first is the StyleGAN detector, built specifically to address deepfakes. This detector uses machine learning to distinguish images of real people from deepfake images produced by the StyleGAN deepfake architecture. Our second model, the ensemble model, is trained using combined signals from each of the individual detectors, allowing it to analyze an image for multiple types of manipulation simultaneously. Because the ensemble model can identify multiple image manipulation types, the results are, on average, more accurate than any individual detector.
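An ensemble model of the kind described above treats each individual detector's score as a feature and learns how to weight them into a single manipulation probability. The sketch below is a minimal, hypothetical version using a logistic combination; the weights, bias, and detector names are invented for illustration and are not Assembler's actual parameters.

```python
import math

# Illustrative sketch (not the actual ensemble model): combine each
# individual detector's output into one manipulation probability.
# In practice the weights would be learned from labeled training data.

def ensemble_score(detector_scores, weights, bias=0.0):
    """Weighted sum of detector outputs, squashed to [0, 1]."""
    z = bias + sum(w * detector_scores[name] for name, w in weights.items())
    return 1.0 / (1.0 + math.exp(-z))  # logistic squashing

# Made-up weights: the StyleGAN signal is trusted most in this toy setup.
weights = {"copy_move": 2.0, "brightness": 1.5, "stylegan": 3.0}
scores = {"copy_move": 0.1, "brightness": 0.2, "stylegan": 0.9}
p = ensemble_score(scores, weights, bias=-2.0)
```

Because the combiner sees all detector signals at once, it can flag an image even when no single detector is confident on its own, which matches the claim that the ensemble is on average more accurate than any individual detector.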

“These days working in multimedia forensics is extremely stimulating. On one hand, I perceive very clearly the social importance of this work: in the wrong hands, media manipulation tools can be very dangerous, they can be used to ruin the life and reputation of ordinary people, commit frauds, modify the course of elections,” said Dr. Luisa Verdoliva, Associate Professor at the Department of Industrial Engineering at the University Federico II of Naples and Visiting Scholar, Google AI. “On the other hand, the professional challenge is very exciting, new attacks based on artificial intelligence are conceived by day, and we must keep a very fast pace of innovation to face them. Collaborating in Assembler was a great opportunity to put my knowledge and my skills concretely to the service of people. In addition I came to know wonderful and very diverse people involved in this project, all strongly committed in this fight. Overall a great experience.”

The Current: Exposing the architecture of disinformation campaigns

Jigsaw is an interdisciplinary team of researchers, engineers, designers, policy experts, and creative thinkers, and we’ve long wanted to find a way to share more of our team’s work publicly, especially our research insights. That’s why I’m excited to introduce the first issue of The Current, Jigsaw’s new research publication that illuminates complex problems through an interdisciplinary approach — like our team.

Our first issue is, as you might have guessed, all about disinformation — exploring the architecture of disinformation campaigns, the tactics and technology used, and how new technology is being used to detect and stop disinformation campaigns.

One feature of this inaugural issue is the Disinformation Data Visualizer. Jigsaw visualized the research from the Atlantic Council’s DFRLab on coordinated disinformation campaigns around the world, showing the specific tactics used and countries affected. The Visualizer is a work in progress. We’re sharing this with the wider community to enable a dialogue about the most effective and comprehensive disinformation countermeasures.

An ongoing experiment

Disinformation is a complex problem, and there isn’t any simple technological solution. The first step is to better understand the issue. The world ought to understand how disinformation campaigns are increasingly being used to manipulate people’s perception of important issues. We’re committed to sharing our insights and publishing our research so other organizations can examine and scrutinize different ways to approach this issue. We’ll be sharing more updates about Jigsaw’s work in this space over the coming months.

In the meantime we’d like to express our gratitude to our academic partners, our partners within Google, and the courageous publishers and journalists who are committed to using technology to bring people the truth, wherever it leads: Chris Bregler, Larry Davis, Alexei Efros, Hany Farid, Andrew Owens, Abhinav Shrivastava, Luisa Verdoliva, and Emerson Brookings, Graham Brookie and the Atlantic Council’s DFRLab team.

Source: https://www.stopfake.org/en/disinformation-is-more-than-fake-news/

