
Posts Tagged ‘bot’

The Rise of Deepfakes SPONSOR: Datametrex AI Limited $DM.ca

Posted by AGORACOM-JC at 2:45 PM on Wednesday, February 19th, 2020

SPONSOR: Datametrex AI Limited (TSX-V: DM) A revenue generating small cap A.I. company that NATO and Canadian Defence are using to fight fake news & social media threats. The company announced three $1M contracts in Q3-2019. Click here for more info.

The Rise of Deepfakes

  • Deepfakes are synthetic media in which a person in an existing image or video is replaced with someone else’s likeness
  • In recent months, videos of influential celebrities and politicians have surfaced displaying a false and augmented reality of one’s beliefs or gestures

JMSCORY

Deepfakes leverage powerful techniques from machine learning and artificial intelligence to manipulate and generate visual and audio content with a high potential to deceive. The purpose of this article is to enhance and promote efforts into research and development and not to promote or aid in the creation of nefarious content.

Introduction

Deepfakes are synthetic media in which a person in an existing image or video is replaced with someone else’s likeness. In recent months, videos of influential celebrities and politicians have surfaced displaying a false and augmented reality of one’s beliefs or gestures.

Whilst deep learning has been successfully applied to solve various complex problems, ranging from big data analytics to computer vision, the need to control the content generated is crucial, alongside that of its availability to the public.

In recent months, a number of mitigation mechanisms have been proposed, with neural networks and artificial intelligence at the heart of them. From this, it is clear that technologies that can automatically detect and assess the integrity of visual media are indispensable if we wish to fight back against adversarial attacks. (Nguyen, 2019)

Late 2017

Deepfakes as we know them first started to gain attention in December 2017, after Vice’s Samantha Cole published an article on Motherboard.

The article talks about the manipulation of celebrity faces to recreate famous scenes and how this technology can be misused for blackmail and illicit purposes.

The videos were significant because they marked the first notable instance of a single person who was able to easily and quickly create high-quality and convincing deepfakes.

Cole goes on to highlight the juxtaposition in society: these tools are made freely available by corporations so that students can gain sufficient knowledge and key skills to enhance their studies at university and school.

Open-source machine learning tools like TensorFlow, which Google makes freely available to researchers, graduate students, and anyone with an interest in machine learning. — Samantha Cole

Deepfakes differ in general quality from previous efforts at superimposing faces onto other bodies. A good deepfake, created by artificial intelligence trained on hours of high-quality footage, produces content so convincing that humans struggle to tell whether it is real. In turn, researchers have shown interest in developing neural networks that assess the authenticity of such videos and distinguish the fakes.

In general, a good deepfake is one where the insertions around the mouth are seamless, head movements are smooth, and coloration matches the surroundings. Gone are the days of simply superimposing a head onto a body and animating it by hand, as the errors are still noticeable, leading to dead context and mismatches.

Early 2018

In January 2018, a proprietary desktop application called FakeApp was launched. This app allows users to easily create and share videos with their faces swapped with each other. As of 2019, FakeApp has been superseded by open-source alternatives such as Faceswap and the command line-based DeepFaceLab. (Nguyen, 2019)

With the availability of this technology being so high, websites such as GitHub have sprung to life in offering new methodologies for combating such attacks. Within the paper ‘Using Capsule Networks To Detect Forged Images and Videos’, Huy goes on to talk about the ability to use forged images and videos to bypass facial authentication systems in secure environments.

The quality of manipulated images and videos has seen significant improvement with the development of advanced network architectures and the use of large amounts of training data that were previously unavailable.

Late 2018

Platforms such as Reddit started to ban deepfakes after fake news and videos began circulating from specific communities on their sites. Reddit took it upon itself to delete these communities in an effort to protect its users.

A few days later BuzzFeed published a frighteningly realistic video that went viral. The video showed Barack Obama in a deepfake. Unlike the University of Washington video, Obama was made to say words that weren’t his own, in turn helping to shine a light on this technology.

Below is a video BuzzFeed created with Jordan Peele as part of a campaign to raise awareness of this software.

Early 2019

In the last year, several manipulated videos of politicians and other high-profile individuals have gone viral, highlighting the continued dangers of deepfakes, and forcing large platforms to take a position.

Following BuzzFeed’s disturbingly realistic Obama deepfake, instances of manipulated videos of other high-profile subjects began to go viral, and seemingly fool millions of people online.

Despite most of the videos being even more crude than deepfakes — using rudimentary film editing rather than AI — the videos sparked sustained concern about the power of deepfakes and other forms of video manipulation while forcing technology companies to take a stance on what to do with such content. (Business Insider, 2019).

Source: https://medium.com/swlh/the-rise-of-deepfakes-19972498487a

‘Wake up, Zuck’: Protesters gather outside of Facebook founder’s home, demand regulation of political ads SPONSOR: Datametrex AI Limited $DM.ca

Posted by AGORACOM-JC at 5:05 PM on Tuesday, February 18th, 2020

SPONSOR: Datametrex AI Limited (TSX-V: DM) A revenue generating small cap A.I. company that NATO and Canadian Defence are using to fight fake news & social media threats. The company announced three $1M contracts in Q3-2019. Click here for more info.

‘Wake up, Zuck’: Protesters gather outside of Facebook founder’s home, demand regulation of political ads

By Loi Almeron and Julian Mark

On Monday morning around 10 a.m., about 50 protesters gathered outside of Facebook founder and CEO Mark Zuckerberg’s home in the Mission District in protest of the social media giant’s use of personal data and refusal to regulate misleading political advertisements.

“We’re sick and tired of waiting for the government to regulate Facebook,” said Tracy Rosenberg, the executive director of Media Alliance and one of the protest’s organizers. “You’re profiting off of us — you’re selling our information.”

The protesters chanted “Wake up Zuck!” and “fake news, real hate” and carried signs that said “Stop the Lies, Protect our democracy” and “break up Facebook.” In colorful chalk on the sidewalk in front of the house, demonstrators wrote phrases like: “Facebook is a Russian asset”; “don’t sell my private data”; and “history will write your epitaph as the man who broke democracy.”

In November, Twitter outright banned political ads, and Google said it would limit the targeting of political ads on its search engine and on its video streaming platform YouTube. Facebook has resisted such changes in policy in the face of criticism.

Zuckerberg in December told CBS This Morning that, “in a democracy,” people should “make their own judgments” about what politicians say. “I don’t think a private company should be censoring politicians or news,” he said.

But protesters say that very mindset is destroying democracy, rather than upholding its values.

“We like many others and the organizations that put this rally together feel that Facebook is a dramatic threat to our democratic systems around the world,” said Ted Lewis, an activist with Global Exchange, an organization that advocates for human rights and alternatives to capitalism. “Facebook needs to take responsibility for what they’re doing — they need to get the lies off of their platform.”

Lewis said, specifically, Facebook’s hands-off policy around political advertising is especially troubling. “Political advertising could contain the most blatant falsehood and they refuse to do anything about it,” Lewis said.

Zuckerberg is likely not spending his President’s Day holiday inside of his Mission District manse — as he has some 10 places to call “home” and mainly resides in Palo Alto.

Other protesters bemoaned Facebook’s laissez-faire approach to the spread of misinformation, especially as the 2020 presidential election nears. “You can say anything you want,” said Erin Fisher, an activist with Campaign to Regulate and Break Up Big Tech. “Facebook is the most important. They’re monetizing propaganda.”

“This is one of the pillars of the fight in 2020,” Fisher said, referring to the upcoming November election.

By around 11 a.m., the protesters had largely dispersed and a few police officers supervised the scene.

Photo by Loi Almeron

Tracy Rosenberg (left), the executive director of Media Alliance, holds a bullhorn.

Source: https://missionlocal.org/2020/02/wake-up-zuck-protesters-gather-outside-of-facebook-founders-home-demand-regulation-

Deepfakes and deep media: A new security battleground – SPONSOR: Datametrex AI Limited $DM.ca

Posted by AGORACOM-JC at 1:00 PM on Friday, February 14th, 2020

SPONSOR: Datametrex AI Limited (TSX-V: DM) A revenue generating small cap A.I. company that NATO and Canadian Defence are using to fight fake news & social media threats. The company announced three $1M contracts in Q3-2019. Click here for more info.

Deepfakes and deep media: A new security battleground

  • In anticipation of this new reality, a coalition of academic institutions, tech firms, and nonprofits are developing ways to spot misleading AI-generated media
  • Their work suggests that detection tools are a viable short-term solution but that the deepfake arms race is just beginning

Kyle Wiggers

Deepfakes — media that takes a person in an existing image, audio recording, or video and replaces them with someone else’s likeness using AI — are multiplying quickly. That’s troubling not only because these fakes might be used to sway opinions during an election or implicate a person in a crime, but because they’ve already been abused to generate pornographic material of actors and defraud a major energy producer.

In anticipation of this new reality, a coalition of academic institutions, tech firms, and nonprofits are developing ways to spot misleading AI-generated media. Their work suggests that detection tools are a viable short-term solution but that the deepfake arms race is just beginning.

Deepfake text

The best AI-produced prose used to be closer to Mad Libs than The Grapes of Wrath, but cutting-edge language models can now write with humanlike pith and cogency. San Francisco research firm OpenAI’s GPT-2 takes seconds to craft passages in the style of a New Yorker article or brainstorm game scenarios. Of greater concern, researchers at Middlebury Institute of International Studies’ Center on Terrorism, Extremism, and Counterterrorism (CTEC) hypothesize that GPT-2 and others like it could be tuned to propagate white supremacy, jihadist Islamism, and other threatening ideologies.

Above: The frontend for GPT-2, AI research firm OpenAI’s trained language model. Image Credit: OpenAI

In pursuit of a system that can detect synthetic content, researchers at the University of Washington’s Paul G. Allen School of Computer Science and Engineering and the Allen Institute for Artificial Intelligence developed Grover, an algorithm they claim was able to pick out 92% of deepfake-written works on a test set compiled from the open source Common Crawl corpus. The team attributes its success to Grover’s copywriting approach, which they say helped familiarize it with the artifacts and quirks of AI-originated language.

A team of scientists hailing from Harvard and the MIT-IBM Watson AI Lab separately released The Giant Language Model Test Room, a web environment that seeks to determine whether text was written by an AI model. Given a semantic context, it predicts which words are most likely to appear in a sentence, essentially writing its own text. If words in a sample being evaluated match the top 10, 100, or 1,000 predicted words, an indicator turns green, yellow, or red, respectively. In effect, it uses its own predictive text as a benchmark for spotting artificially generated content.
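
That ranking idea can be illustrated with a short, hedged sketch. The snippet below uses GPT-2 from the Hugging Face transformers library as the scoring model (an assumption made here for illustration; GLTR’s own implementation and thresholds differ) and buckets each token by the rank the model assigned it given the preceding context.

    # Hedged sketch of GLTR-style per-token rank analysis, assuming GPT-2 from the
    # Hugging Face `transformers` library as the scoring language model.
    import torch
    from transformers import GPT2LMHeadModel, GPT2TokenizerFast

    tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
    model = GPT2LMHeadModel.from_pretrained("gpt2")
    model.eval()

    def token_ranks(text):
        """Return, for each token after the first, the rank the model gave it."""
        ids = tokenizer(text, return_tensors="pt").input_ids
        with torch.no_grad():
            logits = model(ids).logits               # (1, seq_len, vocab_size)
        ranks = []
        for pos in range(1, ids.shape[1]):
            scores = logits[0, pos - 1]              # distribution for the token at `pos`
            actual = ids[0, pos]
            rank = int((scores > scores[actual]).sum().item()) + 1
            ranks.append(rank)
        return ranks

    def bucket(rank):
        # Mirrors the article's thresholds: top 10 / 100 / 1,000 predicted words.
        if rank <= 10:
            return "green"
        if rank <= 100:
            return "yellow"
        if rank <= 1000:
            return "red"
        return "unlikely"

    sample = "The quick brown fox jumps over the lazy dog."
    print([bucket(r) for r in token_ranks(sample)])

In this framing, text where nearly every token lands in the green bucket is a hint of machine generation, since language models tend to sample highly ranked words, while human writing scatters across all buckets.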

Deepfake videos

State-of-the-art video-generating AI is just as capable (and dangerous) as its natural language counterpart, if not more so. An academic paper published by Hong Kong-based startup SenseTime, the Nanyang Technological University, and the Chinese Academy of Sciences’ Institute of Automation details a framework that edits footage by using audio to synthesize realistic videos. And researchers at Seoul-based Hyperconnect recently developed a tool — MarioNETte — that can manipulate the facial features of a historical figure, politician, or CEO by synthesizing a reenacted face animated by the movements of another person.

Even the most realistic deepfakes contain artifacts that give them away, however. “Deepfakes [produced by] generative [systems] learn a data set of actual images in videos, to which you add new images and then generate a new video with the new images,” Ishai Rosenberg, head of the deep learning group at cybersecurity company Deep Instinct, told VentureBeat via email. “The result is that the output video has subtle differences because there are changes in the distribution of the data that is generated artificially by the deepfake and the distribution of the data in the original source video. These differences, which can be referred to as ‘glimpses in the matrix,’ are what the deepfake detectors are able to distinguish.”

Above: Two deepfake videos produced using state-of-the-art methods. Image Credit: SenseTime

Last summer, a team from the University of California, Berkeley and the University of Southern California trained a model to look for precise “facial action units” — data points of people’s facial movements, tics, and expressions, including when they raise their upper lips and how their heads rotate when they frown — to identify manipulated videos with greater than 90% accuracy. Similarly, in August 2018 members of the Media Forensics program at the U.S. Defense Advanced Research Projects Agency (DARPA) tested systems that could detect AI-generated videos from cues like unnatural blinking, strange head movements, odd eye color, and more.

Several startups are in the process of commercializing comparable deepfake video detection tools. Amsterdam-based Deeptrace Labs offers a suite of monitoring products that purport to classify deepfakes uploaded on social media, video hosting platforms, and disinformation networks. Dessa has proposed techniques for improving deepfake detectors trained on data sets of manipulated videos. And Truepic raised an $8 million funding round in July 2018 for its video and photo deepfake detection services. In December 2018, the company acquired another deepfake “detection-as-a-service” startup — Fourandsix — whose fake image detector was licensed by DARPA.

Above: Deepfake images generated by an AI system.

Beyond developing fully trained systems, a number of companies have published corpora in the hopes that the research community will pioneer new detection methods. To accelerate such efforts, Facebook — along with Amazon Web Services (AWS), the Partnership on AI, and academics from a number of universities — is spearheading the Deepfake Detection Challenge. The Challenge includes a data set of video samples labeled to indicate which were manipulated with AI. In September 2019, Google released a collection of visual deepfakes as part of the FaceForensics benchmark, which was cocreated by the Technical University of Munich and the University Federico II of Naples. More recently, researchers from SenseTime partnered with Nanyang Technological University in Singapore to design DeeperForensics-1.0, a data set for face forgery detection that they claim is the largest of its kind.

Deepfake audio

AI and machine learning aren’t suited just to video and text synthesis — they can clone voices, too. Countless studies have demonstrated that a small data set is all that’s required to recreate the prosody of a person’s speech. Commercial systems like those of Resemble and Lyrebird need only minutes of audio samples, while sophisticated models like Baidu’s latest Deep Voice implementation can copy a voice from a 3.7-second sample.

Deepfake audio detection tools are not yet abundant, but solutions are beginning to emerge.

Several months ago, the Resemble team released an open source tool dubbed Resemblyzer, which uses AI and machine learning to detect deepfakes by deriving high-level representations of voice samples and predicting whether they’re real or generated. Given an audio file of speech, it creates a mathematical representation summarizing the characteristics of the recorded voice. This enables developers to compare the similarity of two voices or suss out who’s speaking at any given moment.
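
The comparison step this enables can be sketched roughly as follows, using Resemblyzer’s documented VoiceEncoder interface; the audio file names below are placeholders, and any real deployment would calibrate its own similarity threshold.

    # Rough sketch of comparing two voice samples with the open source
    # Resemblyzer package (pip install resemblyzer); file names are placeholders.
    import numpy as np
    from resemblyzer import VoiceEncoder, preprocess_wav

    encoder = VoiceEncoder()

    # Each utterance is summarized as a fixed-length voice embedding.
    embed_known = encoder.embed_utterance(preprocess_wav("known_speaker.wav"))
    embed_suspect = encoder.embed_utterance(preprocess_wav("suspect_clip.wav"))

    # Embeddings are L2-normalized, so the dot product is the cosine similarity:
    # values near 1.0 suggest the same voice, lower values a different or
    # synthetic one.
    similarity = float(np.dot(embed_known, embed_suspect))
    print(f"voice similarity: {similarity:.3f}")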

In January 2019, as part of its Google News Initiative, Google released a corpus of speech containing “thousands” of phrases spoken by the company’s text-to-speech models. The samples were drawn from English articles spoken by 68 different synthetic voices and covered a variety of regional accents. The corpus is available to all participants of ASVspoof 2019, a competition that aims to foster countermeasures against spoofed speech.

A lot to lose

No detector has achieved perfect accuracy, and researchers haven’t yet figured out how to determine deepfake authorship. Deep Instinct’s Rosenberg anticipates this is emboldening bad actors intent on distributing deepfakes. “Even if a malicious actor had their [deepfake] caught, only the [deepfake] itself holds the risk of being busted,” he said. “There is minimal risk to the actor of getting caught. Because the risk is low, there is little deterrence to creating deepfake[s].”

Rosenberg’s theory is supported by a report from Deeptrace, which found 14,698 deepfake videos online during its most recent tally in June and July 2019 — an 84% increase within a seven-month period. The vast majority of those (96%) consist of pornographic content featuring women.

Considering those numbers, Rosenberg argues that companies with “a lot to lose” from deepfakes should develop and incorporate deepfake detection technology — which he considers akin to antimalware and antivirus — into their products. There’s been movement on this front; Facebook announced in early January that it will use a combination of automated and manual systems to detect deepfake content, and Twitter recently proposed flagging deepfakes and removing those that threaten harm.

Of course, the technologies underlying deepfake generators are merely tools — and they have enormous potential for good. Michael Clauser, head of the data and trust practice at consultancy Access Partnership, points out that the technology has already been used to improve medical diagnoses and cancer detection, fill gaps in mapping the universe, and better train autonomous driving systems. He therefore cautions against blanket campaigns to block generative AI.

“As leaders begin to apply existing legal principles like slander and defamation to emerging deepfake use cases, it’s important not to throw out the baby with the bathwater,” Clauser told VentureBeat via email. “Ultimately, the case law and social norms around the use of this emerging technology [haven’t] matured sufficiently to create bright red lines on what constitutes fair use versus misuse.”

Source: https://venturebeat.com/2020/02/11/deepfake-media-and-detection-methods/

Deep Dive: Fake Profiles Threaten Social Media Networks – SPONSOR: Datametrex AI Limited $DM.ca

Posted by AGORACOM-JC at 11:45 AM on Thursday, February 13th, 2020

SPONSOR: Datametrex AI Limited (TSX-V: DM) A revenue generating small cap A.I. company that NATO and Canadian Defence are using to fight fake news & social media threats. The company announced three $1M contracts in Q3-2019. Click here for more info.

Deep Dive: Fake Profiles Threaten Social Media Networks

  • Fake profiles run rampant on sites such as Facebook, Twitter and YouTube, accounting for up to 25 percent of all new accounts, according to some estimates
  • The damage these fake profiles inflict is incalculable, resulting in billions of dollars lost or even altering the course of world politics.

By PYMNTS

Social media has become an integral part of everyday life, with a recent study finding that there were approximately 2.77 billion social media users around the world as of 2019. This number is projected to grow to more than 3 billion by the end of 2021 — almost half of the global population.

A good portion of these users is not real, however. Fake profiles run rampant on sites such as Facebook, Twitter and YouTube, accounting for up to 25 percent of all new accounts, according to some estimates. The damage these fake profiles inflict is incalculable, resulting in billions of dollars lost or even altering the course of world politics. Social media networks will need to step up their digital authentication games if they want to bring these fraudsters to heel.

How Fake Profiles Damage Social Media

Illegitimate social media profiles are strongly correlated with cybercrime, with researchers finding that bot-run fake profiles account for 75 percent of social media cyberattacks. Some of these crimes involve stealing personal information, like passwords and payment data, while others spread social propaganda or disseminate spam.

Social media networks are often negligent when removing fake profiles, too. Researchers at the NATO Strategic Communications Centre of Excellence conducted a study last year that tested the efficacy of Facebook’s, Google’s and Twitter’s fake profile detection protocols. The research team purchased 3,500 comments, 25,000 likes, 20,000 video views and 5,100 fake followers and found that 80 percent of their fake engagements were still online after one month. Approximately 95 percent of the remaining profiles were still online three weeks after the NATO team announced its findings to the public.

One might think that such an effort would cost a significant amount of time and money, but the study was relatively inexpensive. The researchers only spent €300 ($330) to purchase the comments, likes and followers — a Facebook ad of equivalent value would likely receive just 1,500 clicks. This makes fake profiles much more appealing to unscrupulous individuals and companies.

Fake social media profiles’ impacts became evident in the U.S. in 2016 when Russian hackers created thousands of fake Facebook and Twitter accounts to influence the country’s presidential election. These bots posted thousands of messages and fake news articles attacking candidate Hillary Clinton and sowing divisiveness within the Democratic Party, often promoting information from the Democratic National Committee’s (DNC) email hack.

Social sites often listed hashtags like #HillaryDown and #WarAgainstDemocrats as trending, inadvertently giving these bots a loudspeaker and letting their messages punch far above their weights. Special Counsel Robert Mueller’s 2018 investigation found that these hacker groups had multimillion-dollar budgets — a far cry from then-candidate Donald Trump’s characterization of the DNC hackers as “somebody sitting on their bed that weighs 400 pounds.”

Fake profiles’ threats are self-evident, but the solution to stopping them is not nearly as clear.

How Social Media Sites Can Fight Bots

Social media websites are reluctant to disclose exactly how they identify and delete fake profiles — if fraudsters know too much about their prevention techniques, they will be able to circumvent them. Many brands, companies, advertisers and even congressional panels have demanded more information about how social media firms are working to stop the spread of fake profiles, however.

Third-party developers have also introduced solutions to curb the spread of illegitimate accounts, with many utilizing artificial intelligence (AI) and machine learning (ML). Thousands of social media profiles are created every day, making human analysis of each new registration impossible. AI and ML could reduce analytics teams’ burdens by employing pattern recognition to determine the details that all true profiles share, such as the frequency of their posts or what pages they tend to like. Profiles that do not adhere to this pattern could then be flagged for human review.
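
As a loose illustration of that idea, the sketch below trains a classifier on simple per-profile statistics and flags outliers for review. The feature names, sample values and threshold are invented for the sketch, not any platform’s real signals.

    # Hedged sketch: flag profiles whose behaviour deviates from learned patterns.
    # Features and training data are illustrative assumptions only.
    import numpy as np
    from sklearn.ensemble import RandomForestClassifier

    # Columns: account_age_days, posts_per_day, friend_count, pages_liked, has_photo
    X_train = np.array([
        [1200, 0.8, 350, 40, 1],   # typical genuine profiles
        [900,  1.2, 280, 55, 1],
        [3,   45.0, 700,  2, 0],   # bot-like: brand new, hyperactive, no photo
        [1,   60.0, 950,  0, 0],
    ])
    y_train = np.array([0, 0, 1, 1])          # 0 = genuine, 1 = fake

    clf = RandomForestClassifier(n_estimators=100, random_state=0)
    clf.fit(X_train, y_train)

    new_profile = np.array([[2, 38.0, 640, 1, 0]])
    p_fake = clf.predict_proba(new_profile)[0, 1]
    if p_fake > 0.5:
        print(f"flag for human review (p_fake={p_fake:.2f})")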

Social media networks could also utilize facial recognition biometrics to authenticate new accounts, requiring users to submit selfies or live smartphone videos for review to determine if their profiles are legitimate. Many new smartphones, including Apple’s iPhone 11, come with this technology right out of the box, meaning consumers are already familiar with it.

Facial recognition biometrics have fallen afoul of privacy advocates, however. Facebook has long been using facial recognition to identify its users in photographs — a practice that many condemned as privacy infringement. The website shifted this system to an opt-in model last year to appease these privacy advocates, meaning it would likely be reluctant to adopt facial biometrics during onboarding.

There is no obvious authentication solution that can completely prevent fake profiles. Social media sites, advertisers and governments all agree that fake profiles need to be stopped; the next step is agreeing on how to do it.

Source: https://www.pymnts.com/authentication/2020/fake-profiles-threaten-social-media-networks/

Fake news, deep fakes and fraud detection 2020 – addressing an epidemic – SPONSOR: Datametrex AI Limited $DM.ca

Posted by AGORACOM-JC at 11:15 AM on Wednesday, February 12th, 2020

SPONSOR: Datametrex AI Limited (TSX-V: DM) A revenue generating small cap A.I. company that NATO and Canadian Defence are using to fight fake news & social media threats. The company announced three $1M contracts in Q3-2019. Click here for more info.

Fake news, deep fakes and fraud detection 2020 – addressing an epidemic

  • Online giants and regulators alike have taken up the fight against fake news and deep fakes. Simon Marchand says the answer has been on the tips of our tongues all along. 

Since 2016 when the Macquarie Dictionary named ‘fake news’ as its word of the year, the spread of misinformation online has only increased. Technology has become more sophisticated, giving rise to the production of ‘deep fakes’ and synthetic voices.

It’s no wonder the Analysis and Policy Observatory’s (APO) 2019 ‘Digital News Report’ for Australia found that nearly two-thirds (62%) of the nation is concerned about what is real or fake on the internet ― above the global average.

The lack of consumer confidence in online content is a major threat to marketers, with their brands’ success firmly embedded in establishing a trustworthy and authentic reputation – consumers are far more likely to purchase from, stay loyal to and advocate for brands they trust.

Introduce deep fakes into the mix and you’re looking at a far more sophisticated threat to brand reputation that demands an ultra-modern response. Big tech organisations, government bodies and social media platforms are fighting back against fake news with new policies, technology, litigation and more. However there is an existing, under-utilised tool that could have a major impact if employed by marketers – voice biometrics technology.

Deep fakes – how tech is propelling the issue forward

Deep fakes are used in the context of broad propaganda campaigns, intended to influence the opinion of groups of individuals. On a large scale, this can potentially have a dramatic impact, such as heavily influencing the outcome of a political election. Consumers are continuously warned to be sceptical and afraid; we’re in the middle of a fake news epidemic. The technology used to create this content has become more realistic and accessible, so it’s easy to see why. Effectively, anyone with a computer, internet connectivity and a bit of free time, could produce a deep fake video or audio file. As AI becomes more sophisticated, it’s become increasingly hard to discern what is real or fake.

This is compounded by the increasing reliance on social media for news―the APO’s report found almost half of generation Z (47%) and one-third of generation Y (33%) use Facebook, YouTube, Twitter and other social channels as their main source of news. Blind trust in social media platforms is enabling fake news to spread to masses in record time.

The rise of social media and influencer marketing in recent years has put brands in an extremely vulnerable position. A convincing deep fake of a company’s CEO, brand’s celebrity or an influencer ambassador can be created with ease. If their visual image were manipulated to depict them or even the brand itself in a way that is false or offensive, this would pose a serious threat to modern-day marketers.

The threat-level heightens when you consider that debunking fake news takes time, and content published for that purpose typically receives less coverage than the original, false article. As a result, misinformation can have lasting effects, even once discredited – it is a phenomenon researchers across the globe are investigating.

How social media and tech companies are fighting back

As AI becomes increasingly refined, big tech is racing to keep up. Twitter has announced it will add labels to or remove tweets carrying manipulated content, while Facebook has partnered with Microsoft and academics to create the Deepfake Detection Challenge, which seeks to better detect when AI has been used to alter video.

Google recently released more than 3,000 visual deep fakes to help inform a FaceForensics benchmark that is combating manipulated videos. This follows its earlier release of a synthetic speech dataset.

These solutions are a work in progress, however. Voice biometrics technology – existing fraud detection tech – could have a major impact in marketing.

Voice biometrics to combat fake news

Banks, insurance providers and governments across the globe are already using voice biometrics as an easy and secure way to authenticate their customers, combat fraudulent activity and improve customer satisfaction.

A voiceprint includes more than 1000 unique physical and behavioural characteristics of a person, such as vocal tract length, nasal passage shape, pitch, cadence, accent and more. In fact, research shows it’s as unique to an individual as a fingerprint.

Where behaviours can be easily mimicked, physical voice characteristics cannot, thus preventing impersonators from ‘tricking’ the system. Voice biometrics could be monumental in verifying whether a video or audio recording is legitimate, analysing whether the voice actually belongs to the person the message claims it comes from, or has been manipulated, simulated, or created synthetically to fabricate fake news stories.

The accuracy and speed with which voice biometrics can authenticate a person’s identity mean that harmful deep fakes can be debunked with certainty – quickly mitigating the threat to a brand’s reputation.

Biometrics represent a new era of identity security, and given the dramatic influence fake news can have, combating deep fake videos and synthetic audio with voice biometrics is a natural progression for the technology.

Source: https://www.marketingmag.com.au/hubs-c/fake-news-deep-fakes-and-fraud-detection-2020-addressing-an-epidemic/

A new supertool fights fake images – SPONSOR: Datametrex AI Limited $DM.ca

Posted by AGORACOM-JC at 1:45 PM on Tuesday, February 11th, 2020

SPONSOR: Datametrex AI Limited (TSX-V: DM) A revenue generating small cap A.I. company that NATO and Canadian Defence are using to fight fake news & social media threats. The company announced three $1M contracts in Q3-2019. Click here for more info.

A new supertool fights fake images

A new supertool fights fake images, plus the Economist’s guide to Instagram and a new way to pay for journalism

  • How can technology strengthen fact-checking? Jigsaw, a nonprofit division of Alphabet, Google’s parent company, asks and then answers its own question with the announcement of Assembler, an experimental detection platform that aims to help fact checkers and journalists identify manipulated images.

The platform pulls together several existing technologies — for example, one that spots copy-and-pastes within images and another that detects image brightness manipulations — into one supertool. Right now, Jigsaw is working with a group of global newsrooms and fact-checkers to test Assembler’s viability in real situations. That means, unfortunately, it isn’t available for the rest of us yet.

This new tool can show you what journalists are writing about on a large scale. MuckRack Trends (log-in required) is a lot like Google Trends, which shows you what people are searching for on Google, but it’s specific to news articles. You can use it, for example, to see how many news articles have been written about Oscar-winning director Bong Joon-ho over the past week (5,733, as I write this) and compare it to someone like, say, Martin Scorsese (5,612). There are SO MANY great uses for this. I wrote about some of them here.

The Economist makes charts … for Instagram. The visual social media platform is a natural home for informative and interesting infographics from one of the world’s most prestigious media brands. Charts like “Who is more liberal: Uncle Joe or Mayor Pete?” and “Interest in veganism is surging” perform well between the organization’s beautiful photos and illustrations. The Economist offers a few tips for others willing to try putting info on Insta, including: Keep colors consistent so that fast-scrollers know who they’re looking at, rethink charts and graphs to fit into a small space and cater to your audience, which is probably younger on Instagram.

If you need a new online publishing system, start here. Content management systems, or CMSs, are the engines that run our online journalism. A good engine works without you having to think much about it. A bad one — and I’m just making things up here — takes forever to load, formats text in unexplainable ways and occasionally deletes your stories outright. Together with News Catalyst, Poynter put together a guide on how to get a new CMS, along with a look at five of our favorite modern CMS choices and live demo opportunities for each.

SPONSORED: It takes a village to publish a story. That’s why Trint’s collaborative speech-to-text platform lets newsroom teams work on the same transcript at the same time. Trint’s Pro Team plan means editing, verification and exporting happen simultaneously — stories are published in near real time and with 100% accurate information. And with Trint’s Workspaces, you can choose who has access to what data through custom permissions. Journalists, editors and producers from different teams instantly get access to the transcripts they need as soon as they’re uploaded. Start your Pro Team trial today.

Here’s a map that shows stunning satellite images of locations across the globe. From school bus assembly plants (so much yellow) to airports (kind of meta!), from iron ore mine ponds (so much pink) to man-made islands that depict a pair of dolphins circling each other (‘nuff said), this map is fun to explore and might offer a story idea or two. 

Don’t call it a paywall. It’s more of a parking meter. A company called Transact has joined the fight to get people to pay for journalism. Transact’s transactions work similarly to micropayments, in which readers pay for articles from across the internet à la carte instead of a flat-rate subscription, except that users load up lumps of money at once and spend as they go. Transact calls it a “digital media debit card.” The Santa Barbara Independent, an alt-weekly newspaper in California, is one of the first to implement it. Transact joins a growing list of alternative payment schemes for journalism, including Blendle (which made a pivot away from micropayments not long ago) and Scroll (which kills ads and provides a better user experience).

Source: https://www.poynter.org/tech-tools/2020/a-new-supertool-fights-fake-images-plus-the-economists-guide-to-instagram-and-a-new-way-to-pay-for-journalism/

Disinformation is more than fake news SPONSOR: Datametrex AI Limited $DM.ca

Posted by AGORACOM-JC at 2:15 PM on Monday, February 10th, 2020

SPONSOR: Datametrex AI Limited (TSX-V: DM) A revenue generating small cap A.I. company that NATO and Canadian Defence are using to fight fake news & social media threats. The company announced three $1M contracts in Q3-2019. Click here for more info.

Disinformation is more than fake news

By Jared Cohen, for Jigsaw blog

Jigsaw’s work requires forecasting the most urgent threats facing the internet, and wherever we traveled these past years — from Macedonia to Eastern Ukraine to the Philippines to Kenya and the United States — we observed an evolution in how disinformation was being used to manipulate elections, wage war, and disrupt civil society. By disinformation we mean more than fake news. Disinformation today entails sophisticated, targeted influence campaigns, often launched by governments, with the goal of influencing societal, economic, and military events around the world. But as the tactics of disinformation were evolving, so too were the technologies used to detect and ultimately stop disinformation.

Using technology to detect manipulated images

In 2016 we began working with researchers and academics to develop new methods for using technology to detect certain aspects of disinformation campaigns. Together with Google Research and academic partners, we developed an experimental platform called Assembler to test how technology can help fact-checkers and journalists identify and analyze manipulated media.

Debunking images is a time consuming and error-prone process for fact-checkers and journalists. To verify the authenticity of images, they rely on a number of different tools and methods. For example, Bellingcat, a group of researchers and investigative journalists dedicated to in-depth fact-checking, lists more than 25 different tools and services available to verify the authenticity of photos, videos, websites, and other media. Fact-checkers and journalists need a way to stay ahead of the latest manipulation techniques and make it easier to check the authenticity of images and other assets.

Assembler is an early stage experimental platform advancing new detection technology to help fact-checkers and journalists identify manipulated media. In addition, the platform creates a space where we can collaborate with other researchers who are developing detection technology. We built it to help advance the field of science, and to help provide journalists and fact-checkers with strong signals that, combined with their expertise, can help them judge if and where an image has been manipulated. With the help of a small number of global news providers and fact checking organizations including Agence France-Presse, Animal Politico, Code for Africa, Les Décodeurs du Monde, and Rappler, we’re testing how Assembler performs in real newsrooms and updating it based on its utility and tester feedback.

How Assembler Works

Assembler brings together multiple image manipulation detectors from various academics into one tool, each one designed to spot specific types of image manipulations. Individually, these detectors can identify very specific types of manipulation — such as copy-paste or manipulations to image brightness. Assembled together, they begin to create a comprehensive assessment of whether an image has been manipulated in any way. Experts from the University of Maryland, University Federico II of Naples, and the University of California, Berkeley each contributed detection models. Assembler uses these models to show the probability of manipulation on an image.

Additionally, we built two new detectors to test on the platform. The first is the StyleGAN detector to specifically address deepfakes. This detector uses machine learning to differentiate images of real people from deepfake images produced by the StyleGAN deepfake architecture. Our second model, the ensemble model, is trained using combined signals from each of the individual detectors, allowing it to analyze an image for multiple types of manipulation simultaneously. Because the ensemble model can identify multiple image manipulation types, the results are, on average, more accurate than any individual detector.
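
The ensemble idea can be sketched in a few lines. The detector names, scores and meta-model below are illustrative assumptions rather than Assembler’s actual components: each individual detector emits a manipulation score for an image, and a simple meta-model learns how to weigh those scores together.

    # Hedged sketch of an ensemble over individual manipulation detectors.
    # Scores and labels are made-up illustrations, not Assembler's real data.
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    # Columns: copy_paste_score, brightness_score, splice_score (one row per image).
    detector_scores = np.array([
        [0.05, 0.10, 0.08],   # pristine images
        [0.12, 0.07, 0.04],
        [0.91, 0.20, 0.85],   # manipulated images
        [0.30, 0.88, 0.40],
    ])
    labels = np.array([0, 0, 1, 1])   # 0 = pristine, 1 = manipulated

    # The meta-model learns how much weight to give each detector.
    ensemble = LogisticRegression().fit(detector_scores, labels)

    new_image = np.array([[0.75, 0.15, 0.60]])
    print("probability manipulated:", ensemble.predict_proba(new_image)[0, 1])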

“These days working in multimedia forensics is extremely stimulating. On one hand, I perceive very clearly the social importance of this work: in the wrong hands, media manipulation tools can be very dangerous, they can be used to ruin the life and reputation of ordinary people, commit frauds, modify the course of elections,” said Dr. Luisa Verdoliva, Associate Professor at the Department of Industrial Engineering at the University Federico II of Naples and Visiting Scholar, Google AI. “On the other hand, the professional challenge is very exciting, new attacks based on artificial intelligence are conceived by day, and we must keep a very fast pace of innovation to face them. Collaborating in Assembler was a great opportunity to put my knowledge and my skills concretely to the service of people. In addition I came to know wonderful and very diverse people involved in this project, all strongly committed in this fight. Overall a great experience.”

The Current: Exposing the architecture of disinformation campaigns

Jigsaw is an interdisciplinary team of researchers, engineers, designers, policy experts, and creative thinkers, and we’ve long wanted to find a way to share more of our team’s work publicly, especially our research insights. That’s why I’m excited to introduce the first issue of The Current, Jigsaw’s new research publication that illuminates complex problems through an interdisciplinary approach — like our team.

Our first issue is, as you might have guessed, all about disinformation — exploring the architecture of disinformation campaigns, the tactics and technology used, and how new technology is being used to detect and stop disinformation campaigns.

One feature of this inaugural issue is the Disinformation Data Visualizer. Jigsaw visualized the research from the Atlantic Council’s DFRLab on coordinated disinformation campaigns around the world, showing the specific tactics used and the countries affected. The Visualizer is a work in progress. We’re sharing this with the wider community to enable a dialogue about the most effective and comprehensive disinformation countermeasures.

An ongoing experiment

Disinformation is a complex problem, and there isn’t any simple technological solution. The first step is to better understand the issue. The world ought to understand how disinformation campaigns are increasingly being used as a way of manipulating people’s perception of important issues. We’re committed to sharing our insights and publishing our research so other organizations can examine and scrutinize different ways to approach this issue. We’ll be sharing more updates about Jigsaw’s work in this space over the coming few months.

In the meantime we’d like to express our gratitude to our academic partners, our partners within Google, and the courageous publishers and journalists who are committed to using technology to bring people the truth, wherever it leads: Chris Bregler, Larry Davis, Alexei Efros, Hany Farid, Andrew Owens, Abhinav Shrivastava, Luisa Verdoliva, and Emerson Brookings, Graham Brookie and the Atlantic Council’s DFRLab team.

Source: https://www.stopfake.org/en/disinformation-is-more-than-fake-news/

The technology that could save us from #deepfake videos SPONSOR: Datametrex AI Limited $DM.ca

Posted by AGORACOM-JC at 4:01 PM on Tuesday, February 4th, 2020

SPONSOR: Datametrex AI Limited (TSX-V: DM) A revenue generating small cap A.I. company that NATO and Canadian Defence are using to fight fake news & social media threats. The company announced three $1M contracts in Q3-2019. Click here for more info.

The technology that could save us from deepfake videos

Israeli startup Cyabra’s technology detects expertly doctored videos as well as the bots powering fake social-media profiles.

By Brian Blum

It’s November 2020, just days before the US presidential election, and a video clip comes out showing one of the leading candidates saying something inflammatory and out of character. The public is outraged, and the race is won by the other contender.

The only problem: the video wasn’t authentic. It’s a “deepfake,” where one person’s face is superimposed on another person’s body using sophisticated artificial intelligence and a misappropriated voice is added via smart audio dubbing.

The AI firm Deeptrace uncovered 15,000 deepfake videos online in September 2019, double what was available just nine months earlier.

The technology can be used by anyone with a relatively high-end computer to push out disinformation – in politics as well as other industries where credibility is key: banking, pharmaceuticals and entertainment.

Israeli startup Cyabra is one of the pioneers in identifying deepfakes fast, so they can be taken down before they snowball online.

Cyabra cofounder and CEO Dan Brahmy. Photo: courtesy

Cyabra CEO Dan Brahmy tells ISRAEL21c that there are two ways to train a computer algorithm to analyze the authenticity of a video.

“In a supervised approach, we give the algorithm a dataset of, say, 100,000 pictures of regular faces and face swaps,” he explains. “The algorithm can catch those kinds of swaps 95 percent of the time.”
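
A rough sketch of what such a supervised setup could look like follows; the directory layout, model choice and hyperparameters are placeholder assumptions for illustration, not Cyabra’s actual pipeline, which is not public.

    # Hedged sketch of the supervised approach: fine-tune a stock image
    # classifier on labeled examples of real faces vs. face swaps.
    # Paths and hyperparameters are placeholder assumptions.
    import torch
    import torch.nn as nn
    from torch.utils.data import DataLoader
    from torchvision import datasets, models, transforms

    transform = transforms.Compose([
        transforms.Resize((224, 224)),
        transforms.ToTensor(),
        transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225]),
    ])

    # Expects data/train/real/*.jpg and data/train/swapped/*.jpg (placeholders).
    train_set = datasets.ImageFolder("data/train", transform=transform)
    loader = DataLoader(train_set, batch_size=32, shuffle=True)

    model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
    model.fc = nn.Linear(model.fc.in_features, 2)     # real vs. swapped

    optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
    criterion = nn.CrossEntropyLoss()

    model.train()
    for epoch in range(3):
        for images, targets in loader:
            optimizer.zero_grad()
            loss = criterion(model(images), targets)
            loss.backward()
            optimizer.step()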

The second methodology is an “unsupervised approach” inspired by a surprising field: agriculture.

“If you fly a drone over a field of corn and you want to know which crop is ready and which is not, the analysis will look at different colors or the way the wind is blowing,” Brahmy explains. “Is the corn turning towards its right side? Is it a bit more orange than other parts of the field? We look for those small patterns in videos and teach the algorithm to spot deepfakes.”

Cyabra’s approach is more sophisticated than traditional methods of ferreting out deepfakes – looking, for example, at metadata such as where the picture was taken, what kind of camera was used and on what date it was shot.

“Our algorithm might not know the exact name of the manipulation used, but it will know that the video is not real,” Brahmy says.

Only a computer program can spot telltale signs the human eye would miss, such as eyeglasses that don’t fit perfectly or lip movements not perfectly synched with movements of the chin and Adam’s apple, Brahmy tells ISRAEL21c.

Staying a few steps ahead

Cyabra’s technology detects inauthentic nuances that the human eye would miss. Photo: courtesy

Deepfake detection technology must continually evolve.

In the early days – all the way back in 2017, when deepfakes first started appearing – fake faces didn’t blink normally. But no sooner had researchers alerted the public to watch for abnormal eye movements than deepfakes suddenly started blinking normally.

“To have a durable edge, you need to be a year or two ahead, to make sure no one can re-do what you just did,” Brahmy says.

That’s important both in catching the deepfakers and for a company like Cyabra to stay ahead of the competition.

Cyabra’s edge is that two of its four cofounders came out of IDF intelligence divisions where they looked for ways to foil terrorist groups trying to create fake profiles to connect with Israelis.

In addition, former Mossad deputy director Ram Ben Barak is on the company’s board of directors.

Fake social-media profiles

Cyabra’s deepfake detection technology was only released in the last month. For most of the past two years, since the company was founded, it has been focused on spotting fake social-media profiles.

Cyabra cofounder and COO Yossef Daar. Photo: courtesy

Brahmy’s cofounder, Yossef Daar, claims there are 140 million fake accounts on Facebook, 38 million on LinkedIn, and 48 million bots on Twitter.

These, too, are not easy to detect.

Researchers from the University of Iowa discovered that some 100 million Facebook “likes” that appeared between 2015 and 2016 were created by spammers using around a million fake profiles.

Cyabra’s machine-learning algorithms run some 300 unique parameters to determine profile authenticity. A three-day-old profile with 700 friends whose user has no footprint outside of Facebook raises a red flag, for example.

In the 2016 U.S. elections, fake profiles on social media were the biggest problem – deepfakes didn’t exist yet.

By now, though, you’ve probably seen a few deepfakes yourself: Facebook CEO Mark Zuckerberg bragging about having “total control of billions of people’s stolen data,” former US President Obama using a profanity to describe President Trump or Jon Snow apologizing for the writing in season 8 of “Game of Thrones.”

Brahmy says the leadup to the 2020 election season is the right time to offer Cyabra’s solution.

Investors agree. Cyabra has raised $3 million from TAU Ventures and $1 million from the Israel Innovation Authority. The 15-person company started in The Bridge, a seven-month Tel Aviv-based accelerator sponsored by Coca-Cola, Turner and Mercedes. Now they’re based at TAU Ventures with a small presence in the United States as well.

Public and private sector clients

Cyabra’s clients prefer not to be named, although Brahmy did tell ISRAEL21c that 50% of its clients are in the public sector – governmental organizations or agencies – and the other half are “in the world of sensitive brands: consumer product, food and beverage, media conglomerates.”

“Imagine you’re in the business of providing unbiased information and suddenly 500 bots send you a message with a falsified picture and you’re ready to publish it. We want to be there five seconds before you pull the trigger, to let you know it’s false,” says Brahmy.

This heatmap shows the level of doctoring done to a picture or frame in a video. Emphasized areas represent more heavily forged pieces of content. Image courtesy of Cyabra

Cyabra leaves the task of fact-checking content for “fake news” to other companies such as NewsGuard and FactMata. (Neither company is Israeli.)

There are also other companies dealing with deepfakes and fake profiles. But, Brahmy says, “we’re the only one doing both, with the technical capability to detect deepfakes along with cross-channel analysis to detect the bots [powering fake social media profiles], all under one roof.”

Facebook announced in January 2020 that it is banning deepfakes intended to mislead rather than entertain. But can Facebook really get ahead of all the deepfakes out there – and those to come?

If Cyabra and companies like it succeed, the next time you see a politician or celebrity saying something you find reprehensible, it might just be true.

Source: https://www.israel21c.org/the-technology-that-could-save-us-from-deepfake-videos/

#BioCatch predicts 10 #cybercrime trends for 2020 SPONSOR: Datametrex AI Limited $DM.ca

Posted by AGORACOM-JC at 1:57 PM on Friday, January 31st, 2020

SPONSOR: Datametrex AI Limited (TSX-V: DM) A revenue generating small cap A.I. company that NATO and Canadian Defence are using to fight fake news & social media threats. The company announced three $1M contracts in Q3-2019. Click here for more info.

BioCatch predicts 10 cybercrime trends for 2020

  • Deep fake technology will be used for identity theft: Deep fake technology that spoofs the human voice is already being used to attack call centers, or in business email compromise scams.
  • In 2020, we should see the early signs of deep fake being used to defeat face recognition controls, including those using state-of-the-art liveness tests.
  • The industry will have to come up with silent, behind-the-scenes controls that can offset the vulnerabilities of overt biometric authentication.

BioCatch, a leader in behavioral biometrics, today announced its Cybercrime and Fraud Predictions for 2020 that show fraudsters are keeping pace with the digital transformation and are a growing threat to businesses around the world. These are the 10 biggest cybercrime and fraud trends for the New Year, according to BioCatch Founder and Chief Cyber Officer Uri Rivner.

Deep fake technology will be used for identity theft: Deep fake technology that spoofs the human voice is already being used to attack call centers, or in business email compromise scams. In 2020, we should see the early signs of deep fake being used to defeat face recognition controls, including those using state-of-the-art liveness tests. The industry will have to come up with silent, behind-the-scenes controls that can offset the vulnerabilities of overt biometric authentication.

LiFi networks will be targeted by hackers: There’s a new, promising high-speed Internet technology in town, and it’s visible light based rather than radio wave based. While reaching full commercial use is still a few years away, and the tech is limited to proximity use given physical limitations on light movement, a network based on LiFi should be as hackable as WiFi and might be more prone to physical interferences. We should see the first demonstrations of LiFi hacks in the new year.

UK identity databases will come under attack by fraudsters: Multiple factors will drive criminals who target the UK financial sector to boost their Account Opening Fraud activities: the success banks have in fighting traditional fraud, the introduction of tighter controls over social engineering, and the coming implementation of PSD2 all make account takeover harder for them. To facilitate this expected boost, hackers will focus their attention on UK identity databases, attempting to get multiple data points on each UK citizen in a similar fashion to what has been the case in the US over the last few years. In the US, synthetic identity fraud is the fastest growing type of financial crime, with an average charge-off balance per instance of $15,000, according to a Federal Reserve study.

FinTech companies will be fraudsters’ next big target: While banks and credit card issuers in the US have been stepping up their defenses against account opening and account takeover fraud, the fintech sector, which has largely escaped the wrath of fraudsters, will begin to see a sharp increase in online fraud. Because they are less heavily regulated, fintech companies are more agile and able to introduce new functionalities. However, the lack of proper defenses and the fact that they have no access to the banking sector’s fraud consortium databases will make them far more exposed.

Chatbot and voice assistance payment fraud will rise: Many financial institutions are beginning to deploy AI-based customer assistance tools, such as chatbots and voice based interfaces, to broaden their offerings beyond traditional online and mobile channels. As soon as those new channels begin to offer full functionality – say, move money from a user’s account – they’ll be targeted by criminals and will need to be protected against account takeover. Researchers have already proven that lasers can be used to spoof voice commands in physical voice assistance devices, and it would be even easier to attack their virtual equivalents.

eComm fraud AI models will become half-blinded: One of the unspoken secrets of AI is that it’s only as good as the tagged data that is fed to it. With the increase of account opening fraud, a huge amount of eComm fraud is going to come not from compromised credit cards, but rather new credit and debit cards that are opened online using identity theft. In these cases, there are no chargebacks, as no real user will call to complain. The result is that AI models will become half-blinded. The criminal patterns that AI models use to pinpoint fraud will be suppressed by genuine confirmations after account opening, as criminals use the fraudulent account to make purchases, just as a genuine user would.

AI will help prevent subscription services fraud: The big content streaming companies have formed an alliance designed to fight password sharing and criminal offerings of compromised passwords. Unfortunately, device-based and location-based controls are no longer holding as technologies to spoof devices and geo-location are readily available. New technologies such as behavioral biometrics and unsupervised anomaly detection AI will prove to fare much better against misuse of subscription services.  

Zelle fraud levels will surge: As many regional banks and credit unions are adding Zelle P2P capabilities to their online and mobile banking, criminals are beginning to single out the US as a new land of opportunities. Well-proven social engineering techniques are already in use, and attacks will escalate and quickly adapt as new controls are added – with the result of real users suffering from higher friction while fraud levels surge.

Selfie biometric data will be the new dark web money maker: There’s already a vibrant dark web trade in personalized biometric data, and that will continue to grow in 2020. More websites and applications are turning to selfie-based verification and more online account opening flows are moving from obsolete controls, such as Knowledge Based Authentication, to more modern controls, like selfie-document matching. Some criminals will focus on collecting data from open sources and social media. Others will target – and already have targeted – users in phishing campaigns designed to steal not just static credentials, but also selfies and videos of the user’s face.

Another threat is that advanced malware capabilities, which are currently in the hands of state sponsored actors and other high-end players, will find their way to criminal hands and be used to break into mobile device authentication.

Money mules will become an endangered species: In an era of easy account opening fraud, why spend resources and take unnecessary risks by interacting with mules? Money mules won’t go away in 2020, but criminals engaged in cashing out compromised bank accounts will begin shifting away from classic recruitment options and start using falsely opened bank accounts instead. The ease of fraudulent account opening will also help other crimes, such as money laundering and impersonating the receiving end of P2P money transfers like Zelle.

Mr. Rivner says: “At the core of our cybercrime problem is a lack of effective methods for establishing and verifying digital identity in the constantly evolving digital ecosystem. New solutions are addressing the challenges, replacing outdated approaches that rely on static information with much more effective, multi-factor tools. Organizations that are fastest to act with new, powerful, cutting edge fraud prevention tools are the ones that will be least affected by fraudsters in 2020 and beyond.”

Source: https://www.planetbiometrics.com/article-details/i/10769/desc/biocatch-predicts-10-cybercrime-trends-for-2020/

Google Announces $1 Million Grant to Help Fight Fake News SPONSOR: Datametrex AI Limited $DM.ca

Posted by AGORACOM-JC at 2:00 PM on Thursday, January 30th, 2020

SPONSOR: Datametrex AI Limited (TSX-V: DM) A revenue generating small cap A.I. company that NATO and Canadian Defence are using to fight fake news & social media threats. The company announced three $1M contracts in Q3-2019. Click here for more info.

Google Announces $1 Million Grant to Help Fight Fake News in India

  • Technology giant Google on Wednesday, 29 January, announced a USD 1 million grant to promote news literacy among Indians.

The money will be given to Internews, a global non-profit, which will select a team of 250 journalists, fact checkers, academics and NGO workers for the project, a statement said.

The announcement, part of a USD 10 million commitment worldwide to media literacy, comes at a time when news publishers, especially on the digital front, have been found to have indulged in spreading misinformation.

Google said a curriculum will be developed by a team of global and local experts, who will roll out the project in seven Indian languages.

“The local leaders will then roll out the training to new internet users in non-metro cities in India, enabling them to better navigate the internet and assess the information they find,” the statement said.

With an eye to curb misinformation, Google News Initiative (GNI) India Training Network — a group of 240 senior Indian reporters and journalism educators — has been working to counteract disinformation in their newsrooms and beyond since last year.

GNI has given verification training to more than 15,000 journalists and students from more than 875 news organisations in 10 Indian languages, using a “train-the-trainer” approach over the past year, it said.

Source: https://www.thequint.com/news/webqoof/google-announces-dollar1-million-grant-to-help-fight-fake-news-in-india