
Posts Tagged ‘fake news filter’

Deep Dive: Fake Profiles Threaten Social Media Networks – SPONSOR: Datametrex AI Limited $DM.ca

Posted by AGORACOM-JC at 11:45 AM on Thursday, February 13th, 2020

SPONSOR: Datametrex AI Limited (TSX-V: DM) A revenue-generating small-cap A.I. company that NATO and Canadian Defence are using to fight fake news & social media threats. The company announced three $1M contracts in Q3-2019. Click here for more info.

Deep Dive: Fake Profiles Threaten Social Media Networks

  • Fake profiles run rampant on sites such as Facebook, Twitter and YouTube, accounting for up to 25 percent of all new accounts, according to some estimates
  • The damage these fake profiles inflict is incalculable, resulting in billions of dollars lost or even altering the course of world politics.

By PYMNTS

Social media has become an integral part of everyday life, with a recent study finding that there were approximately 2.77 billion social media users around the world as of 2019. This number is projected to grow to more than 3 billion by the end of 2021 — almost half of the global population.

A good portion of these users is not real, however. Fake profiles run rampant on sites such as Facebook, Twitter and YouTube, accounting for up to 25 percent of all new accounts, according to some estimates. The damage these fake profiles inflict is incalculable, resulting in billions of dollars lost or even altering the course of world politics. Social media networks will need to step up their digital authentication games if they want to bring these fraudsters to heel.

How Fake Profiles Damage Social Media

Illegitimate social media profiles are strongly correlated with cybercrime, with researchers finding that bot-run fake profiles account for 75 percent of social media cyberattacks. Some of these crimes involve stealing personal information, like passwords and payment data, while others spread social propaganda or disseminate spam.

Social media networks are often negligent when removing fake profiles, too. Researchers at the NATO Strategic Communications Centre of Excellence conducted a study last year that tested the efficacy of Facebook’s, Google’s and Twitter’s fake profile detection protocols. The research team purchased 3,500 comments, 25,000 likes, 20,000 video views and 5,100 fake followers and found that 80 percent of their fake engagements were still online after one month. Approximately 95 percent of the remaining profiles were still online three weeks after the NATO team announced its findings to the public.

One might think that such an effort would cost a significant amount of time and money, but the study was relatively inexpensive. The researchers only spent €300 ($330) to purchase the comments, likes and followers — a Facebook ad of equivalent value would likely receive just 1,500 clicks. This makes fake profiles much more appealing to unscrupulous individuals and companies.

Fake social media profiles’ impacts became evident in the U.S. in 2016 when Russian hackers created thousands of fake Facebook and Twitter accounts to influence the former’s presidential election. These bots posted thousands of messages and fake news articles attacking candidate Hillary Clinton and sowing divisiveness within the Democratic Party, often promoting information from the Democratic National Committee’s (DNC) email hack.

Social sites often listed hashtags like #HillaryDown and #WarAgainstDemocrats as trending, inadvertently giving these bots a loudspeaker and letting their messages punch far above their weights. Special Counsel Robert Mueller’s 2018 investigation found that these hacker groups had multimillion-dollar budgets — a far cry from then-candidate Donald Trump’s characterization of the DNC hackers as “somebody sitting on their bed that weighs 400 pounds.”

Fake profiles’ threats are self-evident, but the solution to stopping them is not nearly as clear.

How Social Media Sites Can Fight Bots

Social media websites are reticent to disclose exactly how they identify and delete fake profiles — if fraudsters know too much about their prevention techniques, they will be able to circumvent them. Many brands, companies, advertisers and even congressional panels have demanded more information about how social media firms are working to stop the spread of fake profiles, however.

Third-party developers have also introduced solutions to curb the spread of illegitimate accounts, with many utilizing artificial intelligence (AI) and machine learning (ML). Thousands of social media profiles are created every day, making human analysis of each new registration impossible. AI and ML could reduce analytics teams’ burdens by employing pattern recognition to determine the details that all true profiles share, such as the frequency of their posts or what pages they tend to like. Profiles that do not adhere to this pattern could then be flagged for human review.
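To make the idea concrete, here is a minimal sketch (in Python, with made-up feature names and thresholds) of how pattern recognition could flag out-of-pattern registrations for human review. It is an illustration of the approach described above, not any platform's actual system.

```python
# Illustrative sketch only: an IsolationForest learns what "normal" profile
# features look like, and new registrations that deviate are flagged for
# human review. Feature names and values are invented for the example.
import numpy as np
from sklearn.ensemble import IsolationForest

# Each row: [posts_per_day, pages_liked, account_age_days, friend_count]
known_legitimate = np.array([
    [2.0, 35, 900, 210],
    [0.5, 12, 1500, 340],
    [4.0, 80, 400, 150],
    [1.2, 20, 2200, 500],
    [3.1, 50, 700, 260],
])

detector = IsolationForest(contamination=0.1, random_state=0).fit(known_legitimate)

new_registrations = np.array([
    [1.5, 25, 3, 60],      # plausible brand-new user
    [150.0, 0, 2, 4800],   # posts constantly, likes nothing, huge friend list
])

# predict() returns -1 for outliers, 1 for inliers
for features, label in zip(new_registrations, detector.predict(new_registrations)):
    verdict = "flag for human review" if label == -1 else "looks normal"
    print(features, "->", verdict)
```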

Social media networks could also utilize facial recognition biometrics to authenticate new accounts, requiring users to submit selfies or live smartphone videos for review to determine if their profiles are legitimate. Many new smartphones, including Apple’s iPhone 11, come with this technology right out of the box, meaning consumers are already familiar with it.

Facial recognition biometrics have fallen afoul of privacy advocates, however. Facebook has long been using facial recognition to identify its users in photographs — a practice that many condemned as privacy infringement. The website shifted this system to an opt-in model last year to appease these privacy advocates, meaning it would likely be reluctant to adopt facial biometrics during onboarding.

There is no obvious authentication solution that can completely prevent fake profiles. Social media sites, advertisers and governments all agree that fake profiles need to be stopped; the next step is agreeing on how to do it.

Source: https://www.pymnts.com/authentication/2020/fake-profiles-threaten-social-media-networks/

Fake news, deep fakes and fraud detection 2020 – addressing an epidemic – SPONSOR: Datametrex AI Limited $DM.ca

Posted by AGORACOM-JC at 11:15 AM on Wednesday, February 12th, 2020

SPONSOR: Datametrex AI Limited (TSX-V: DM) A revenue-generating small-cap A.I. company that NATO and Canadian Defence are using to fight fake news & social media threats. The company announced three $1M contracts in Q3-2019. Click here for more info.

Fake news, deep fakes and fraud detection 2020 – addressing an epidemic

  • Online giants and regulators alike have taken up the fight against fake news and deep fakes. Simon Marchand says the answer has been on the tips of our tongues all along. 

Since 2016 when the Macquarie Dictionary named ‘fake news’ as its word of the year, the spread of misinformation online has only increased. Technology has become more sophisticated, giving rise to the production of ‘deep fakes’ and synthetic voices.

It’s no wonder the Analysis and Policy Observatory’s (APO) 2019 ‘Digital News Report’ for Australia found that nearly two-thirds (62%) of the nation is concerned about what is real or fake on the internet ― above the global average.

The lack of consumer confidence in online content is a major threat to marketers, with their brands’ success firmly embedded in establishing a trustworthy and authentic reputation – consumers are far more likely to purchase from, stay loyal to and advocate for brands they trust.

Introduce deep fakes into the mix and you’re looking at a far more sophisticated threat to brand reputation that demands an ultra-modern response. Big tech organisations, government bodies and social media platforms are fighting back against fake news with new policies, technology, litigation and more. However, there is an existing, under-utilised tool that could have a major impact if employed by marketers – voice biometrics technology.

Deep fakes – how tech is propelling the issue forward

Deep fakes are used in the context of broad propaganda campaigns, intended to influence the opinion of groups of individuals. On a large scale, this can potentially have a dramatic impact, such as heavily influencing the outcome of a political election. Consumers are continuously warned to be sceptical and afraid; we’re in the middle of a fake news epidemic. The technology used to create this content has become more realistic and accessible, so it’s easy to see why. Effectively, anyone with a computer, internet connectivity and a bit of free time could produce a deep fake video or audio file. As AI becomes more sophisticated, it has become increasingly hard to discern what is real or fake.

This is compounded by the increasing reliance on social media for news―the APO’s report found almost half of generation Z (47%) and one-third of generation Y (33%) use Facebook, YouTube, Twitter and other social channels as their main source of news. Blind trust in social media platforms is enabling fake news to spread to masses in record time.

The rise of social media and influencer marketing in recent years has put brands in an extremely vulnerable position. A convincing deep fake of a company’s CEO, brand’s celebrity or an influencer ambassador can be created with ease. If their visual image were manipulated to depict them or even the brand itself in a way that is false or offensive, this would pose a serious threat to modern-day marketers.

The threat-level heightens when you consider that debunking fake news takes time, and content published for that purpose typically receives less coverage than the original, false article. As a result, misinformation can have lasting effects, even once discredited – it is a phenomenon researchers across the globe are investigating.

How social media and tech companies are fighting back

As AI becomes increasingly refined, big tech is racing to keep up. Twitter has announced it will add labels to or remove tweets carrying manipulated content, while Facebook has partnered with Microsoft and academics to create the Deepfake Detection Challenge, which seeks to better detect when AI has been used to alter video.

Google recently released more than 3,000 visual deep fakes to help inform a FaceForensics benchmark that is combating manipulated videos. This follows its earlier release of a synthetic speech dataset.

These solutions are a work in progress, however. Voice biometrics technology – existing fraud detection tech – could have a major impact in marketing.

Voice biometrics to combat fake news

Banks, insurance providers and governments across the globe are already using voice biometrics as an easy and secure way to authenticate their customers, combat fraudulent activity and improve customer satisfaction.

A voiceprint includes more than 1000 unique physical and behavioural characteristics of a person, such as vocal tract length, nasal passage shape, pitch, cadence, accent and more. In fact, research shows it’s as unique to an individual as a fingerprint.

Where behaviours can be easily mimicked, physical voice characteristics cannot, preventing impersonators from ‘tricking’ the system. Voice biometrics could be monumental in verifying whether a video or audio recording is legitimate, analysing whether the voice actually belongs to the person it is attributed to or has been manipulated, simulated or synthesised to create fake news stories.
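As a rough illustration of how such a check might work, the sketch below compares a voiceprint embedding extracted from a suspect recording with the enrolled voiceprint of the claimed speaker. The extraction step is a stub standing in for a real speaker-embedding model, and the similarity threshold is an assumption.

```python
# Minimal sketch: compare a "voiceprint" embedding from the suspect audio with
# the enrolled voiceprint of the claimed speaker. extract_voiceprint() is a
# placeholder; production systems derive the embedding from hundreds of
# physical and behavioural voice characteristics.
import numpy as np

def extract_voiceprint(audio_samples: np.ndarray) -> np.ndarray:
    # Placeholder "embedding": a real system would use a trained speaker model.
    return audio_samples[:4] / (np.linalg.norm(audio_samples[:4]) + 1e-9)

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-9))

def is_probably_authentic(enrolled: np.ndarray, suspect_audio: np.ndarray,
                          threshold: float = 0.85) -> bool:
    # Assumed threshold: below it, the voice likely is not the enrolled speaker.
    return cosine_similarity(enrolled, extract_voiceprint(suspect_audio)) >= threshold

enrolled_print = np.array([0.12, 0.80, 0.33, 0.50])
suspect_audio = np.array([0.11, 0.78, 0.35, 0.47, 0.02, 0.01])

print("authentic" if is_probably_authentic(enrolled_print, suspect_audio)
      else "possible deep fake")
```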

The accuracy and speed with which voice biometrics can authenticate a person’s identity mean that harmful deep fakes can be debunked with certainty – quickly mitigating the threat to a brand’s reputation.

Biometrics represent a new era of identity security, and given the dramatic influence fake news can have, combating deep fake videos and synthetic audio with voice biometrics is a natural progression for the technology.

Source: https://www.marketingmag.com.au/hubs-c/fake-news-deep-fakes-and-fraud-detection-2020-addressing-an-epidemic/

A new supertool fights fake images – SPONSOR: Datametrex AI Limited $DM.ca

Posted by AGORACOM-JC at 1:45 PM on Tuesday, February 11th, 2020

SPONSOR: Datametrex AI Limited (TSX-V: DM) A revenue-generating small-cap A.I. company that NATO and Canadian Defence are using to fight fake news & social media threats. The company announced three $1M contracts in Q3-2019. Click here for more info.

A new supertool fights fake images

A new supertool fights fake images, plus the Economist’s guide to Instagram and a new way to pay for journalism

  • How can technology strengthen fact-checking? Jigsaw, a nonprofit division of Alphabet, Google’s parent company, asks and then answers its own question with the announcement of Assembler, an experimental detection platform that aims to help fact checkers and journalists identify manipulated images.

The platform pulls together several existing technologies — for example, one that spots copy-and-pastes within images and another that detects image brightness manipulations — into one supertool. Right now, Jigsaw is working with a group of global newsrooms and fact-checkers to test Assembler’s viability in real situations. That means, unfortunately, it isn’t available for the rest of us yet.

This new tool can show you what journalists are writing about on a large scale. MuckRack Trends (log-in required) is a lot like Google Trends, which shows you what people are searching for on Google, but it’s specific to news articles. You can use it, for example, to see how many news articles have been written about Oscar-winning director Bong Joon-ho over the past week (5,733, as I write this) and compare it to someone like, say, Martin Scorsese (5,612). There are SO MANY great uses for this. I wrote about some of them here.

The Economist makes charts … for Instagram. The visual social media platform is a natural home for informative and interesting infographics from one of the world’s most prestigious media brands. Charts like “Who is more liberal: Uncle Joe or Mayor Pete?” and “Interest in veganism is surging” perform well between the organization’s beautiful photos and illustrations. The Economist offers a few tips for others willing to try putting info on Insta, including: Keep colors consistent so that fast-scrollers know who they’re looking at, rethink charts and graphs to fit into a small space and cater to your audience, which is probably younger on Instagram.

If you need a new online publishing system, start here. Content management systems, or CMSs, are the engines that run our online journalism. A good engine works without you having to think much about it. A bad one — and I’m just making things up here — takes forever to load, formats text in unexplainable ways and occasionally deletes your stories outright. Together with News Catalyst, Poynter put together a guide on how to get a new CMS, along with a look at five of our favorite modern CMS choices and live demo opportunities for each.

SPONSORED: It takes a village to publish a story. That’s why Trint’s collaborative speech-to-text platform lets newsroom teams work on the same transcript at the same time. Trint’s Pro Team plan means editing, verification and exporting happen simultaneously — stories are published in near real time and with 100% accurate information. And with Trint’s Workspaces, you can choose who has access to what data through custom permissions. Journalists, editors and producers from different teams instantly get access to the transcripts they need as soon as they’re uploaded. Start your Pro Team trial today.

Here’s a map that shows stunning satellite images of locations across the globe. From school bus assembly plants (so much yellow) to airports (kind of meta!), from iron ore mine ponds (so much pink) to man-made islands that depict a pair of dolphins circling each other (‘nuff said), this map is fun to explore and might offer a story idea or two. 

Don’t call it a paywall. It’s more of a parking meter. A company called Transact has joined the fight to get people to pay for journalism. Transact’s transactions work similarly to micropayments, in which readers pay for articles from across the internet à la carte instead of a flat-rate subscription, except that users load up lumps of money at once and spend as they go. Transact calls it a “digital media debit card.” The Santa Barbara Independent, an alt-weekly newspaper in California, is one of the first to implement it. Transact joins a growing list of alternative payment schemes for journalism, including Blendle (which made a pivot away from micropayments not long ago) and Scroll (which kills ads and provides a better user experience).

Source: https://www.poynter.org/tech-tools/2020/a-new-supertool-fights-fake-images-plus-the-economists-guide-to-instagram-and-a-new-way-to-pay-for-journalism/

Disinformation is more than fake news SPONSOR: Datametrex AI Limited $DM.ca

Posted by AGORACOM-JC at 2:15 PM on Monday, February 10th, 2020

SPONSOR: Datametrex AI Limited (TSX-V: DM) A revenue-generating small-cap A.I. company that NATO and Canadian Defence are using to fight fake news & social media threats. The company announced three $1M contracts in Q3-2019. Click here for more info.

Disinformation is more than fake news

By Jared Cohen, for Jigsaw blog

Jigsaw’s work requires forecasting the most urgent threats facing the internet, and wherever we traveled these past years — from Macedonia to Eastern Ukraine to the Philippines to Kenya and the United States — we observed an evolution in how disinformation was being used to manipulate elections, wage war, and disrupt civil society. By disinformation we mean more than fake news. Disinformation today entails sophisticated, targeted influence campaigns, often launched by governments, with the goal of influencing societal, economic, and military events around the world. But as the tactics of disinformation were evolving, so too were the technologies used to detect and ultimately stop disinformation.

Using technology to detect manipulated images

In 2016 we began working with researchers and academics to develop new methods for using technology to detect certain aspects of disinformation campaigns. Together with Google Research and academic partners, we developed an experimental platform called Assembler to test how technology can help fact-checkers and journalists identify and analyze manipulated media.

Debunking images is a time consuming and error-prone process for fact-checkers and journalists. To verify the authenticity of images, they rely on a number of different tools and methods. For example, Bellingcat, a group of researchers and investigative journalists dedicated to in-depth fact-checking, lists more than 25 different tools and services available to verify the authenticity of photos, videos, websites, and other media. Fact-checkers and journalists need a way to stay ahead of the latest manipulation techniques and make it easier to check the authenticity of images and other assets.

Assembler is an early stage experimental platform advancing new detection technology to help fact-checkers and journalists identify manipulated media. In addition, the platform creates a space where we can collaborate with other researchers who are developing detection technology. We built it to help advance the field of science, and to help provide journalists and fact-checkers with strong signals that, combined with their expertise, can help them judge if and where an image has been manipulated. With the help of a small number of global news providers and fact checking organizations including Agence France-Presse, Animal Politico, Code for Africa, Les Décodeurs du Monde, and Rappler, we’re testing how Assembler performs in real newsrooms and updating it based on its utility and tester feedback.

How Assembler Works

Assembler brings together multiple image manipulation detectors from various academics into one tool, each one designed to spot specific types of image manipulations. Individually, these detectors can identify very specific types of manipulation — such as copy-paste or manipulations to image brightness. Assembled together, they begin to create a comprehensive assessment of whether an image has been manipulated in any way. Experts from the University of Maryland, University Federico II of Naples, and the University of California, Berkeley each contributed detection models. Assembler uses these models to show the probability of manipulation on an image.

Additionally, we built two new detectors to test on the platform. The first is the StyleGAN detector to specifically address deepfakes. This detector uses machine learning to differentiate between images of real people from deepfake images produced by the StyleGAN deepfake architecture. Our second model, the ensemble model, is trained using combined signals from each of the individual detectors, allowing it to analyze an image for multiple types of manipulation simultaneously. Because the ensemble model can identify multiple image manipulation types, the results are, on average, more accurate than any individual detector.
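The ensemble idea can be sketched in a few lines. The following is illustrative only, with stubbed detectors and a crude max-score combiner standing in for Assembler's trained models.

```python
# Illustrative only: each detector returns the probability that an image was
# manipulated in one specific way; a simple combiner turns those signals into
# an overall assessment. Detector logic and scores are placeholders.
from typing import Callable, Dict

Detector = Callable[[bytes], float]  # image bytes -> probability of manipulation

def copy_paste_detector(image: bytes) -> float:
    return 0.10  # stub score

def brightness_detector(image: bytes) -> float:
    return 0.85  # stub score

def stylegan_detector(image: bytes) -> float:
    return 0.05  # stub score

DETECTORS: Dict[str, Detector] = {
    "copy_paste": copy_paste_detector,
    "brightness": brightness_detector,
    "stylegan": stylegan_detector,
}

def assess(image: bytes) -> Dict[str, float]:
    scores = {name: fn(image) for name, fn in DETECTORS.items()}
    # Crude combiner: treat the image as suspicious as its most confident
    # detector. The real ensemble model described above is trained on the
    # detectors' combined signals rather than taking a simple maximum.
    scores["overall"] = max(scores.values())
    return scores

print(assess(b"...image bytes..."))
```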

“These days working in multimedia forensics is extremely stimulating. On one hand, I perceive very clearly the social importance of this work: in the wrong hands, media manipulation tools can be very dangerous, they can be used to ruin the life and reputation of ordinary people, commit frauds, modify the course of elections,” said Dr. Luisa Verdoliva, Associate Professor at the Department of Industrial Engineering at the University Federico II of Naples and Visiting Scholar, Google AI. “On the other hand, the professional challenge is very exciting, new attacks based on artificial intelligence are conceived by day, and we must keep a very fast pace of innovation to face them. Collaborating in Assembler was a great opportunity to put my knowledge and my skills concretely to the service of people. In addition I came to know wonderful and very diverse people involved in this project, all strongly committed in this fight. Overall a great experience.”

The Current: Exposing the architecture of disinformation campaigns

Jigsaw is an interdisciplinary team of researchers, engineers, designers, policy experts, and creative thinkers, and we’ve long wanted to find a way to share more of our team’s work publicly, especially our research insights. That’s why I’m excited to introduce the first issue of The Current, Jigsaw’s new research publication that illuminates complex problems through an interdisciplinary approach — like our team.

Our first issue is, as you might have guessed, all about disinformation — exploring the architecture of disinformation campaigns, the tactics and technology used, and how new technology is being used to detect and stop disinformation campaigns.

One feature of this inaugural issue is the Disinformation Data Visualizer, which maps the Atlantic Council’s DFRLab research on coordinated disinformation campaigns around the world, showing the specific tactics used and the countries affected. The Visualizer is a work in progress. We’re sharing this with the wider community to enable a dialogue about the most effective and comprehensive disinformation countermeasures.

An ongoing experiment

Disinformation is a complex problem, and there isn’t any simple technological solution. The first step is to better understand the issue. The world ought to understand how disinformation campaigns are increasingly being used as a way of manipulating people’s perception of important issues. We’re committed to sharing our insights and publishing our research so other organizations can examine and scrutinize different ways to approach this issue. We’ll be sharing more updates about Jigsaw’s work in this space over the coming few months.

In the meantime we’d like to express our gratitude to our academic partners, our partners within Google, and the courageous publishers and journalists who are committed to using technology to bring people the truth, wherever it leads: Chris Bregler, Larry Davis, Alexei Efros, Hany Farid, Andrew Owens, Abhinav Shrivastava, Luisa Verdoliva, and Emerson Brookings, Graham Brookie and the Atlantic Council’s DFRLab team.

Source: https://www.stopfake.org/en/disinformation-is-more-than-fake-news/

The technology that could save us from #deepfake videos SPONSOR: Datametrex AI Limited $DM.ca

Posted by AGORACOM-JC at 4:01 PM on Tuesday, February 4th, 2020

SPONSOR: Datametrex AI Limited (TSX-V: DM) A revenue-generating small-cap A.I. company that NATO and Canadian Defence are using to fight fake news & social media threats. The company announced three $1M contracts in Q3-2019. Click here for more info.

The technology that could save us from deepfake videos

Israeli startup Cyabra’s technology detects expertly doctored videos as well as the bots powering fake social-media profiles.

By Brian Blum

It’s November 2020, just days before the US presidential election, and a video clip comes out showing one of the leading candidates saying something inflammatory and out of character. The public is outraged, and the race is won by the other contender.

The only problem: the video wasn’t authentic. It’s a “deepfake,” where one person’s face is superimposed on another person’s body using sophisticated artificial intelligence and a misappropriated voice is added via smart audio dubbing.

The AI firm Deeptrace uncovered 15,000 deepfake videos online in September 2019, double what was available just nine months earlier.

The technology can be used by anyone with a relatively high-end computer to push out disinformation – in politics as well as other industries where credibility is key: banking, pharmaceuticals and entertainment.

Israeli startup Cyabra is one of the pioneers in identifying deepfakes fast, so they can be taken down before they snowball online.

Cyabra cofounder and CEO Dan Brahmy. Photo: courtesy

Cyabra CEO Dan Brahmy tells ISRAEL21c that there are two ways to train a computer algorithm to analyze the authenticity of a video.

“In a supervised approach, we give the algorithm a dataset of, say, 100,000 pictures of regular faces and face swaps,” he explains. “The algorithm can catch those kinds of swaps 95 percent of the time.”

The second methodology is an “unsupervised approach” inspired by a surprising field: agriculture.

“If you fly a drone over a field of corn and you want to know which crop is ready and which is not, the analysis will look at different colors or the way the wind is blowing,” Brahmy explains. “Is the corn turning towards its right side? Is it a bit more orange than other parts of the field? We look for those small patterns in videos and teach the algorithm to spot deepfakes.”
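As a hedged sketch of the supervised approach Brahmy describes (not Cyabra's actual model), the snippet below trains a binary classifier on labelled feature vectors standing in for genuine faces and face swaps, then scores a new frame. Real systems learn such features from large labelled image datasets.

```python
# Illustrative only: a binary classifier trained on labelled "real" vs "swap"
# feature vectors. The random vectors stand in for features a real detector
# would extract from frames (blending artefacts, lighting, lip/eye dynamics).
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
real_faces = rng.normal(loc=0.0, scale=1.0, size=(200, 8))   # label 0
face_swaps = rng.normal(loc=0.8, scale=1.0, size=(200, 8))   # label 1

X = np.vstack([real_faces, face_swaps])
y = np.array([0] * 200 + [1] * 200)

clf = LogisticRegression(max_iter=1000).fit(X, y)

new_frame_features = rng.normal(loc=0.8, scale=1.0, size=(1, 8))
prob_swap = clf.predict_proba(new_frame_features)[0, 1]
print(f"probability this frame is a face swap: {prob_swap:.2f}")
```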

Cyabra’s approach is more sophisticated than traditional methods of ferreting out deepfakes, such as looking at metadata to see where a picture was taken, what kind of camera was used and on what date it was shot.

“Our algorithm might not know the exact name of the manipulation used, but it will know that the video is not real,” Brahmy says.

Only a computer program can spot telltale signs the human eye would miss, such as eyeglasses that don’t fit perfectly or lip movements not perfectly synched with movements of the chin and Adam’s apple, Brahmy tells ISRAEL21c.

Staying a few steps ahead

Cyabra’s technology detects inauthentic nuances that the human eye would miss. Photo: courtesy

Deepfake detection technology must continually evolve.

In the early days – all the way back in 2017, when deepfakes first started appearing – fake faces didn’t blink normally. But no sooner had researchers alerted the public to watch for abnormal eye movements than deepfakes suddenly started blinking normally.

“To have a durable edge, you need to be a year or two ahead, to make sure no one can re-do what you just did,” Brahmy says.

That’s important both in catching the deepfakers and for a company like Cyabra to stay ahead of the competition.

Cyabra’s edge is that two of its four cofounders came out of IDF intelligence divisions where they looked for ways to foil terrorist groups trying to create fake profiles to connect with Israelis.

In addition, former Mossad deputy director Ram Ben Barak is on the company’s board of directors.

Fake social-media profiles

Cyabra’s deepfake detection technology was only released in the last month. For most of the past two years, since the company was founded, it has been focused on spotting fake social-media profiles.

Cyabra cofounder and COO Yossef Daar. Photo: courtesy

Brahmy’s cofounder Yossef Daar claims there are 140 million fake accounts on Facebook, 38 million on LinkedIn, and 48 million bots on Twitter.

These, too, are not easy to detect.

Researchers from the University of Iowa discovered that some 100 million Facebook “likes” that appeared between 2015 and 2016 were created by spammers using around a million fake profiles.

Cyabra’s machine-learning algorithms run some 300 unique parameters to determine profile authenticity. A three-day-old profile with 700 friends whose user has no footprint outside of Facebook raises a red flag, for example.
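That kind of red-flag rule can be sketched roughly as follows; the thresholds are invented for illustration, and Cyabra's real system reportedly weighs around 300 parameters rather than two.

```python
# Illustrative rule-of-thumb scoring, not Cyabra's actual parameters.
from dataclasses import dataclass
from typing import List

@dataclass
class Profile:
    account_age_days: int
    friend_count: int
    has_external_footprint: bool  # e.g. appears anywhere outside the platform

def red_flags(p: Profile) -> List[str]:
    flags = []
    if p.account_age_days <= 3 and p.friend_count >= 500:
        flags.append("very new account with an unusually large friend list")
    if not p.has_external_footprint:
        flags.append("no footprint outside the platform")
    return flags

suspect = Profile(account_age_days=3, friend_count=700, has_external_footprint=False)
print(red_flags(suspect))  # two flags fire -> escalate for review
```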

In the 2016 U.S. elections, fake profiles on social media were the biggest problem – deepfakes didn’t exist yet.

By now, though, you’ve probably seen a few deepfakes yourself: Facebook CEO Mark Zuckerberg bragging about having “total control of billions of people’s stolen data,” former US President Obama using a profanity to describe President Trump or Jon Snow apologizing for the writing in season 8 of “Game of Thrones.”

Brahmy says the leadup to the 2020 election season is the right time to offer Cyabra’s solution.

Investors agree. Cyabra has raised $3 million from TAU Ventures and $1 million from the Israel Innovation Authority. The 15-person company started in The Bridge, a seven-month Tel Aviv-based accelerator sponsored by Coca-Cola, Turner and Mercedes. Now they’re based at TAU Ventures with a small presence in the United States as well.

Public and private sector clients

Cyabra’s clients prefer not to be named, although Brahmy did tell ISRAEL21c that 50% of its clients are in the public sector – governmental organizations or agencies – and the other half are “in the world of sensitive brands: consumer product, food and beverage, media conglomerates.”

“Imagine you’re in the business of providing unbiased information and suddenly 500 bots send you a message with a falsified picture and you’re ready to publish it. We want to be there five seconds before you pull the trigger, to let you know it’s false,” says Brahmy.

This heatmap shows the level of doctoring done to a picture or frame in a video. Emphasized areas represent more heavily forged pieces of content. Image courtesy of Cyabra

Cyabra leaves the task of fact-checking content for “fake news” to other companies such as NewsGuard and FactMata. (Neither company is Israeli.)

There are also other companies dealing with deepfakes and fake profiles. But, Brahmy says, “we’re the only one doing both, with the technical capability to detect deepfakes along with cross-channel analysis to detect the bots [powering fake social media profiles], all under one roof.”

Facebook announced in January 2020 that it is banning deepfakes intended to mislead rather than entertain. But can Facebook really get ahead of all the deepfakes out there – and those to come?

If Cyabra and companies like it succeed, the next time you see a politician or celebrity saying something you find reprehensible, it might just be true.

Source: https://www.israel21c.org/the-technology-that-could-save-us-from-deepfake-videos/

#BioCatch predicts 10 #cybercrime trends for 2020 SPONSOR: Datametrex AI Limited $DM.ca

Posted by AGORACOM-JC at 1:57 PM on Friday, January 31st, 2020

SPONSOR: Datametrex AI Limited (TSX-V: DM) A revenue-generating small-cap A.I. company that NATO and Canadian Defence are using to fight fake news & social media threats. The company announced three $1M contracts in Q3-2019. Click here for more info.

BioCatch predicts 10 cybercrime trends for 2020

  • Deep fake technology will be used for identity theft: Deep fake technology that spoofs the human voice is already being used to attack call centers, or in business email compromise scams.
  • In 2020, we should see the early signs of deep fake being used to defeat face recognition controls, including those using state-of-the-art liveness tests.
  • The industry will have to come up with silent, behind-the-scenes controls that can offset the vulnerabilities of overt biometric authentication.

BioCatch, a leader in behavioral biometrics, today announced its Cybercrime and Fraud Predictions for 2020 that show fraudsters are keeping pace with the digital transformation and are a growing threat to businesses around the world. These are the 10 biggest cybercrime and fraud trends for the New Year, according to BioCatch Founder and Chief Cyber Officer Uri Rivner.

Deep fake technology will be used for identity theft: Deep fake technology that spoofs the human voice is already being used to attack call centers, or in business email compromise scams. In 2020, we should see the early signs of deep fake being used to defeat face recognition controls, including those using state-of-the-art liveness tests. The industry will have to come up with silent, behind-the-scenes controls that can offset the vulnerabilities of overt biometric authentication.

LiFi networks will be targeted by hackers: There’s a new, promising high-speed Internet technology in town, and it’s visible light based rather than radio wave based. While reaching full commercial use is still a few years away, and the tech is limited to proximity use given physical limitations on light movement, a network based on LiFi should be as hackable as WiFi and might be more prone to physical interferences. We should see the first demonstrations of LiFi hacks in the new year.

UK identity databases will come under attack by fraudsters: Multiple factors will drive criminals targeting the UK financial sector to boost their account opening fraud activities: the success banks have had in fighting traditional fraud, the introduction of tighter controls over social engineering, and the coming implementation of PSD2 all make account takeover harder for them. To facilitate this expected boost, hackers will focus their attention on UK identity databases, attempting to gather multiple data points on each UK citizen, similar to what has happened in the US over the last few years. In the US, synthetic identity fraud is the fastest growing type of financial crime, with an average charge-off balance per instance of $15,000, according to a Federal Reserve study.

FinTech companies will be fraudsters’ next big target: While banks and credit card issuers in the US have been stepping up their defenses against account opening and account takeover fraud, the fintech sector, which has largely escaped the wrath of fraudsters, will begin to see a sharp increase in online fraud. Because they are less heavily regulated, fintech companies are more agile and able to introduce new functionalities. However, the lack of proper defenses and the fact that they have no access to the banking sector’s fraud consortium databases will make them far more exposed.

Chatbot and voice assistant payment fraud will rise: Many financial institutions are beginning to deploy AI-based customer assistance tools, such as chatbots and voice-based interfaces, to broaden their offerings beyond traditional online and mobile channels. As soon as those new channels begin to offer full functionality – say, moving money from a user’s account – they’ll be targeted by criminals and will need to be protected against account takeover. Researchers have already proven that lasers can be used to spoof voice commands in physical voice assistant devices, and it would be even easier to attack their virtual equivalents.

eComm fraud AI models will become half-blinded: One of the unspoken secrets of AI is that it’s only as good as the tagged data that is fed to it. With the increase of account opening fraud, a huge amount of eComm fraud is going to come not from compromised credit cards, but rather new credit and debit cards that are opened online using identity theft. In these cases, there are no chargebacks, as no real user will call to complain. The result is that AI models will become half-blinded. The criminal patterns that AI models use to pinpoint fraud will be suppressed by genuine confirmations after account opening, as criminals use the fraudulent account to make purchases, just as a genuine user would.

AI will help prevent subscription services fraud: The big content streaming companies have formed an alliance designed to fight password sharing and criminal offerings of compromised passwords. Unfortunately, device-based and location-based controls are no longer holding as technologies to spoof devices and geo-location are readily available. New technologies such as behavioral biometrics and unsupervised anomaly detection AI will prove to fare much better against misuse of subscription services.  

Zelle fraud levels will surge: As many regional banks and credit unions are adding Zelle P2P capabilities to their online and mobile banking, criminals are beginning to single out the US as a new land of opportunities. Well-proven social engineering techniques are already in use, and attacks will escalate and quickly adapt as new controls are added – with the result of real users suffering from higher friction while fraud levels surge.

Selfie biometric data will be the new dark web money maker: There’s already a vibrant dark web trade in personalized biometric data, and that will continue to grow in 2020. More websites and applications are turning to selfie-based verification and more online account opening flows are moving from obsolete controls, such as Knowledge Based Authentication, to more modern controls, like selfie-document matching. Some criminals will focus on collecting data from open sources and social media. Others will target – and already have targeted – users in phishing campaigns designed to steal not just static credentials, but also selfies and videos of the user’s face.

Another threat is that advanced malware capabilities, which are currently in the hands of state sponsored actors and other high-end players, will find their way to criminal hands and be used to break into mobile device authentication.

Money mules will become an endangered species: In an era of easy account opening fraud, why spend resources and take unnecessary risks by interacting with mules? Money mules won’t go away in 2020, but criminals engaged in cashing out compromised bank accounts will begin shifting away from classic recruitment options and start using falsely opened bank accounts instead. The ease of fraudulent account opening will also help other crimes, such as money laundering and impersonating the receiving end of P2P money transfers like Zelle.

Mr. Rivner says: “At the core of our cybercrime problem is a lack of effective methods for establishing and verifying digital identity in the constantly evolving digital ecosystem. New solutions are addressing the challenges, replacing outdated approaches that rely on static information with much more effective, multi-factor tools. Organizations that are fastest to act with new, powerful, cutting edge fraud prevention tools are the ones that will be least affected by fraudsters in 2020 and beyond.”

Source: https://www.planetbiometrics.com/article-details/i/10769/desc/biocatch-predicts-10-cybercrime-trends-for-2020/

Google Announces $1 Million Grant to Help Fight Fake News SPONSOR: Datametrex AI Limited $DM.ca

Posted by AGORACOM-JC at 2:00 PM on Thursday, January 30th, 2020

SPONSOR: Datametrex AI Limited (TSX-V: DM) A revenue-generating small-cap A.I. company that NATO and Canadian Defence are using to fight fake news & social media threats. The company announced three $1M contracts in Q3-2019. Click here for more info.

Google Announces $1 Million Grant to Help Fight Fake News in India

  • Technology giant Google on Wednesday, 29 January, announced a USD 1 million grant to promote news literacy among Indians.

The money will be given to Internews, a global non-profit, which will select a team of 250 journalists, fact checkers, academics and NGO workers for the project, a statement said.

The announcement, part of a USD 10 million commitment worldwide to media literacy, comes at a time when news publishers, especially on the digital front, have been found to have indulged in spreading misinformation.

Google said a curriculum will be developed by a team of global and local experts, who will roll out the project in seven Indian languages.

“The local leaders will then roll out the training to new internet users in non-metro cities in India, enabling them to better navigate the internet and assess the information they find,” the statement said.

With an eye to curb misinformation, Google News Initiative (GNI) India Training Network — a group of 240 senior Indian reporters and journalism educators — has been working to counteract disinformation in their newsrooms and beyond since last year.

GNI has provided verification training to more than 15,000 journalists and students from more than 875 news organisations in 10 Indian languages, using a “train-the-trainer” approach over the past year, it said.

Source: https://www.thequint.com/news/webqoof/google-announces-dollar1-million-grant-to-help-fight-fake-news-in-india

Disinformation in 5.4 Billion Fake Accounts: A Lesson for the Private Sector SPONSOR: Datametrex AI Limited $DM.ca

Posted by AGORACOM-JC at 1:30 PM on Wednesday, January 29th, 2020

SPONSOR: Datametrex AI Limited (TSX-V: DM) A revenue-generating small-cap A.I. company that NATO and Canadian Defence are using to fight fake news & social media threats. The company announced three $1M contracts in Q3-2019. Click here for more info.

Disinformation in 5.4 Billion Fake Accounts: A Lesson for the Private Sector

  • Social media platforms are turning over a new leaf to make online communities safer and happier places. Instagram turned off “likes,” but the biggest news came when Facebook shut down 5.4 billion fake accounts.

By: John Briar

Social media platforms are turning over a new leaf to make online communities safer and happier places. Instagram turned off “likes,” but the biggest news came when Facebook shut down 5.4 billion fake accounts. The company reported that up to five percent of its monthly user base of nearly 2.5 billion consisted of fake accounts. It also noted that while the numbers are high, that doesn’t mean there is an equal amount of harmful information; it is simply getting better at identifying the accounts.

The concerted effort to close fictitious accounts is shedding light on disinformation and misinformation campaigns. But it’s not a new tactic. It dates back to the early days of war, when false content was spread with the intent to deceive, mislead, or manipulate a target or opponent. Where disinformation was once communicated by telegram, the modern version of these vast, coordinated campaigns is disseminated through social media with bots, Twitterbots and bot farms—at a scale humans could never perform.

Now, disinformation campaigns can be launched by a government to influence stock prices in another country, or by a private company to degrade brand presence and consumer confidence. What’s worse is that bots can facilitate these campaigns en masse.

Understanding the Role Bots Play in Disinformation

On social media, you might be able to easily identify bots trolling users. Or maybe not—it’s often trickier than you’d expect. Sophisticated bots use several tactics that make them successful at disinformation and appearing human, including:

  1. Speed and Amplification – Bots quickly spread low-credibility content to increase the likelihood that it goes viral. The more often humans see a piece of disinformation, the more likely they are to spread it themselves.
  2. Exploiting Human-Generated Content – Bots spread negative or polarizing content created by real humans, which other humans are more likely to find credible.
  3. Using Metadata – Bots use metadata (comments, photo captions, etc.) to appear human, which helps them evade detection.

Whether fraudsters create false information or use existing misinformation, bots are the unstoppable force in the spread of disinformation. Even with platforms like Facebook dismantling campaigns, taking down bots is a pervasive game of whack-a-mole.

Business Interference: A Bot’s Expertise

How do we take the lessons learned and apply them to today’s businesses? For one thing, we know that identifying bots masquerading as customers, competitors, or the actual company is increasingly difficult.

Some attempts to deceive, mislead and manipulate customers use the same bot-driven propaganda techniques as we have seen on social media platforms. Bots can amplify and create negative reviews, spread misinformation about unrest in a company, or defame company leadership.

Beyond that, one of the biggest threats to businesses is content scraping. In this attack vector, bots are programmed to crawl and fetch any information that could be used “as is” or injected with misinformation before spreading. This could include prices, promotions, API data, articles, research and other pertinent information. Because of the open nature of the Internet, nothing is stopping bots from gaining access to websites and applications, unless bot detection and mitigation is in place.
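One common building block of such mitigation is rate-based detection. The sketch below counts requests per client over a sliding window and cuts off clients whose traffic looks automated; the threshold is an assumption, and commercial bot-management tools layer many more signals (browser fingerprints, behavioural data, IP reputation) on top.

```python
# Illustrative sliding-window rate limiter; the threshold is an assumption,
# and production bot mitigation layers many additional signals on top.
import time
from collections import defaultdict, deque
from typing import Deque, Dict, Optional

WINDOW_SECONDS = 60
MAX_REQUESTS_PER_WINDOW = 120  # assumed ceiling for human browsing

_history: Dict[str, Deque[float]] = defaultdict(deque)

def allow_request(client_id: str, now: Optional[float] = None) -> bool:
    now = time.time() if now is None else now
    window = _history[client_id]
    while window and now - window[0] > WINDOW_SECONDS:
        window.popleft()          # drop requests outside the sliding window
    window.append(now)
    return len(window) <= MAX_REQUESTS_PER_WINDOW

# A scraper requesting the price list ten times per second is cut off quickly.
verdicts = [allow_request("scraper-123", now=i * 0.1) for i in range(200)]
print("requests allowed before cutoff:", verdicts.count(True))
```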

Aside from what we have seen, what do company-targeted disinformation campaigns look like in the wild?

  • Legitimate pricing sheets could be scraped by a bot, then distorted to become favorable to the competition before presenting to prospects.
  • Articles are stolen, injected with misinformation and copied around the Internet, hurting businesses twice over: search engines assume the company is trying to game SEO and lower its ranking, while readers are misled by the altered content.

Given that bots account for nearly half of web traffic, standard cybersecurity technologies that do not specialize in bots cannot prevent the onslaught of fraudulent traffic. If information reserved for customers and partners exists on company websites, even behind a portal, companies should expect bots to continue scraping their sites until they leave with valuable content. From all the data that has been studied, bad bots come early – days after a site is launched. They attack in waves, consistently trying and retrying to capture critical information.

The Future of Bots in 2020

If the headlines teach us anything, we can predict that 2020 will bring even more sophisticated bots in full force, leveraging artificial intelligence (AI) and getting smarter about how to behave like a human. To outpace fraudsters and their bot armies, the same advanced technologies like AI and machine learning along with behavioral analytics are required. Only then will it be possible to parse out traffic and allow humans through, while stopping bots before they can gather information for disinformation campaigns.

Source: https://www.securitymagazine.com/articles/91616-disinformation-in-54-billion-fake-accounts-a-lesson-for-the-private-sector

Data Privacy Day 2020: 5 Lessons From The Past To Better Secure The Future – SPONSOR: Datametrex AI Limited $DM.ca

Posted by AGORACOM-JC at 3:00 PM on Tuesday, January 28th, 2020

SPONSOR: Datametrex AI Limited (TSX-V: DM) A revenue-generating small-cap A.I. company that NATO and Canadian Defence are using to fight fake news & social media threats. The company announced three $1M contracts in Q3-2019. Click here for more info.

Data Privacy Day 2020: 5 Lessons From The Past To Better Secure The Future

(By David Higgins)

  • Lawmakers claim Facebook “contravened the law by failing to safeguard people’s information” – and suffered the consequences.
  • Now the US government is placing additional pressure on Facebook to stop the spread of fake news, foreign interference in elections and hate speech (or risk additional, larger fines)
  • This Data Privacy Day urges individuals and organisations around the world to learn from the fallout of the mega-breaches of the recent past.

Until recently, data privacy was only considered critical in the digital world. But as the digital and physical worlds intersect, it is now integral not only to securing an individual or a corporation’s digital identity, but also to avoiding the safety of citizens being compromised. Data privacy considerations should underpin all company decisions, whether on the board level or on the shop floor and, this Data Privacy Day, organisations should encourage their entire workforce – not just IT teams – to re-evaluate how they secure and manage data.

It’s now well-established that data is the world’s most valuable asset, and a tempting target for malevolent hackers with varying motivations. More often than not, they are pursuing credentials that they can use to infiltrate businesses and target sensitive and valuable data. Attackers seek ways to cause irreparable damage across a whole range of industries, from seizing companies’ administration logins to hacking into medical data so as to hold individuals to ransom over the disclosure of sensitive personal information. As a tragic but potentially realistic scenario, this could even result in a doctor being unable to perform a life-saving operation because the patient’s records are unavailable.

Hackers will inevitably be successful from time to time. Addressing this threat and limiting how far they can infiltrate a network after a successful breach is imperative in order to safeguard national security. Infiltration or compromise of critical national infrastructure (CNI), for instance, could plausibly result in the loss of control of public services such as utilities, healthcare and government, posing a severe risk to public safety. This Data Privacy Day, we need to take a step back to understand not only the value of the data we hold, but also the importance of allowing only the individuals and systems that need it to access it.

Mega Breach lesson #1: Equifax Breach (reported in 2017) – Several tech failures in tandem–including a misconfigured device scanning encrypted traffic, and an automatic scan that failed to identify a vulnerable version of Apache Struts–ultimately led to the breach which impacted 145M customers in the US and 10M UK citizens.

Data Privacy Day Learning – get security basics right. Cyberattacks are growing more targeted and damaging but a good industry reminder from the Equifax breach is that standard security basics should never be ignored. Patches should be applied promptly, security certificates should be maintained, and so on. This breach also inspired elected officials to push for stronger legislation to tighten regulations on required protection for consumer data.

Mega Breach lesson #2: Uber Breach (reported in 2017) – In 2017 Uber revealed it had suffered a year-old breach that exposed personal information belonging to 57M drivers and customers.

Data Privacy Day Learning – don’t store code in a publicly accessible database. Uber data was exposed because the AWS access keys were embedded in code that was stored in an enterprise code repository by a third party contractor. A clear takeaway is that no code repository is a safe storage place for credentials.
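A minimal sketch of that takeaway: read the keys from the runtime environment (or a secrets manager) rather than hardcoding them in source that ends up in a repository. The variable names follow the usual AWS convention; everything else is illustrative.

```python
# Illustrative only: credentials come from the environment, never from source.
import os
from typing import Tuple

def get_aws_credentials() -> Tuple[str, str]:
    """Fail fast if the keys are not supplied by the environment."""
    try:
        return os.environ["AWS_ACCESS_KEY_ID"], os.environ["AWS_SECRET_ACCESS_KEY"]
    except KeyError as missing:
        raise RuntimeError(f"credential {missing} is not set in the environment") from None

# Anti-pattern the Uber breach illustrates (never do this):
#   AWS_SECRET_ACCESS_KEY = "wJalr...hardcoded..."  # ends up in the repository

if __name__ == "__main__":
    access_key, _secret = get_aws_credentials()
    print("loaded credentials for key id ending in", access_key[-4:])
```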

Mega Breach lesson #3: Facebook’s Cambridge Analytica Breach (reported in 2018) – Cambridge Analytica harvested the personal data of millions of people’s Facebook profiles without their consent and used it for political advertising purposes. The scandal finally erupted in March 2018 with the emergence of a whistle-blower, and Facebook was fined £500,000 ($663,000), the maximum fine allowed at the time of the breach.

Data Privacy Day Learning – protect user data (or pay up). Lawmakers claim Facebook “contravened the law by failing to safeguard people’s information” – and suffered the consequences. Now the US government is placing additional pressure on Facebook to stop the spread of fake news, foreign interference in elections and hate speech (or risk additional, larger fines).

Mega Breach lesson #4: Ecuadorian Breach (reported in 2019) – Data on approximately 17M Ecuadorian citizens, including 6.7M children, was breached due to a vulnerability on an unsecured AWS Elasticsearch server where Ecuador stores some of its data. A similar Elasticsearch server exposed the voter records of approximately 14.3 million people in Chile, around 80% of its population.

Data Privacy Day Learning – adhere to the shared responsibility model. Most cloud providers operate under a shared responsibility model, where the provider handles security up to a point and, beyond that, it becomes the responsibility of those using the service. As more and more government agencies look to the cloud to help them become more agile and better serve their citizens, it’s vital they continue to evolve their cloud security strategies to proactively protect against emerging threats – and reinforce trust among the citizens who rely on their services.

Mega Breach lesson #5: Desjardins Breach (reported in 2019) — The data breach that leaked info on 2.9M members wasn’t the result of an outside cyber attacker, but a malicious insider – someone within the company’s IT department who decided to go rogue and steal protected personal information from his employer.

Data Privacy Day Learning – be proactive in identifying unusual/unauthorized behaviour. While insider threats can be more difficult to identify, especially in a case where the user had privileged access rights, having a solution in place to monitor for unusual and unauthorized activities that can take automated remediation steps as needed can help reduce the amount of time it takes to stop an attack and minimize data exposure. This breach shows that a defence in depth security strategy that includes privileged access security, multi factor authentication, and the detection of anomalous behaviour with tools such as database activity monitoring has never been more crucial.
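As a simple sketch of the monitoring idea (not any particular vendor's product), the snippet below compares a user's record-access volume against their own historical baseline and triggers an automated response when it spikes; the baseline data and the three-sigma threshold are assumptions.

```python
# Illustrative anomaly check for insider data access; thresholds are assumptions.
from statistics import mean, pstdev
from typing import List

def is_anomalous(todays_records_accessed: int, history: List[int],
                 sigma: float = 3.0) -> bool:
    """Flag today's access volume if it sits far above the user's own baseline."""
    baseline = mean(history)
    spread = pstdev(history) or 1.0   # avoid a zero spread for flat histories
    return todays_records_accessed > baseline + sigma * spread

analyst_history = [40, 55, 38, 61, 47, 52, 44]   # records accessed per day
today = 5200                                      # sudden bulk export

if is_anomalous(today, analyst_history):
    # Automated remediation step: suspend the session and alert security.
    print("unusual access volume detected: suspending session, alerting SOC")
```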

Source: https://www.expresscomputer.in/news/data-privacy-day-2020-5-lessons-from-the-past-to-better-secure-the-future/45933/

Discerning fake or deceptive stories has become increasingly difficult over the last four years, American citizens say – SPONSOR: Datametrex AI Limited $DM.ca

Posted by AGORACOM-JC at 2:10 PM on Monday, January 27th, 2020

SPONSOR: Datametrex AI Limited (TSX-V: DM) A revenue-generating small-cap A.I. company that NATO and Canadian Defence are using to fight fake news & social media threats. The company announced three $1M contracts in Q3-2019. Click here for more info.

Discerning fake or deceptive stories has become increasingly difficult over the last four years, American citizens say

  • New polls find that 59 percent of Americans say it is difficult to recognize false information, deliberately misleading and inaccurate stories presented as truth, on social media
  • Another 37 percent disagree, saying such stories are easy to spot

by nitin198

Four years after Russia launched a digital campaign to disrupt and influence the 2016 presidential campaign, about a third of Americans say misleading stories on social media pose the greatest risk to the integrity of U.S. elections. Half of the population thinks President Trump encourages election interference, according to the most recent PBS NewsHour/NPR/Marist poll.

A majority of Americans say that telling fact from false information on social media is difficult, and that it has become harder since 2016. Few feel confident that tech companies will prevent the abuse of social media to influence the upcoming 2020 elections.

Is misinformation getting more difficult to spot?

The new polls found that 59 percent of Americans say it is difficult to recognize false information, deliberately misleading and inaccurate stories presented as truth, on social media. Another 37 percent disagreed, saying such stories are easy to spot.

Moreover, with the 2020 presidential campaign about to begin in earnest, the majority of U.S. adults said recognizing these fake or misleading stories has become progressively more difficult over the past four years. That view was shared by 58 percent of Democrats, 55 percent of independents and a somewhat lower share of Republicans, at 44 percent.

The fact that Americans are aware of the risk of deception is significant. However, it is unrealistic to expect the average person to fact-check every piece of information flying past them as they scroll through their social media feeds.

Research has shown that false information spreads faster than the truth on social media. In 2018, a study from the Massachusetts Institute of Technology found that falsehoods travelled many times faster than the truth on Twitter.

Who does the public depend on to be truth’s guardian?

Thirty-nine percent of Americans say the news media is responsible for vetting misleading information. Another 18 percent say that companies like Facebook, Twitter or Google are responsible. And 15 percent say it is primarily the government’s job to reduce the public’s exposure to misinformation.

Are the social media companies doing enough?

Seventy-five percent of U.S. adults have little trust in Facebook, Twitter, Google, and YouTube to stop the spread of falsehoods. Just 5 percent of survey respondents said they were very confident these companies would prevent the viral spread of false stories.

The public lacks trust in major social media companies even though tech giants such as Facebook and Twitter have vowed to take steps to prevent election interference on their platforms. Facebook, one of the most popular social media platforms, has said it will do a better job of removing misleading political ads before the 2020 presidential election. Twitter has said it will ban political advertisements outright.

Americans are skeptical that social media companies will honor these promises, and they believe the platforms aren’t doing enough to stop the spread of falsehoods. In May 2018, Facebook created a political ad archive to identify and investigate potentially harmful advertisements, but that automated system isn’t perfect.

Source: https://techsprouts.com/discerning-fake-or-deceptive-stories-has-become-increasingly-difficult-over-the-last-four-years-american-citizens-say/