Agoracom Blog

Disinformation in 5.4 Billion Fake Accounts: A Lesson for the Private Sector SPONSOR: Datametrex AI Limited $DM.ca

Posted by AGORACOM-JC at 1:30 PM on Wednesday, January 29th, 2020

SPONSOR: Datametrex AI Limited (TSX-V: DM) A revenue-generating small cap A.I. company that NATO and Canadian Defence are using to fight fake news & social media threats. The company announced three $1M contracts in Q3-2019. Click here for more info.

Disinformation in 5.4 Billion Fake Accounts: A Lesson for the Private Sector

By: John Briar

Social media platforms are turning a new leaf to make online communities safer and happier places. Instagram turned off “likes,” but the biggest news came when Facebook shut down 5.4 billion fake accounts. The company reported that up to five percent of its monthly user base of nearly 2.5 billion consisted of fake accounts, and noted that while the numbers are high, they do not imply an equivalent volume of harmful content; the platform is simply getting better at identifying fake accounts.

The concerted effort to close fictitious accounts is shedding light on disinformation and misinformation campaigns. But the tactic is not new: it dates back to the early days of war, when false content was spread with the intent to deceive, mislead, or manipulate a target or opponent. Where disinformation was once communicated by telegram, today's vast, coordinated campaigns are disseminated through social media by bots, Twitterbots, and bot farms, at a scale humans could never match.

Now, disinformation campaigns can be mounted by a government to influence stock prices in another country, or by a private company to degrade a competitor's brand presence and consumer confidence. What's worse, bots can run these campaigns en masse.

Understanding the Role Bots Play in Disinformation

On social media, you might think it easy to spot bots trolling users. Often it is trickier than you would expect. Sophisticated bots use several tactics that make them successful at spreading disinformation while appearing human (a minimal scoring sketch follows the list), including:

  1. Speed and Amplification – Bots quickly spread low-credibility content to increase the likelihood it goes viral. The more people see the disinformation, the more likely they are to spread it themselves.
  2. Exploiting Human-Generated Content – Bots spread negative or polarizing content generated by real humans, which appears credible to other humans.
  3. Using Metadata – Bots use metadata (comments, photo captions, etc.) to appear human, which helps them evade detection.
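To make these tactics concrete, below is a minimal, hypothetical sketch of how a detector might weigh the three signals above. The feature names, weights, and thresholds are illustrative assumptions, not any platform's actual method.

```python
# Hypothetical bot-likeness scoring sketch. Features, weights, and
# thresholds are illustrative assumptions, not a real platform's detector.

from dataclasses import dataclass

@dataclass
class Account:
    posts_per_hour: float      # tactic 1: speed/amplification signal
    duplicate_ratio: float     # tactic 2: share of posts copied from humans (0-1)
    metadata_per_post: float   # tactic 3: avg comments/captions attached per post

def bot_likeness(acct: Account) -> float:
    """Combine the three tactic signals into a 0-1 score (higher = more bot-like)."""
    score = 0.0
    # Inhuman posting speed suggests automated amplification.
    if acct.posts_per_hour > 10:
        score += 0.4
    # Heavy reuse of human-generated content is a camouflage signal.
    score += 0.4 * acct.duplicate_ratio
    # Unusually rich metadata relative to a human baseline is itself suspicious.
    if acct.metadata_per_post > 5:
        score += 0.2
    return min(score, 1.0)

if __name__ == "__main__":
    suspect = Account(posts_per_hour=40, duplicate_ratio=0.9, metadata_per_post=8)
    human = Account(posts_per_hour=0.5, duplicate_ratio=0.1, metadata_per_post=1)
    print(f"suspect: {bot_likeness(suspect):.2f}, human: {bot_likeness(human):.2f}")
```

In practice, no single signal is decisive; it is the combination of speed, duplication, and metadata patterns that separates automated accounts from human ones.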

Whether fraudsters create false information or repurpose existing misinformation, bots are the driving force behind its spread. Even with platforms like Facebook dismantling campaigns, taking down bots is a persistent game of whack-a-mole.

Business Interference: A Bot’s Expertise

How do we take the lessons learned and apply them to today’s businesses? For one thing, we know that identifying bots masquerading as customers, competitors, or the actual company is increasingly difficult.

Some attempts to deceive, mislead, and manipulate customers use the same bot-driven propaganda techniques seen on social media platforms. Bots can create and amplify negative reviews, spread misinformation about unrest inside a company, or defame company leadership.

Beyond that, one of the biggest threats to businesses is content scraping. In this attack vector, bots are programmed to crawl and fetch any information that can be used “as is” or injected with misinformation before being spread. This could include prices, promotions, API data, articles, research, and other pertinent information. Because of the open nature of the Internet, nothing stops bots from accessing websites and applications unless bot detection and mitigation are in place, as the sketch below illustrates.
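As one small illustration of the kind of mitigation described here, the sketch below shows a sliding-window rate limiter keyed by client IP, one common signal for slowing scrapers. The 60-second window and 100-request threshold are assumptions for demonstration; real bot-management products combine many more signals than request rate alone.

```python
# Minimal sliding-window rate limiter sketch, a stand-in for one signal a
# bot-mitigation layer might use. The window and threshold values are
# illustrative assumptions, not recommended production settings.

import time
from collections import defaultdict, deque

WINDOW_SECONDS = 60
MAX_REQUESTS = 100

_requests: dict[str, deque] = defaultdict(deque)

def allow_request(client_ip: str, now: float | None = None) -> bool:
    """Return False once a client exceeds MAX_REQUESTS within the window."""
    now = time.monotonic() if now is None else now
    window = _requests[client_ip]
    # Drop timestamps that have aged out of the sliding window.
    while window and now - window[0] > WINDOW_SECONDS:
        window.popleft()
    if len(window) >= MAX_REQUESTS:
        return False  # likely a scraper; block, challenge, or throttle
    window.append(now)
    return True

if __name__ == "__main__":
    # A scraper issuing 150 requests in a 15-second burst trips the limiter.
    verdicts = [allow_request("203.0.113.7", now=i * 0.1) for i in range(150)]
    print(f"allowed {sum(verdicts)} of 150 burst requests")
```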

Aside from what we have seen, what do company-targeted disinformation campaigns look like in the wild?

  • Legitimate pricing sheets could be scraped by a bot, then distorted to favor the competition before being presented to prospects.
  • Articles are stolen, injected with misinformation, and copied around the Internet, hurting businesses twice over: search engines assume the company is trying to game SEO and lower its ranking, and readers are misled by the doctored content.

Given that bots account for nearly half of web traffic, standard cybersecurity technologies that do not specialize in bots cannot prevent the onslaught of fraudulent traffic. If information reserved for customers and partners exists on company websites, even behind a portal, companies should expect bots to keep scraping their sites until they leave with valuable content. The data studied to date shows that bad bots arrive early, within days of a site launching, and they attack in waves, repeatedly trying and retrying to capture critical information.

The Future of Bots in 2020

If the headlines teach us anything, we can predict that 2020 will bring even more sophisticated bots in full force, leveraging artificial intelligence (AI) and getting smarter about behaving like humans. To outpace fraudsters and their bot armies, defenders need the same advanced technologies: AI and machine learning, along with behavioral analytics. Only then will it be possible to sort traffic, letting humans through while stopping bots before they can gather information for disinformation campaigns.
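A hedged sketch of what "behavioral analytics plus machine learning" can look like follows: a classifier trained on per-session behavior features. The features, the tiny synthetic training set, and the model choice are all illustrative assumptions; production systems train on far richer labeled telemetry.

```python
# Sketch of behavioral bot classification with machine learning. Features,
# synthetic training data, and model choice are illustrative assumptions.

import numpy as np
from sklearn.ensemble import RandomForestClassifier

# Behavioral features per session: [requests/min, mean seconds between
# clicks, fraction of pages fetched without assets like CSS or images].
X_train = np.array([
    [2.0, 8.5, 0.05],    # human: slow, irregular, loads full pages
    [3.5, 5.0, 0.10],    # human
    [90.0, 0.4, 0.95],   # bot: rapid, uniform, HTML-only fetches
    [120.0, 0.2, 0.99],  # bot
])
y_train = np.array([0, 0, 1, 1])  # 0 = human, 1 = bot

model = RandomForestClassifier(n_estimators=50, random_state=0)
model.fit(X_train, y_train)

# Score a new session and decide whether to let it through.
session = np.array([[75.0, 0.5, 0.9]])
print("block" if model.predict(session)[0] == 1 else "allow")
```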

Source: https://www.securitymagazine.com/articles/91616-disinformation-in-54-billion-fake-accounts-a-lesson-for-the-private-sector

