Agoracom Blog

Deep Dive: Fake Profiles Threaten Social Media Networks – SPONSOR: Datametrex AI Limited $

Posted by AGORACOM-JC at 11:45 AM on Thursday, February 13th, 2020

SPONSOR: Datametrex AI Limited (TSX-V: DM) A revenue generating small cap A.I. company that NATO and Canadian Defence are using to fight fake news & social media threats. The company announced three $1M contracts in Q3-2019. Click here for more info.

Deep Dive: Fake Profiles Threaten Social Media Networks

  • Fake profiles run rampant on sites such as Facebook, Twitter and YouTube, accounting for up to 25 percent of all new accounts, according to some estimates
  • The damage these fake profiles inflict is incalculable, resulting in billions of dollars lost or even altering the course of world politics.


Social media has become an integral part of everyday life, with a recent study finding that there were approximately 2.77 billion social media users around the world as of 2019. This number is projected to grow to more than 3 billion by the end of 2021 — almost half of the global population.

A sizable portion of these users is not real, however. Fake profiles run rampant on sites such as Facebook, Twitter and YouTube, accounting for up to 25 percent of all new accounts, according to some estimates. The damage these fake profiles inflict is incalculable, resulting in billions of dollars lost or even altering the course of world politics. Social media networks will need to step up their digital authentication game if they want to bring these fraudsters to heel.

How Fake Profiles Damage Social Media

Illegitimate social media profiles are strongly correlated with cybercrime, with researchers finding that bot-run fake profiles account for 75 percent of social media cyberattacks. Some of these crimes involve stealing personal information, like passwords and payment data, while others spread social propaganda or disseminate spam.

Social media networks are often negligent when removing fake profiles, too. Researchers at the NATO Strategic Communications Centre of Excellence conducted a study last year that tested the efficacy of Facebook’s, Google’s and Twitter’s fake profile detection protocols. The research team purchased 3,500 comments, 25,000 likes, 20,000 video views and 5,100 fake followers and found that 80 percent of their fake engagements were still online after one month. Approximately 95 percent of the remaining profiles were still online three weeks after the NATO team announced its findings to the public.

One might think that such an effort would cost a significant amount of time and money, but the study was relatively inexpensive. The researchers only spent €300 ($330) to purchase the comments, likes and followers — a Facebook ad of equivalent value would likely receive just 1,500 clicks. This makes fake profiles much more appealing to unscrupulous individuals and companies.

Fake social media profiles’ impacts became evident in the U.S. in 2016, when Russian hackers created thousands of fake Facebook and Twitter accounts to influence that year’s U.S. presidential election. These bots posted thousands of messages and fake news articles attacking candidate Hillary Clinton and sowing divisiveness within the Democratic Party, often promoting information from the Democratic National Committee’s (DNC) email hack.

Social sites often listed hashtags like #HillaryDown and #WarAgainstDemocrats as trending, inadvertently giving these bots a loudspeaker and letting their messages punch far above their weight. Special Counsel Robert Mueller’s 2018 investigation found that these hacker groups had multimillion-dollar budgets — a far cry from then-candidate Donald Trump’s characterization of the DNC hackers as “somebody sitting on their bed that weighs 400 pounds.”

Fake profiles’ threats are self-evident, but the solution to stopping them is not nearly as clear.

How Social Media Sites Can Fight Bots

Social media websites are reluctant to disclose exactly how they identify and delete fake profiles — if fraudsters know too much about these detection techniques, they can circumvent them. Many brands, companies, advertisers and even congressional panels have nevertheless demanded more information about how social media firms are working to stop the spread of fake profiles.

Third-party developers have also introduced solutions to curb the spread of illegitimate accounts, with many utilizing artificial intelligence (AI) and machine learning (ML). Thousands of social media profiles are created every day, making human analysis of each new registration impossible. AI and ML could reduce analytics teams’ burdens by employing pattern recognition to determine the details that all true profiles share, such as the frequency of their posts or what pages they tend to like. Profiles that do not adhere to this pattern could then be flagged for human review.
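The flagging approach described above can be sketched in a few lines. The toy example below (profile names, post rates and the three-sigma threshold are all invented for illustration, not drawn from any real platform) uses a simple z-score on posting frequency to stand in for the far richer feature sets a production ML system would learn:

```python
from statistics import mean, stdev

def flag_outliers(profiles, threshold=3.0):
    """Flag profiles whose daily post rate deviates sharply from the norm.

    `profiles` maps a profile ID to its average posts per day.
    Returns the IDs that fall outside `threshold` standard deviations,
    which would then be queued for human review.
    """
    rates = list(profiles.values())
    mu, sigma = mean(rates), stdev(rates)
    flagged = []
    for pid, rate in profiles.items():
        z = (rate - mu) / sigma if sigma else 0.0
        if abs(z) > threshold:
            flagged.append(pid)  # does not fit the typical pattern
    return flagged

# Twenty ordinary accounts posting a handful of times per day,
# plus one account posting at bot-like volume.
profiles = {f"user{i}": 5 + (i % 3) for i in range(20)}
profiles["suspect"] = 400
print(flag_outliers(profiles))  # → ['suspect']
```

A real system would score many signals at once (post timing, follower ratios, liked pages), but the principle is the same: model what legitimate profiles look like and escalate the outliers rather than asking humans to review every new registration.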

Social media networks could also utilize facial recognition biometrics to authenticate new accounts, requiring users to submit selfies or live smartphone videos for review to determine if their profiles are legitimate. Many new smartphones, including Apple’s iPhone 11, come with this technology right out of the box, meaning consumers are already familiar with it.

Facial recognition biometrics have fallen afoul of privacy advocates, however. Facebook has long been using facial recognition to identify its users in photographs — a practice that many condemned as privacy infringement. The website shifted this system to an opt-in model last year to appease these privacy advocates, meaning it would likely be reluctant to adopt facial biometrics during onboarding.

There is no obvious authentication solution that can completely prevent fake profiles. Social media sites, advertisers and governments all agree that fake profiles need to be stopped; the next step is agreeing on how to do it.


