Agoracom Blog

Fake News In 2020 Election Puts Social Media Companies Under Siege – SPONSOR: Datametrex AI Limited $DM.ca

Posted by AGORACOM-JC at 4:00 PM on Monday, March 2nd, 2020

SPONSOR: Datametrex AI Limited (TSX-V: DM) A revenue-generating small-cap A.I. company that NATO and Canadian Defence are using to fight fake news & social media threats. The company announced three $1M contracts in Q3-2019. Click here for more info.

Fake News In 2020 Election Puts Social Media Companies Under Siege

  • The social media giant recently unearthed hundreds of fake accounts that originated not only in Russia but Iran and Vietnam as well
  • Facebook says their purpose was clear: Sow confusion in the U.S. and ultimately disrupt the integrity of this year’s U.S. presidential contest
  • Facebook purged the fake accounts in early February, and says it has heavily beefed up its safety and security team

By: BRIAN DEAGON

The struggle to keep the 2020 election free of fake news on social media already is proving to be an uphill battle. Just ask the watchdogs at Facebook (FB) who are battling more disinformation than ever, courtesy of “deepfakes” and other new weapons of deception.

The social media giant recently unearthed hundreds of fake accounts that originated not only in Russia but Iran and Vietnam as well. Facebook says their purpose was clear: Sow confusion in the U.S. and ultimately disrupt the integrity of this year’s U.S. presidential contest. Facebook purged the fake accounts in early February, and says it has heavily beefed up its safety and security team.

Halting the flood of Facebook fake news and misinformation on other platforms is critical to social media companies. Failure on their part runs the risk of alienating loyal users and angering lawmakers, who could slap them with new regulations. And the scrutiny is sure to grow after reports this week said U.S. intelligence officials have told Congress that Russia already is meddling in this year’s elections to boost President Donald Trump’s reelection chances.

Clearly, U.S. election misinformation is a blossoming enterprise. In 2016, Russia established numerous fake accounts on Facebook, Twitter (TWTR) and the YouTube unit of Alphabet (GOOGL). In 2020, these efforts continue to expand, both inside and outside Russia and across every corner of social media. America’s enemies have put the nation’s electoral process in the crosshairs with fake news stories on social media and deepfakes, or doctored videos.

“What started as a Russian effort to undermine elections and cause chaos and basically reduce faith in our democratic institutions is now becoming a free-for-all,” said Lisa Kaplan, founder of Alethea Group, a consulting group that helps businesses, politicians and candidates protect themselves against disinformation.

Fake News On Social Media In The 2020 Election

Election meddling goes back decades, but the internet has greatly amplified the disruption. Anyone with an internet connection has a megaphone to the world. And that means governments in Russia, China, Iran and other countries less than friendly to the U.S. are actively using social media to influence the nation and its electorate, according to intelligence agencies and studies.

“Lying is not a new concept but … knowing that a majority of Americans get their news online through social media, it’s easy to misinform and manipulate people,” Kaplan said. “It makes it much easier for bad actors to launch these large-scale persuasion campaigns.”

Facebook fake news is a huge problem for the company. The same goes for Twitter and YouTube. Senior executives of these social media companies have spent considerable time over the past few years testifying at congressional inquiries and investigations.

At the same time, they’re struggling to stop a steady flow of fake news and disinformation planted on their platforms. Not only are the disinformation campaigns coming from overseas but from domestic groups as well.

FBI Director Christopher Wray says Russia continues to conduct an “information warfare” operation against the U.S. ahead of the 2020 election. Wray on Feb. 5 told the House Judiciary Committee that Moscow is using a covert social-media campaign.

“It is that kind of effort that is still very much ongoing,” Wray told the panel. “It’s not just an election cycle; it’s an effort to influence our republic in that regard.”

Anger Over Fake News On Social Media

The efforts by Russia and others have ushered in a new era of scrutiny for tech giants. U.S. Sen. Elizabeth Warren, D-Mass., one of the Democratic presidential hopefuls, has taken aim at Facebook fake news and company Chief Executive Mark Zuckerberg. She chides Facebook for spreading disinformation against her and other candidates.

In late January, Warren pledged that her campaign would not share fake news or promote fraudulent accounts on social media. It’s part of her plan to battle disinformation and hold Facebook, Google and Twitter responsible for its spread.

“Anyone who seeks to challenge and defeat Donald Trump in the 2020 election must be fully prepared to take on the full array of disinformation that foreign actors and people in and around the Trump campaign will use to divide Democrats, suppress Democratic votes, and erode the standing of the Democratic nominee,” Warren said in a written statement on her campaign website.

She added: “And anyone who seeks to be the Democratic nominee must condemn the use of disinformation and pledge not to knowingly use it to benefit their own candidacy or damage others.”

More fuel to that fire came Thursday. Reports that Russia already is actively meddling in the 2020 race drew concerns from lawmakers. The news also angered Trump, who expressed fear Democrats would use the information against him in the campaign. Trump dismissed Joseph Maguire, former acting director of national intelligence, for telling the House Intelligence Committee of the interference.

Interference In 2016 Election

But election meddling woes began in 2015 with a well-funded Russian web brigade, called the Internet Research Agency. The group reportedly had 400 employees and was based in St. Petersburg, Russia. It used Facebook and Twitter to disseminate an onslaught of fake, politically charged content in an attempt to influence the 2016 presidential election.

The widespread misuse of social media came to light in early 2018 during the investigation of Cambridge Analytica, a data mining and analysis firm used by President Trump’s 2016 campaign. Through trickery and deception, Cambridge Analytica accessed personal information on 87 million Facebook users without their knowledge and used that data to target specific readers with fake stories, divisive memes and other content.

Social media executives were later called before Congress to discuss what they intended to do about disinformation in 2020. Congressional probes revealed how easily their platforms could be manipulated.

Facebook, Twitter and Google have responded with a slew of election integrity projects such as new restrictions on postings. They also increasingly try to root out what they call “inauthentic behavior,” or users assuming a false identity.

In response to written questions from IBD, Facebook says its teams working on safety and security matters now number 35,000 people, triple the 2017 level. It also created rapid response centers to monitor suspicious activity during the 2020 election.

“Since 2017, we’ve made large investments in teams and technologies to better secure our elections and are deploying them where they will have the greatest impact,” Facebook spokeswoman Katie Derkits said in a written statement.

Twitter Bans Political Ads In 2020 Election

In late October, Twitter Chief Executive Jack Dorsey banned all political advertising from his network. Google quickly followed suit, putting limits on political ads across some of its properties, including YouTube.

“As caucuses and primaries for the 2020 presidential election get underway, we’ll build on our efforts to protect the public conversation and enforce our policies against platform manipulation,” Carlos Monje, Twitter’s director of public policy and philanthropy, told Investor’s Business Daily in written remarks. “We take the learnings from every recent election around the world and use them to improve our election integrity work.”

In September, Twitter suspended more than 10,000 accounts across six countries. The company said the accounts actively spread disinformation and encouraged unrest in politically sensitive regions.

YouTube and Google plan to restrict how precisely political advertisers can target an audience on their services.

Playing Whack-A-Mole With Facebook Fake News

Will these efforts make a difference in the 2020 election?

Research suggests social media firms will play a game of whack-a-mole. They’ve deleted thousands of inauthentic accounts with millions of followers. But that hasn’t stopped people from finding new ways to get back online and send out fake news.

In Facebook’s most recent takedown, Russia was the largest target. Facebook removed 118 accounts, groups and pages that targeted Ukrainian citizens. Other Russia-linked pages focused on the country’s involvement in Syria and on ethnic tensions in Crimea.

“Although the people behind this network attempted to conceal their identities and coordination, our investigation found links to Russian military intelligence services,” Facebook said in a blog post announcing the slate of removals.

Facebook’s head of cybersecurity policy, Nathaniel Gleicher, said the social media company also removed 11 accounts distributing fake news from Iran. The accounts focused mostly on U.S.-Iran relations, Christianity and the upcoming election.

“We are making progress rooting out this abuse, but as we’ve said before, it’s an ongoing challenge,” Gleicher wrote.

Emerging Threat Of Deepfakes In 2020 Election

In December, Facebook and Twitter disabled a global network of 900 pages, groups and accounts sending pro-Trump messages. The fake news accounts managed to avoid detection as inauthentic, and they used photos generated with the aid of artificial intelligence. The campaign was based in the U.S. and Vietnam.

“There’s no question that social media has really changed the way that we talk about politics,” said Deen Freelon, a media professor at the University of North Carolina at Chapel Hill. “The No. 1 example is our president who, whether you like him or not, uses social media in ways that are unprecedented for a president and I would say any politician.”

The other fake news threat that social media companies face is from deepfakes. The level of realism in deepfakes has increased vastly from just a year ago, analysts say.

Using artificial intelligence technology, deepfake purveyors replace a person in an existing image or video with someone else’s likeness. Users also employ artificial intelligence tools in deepfakes to misrepresent an event that occurred. Deepfakes can even manufacture an event that never took place.

“Deepfakes are pretty scary to me,” said Freelon. “But I also think the true impact of deepfakes won’t become apparent until the technology gets developed a bit more.”

Cheapfakes: A Simpler Kind Of Fake News

Simpler versions of deepfakes are known as “cheapfakes”: videos altered with traditional editing tools or low-end technology.

An example of a cheapfake that went viral was an altered video of House Speaker Nancy Pelosi. The edited video slowed down her speech to make her seem inebriated. That prompted right-wing cable news pundits to question Pelosi’s mental health and fitness to serve in office.

YouTube removed the video. Facebook did not. Only videos generated by artificial intelligence to depict people saying fictional things would be removed, Facebook said. It eventually placed a warning label on the Pelosi video.

In January, Facebook took steps to ban many types of misleading videos from its site. It was part of a push against deepfake content and online misinformation campaigns.

Facebook said in a blog post that these fake news videos distort reality and present a “significant challenge” for the technology industry. The rules will not apply to satire or parody.

In February, Twitter changed its video policies, saying it would more aggressively scrutinize fake or altered photos and videos. Starting in March, Twitter will add labels or take down tweets carrying manipulated images and videos, it said in a blog post.

Also this month, YouTube said that it planned to remove misleading election-related content that can cause “serious risk of egregious harm.” It also laid out how it will handle such political videos and viral falsehoods.

Spreading Fake News On Social Media

But are the hurdles too high to surmount? A Massachusetts Institute of Technology study last year concluded that fake news is more likely to go viral than real news. It also showed that a false story reached 1,500 people six times faster than a true story.

As to why falsehoods perform so well, the MIT team settled on the hypothesis that fake news is more “novel” than real news. As a result, it evokes more emotion than the average tweet or post.

Ordinary social media users play a role in spreading fake news as well. A key factor in whether people spread disinformation is the number of times they have seen it.

People who repeatedly encounter a fake news item may feel it is less unethical to share it on social media, regardless of whether they believe it is accurate, according to a study published in the journal Psychological Science.

“Even when they know it’s false, if they repeatedly encounter it, they feel it’s less unethical to share and they’re less likely to censor,” said Daniel Effron, professor of Organizational Behavior at the London Business School and an author of the study. “It suggests that social media companies need a different approach to combating the spread of disinformation.”

Letting Consumers Decide On Fake News

The findings carry heavy implications for industry executives hoping to stop 2020 election fake news on social media.

“We suggest that efforts to fight disinformation should consider how people judge the morality of spreading it, not just whether they believe it,” Effron said.

After the Cambridge Analytica scandal, Facebook promised to do better, and rolled out a number of reforms. But in October, Zuckerberg delivered a strongly worded address at Georgetown University, defending unfettered speech, including paid advertising.

Zuckerberg says he wants to avoid policing what politicians can and cannot say to constituents. Facebook should allow its social media users to make those decisions for themselves, he contends.

Facebook officials have repeatedly warned against significant changes to the company’s rules for political or issue ads. Such changes could make it harder for less well-funded groups to raise money for the 2020 election, they say.

“We face increasingly sophisticated attacks from nation states like Russia, Iran and China,” Zuckerberg said. “But, I’m confident that we’re more prepared now because we’ve played a role in defending against election interference in more than 200 elections around the world since 2016.”

Source: https://www.investors.com/news/technology/fake-news-2020-election-puts-social-media-companies-under-siege/
