Agoracom Blog Home

Posts Tagged ‘bot’

How Swiss scientists are trying to spot #deepfakes – SPONSOR: Datametrex AI Limited $DM.ca

Posted by AGORACOM-JC at 4:13 PM on Friday, March 13th, 2020

SPONSOR: Datametrex AI Limited (TSX-V: DM) A revenue generating small cap A.I. company that NATO and Canadian Defence are using to fight fake news & social media threats. The company announced three $1M contracts in Q3-2019. Click here for more info.

How Swiss scientists are trying to spot deepfakes

By Geraldine Wong Sak Hoi

As videos faked using artificial intelligence grow increasingly sophisticated, experts in Switzerland are re-evaluating the risks their malicious use poses to society – and finding innovative ways to stop the perpetrators.

In a computer lab on the vast campus of the Swiss Federal Institute of Technology Lausanne (EPFL), a small team of engineers is contemplating the image of a smiling, bespectacled man boasting a rosy complexion and dark curls.

“Yes, that’s a good one,” says lead researcher Touradj Ebrahimi, who bears a passing resemblance to the man on the screen. The team has expertly manipulated Ebrahimi’s head shot with an online image of Tesla founder Elon Musk to create a deepfake – a digital image or video fabricated through artificial intelligence.

It’s one of many fake illustrations – some more realistic than others – that Ebrahimi’s team has created as they develop software, together with cyber security firm Quantum Integrity (QI), which can detect doctored images, including deepfakes.

Using machine learning, the same process behind the creation of deepfakes, the software is learning to tell the difference between the genuine and the forged: a “creator” feeds it fake images, which a “detector” then tries to find.

“With lots of training, machines can help to detect forgery the same way a human would,” explains Ebrahimi. “The more it’s used, the better it becomes.”
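The creator/detector setup described above mirrors the adversarial training behind deepfakes themselves. The sketch below is a deliberately tiny illustration of that idea, not EPFL/QI’s actual software: the “creator” emits synthetic samples whose statistics differ from genuine ones, and a perceptron-style “detector” learns a boundary from its mistakes. All numbers and function names here are invented for the example.

```python
import random

random.seed(0)

def real_sample():
    # stand-in for a genuine image: 16 "pixels" centred around 0.8
    return [random.gauss(0.8, 0.05) for _ in range(16)]

def fake_sample(quality):
    # the "creator": its output statistics drift toward the real
    # distribution as quality approaches 1.0
    return [random.gauss(0.5 + 0.3 * quality, 0.05) for _ in range(16)]

def mean(img):
    return sum(img) / len(img)

# the "detector": a perceptron on a single summary statistic,
# updated only when it misclassifies a sample
w, b, lr = 0.0, 0.0, 0.1
for _ in range(2000):
    if random.random() < 0.5:
        x, label = mean(real_sample()), 1      # genuine
    else:
        x, label = mean(fake_sample(0.3)), 0   # forged
    pred = 1 if w * x + b > 0 else 0
    w += lr * (label - pred) * x
    b += lr * (label - pred)

def looks_genuine(img):
    return w * mean(img) + b > 0
```

The more samples the loop sees, the better its boundary becomes, which is the intuition behind “the more it’s used, the better it becomes”; real detectors learn thousands of deep features rather than one hand-picked mean.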

Forged photos and videos have existed since the advent of multimedia. But AI techniques have only recently allowed forgers to alter faces in a video or make it appear the person is saying something they never did. Over the last few years, deepfake technology has spread faster than most experts anticipated.

The team at EPFL have created the image in the centre by using deep learning techniques to alter the headshot of Ebrahimi (right) and a low-resolution image of Elon Musk in profile found on the Internet. (EPFL/MMSPG/swissinfo)

The fabrication of deepfake videos has become “exponentially quicker, easier and cheaper” thanks to the distribution of user-friendly software tools and paid-for services online, according to the International Risk Governance Center (IRGC) at EPFL.

“Precisely because it is moving so fast, we need to map where this could go – what sectors, groups and countries might be affected,” says its deputy director, Aengus Collins.

Although much of the problem with malign deepfakes involves their use in pornography, there is growing urgency to prepare for cases in which the same techniques are used to manipulate public opinion.

A fast-moving field

When Ebrahimi first began working with QI on detection software three years ago, deepfakes were not on the radar of most researchers. At the time, QI’s clients were concerned about doctored pictures of accidents used in fraudulent car and home insurance claims. By 2019, however, deepfakes had reached such a level of sophistication that the project team decided to dedicate much more time to the issue.

“I am surprised, as I didn’t think [the technology] would move so fast,” says Anthony Sahakian, QI chief executive.

Sahakian has seen firsthand just how realistic deepfake techniques have become, most recently in the swapping of faces on a passport photo that manages to leave all the document seals intact.

Read More: https://www.swissinfo.ch/eng/sci-tech/manipulated-media_how-swiss-scientists-are-trying-to-spot-deepfakes/45595336

Democracy Labs uses Datametrex AI $DM.ca #Nexalogy Tech to Study #Coronavirus Misinformation

Posted by AGORACOM-JC at 7:35 AM on Thursday, March 12th, 2020
  • Announce that Democracy Labs successfully used Nexalogy’s technology to monitor #covid19 and #coronavirus to identify misinformation campaigns and Fake News
  • Democracy Labs is a US based organization providing a hub for ongoing technology and creative innovation that serves progressive campaigns and organizations at the national, state, and local levels.

TORONTO, March 12, 2020 — Datametrex AI Limited (the “Company” or “Datametrex”) (TSXV: DM) (FSE: D4G) is pleased to announce that Democracy Labs successfully used Nexalogy’s technology to monitor #covid19 and #coronavirus to identify misinformation campaigns and Fake News. Democracy Labs is a US based organization providing a hub for ongoing technology and creative innovation that serves progressive campaigns and organizations at the national, state, and local levels. In addition to misinformation about Covid-19, DemLabs has also used Nexalogy tech to examine Islamophobia against U.S. Representative Rashida Tlaib.

Key takeaways:

  • 450,000 tweets analyzed from March 1st through 4th using hashtag #covid19
  • Russia Today suggested that the U.S. primaries be cancelled, a suggestion promoted by bots

Results of these campaigns can be found by clicking the attached links:

https://insights.nexalogy.com/democracy-labs-uses-nexalogy-tech-to-study-coronavirus-misinformation-b88ea13c4bed
https://nexalogy.com/insights/keep-an-eye-on-trolls-activity-and-report-harassment-to-twitter/

About Datametrex

Datametrex AI Limited is a technology focused company with exposure to Artificial Intelligence and Machine Learning through its wholly owned subsidiary, Nexalogy (www.nexalogy.com).

For further information, please contact:

Jeff Stevens
Email: [email protected]
Phone: 647-777-7974

Forward-Looking Statements

This news release contains “forward-looking information” within the meaning of applicable securities laws.  All statements contained herein that are not clearly historical in nature may constitute forward-looking information. In some cases, forward-looking information can be identified by words or phrases such as “may”, “will”, “expect”, “likely”, “should”, “would”, “plan”, “anticipate”, “intend”, “potential”, “proposed”, “estimate”, “believe” or the negative of these terms, or other similar words, expressions and grammatical variations thereof, or statements that certain events or conditions “may” or “will” happen, or by discussions of strategy.

Readers are cautioned to consider these and other factors, uncertainties and potential events carefully and not to put undue reliance on forward-looking information. The forward-looking information contained herein is made as of the date of this press release and is based on the beliefs, estimates, expectations and opinions of management on the date such forward-looking information is made. The Company undertakes no obligation to update or revise any forward-looking information, whether as a result of new information, estimates or opinions, future events or results or otherwise or to explain any material difference between subsequent actual events and such forward-looking information, except as required by applicable law.

Neither the TSX Venture Exchange nor its Regulation Services Provider (as that term is defined in the policies of the TSX Venture Exchange) accepts responsibility for the adequacy or accuracy of this release.

INTERVIEW: Datametrex $DM.ca – The Small Cap #A.I. Company Governments Use To Fight Fake News & Election Meddling

Posted by AGORACOM-JC at 5:01 PM on Wednesday, March 11th, 2020

Until now, investor participation in Artificial Intelligence has been the domain of mega companies and those funded by Silicon Valley. Small cap investors can finally consider participating in the great future of A.I. through Datametrex AI (TSXV: DM) (soon to be Nexalogy), which has achieved the following over the past few months:

  • Q3 Revenues Of $1.6 Million, An Increase Of 186%
  • 9 Month Revenues Of $2.56M, An Increase Of 37%
  • Repeat Contracts Of $1M And $600,000 With Korean Giant LOTTE
  • $954,000 Contract With Canadian Department Of Defence To Fight Social Media Election Meddling
  • Participation In NATO Research Task Group On Social Media Threat Detection

When a small cap Artificial Intelligence company is successfully deploying its technology with militaries and conglomerates, smart investors have to take a closer look. That look can begin with our latest interview of Datametrex CEO, Marshall Gunter, who talks to us about the use of the Company’s Artificial Intelligence to discover and eliminate US Presidential election meddling. The fake news isn’t just targeting candidates; it also targets wedge issues such as abortion cases now before the US Supreme Court and even the Coronavirus. Watch this interview on one of your favourite screens or hit play and listen to the audio as you drive.

The new election frontier: #Deepfakes are coming and often target women – SPONSOR: Datametrex AI Limited $DM.ca

Posted by AGORACOM-JC at 12:45 PM on Monday, March 9th, 2020

SPONSOR: Datametrex AI Limited (TSX-V: DM) A revenue generating small cap A.I. company that NATO and Canadian Defence are using to fight fake news & social media threats. The company announced three $1M contracts in Q3-2019. Click here for more info.

The new election frontier: Deepfakes are coming and often target women

By C.J. Moore

  • Deepfake technology has been called a powerful feat in artificial intelligence and machine learning at its best, and unsettling — even sinister — at its worst
  • “Deepfakes could be used to influence elections or incite civil unrest, or as a weapon of psychological warfare,” per the report
  • The report also notes that much of deepfake content online “is pornographic, and deepfake pornography disproportionately victimizes women.”

Deepfakes are media — usually videos, audio recordings or photographs — that have been doctored through artificial intelligence (AI) software to fabricate a person’s facial or body movements. They can spread easily when shared over social media platforms and other websites.

One well-known example is a video that circulated in August 2019, in which actor Bill Hader does an impersonation of Tom Cruise. The video is edited so Hader’s face morphs into a realistic image of Cruise, giving the impression that it’s the latter talking.

Beyond that, deepfake circulation could be damaging in 2020 and future election cycles. Along with celebrities, government leaders are the most common subjects of deepfakes, according to a February Science and Tech Spotlight from the U.S. Government Accountability Office (GAO).

“Deepfakes could be used to influence elections or incite civil unrest, or as a weapon of psychological warfare,” per the report. It also notes that much of deepfake content online “is pornographic, and deepfake pornography disproportionately victimizes women.”

In 2018, Reddit shut down r/deepfakes, a forum that distributed videos of celebrities whose faces had been superimposed on actors in real pornography. The computer-generated fake pornography was banned because it was “involuntary,” or created without consent. 

Much of the same technology used to make those videos could be used to exploit women running for office, according to a GAO official.

“We can’t speak to intent, but the result is definitely that the majority of these do target women,” said Karen Howard, a director on GAO’s Science, Technology Assessment and Analytics (STAA) team.

Read More: https://www.michiganadvance.com/2020/03/09/the-new-election-frontier-deepfakes-are-coming-and-often-target-women/

Trusting video in a fake news world – SPONSOR: Datametrex AI Limited $DM.ca

Posted by AGORACOM-JC at 5:10 PM on Thursday, March 5th, 2020

SPONSOR: Datametrex AI Limited (TSX-V: DM) A revenue generating small cap A.I. company that NATO and Canadian Defence are using to fight fake news & social media threats. The company announced three $1M contracts in Q3-2019. Click here for more info.

Trusting video in a fake news world

  • That fake news is a tricky problem to solve is probably not news to anyone at this point
  • However, the problem stands to get a lot trickier once the fakesters open their eyes to the potential of a mostly untapped weapon: trust in videos

By: Mansoor Ahmed-Rengers

That fake news is a tricky problem to solve is probably not news to anyone at this point. However, the problem stands to get a lot trickier once the fakesters open their eyes to the potential of a mostly untapped weapon: trust in videos.

Fake news so far has relied on social media bubbles and textual misinformation with the odd photoshopped picture thrown in here and there. This has meant that, by and large, curious individuals have been able to uncover fakes with some investigation.

This could soon change. You see, “pics or it didn’t happen” isn’t just a meme, it is the mental model by which people judge the veracity of a piece of information on the Internet. What happens when the fakesters are able to create forgeries that even a keen eye cannot distinguish? How do we then distinguish truth from fiction?

We are far closer to this future than many realise. In 2017, researchers created a tool that produced realistic-looking video clips of Barack Obama saying things he has never been recorded saying. Since then, a barrage of similar tools has become available; an equally worrying, if slightly tangential, trend is the rise of fake pornographic videos that superimpose images of celebrities onto adult videos.

These tools represent the latest weapons in the arsenal of fake news creators – ones far easier to use for the layman than those before. While the videos produced by these tools may not presently stand up to scrutiny by forensics experts, they are already good enough to fool a casual viewer and are only getting better. The end result is that creating a good-enough fake video is now a trivial matter.

There are, of course, more traditional ways of creating fake videos as well. The White House was caught using the oldest trick in the book while trying to justify the barring of a reporter from the briefing room: they sped up the video to make it look like the reporter was physically rough with a staff member.

Other traditional ways are misleadingly editing videos to leave out critical context (as in the Planned Parenthood controversy), or splicing video clips to map wrong answers to questions, etc. I expect that we will see an increase in these traditional fake videos before a further transition to the complete fabrications discussed above. Both represent a grave danger to the pursuit of truth.

Major platforms are acutely aware of the issues. Twitter has recently introduced a fact checking feature to label maliciously edited videos in its timeline. YouTube has put disclaimers about the nature of news organizations below their videos (for example, whether it is a government sponsored news organization or not). Facebook has certified fact checkers who may label viral stories as misleading.

However, these approaches rely on manual verification and by the time a story catches the attention of a fact checker, it has already been seen by millions. YouTube’s approach is particularly lacking since it doesn’t say anything about an individual video at all, only about the source of funding of a very small set of channels.

Now, forensically detecting forgeries in videos is a deeply researched field with work dating back decades. There are many artefacts that are left behind when someone edits a video: the compression looks weird, the shadows may jump in odd patterns, the shapes of objects might get distorted.
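One of those artefacts, locally inconsistent sensor noise, can be hunted for mechanically. The sketch below is an invented minimal example, not a production forensic tool: it builds a synthetic 32×32 frame, pastes in a “spliced” patch carrying a different noise level, and flags blocks whose noise variance deviates sharply from the frame’s median.

```python
import random

random.seed(1)
W = H = 32
BLOCK = 8

# synthetic "frame": uniform mid-grey plus mild sensor-like noise
frame = [[128 + random.gauss(0, 2) for _ in range(W)] for _ in range(H)]

# paste a "spliced" patch whose noise level differs, as re-compression
# or re-rendering of an inserted region tends to leave behind
for y in range(8, 16):
    for x in range(16, 24):
        frame[y][x] = 128 + random.gauss(0, 8)

def block_variance(bx, by):
    vals = [frame[y][x]
            for y in range(by * BLOCK, (by + 1) * BLOCK)
            for x in range(bx * BLOCK, (bx + 1) * BLOCK)]
    m = sum(vals) / len(vals)
    return sum((v - m) ** 2 for v in vals) / len(vals)

variances = {(bx, by): block_variance(bx, by)
             for by in range(H // BLOCK)
             for bx in range(W // BLOCK)}
median = sorted(variances.values())[len(variances) // 2]

# blocks whose noise variance is wildly out of line with the rest
suspicious = [pos for pos, v in variances.items() if v > 4 * median]
# suspicious -> [(2, 1)], the block containing the pasted patch
```

Real forensic suites combine many such cues (compression grids, shadow directions, shape distortions) rather than a single variance test, but the principle is the same: editing leaves statistical seams.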

Source: https://www.opendemocracy.net/en/digitaliberties/trusting-video-fake-news-world/

CLIENT FEATURE: Datametrex $DM.ca – An Artificial Intelligence #AI and Machine Learning Company, Clients Include: Canadian Government and Health Canada

Posted by AGORACOM-JC at 5:25 PM on Wednesday, March 4th, 2020

Artificial Intelligence and Machine Learning Company Focused on Social Media Discovery and Fake News Detection

Clients Include: Canadian Federal Government, DRDC, Health Canada, LOTTE

CTV News Cites Datametrex (DM:TSXV) For Proof That Foreign-Controlled Bot Networks Hit Canadian Election

Company Reported Record Quarter With $1,683,985 In Revenue

  • Reported (Q3-2019) revenues of $1,683,985 compared to $589,648, up by 186%
  • For nine months of operations, the company reported revenues of $2,559,068 compared to $1,872,944, up by 37%
  • Cash position improved significantly, $812,853 compared to $66,296 in the previous quarter

Recent Achievements:

  • Secured the second contract of a multi phase R&D program through the Department of National Defence’s Innovation for Defence Excellence and Security (IDEaS) program with a value of approximately $945,094.
  • Software licencing contract with GreenInsightz Limited for the use of Nexalogy’s proprietary Artificial Intelligence software platform, for a value of approximately $1 million in cash and shares
  • Secured another contract with a division of Lotte for approximately $1,000,000.
  • Participated in NATO Research Task Group in Paris, France.

The Technology:

NexaIntelligence

Social-media discovery and monitoring platform for those who need to extract actionable insights out of discussions to inform decision-making.

Current languages supported: English, French, Russian, and Korean (more coming soon).

The system collects and analyses data from Twitter, Facebook, Tumblr, blogs, web forums, online news sites, Google Alerts and RSS feeds. With it, you’ll be able to make qualitative analyses based on both quantitative and qualitative data so you can provide context for the numbers, not just spreadsheets.

When exploring Twitter data, users immediately have access to:

  • An interactive timeline showing peaks of activity
  • Most frequent publishers and most frequently mentioned accounts
  • Most common words and hashtags
  • A lexical map that automatically clusters conversations to show common patterns of interactions and key topics
  • A geolocation-based heat map
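As a rough illustration only (invented tweets, not Nexalogy’s actual pipeline), the “most common hashtags” view and the co-occurrence counts underlying a lexical-map style clustering can be sketched like this:

```python
import re
from collections import Counter
from itertools import combinations

tweets = [
    "Stay safe everyone #covid19 #coronavirus",
    "New case counts today #covid19",
    "Primaries should go ahead as planned #covid19 #election2020",
    "Watch out for automated accounts #election2020 #disinformation",
    "Bot networks amplifying panic #covid19 #disinformation",
]

def hashtags(text):
    # normalise to lowercase so #Covid19 and #covid19 count together
    return [t.lower() for t in re.findall(r"#\w+", text)]

# frequency view: most common hashtags across the corpus
counts = Counter(tag for t in tweets for tag in hashtags(t))

# co-occurrence view: which tags appear together in one tweet --
# the raw material a lexical map clusters into key topics
pairs = Counter()
for t in tweets:
    for a, b in combinations(sorted(set(hashtags(t))), 2):
        pairs[(a, b)] += 1

print(counts.most_common(1))   # [('#covid19', 4)]
```

A production system would add the timeline, publisher and geolocation dimensions listed above on top of the same kind of counting.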

FULL DISCLOSURE: Datametrex AI Limited is an advertising client of AGORA Internet Relations Corp.

States launch ‘trusted information’ efforts against fake news on social media – SPONSOR: Datametrex AI Limited $DM.ca

Posted by AGORACOM-JC at 11:48 AM on Wednesday, March 4th, 2020

SPONSOR: Datametrex AI Limited (TSX-V: DM) A revenue generating small cap A.I. company that NATO and Canadian Defence are using to fight fake news & social media threats. The company announced three $1M contracts in Q3-2019. Click here for more info.

States launch ‘trusted information’ efforts against fake news on social media

  • Wrong claims in Maine that Election Day is on different days for Republicans than for Democrats.
  • The misinformation on social media is contributing to a heightened alert ahead of Super Tuesday, when millions of Americans are expected to cast 2020 primary ballots.

By Brian Fung, CNN

(CNN) A Facebook account impersonating the Swain County board of elections in North Carolina. Unfounded rumors that Tarrant County, Texas, doesn’t have former Vice President Joe Biden on the ballot.

Wrong claims in Maine that Election Day is on different days for Republicans than for Democrats. The misinformation on social media is contributing to a heightened alert ahead of Super Tuesday, when millions of Americans are expected to cast 2020 primary ballots.

“Misinformation is the most likely source of trouble we’re going to experience this year,” Keith Ingram, elections director at the Texas Secretary of State’s office, told CNN. State officials say misinformation poses as big a threat to elections as cyber-attacks that could cripple voting infrastructure.

So to counter the bad information online, states are increasingly going on the offensive — trying to spread good information to inoculate the public. But while experts commend the effort, many have questions about its effectiveness — and some say states could be doing more.

Earlier this week, California’s secretary of state sent emails to the 6.6 million registered voters with email addresses on file, directing them to the state’s election education guide. North Carolina’s board of elections ran radio ads recently reminding voters that photo identification will not be necessary in the state on Super Tuesday, thanks to a recent court ruling. Ingram said Texas’s online portal for accurate election information, votetexas.gov, is being “pounded in people’s minds” through social media.

And across the country, officials are using the hashtag #trustedinfo2020 to tell Americans exactly where to find the bedrock truth for election information.

“Your source for #TrustedInfo2020 is ALWAYS your state and county election officials,” Oklahoma’s state election board tweeted last week — pointing voters to an internet portal for identifying polling places and requesting absentee ballots. The hashtag campaign is organized by the National Association of Secretaries of State (NASS).

Drowning out misinformation

By flooding the zone with constructive content, states are hoping to drown out negative or harmful material. It’s an idea linked to a growing body of research on online extremism, which has found that offering a contrasting view against hate speech can minimize its impact and lead to more engagement for the positive messages on social media.  

“The #trustedinfo2020 campaign is really a sort of reminder to people that there are resources that they can trust if they hear something or if they have some question about the news,” said Maine Secretary of State Matthew Dunlap in an interview with CNN.  

Meanwhile, in California, Secretary of State Alex Padilla has taken out ads on social media to promote the visibility of accurate information, according to Sam Mahood, an agency spokesman. In some cases, Mahood said, posts from the secretary’s official social media accounts correcting online misinformation were picked up by news outlets who helped further suppress the spread of false claims.  

Social media platforms have also dramatically improved their relationships with states compared to 2016 and 2018, election officials said. Whereas some states once lacked ways to contact Facebook or Twitter in earlier cycles, that’s changed, said Ingram.

“They’ve all made themselves accessible,” he said. “They all have folks who reach out to us, and we have their [contact] information.”

The same goes for the federal government.

The Department of Homeland Security has established real-time communications channels for state and local officials to share reports of suspicious activity. Those portals are mostly focused on cybersecurity threats. But the US government will “continue to plan for the worst” as it anticipates Russia continuing its misinformation efforts this year, acting Homeland Security secretary Chad Wolf told CNN last week in North Carolina.  

Wolf also called on voters to make sure they are “getting their information straight from the source.”

States reaching out to social media

As recently as last week, Facebook removed a misleading page that falsely told North Carolina voters they could fill out one bubble on a general-election ballot in order to vote for a single party across all eligible races, said Patrick Gannon, a spokesman for the state board of elections.

The page risked confusing North Carolinians and damaging trust in the democratic process, he added, but Facebook removed it at the state’s request.

Still, playing Whack-a-Mole against individual cases of misinformation is no substitute for providing credible information, according to state officials.

Experts say awareness campaigns like #trustedinfo2020 are critical to improving public trust in the democratic process. But there’s no single solution for a problem as abstract and multi-faceted as online misinformation, said Matt Sheehan, managing director of the Center for Public Interest Communications at the University of Florida.

“I wish there was a fix as simple as a hashtag, but it runs counter to how we’re wired as humans,” he said. “Our personalities and worldviews color the information we find credible, or seek out as consumers.”

The dedication of those trying to mislead voters, as well as the natural ebb and flow of ordinary misinformation, makes it hard for officials to compete, said Rachel Goodman, an attorney at the civil society nonprofit Protect Democracy.

“The unfortunate reality is, because there’s so many resources on the misinformation side,” she said, “it’s hard to see until we’re really in the crucible how it really measures up.”

By some estimates, the #trustedinfo2020 campaign doesn’t appear to have spread very far. One researcher who analyzed the hashtag told CNN that since late last year, it has been mentioned in about 10,000 tweets, mostly in posts created by election officials themselves. NASS declined to comment.

“Ten thousand mentions since mid-November is a relatively low volume,” said Ben Nimmo, a nonresident senior fellow at the Atlantic Council’s Digital Forensic Research Lab. “It shows there’s been some pickup, but it’s not a viral phenomenon yet.”

Source: https://edition.cnn.com/2020/03/02/politics/state-efforts-against-social-media-misinformation/index.html

Experts Talk Deepfake Technology at NYU Conference – SPONSOR: Datametrex AI Limited $DM.ca

Posted by AGORACOM-JC at 5:00 PM on Tuesday, March 3rd, 2020

SPONSOR: Datametrex AI Limited (TSX-V: DM) A revenue generating small cap A.I. company that NATO and Canadian Defence are using to fight fake news & social media threats. The company announced three $1M contracts in Q3-2019. Click here for more info.

Experts Talk Deepfake Technology at NYU Conference

  • Deepfakes are fabricated videos made to appear real using artificial intelligence
  • In some cases, the technology realistically imposes a face and voice over those of another individual

Andrew Califf, Contributing Writer

The Greenberg Lounge in Vanderbilt Hall was packed full by attendees listening to keynote speaker Kathryn Harrison from the DeepTrust Alliance. The NYU Journal of Legislation and Public Policy as well as the Center for Cybersecurity hosted the conference at NYU Law about the problem of deepfakes and the law. (Staff Photo by Alexandra Chan)

Laughter rippled through NYU Law School’s Greenberg Lounge Monday morning after the founder and CEO of DeepTrust Alliance, a coalition to fight digital disinformation — Kathryn Harrison — played a video of actor Jordan Peele using deepfake technology to imitate President Obama.

Deepfakes are fabricated videos made to appear real using artificial intelligence. In some cases, the technology realistically imposes a face and voice over those of another individual.

The technology poses implications such as harassment, the spread of disinformation, manipulation of the stock market, theft and fear-mongering, Harrison said.

Harrison and other professionals spoke at Vanderbilt Hall this Monday at an NYU Center for Cybersecurity and the NYU Journal of Legislation & Public Policy conference to spread awareness about this deceptive technology, and to look at technological, legal and practical ways to combat the deception.

The professionals consisted of journalism, legal and cybersecurity experts who combat troubles posed by the rapidly developing technology in different ways.

The tone of the room shifted to silence as Harrison continued her keynote speech to discuss how the technology was used to harass Rana Ayyub — an Indian journalist who was critical of Prime Minister Narendra Modi — by putting her face into pornographic material.

“Imagine if this was your teenage daughter, who said the wrong thing to the wrong person at school,” Harrison said.

Distinguished Fellow at the NYU Center for Cybersecurity Judi Germano said the solution for combatting deepfakes is two-fold.

“There is a lot of work to be done to confront the deepfakes problem,” Germano told WSN. “In addition to technological solutions, we need policy solutions.”

Germano moderated the event’s first panel, which specifically focused on technology, fake news and detection of deepfakes. She also discussed the role deepfakes play in the spread of disinformation.

Despite how innovative deepfake technology is, experts such as Corin Faife — a journalist specializing in AI and disinformation — consider them to be a new form of an old problem.

“One of the important things for deepfakes is to put it into context of this broader problem of disinformation that we have, and to understand that that is an ecosystemic problem,” Faife explained to WSN in an interview. “There are multiple different contributing factors, and [the technological solutions] are no good if people won’t accept that a certain video is false or manipulated because of their preexisting beliefs.”

This line of thought is why some are hesitant to push through legislation regarding deepfake technology. The director of the American Civil Liberties Union’s Speech, Privacy and Technology Project, Ben Wizner, took this position during the second panel, which addressed how legislation should evolve to deal with deepfakes.

Since deepfakes are a means to commit illegal acts, Rob Volkert, VP of Threat Investigations at Nisos, understands his fellow panelist’s mindset. Volkert said he also struggles with pinpointing who to accuse.

“The responsibility is on the user, not on the platform,” Volkert told WSN in an interview after explaining how the market for deepfake software does not need to hide in the dark web.

Deepfake technology is an ominous cloud approaching the presidential election and that is why it was an appropriate topic for this event, Journal of Legislation and Public Policy board member Lisa Femia said. 

Facebook’s Cybersecurity Policy Lead Saleela Khanum, who spoke during the conference, raised a point about public trust during elections.

“There should not be a level of distrust that we therefore trust nothing,” Khanum said to the audience.

Email Andrew Califf at [email protected].

Source: https://nyunews.com/news/2020/02/03/nyu-deepfakes-conference

Fake News In 2020 Election Puts Social Media Companies Under Siege – SPONSOR: Datametrex AI Limited $DM.ca

Posted by AGORACOM-JC at 4:00 PM on Monday, March 2nd, 2020

SPONSOR: Datametrex AI Limited (TSX-V: DM) A revenue generating small cap A.I. company that NATO and Canadian Defence are using to fight fake news & social media threats. The company announced three $1M contracts in Q3-2019. Click here for more info.

Fake News In 2020 Election Puts Social Media Companies Under Siege

  • The social media giant recently unearthed hundreds of fake accounts that originated not only in Russia but Iran and Vietnam as well
  • Facebook says their purpose was clear: Sow confusion in the U.S. and ultimately disrupt the integrity of this year’s U.S. presidential contest
  • Facebook purged the fake accounts in early February, and says it has heavily beefed up its safety and security team

By: BRIAN DEAGON

The struggle to keep the 2020 election free of fake news on social media already is proving to be an uphill battle. Just ask the watchdogs at Facebook (FB) who are battling more disinformation than ever, courtesy of “deepfakes” and other new weapons of deception.

The social media giant recently unearthed hundreds of fake accounts that originated not only in Russia but Iran and Vietnam as well. Facebook says their purpose was clear: Sow confusion in the U.S. and ultimately disrupt the integrity of this year’s U.S. presidential contest. Facebook purged the fake accounts in early February, and says it has heavily beefed up its safety and security team.

Halting the flood of Facebook fake news and misinformation on other platforms is critical to social media companies. Failure on their part runs the risk of alienating loyal users and angering lawmakers, who could slap them with new regulations. And the scrutiny is sure to grow after reports this week said U.S. intelligence officials have told Congress that Russia already is meddling in this year’s elections to boost President Donald Trump’s reelection chances.

Clearly, U.S. election misinformation is a blossoming enterprise. In 2016 Russia established numerous fake accounts on Facebook, Twitter (TWTR) and the YouTube unit of Alphabet (GOOGL). In 2020 these efforts continue to expand both inside and outside Russia — and across all walks of social media. America’s enemies have put the nation’s electoral process in the crosshairs with fake news stories on social media and deepfakes, or doctored videos.

“What started as a Russian effort to undermine elections and cause chaos and basically reduce faith in our democratic institutions is now becoming a free-for-all,” said Lisa Kaplan, founder of Alethea Group, a consulting group that helps businesses, politicians and candidates protect themselves against disinformation.

Fake News On Social Media In The 2020 Election

Election meddling goes back decades, but the internet has greatly amplified the disruption. Anyone with an internet connection has a megaphone to the world. And that means governments in Russia, China, Iran and others who are less than friendly to the U.S. are actively using social media to influence the nation and its electorate, according to intelligence agencies and studies.

“Lying is not a new concept but … knowing that a majority of Americans get their news online through social media, it’s easy to misinform and manipulate people,” Kaplan said. “It makes it much easier for bad actors to launch these large-scale persuasion campaigns.”

Facebook fake news is a huge problem for the company. The same goes for Twitter and YouTube. Senior executives of these social media companies have spent considerable time over the past few years testifying at congressional inquiries and investigations.

At the same time, they’re struggling to stop a steady flow of fake news and disinformation planted on their platforms. Not only are the disinformation campaigns coming from overseas but from domestic groups as well.

FBI Director Christopher Wray says Russia continues to conduct an “information warfare” operation against the U.S. ahead of the 2020 election. Wray on Feb. 5 told the House Judiciary Committee that Moscow is using a covert social-media campaign.

“It is that kind of effort that is still very much ongoing,” Wray told the panel. “It’s not just an election cycle; it’s an effort to influence our republic in that regard.”

Anger Over Fake News On Social Media

The efforts by Russia and others have ushered in a new era of scrutiny for tech giants. U.S. Sen. Elizabeth Warren, D-Mass., one of the Democratic presidential hopefuls, has taken aim at Facebook fake news and company Chief Executive Mark Zuckerberg. She chides Facebook for spreading disinformation against her and other candidates.

In late January, Warren pledged that her campaign would not share fake news or promote fraudulent accounts on social media. It’s part of her plan to battle disinformation and hold Facebook, Google and Twitter responsible for its spread.

“Anyone who seeks to challenge and defeat Donald Trump in the 2020 election must be fully prepared to take on the full array of disinformation that foreign actors and people in and around the Trump campaign will use to divide Democrats, suppress Democratic votes, and erode the standing of the Democratic nominee,” Warren said in a written statement on her campaign website.

She added: “And anyone who seeks to be the Democratic nominee must condemn the use of disinformation and pledge not to knowingly use it to benefit their own candidacy or damage others.”

More fuel to that fire came Thursday. Reports that Russia already is actively meddling in the 2020 race drew concerns from lawmakers. The news also angered Trump, who expressed fear Democrats would use the information against him in the campaign. Trump dismissed Joseph Maguire, former acting director of national intelligence, for telling the House Intelligence Committee of the interference.

Interference In 2016 Election

But election meddling woes began in 2015 with a well-funded Russian web brigade, called the Internet Research Agency. The group reportedly had 400 employees and was based in St. Petersburg, Russia. It used Facebook and Twitter to disseminate an onslaught of fake, politically charged content in an attempt to influence the 2016 presidential election.

The widespread misuse of social media came to light in early 2018 during the investigation of Cambridge Analytica, a data mining and analysis firm used by President Trump’s 2016 campaign. Through trickery and deception, Cambridge Analytica accessed personal information on 87 million Facebook users without their knowledge and used that data to target specific readers with fake stories, divisive memes and other content.

Social media executives were later called before Congress to discuss what they intended to do about disinformation for 2020. Congressional probes revealed the ease of manipulating their platforms.

Facebook, Twitter and Google have responded with a slew of election integrity projects such as new restrictions on postings. They also increasingly try to root out what they call “inauthentic behavior”: users assuming a false identity.

In response to written questions from IBD, Facebook says the size of its teams working on safety and security matters is now 35,000, triple its 2017 level. It also created rapid response centers to monitor suspicious activity during the 2020 election.

“Since 2017, we’ve made large investments in teams and technologies to better secure our elections and are deploying them where they will have the greatest impact,” Facebook spokeswoman Katie Derkits said in a written statement.

Twitter Bans Political Ads In 2020 Election

In late October, Twitter Chief Executive Jack Dorsey banned all political advertising from his network. Google quickly followed suit, putting limits on political ads across some of its properties, including YouTube.

“As caucuses and primaries for the 2020 presidential election get underway, we’ll build on our efforts to protect the public conversation and enforce our policies against platform manipulation,” Carlos Monje, Twitter’s director of public policy and philanthropy, told Investor’s Business Daily in written remarks. “We take the learnings from every recent election around the world and use them to improve our election integrity work.”

In September, Twitter suspended more than 10,000 accounts across six countries. The company said the accounts actively spread disinformation and encouraged unrest in politically sensitive regions.

YouTube and Google plan to restrict how precisely political advertisers can target an audience on their services.

Playing Whack-A-Mole With Facebook Fake News

Will these efforts make a difference in the 2020 election?

Research suggests social media firms will play a game of whack-a-mole. They’ve deleted thousands of inauthentic accounts with millions of followers. But that hasn’t stopped people from finding new ways to get back online and send out fake news.

In the most recent takedown of accounts by Facebook, Russia was the largest target. Facebook removed 118 accounts, groups and pages that targeted Ukraine citizens. Other Russia-linked pages focused on the country’s involvement in Syria and ethnic tensions in Crimea.

“Although the people behind this network attempted to conceal their identities and coordination, our investigation found links to Russian military intelligence services,” Facebook said in a blog post announcing the slate of removals.

Facebook’s head of cybersecurity policy, Nathaniel Gleicher, said the social media company also removed 11 accounts distributing fake news from Iran. The accounts focused mostly on U.S.-Iran relations, Christianity and the upcoming election.

“We are making progress rooting out this abuse, but as we’ve said before, it’s an ongoing challenge,” Gleicher wrote.

Emerging Threat Of Deepfakes In 2020 Election

In December, Facebook and Twitter disabled a global network of 900 pages, groups and accounts sending pro-Trump messages. The fake news accounts managed to avoid detection as being inauthentic. And they used photos generated with the aid of artificial intelligence. The campaign was based in the U.S. and Vietnam.

“There’s no question that social media has really changed the way that we talk about politics,” said Deen Freelon, a media professor at the University of North Carolina at Chapel Hill. “The No. 1 example is our president who, whether you like him or not, uses social media in ways that are unprecedented for a president and I would say any politician.”

The other fake news threat that social media companies face is from deepfakes. The level of realism in deepfakes has increased vastly from just a year ago, analysts say.

Using artificial intelligence technology, deepfake purveyors replace a person in an existing image or video with someone else’s likeness. Users also employ artificial intelligence tools in deepfakes to misrepresent an event that occurred. Deepfakes can even manufacture an event that never took place.

“Deepfakes are pretty scary to me,” said Freelon. “But I also think the true impact of deepfakes won’t become apparent until the technology gets developed a bit more.”

Cheapfakes: A Simpler Kind Of Fake News

Simpler versions of deepfakes are known as “cheapfakes”: videos altered with traditional editing tools or low-end technology.

An example of a cheapfake that went viral was an altered video of House Speaker Nancy Pelosi. The edited video slowed down her speech to make her seem inebriated. That prompted right-wing cable news pundits to question Pelosi’s mental health and fitness to serve in office.
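The Pelosi clip shows how little technology a cheapfake requires. As a toy illustration (hypothetical code, not the internals of any real editing tool), slowing a clip down is simply a matter of re-timing frames that already exist; no AI is involved:

```python
# Toy sketch of a "cheapfake" slowdown: playing a clip at 75% speed just
# stretches every frame timestamp by 1/0.75, so existing frames arrive
# later and recorded speech sounds slurred.

def slow_down(timestamps_s, speed=0.75):
    """Re-time frame timestamps so the clip plays at `speed` x real time."""
    return [t / speed for t in timestamps_s]

original = [0.0, 0.04, 0.08, 0.12]          # frame times at 25 fps
slowed = slow_down(original, speed=0.75)    # each frame arrives ~1.33x later
```

Because the manipulation is pure re-timing rather than synthesis, detection approaches built for AI-generated video do not necessarily flag it, which is part of why platforms struggled to classify the Pelosi clip.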

YouTube removed the video. Facebook did not. Only videos generated by artificial intelligence to depict people saying fictional things would be removed, Facebook said. It eventually placed a warning label on the Pelosi video.

In January, Facebook took steps to ban many types of misleading videos from its site. It was part of a push against deepfake content and online misinformation campaigns.

Facebook said in a blog post that these fake news videos distort reality and present a “significant challenge” for the technology industry. The rules will not apply to satire or parody.

In February, Twitter changed its video policies, saying it would more aggressively scrutinize fake or altered photos and videos. Starting in March, Twitter will add labels or take down tweets carrying manipulated images and videos, it said in a blog post.

Also this month, YouTube said that it planned to remove misleading election-related content that can cause “serious risk of egregious harm.” It also laid out how it will handle such political videos and viral falsehoods.

Spreading Fake News On Social Media

But are the hurdles too high to surmount? A Massachusetts Institute of Technology study last year concluded fake news is more likely to go viral than other news. And it showed that a false story reached 1,500 people six times quicker than a true story.

As to why falsehoods perform so well, the MIT team settled on the hypothesis that fake news is more “novel” than real news. As a result, it evokes more emotion than the average tweet or post.

Ordinary social media users play a role in spreading fake news as well. The determining factor for whether people spread disinformation is the number of times they see it.

People who repeatedly encounter a fake news item may feel it is less unethical to share it on social media, regardless of whether they believe it is accurate, according to a study published in the journal Psychological Science.

“Even when they know it’s false, if they repeatedly encounter it, they feel it’s less unethical to share and they’re less likely to censor,” said Daniel Effron, professor of Organizational Behavior at the London Business School and an author of the study. “It suggests that social media companies need a different approach to combating the spread of disinformation.”

Letting Consumers Decide On Fake News

The findings carry heavy implications for industry executives hoping to stop 2020 election fake news on social media.

“We suggest that efforts to fight disinformation should consider how people judge the morality of spreading it, not just whether they believe it,” Effron said.

After the Cambridge Analytica scandal, Facebook promised to do better, and rolled out a number of reforms. But in October, Zuckerberg delivered a strongly worded address at Georgetown University, defending unfettered speech, including paid advertising.

Zuckerberg says he wants to avoid policing what politicians can and cannot say to constituents. Facebook should allow its social media users to make those decisions for themselves, he contends.

Facebook officials repeatedly warn against significant changes to its rules for political or issue ads. Such changes could make it hard for less well-funded groups to raise money for the 2020 election, they say.

“We face increasingly sophisticated attacks from nation states like Russia, Iran and China,” Zuckerberg said. “But, I’m confident that we’re more prepared now because we’ve played a role in defending against election interference in more than 200 elections around the world since 2016.”

Source: https://www.investors.com/news/technology/fake-news-2020-election-puts-social-media-companies-under-siege/

Latest #AI could one day take over as the biggest editor of Wikipedia – SPONSOR: Datametrex AI Limited $DM.ca

Posted by AGORACOM-JC at 10:54 AM on Friday, February 21st, 2020

SPONSOR: Datametrex AI Limited (TSX-V: DM) A revenue generating small cap A.I. company that NATO and Canadian Defence are using to fight fake news & social media threats. The company announced three $1M contracts in Q3-2019. Click here for more info.

Latest AI could one day take over as the biggest editor of Wikipedia

  • “There are so many updates constantly needed to Wikipedia articles. It would be beneficial to automatically modify exact portions of the articles, with little to no human intervention,” said Darsh Shah, a PhD student in MIT’s Computer Science and AI Laboratory, who is one of the lead authors.

by Colm Gorey

Researchers have developed an AI that can automatically rewrite outdated sentences on Wikipedia, drastically reducing the need for human editing.

Despite thousands of volunteer editors dedicating many hours towards keeping Wikipedia up to date, editing an estimated 52m articles seems like an almost impossible task. However, researchers from MIT are set to unveil a new AI that could be used to automatically update any inaccuracies on the online encyclopaedia, thereby giving human editors a robotic helping hand.

In a paper presented at the AAAI Conference on AI, the researchers described a text-generating system that pinpoints and replaces specific information in relevant Wikipedia sentences, while keeping the language similar to how humans write and edit.

The idea is that humans could type an unstructured sentence with the updated information into an interface, without the need to worry about grammar. The AI then searches Wikipedia for the right pages and outdated information, which it then updates in a human-like style.

The researchers are hopeful that, down the line, it could be possible to build an AI that can do the entire process automatically. This would mean it could scour the web for updated news on a topic and replace the text.
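As a rough sketch of the idea (hypothetical toy code, not MIT’s system, which uses a trained neural model to rewrite sentences fluently), the pipeline can be imagined as “find the article sentence most similar to the new fact, then swap it out”:

```python
# Toy illustration of fact-updating: locate the best-matching sentence
# with difflib's similarity ratio and replace it with the new fact,
# leaving unrelated sentences untouched.
import difflib

def update_article(sentences, new_fact):
    """Replace the sentence most similar to `new_fact` with `new_fact`."""
    best = max(
        sentences,
        key=lambda s: difflib.SequenceMatcher(None, s, new_fact).ratio(),
    )
    return [new_fact if s == best else s for s in sentences]

article = [
    "The city has a population of 120,000.",
    "It is known for its watchmaking industry.",
]
updated = update_article(article, "The city has a population of 134,000.")
```

A real system must also rewrite the surrounding wording grammatically rather than splice in the raw input sentence, which is the hard part the MIT model addresses.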

Taking on ‘fake news’

“There are so many updates constantly needed to Wikipedia articles. It would be beneficial to automatically modify exact portions of the articles, with little to no human intervention,” said Darsh Shah, a PhD student in MIT’s Computer Science and AI Laboratory, who is one of the lead authors.

“Instead of hundreds of people working on modifying each Wikipedia article, then you’ll only need a few, because the model is helping or doing it automatically. That offers dramatic improvements in efficiency.”

Looking beyond Wikipedia, the study also put forward the AI’s potential benefits as a tool to eliminate bias when training detectors of so-called ‘fake news’. Some of these detectors train on datasets of agree-disagree sentence pairs to verify a claim by matching it to given evidence.

“During training, models use some language of the human-written claims as ‘give-away’ phrases to mark them as false, without relying much on the corresponding evidence sentence,” Shah said. “This reduces the model’s accuracy when evaluating real-world examples, as it does not perform fact-checking.”

By using their AI to augment an agree-disagree training dataset, the researchers were able to reduce the error rate of a popular fake-news detector by 13pc.
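The “give-away phrase” failure mode Shah describes can be sketched in a few lines (hypothetical toy code and phrases, not the paper’s model): a naive detector that classifies on claim wording alone can score well on a biased training set without ever reading the evidence:

```python
# Toy illustration of give-away-phrase bias: this "detector" flags claims
# purely from hedging words in the claim text and ignores the evidence
# entirely, so it performs no actual fact-checking.

GIVEAWAYS = {"reportedly", "allegedly", "sources say"}

def naive_verdict(claim, evidence):
    """Return 'false' if the claim contains a give-away phrase; note that
    the `evidence` argument is never consulted."""
    return "false" if any(g in claim.lower() for g in GIVEAWAYS) else "true"

# Failure mode: a true claim with a give-away word is rejected, while a
# false claim phrased plainly passes, because the evidence is never read.
v1 = naive_verdict("The senator reportedly voted yes.",
                   "Roll call records show the senator voted yes.")
v2 = naive_verdict("The senator voted no.",
                   "Roll call records show the senator voted yes.")
```

Augmenting the training data with rewritten claim variants, as the MIT system can generate, breaks the correlation between phrasing and label, forcing a model to actually consult the evidence.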

Source: https://www.siliconrepublic.com/machines/wikipedia-editors-ai-fake-news