Posted by AGORACOM-JC
at 12:45 PM on Monday, March 9th, 2020
SPONSOR: Datametrex AI Limited
(TSX-V: DM) A revenue generating small cap A.I. company that NATO and
Canadian Defence are using to fight fake news & social media
threats. The company announced three $1M contracts in Q3-2019. Click here for more info.
The new election frontier: Deepfakes are coming and often target women
Deepfake technology has been called a powerful feat in artificial intelligence and machine learning at its best, and unsettling — even sinister — at its worst
“Deepfakes could be used to influence elections or incite civil unrest, or as a weapon of psychological warfare,” per the report
Also notes that much of deepfake content online “is pornographic, and deepfake pornography disproportionately victimizes women.”
Deepfakes are media — usually videos,
audio recordings or photographs — that have been doctored through
artificial intelligence (AI) software to fabricate a person’s facial or
body movements. They can easily spread by sharing over social media platforms and other websites.
One well-known example is a video that circulated
in August 2019, in which actor Bill Hader does an impersonation of Tom
Cruise. The video is edited so Hader’s face morphs into a realistic
image of Cruise, giving the impression that it’s the latter talking.
Beyond that, deepfake circulation
could be damaging in 2020 and future election cycles. Along with
celebrities, government leaders are the most common subjects of
deepfakes, according to a February Science and Tech Spotlight from the U.S. Government Accountability Office (GAO).
“Deepfakes could be used to influence
elections or incite civil unrest, or as a weapon of psychological
warfare,” per the report. It also notes that much of deepfake content
online “is pornographic, and deepfake pornography disproportionately
victimizes women.”
In 2018, Reddit shut down
r/deepfakes, a forum that distributed videos of celebrities whose faces
had been superimposed on actors in real pornography. The
computer-generated fake pornography was banned because it was
“involuntary,” or created without consent.
Much of the same technology used to
make those videos could be used to exploit women running for office,
according to a GAO official.
“We can’t speak to intent, but the
result is definitely that the majority of these do target women,” said
Karen Howard, a director on GAO’s Science, Technology Assessment and
Analytics (STAA) team.
Posted by AGORACOM-JC
at 5:10 PM on Thursday, March 5th, 2020
Trusting video in a fake news world
That fake news is a tricky problem to solve is probably not news to anyone at this point
However, the problem stands to get a lot trickier once the fakesters open their eyes to the potential of a mostly untapped weapon: trust in videos
That fake news is a tricky problem to solve is probably not news to
anyone at this point. However, the problem stands to get a lot trickier
once the fakesters open their eyes to the potential of a mostly untapped
weapon: trust in videos.
Fake news so far has relied on social media bubbles and textual
misinformation with the odd photoshopped picture thrown in here and
there. This has meant that, by and large, curious individuals have been
able to uncover fakes with some investigation.
This could soon change. You see, “pics or it didn’t happen” isn’t just a meme,
it is the mental model by which people judge the veracity of a piece of
information on the Internet. What happens when the fakesters are able
to create forgeries that even a keen eye cannot distinguish? How do we
separate truth from fiction?
We are far closer to this future than many realise. In 2017, researchers created a tool that produced realistic looking video clips of Barack Obama saying things he has never been recorded saying.
Since then, a barrage of similar tools has become available; an
equally worrying, if slightly tangential, trend is the rise of fake
pornographic videos that superimpose images of celebrities on to adult
videos.
These tools represent the latest weapons in the arsenal of fake news
creators – ones far easier to use for the layman than those before.
While the videos produced by these tools may not presently stand up to
scrutiny by forensics experts, they are already good enough to fool a
casual viewer and are only getting better. The end result is that
creating a good-enough fake video is now a trivial matter.
There are, of course, more traditional ways of creating fake videos
as well. The White House was caught using the oldest trick in the book
while trying to justify the barring of a reporter from the briefing
room: they sped up the video to make it look like the reporter was physically rough with a staff member.
Other traditional ways are misleadingly editing videos to leave out critical context (as in the Planned Parenthood controversy),
or splicing video clips to map wrong answers to questions, etc. I
expect that we will see an increase in these traditional fake videos
before a further transition to the complete fabrications discussed
above. Both represent a grave danger to the pursuit of truth.
Platforms have started to push back with fact-checking programmes and funding labels on certain channels. However, these approaches rely on manual verification, and by the time
a story catches the attention of a fact checker, it has already been
seen by millions. YouTube’s approach is particularly lacking since it
doesn’t say anything about an individual video at all, only about the
source of funding of a very small set of channels.
Now, forensically detecting forgeries in videos is a deeply
researched field with work dating back decades. There are many artefacts
that are left behind when someone edits a video: the compression looks
weird, the shadows may jump in odd patterns, the shapes of objects might
get distorted.
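Such artefacts can also be hunted for programmatically. As a toy illustration (not any specific forensic tool), the sketch below flags frames that are byte-for-byte identical at different positions in a clip, a crude signal of frozen or looped footage and one of the simpler editing artefacts to detect:

```python
import hashlib
from collections import defaultdict

def find_cloned_frames(frames):
    """Return groups of frame indices whose pixel data is byte-identical.

    Exact duplicates at separate positions are a classic splicing
    artefact: an editor freezes or loops footage to cover a removal.
    """
    seen = defaultdict(list)
    for i, frame in enumerate(frames):
        digest = hashlib.sha256(frame).hexdigest()
        seen[digest].append(i)
    # Keep only hashes that occur more than once
    return [idxs for idxs in seen.values() if len(idxs) > 1]

# Toy "video": each frame is just raw bytes; frame 1 is cloned at index 4.
video = [b"frame-a", b"frame-b", b"frame-c", b"frame-d", b"frame-b"]
print(find_cloned_frames(video))  # [[1, 4]]
```

Real detectors work on decoded pixels with perceptual hashes and compression statistics rather than exact byte matches, but the shape of the search is the same.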
Reported (Q3-2019) revenues of $1,683,985 compared to $589,648, up by 186%
For the nine months of operations, the company reported revenues of $2,559,068 compared to $1,872,944, up by 37%
Cash position improved significantly, $812,853 compared to $66,296 in the previous quarter
Recent Achievements:
Secured the second contract of a multi-phase
R&D program through the Department of National Defence’s
Innovation for Defence Excellence and Security (IDEaS) program with a
value of approximately $945,094.
Software licensing contract with
GreenInsightz Limited for the use of its proprietary Nexalogy
Artificial Intelligence software platform for a value of approximately
$1 million in cash and shares
Secured another contract with a division of Lotte for approximately $1,000,000.
Participated in NATO Research Task Group in Paris, France.
The Technology:
NexaIntelligence
Social-media discovery and monitoring platform for those who need to extract actionable insights out of discussions to inform decision-making.
Current languages supported: English, French, Russian, and Korean (more coming soon).
The system collects and analyses data from Twitter, Facebook, Tumblr, blogs, web forums, online news sites, Google Alerts and RSS feeds. With it, you’ll be able to make qualitative analyses based on both quantitative and qualitative data so you can provide context for the numbers, not just spreadsheets.
When exploring Twitter data, users immediately have access to:
An interactive timeline showing peaks of activity
Most frequent publishers and most frequently mentioned accounts
Most common words and hashtags
A lexical map that automatically clusters conversations to show common patterns of interactions and key topics
A geolocation-based heat map
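Nexalogy’s actual clustering algorithm is proprietary, but as a rough, hypothetical illustration of what a lexical map is built from, the sketch below counts which word pairs co-occur most often across a handful of toy posts; the heaviest pairs are the raw material for drawing edges between related terms:

```python
from collections import Counter
from itertools import combinations

STOPWORDS = {"the", "a", "is", "on", "for", "to", "of", "and"}

def top_cooccurrences(posts, n=3):
    """Count how often word pairs appear together in the same post.

    A lexical map draws an edge between words that frequently co-occur;
    this returns the n heaviest edges as ((word1, word2), count) tuples.
    """
    pairs = Counter()
    for post in posts:
        # Deduplicate words per post and sort so each pair has one canonical order
        words = sorted({w for w in post.lower().split() if w not in STOPWORDS})
        pairs.update(combinations(words, 2))
    return pairs.most_common(n)

posts = [
    "election misinformation on social media",
    "social media fuels election rumors",
    "misinformation spreads on social media",
]
for pair, count in top_cooccurrences(posts):
    print(pair, count)
```

A production system would add stemming, language-specific stopword lists, and a graph-layout step, but the co-occurrence counting is where clustering of conversations begins.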
FULL DISCLOSURE: Datametrex AI Limited is an advertising client of AGORA Internet Relations Corp.
Posted by AGORACOM-JC
at 11:48 AM on Wednesday, March 4th, 2020
States launch ‘trusted information’ efforts against fake news on social media
Wrong claims in Maine that Election Day is on different days for Republicans than for Democrats.
The misinformation on social media is contributing to a heightened alert ahead of Super Tuesday, when millions of Americans are expected to cast 2020 primary ballots.
(CNN)A Facebook account impersonating the Swain County board of elections in North Carolina. Unfounded rumors that Tarrant County, Texas, doesn’t have former Vice President Joe Biden on the ballot.
Wrong claims in Maine that Election Day is on different days for Republicans than for Democrats. The misinformation on social media is contributing to a heightened alert ahead of Super Tuesday, when millions of Americans are expected to cast 2020 primary ballots.
“Misinformation is the most likely source of trouble we’re going to experience this year,” Keith Ingram, elections director at the Texas Secretary of State’s office, told CNN. State officials say misinformation poses as big a threat to elections as cyber-attacks that could cripple voting infrastructure.
So to counter the bad information online, states are increasingly going on the offensive — trying to spread good information to inoculate the public. But while experts commend the effort, many have questions about its effectiveness — and some say states could be doing more. Earlier this week, California’s secretary of state sent emails to the 6.6 million registered voters with email addresses on file, directing them to the state’s election education guide. North Carolina’s board of elections ran radio ads recently reminding voters that photo identification will not be necessary in the state on Super Tuesday, thanks to a recent court ruling. Ingram said Texas’s online portal for accurate election information, votetexas.gov, is being “pounded in people’s minds” through social media.
And across the country, officials are using the hashtag #trustedinfo2020 to tell Americans exactly where to find the bedrock truth for election information. “Your source for #TrustedInfo2020 is ALWAYS your state and county election officials,” Oklahoma’s state election board tweeted last week — pointing voters to an internet portal for identifying polling places and requesting absentee ballots. The hashtag campaign is organized by the National Association of Secretaries of State (NASS).
Drowning out misinformation
By flooding the zone with constructive content, states are hoping to drown out negative or harmful material. It’s an idea linked to a growing body of research on online extremism, which has found that offering a contrasting view against hate speech can minimize its impact and lead to more engagement for the positive messages on social media.
“The #trustedinfo2020 campaign is really a sort of reminder to people that there are resources that they can trust if they hear something or if they have some question about the news,” said Maine Secretary of State Matthew Dunlap in an interview with CNN.
Meanwhile, in California, Secretary of State Alex Padilla has taken out ads on social media to promote the visibility of accurate information, according to Sam Mahood, an agency spokesman. In some cases, Mahood said, posts from the secretary’s official social media accounts correcting online misinformation were picked up by news outlets who helped further suppress the spread of false claims.
Social media platforms have also dramatically improved their relationships with states compared to 2016 and 2018, election officials said. Whereas some states once lacked ways to contact Facebook or Twitter in earlier cycles, that’s changed, said Ingram. “They’ve all made themselves accessible,” he said. “They all have folks who reach out to us, and we have their [contact] information.” The same goes for the federal government.
The Department of Homeland Security has established real-time communications channels for state and local officials to share reports of suspicious activity. Those portals are mostly focused on cybersecurity threats. But the US government will “continue to plan for the worst” as it anticipates Russia continuing its misinformation efforts this year, acting Homeland Security secretary Chad Wolf told CNN last week in North Carolina.
Wolf also called on voters to make sure they are “getting their information straight from the source.”
States reaching out to social media
As recently as last week, Facebook removed a misleading page that falsely told North Carolina voters they could fill out one bubble on a general-election ballot in order to vote for a single party across all eligible races, said Patrick Gannon, a spokesman for the state board of elections.
The page risked confusing North Carolinians and damaging trust in the democratic process, he added, but Facebook removed it at the state’s request. Still, playing Whack-a-Mole against individual cases of misinformation is no substitute for providing credible information, according to state officials.
Experts say awareness campaigns like #trustedinfo2020 are critical to improving public trust in the democratic process. But there’s no single solution for a problem as abstract and multi-faceted as online misinformation, said Matt Sheehan, managing director of the Center for Public Interest Communications at the University of Florida.
“I wish there was a fix as simple as a hashtag, but it runs counter to how we’re wired as humans,” he said. “Our personalities and worldviews color the information we find credible, or seek out as consumers.” The dedication of those trying to mislead voters, as well as the natural ebb and flow of ordinary misinformation, makes it hard for officials to compete, said Rachel Goodman, an attorney at the civil society nonprofit Protect Democracy.
“The unfortunate reality is, because there’s so many resources on the misinformation side,” she said, “it’s hard to see until we’re really in the crucible how it really measures up.” By some estimates, the #trustedinfo2020 campaign doesn’t appear to have spread very far. One researcher who analyzed the hashtag told CNN that since late last year, it has been mentioned in about 10,000 tweets, mostly in posts created by election officials themselves. NASS declined to comment. “Ten thousand mentions since mid-November is a relatively low volume,” said Ben Nimmo, a nonresident senior fellow at the Atlantic Council’s Digital Forensic Research Lab. “It shows there’s been some pickup, but it’s not a viral phenomenon yet.”
Source: https://edition.cnn.com/2020/03/02/politics/state-efforts-against-social-media-misinformation/index.html
Posted by AGORACOM-JC
at 5:00 PM on Tuesday, March 3rd, 2020
Experts Talk Deepfake Technology at NYU Conference
Deepfakes are fabricated videos made to appear real using artificial intelligence
In some cases, the technology realistically imposes a face and voice over those of another individual
Andrew Califf, Contributing Writer
The Greenberg Lounge in Vanderbilt Hall was packed full by attendees
listening to keynote speaker Kathryn Harrison from the DeepTrust
Alliance. The NYU Journal of Legislation and Public Policy as well as
the Center for Cybersecurity hosted the conference at NYU Law about the
problem of deepfakes and the law. (Staff Photo by Alexandra Chan)
Laughter rippled through NYU Law School’s Greenberg Lounge Monday morning after the founder and CEO of DeepTrust Alliance,
a coalition to fight digital disinformation — Kathryn Harrison — played
a video of actor Jordan Peele using deepfake technology to imitate President Obama.
Deepfakes are fabricated videos made
to appear real using artificial intelligence. In some cases, the
technology realistically imposes a face and voice over those of another
individual.
The technology poses implications
such as harassment, the spread of disinformation, manipulation of the
stock market, theft and fear-mongering, Harrison said.
Harrison and other professionals
spoke at Vanderbilt Hall this Monday at an NYU Center for Cybersecurity
and the NYU Journal of Legislation & Public Policy conference to
spread awareness about this deceptive technology, and to look at
technological, legal and practical ways to combat the deception.
The professionals consisted of
journalism, legal and cybersecurity experts who combat troubles posed by
the rapidly developing technology in different ways.
The tone of the room shifted to silence as Harrison continued her keynote speech to discuss how the technology was used to harass Rana Ayyub — an Indian journalist who was critical of Prime Minister Narendra Modi — by putting her face into pornographic material.
“Imagine if this was your teenage daughter, who said the wrong thing to the wrong person at school,” Harrison said.
Distinguished Fellow at the NYU Center for Cybersecurity Judi Germano said the solution for combatting deepfakes is two-fold.
“There is a lot of work to be done to
confront the deepfakes problem,” Germano told WSN. “In addition to
technological solutions, we need policy solutions.”
Germano moderated the event’s first
panel, which specifically focused on technology, fake news and detection
of deepfakes. She also discussed the role deepfakes play in the spread
of disinformation.
Despite how innovative deepfake
technology is, experts such as Corin Faife — a journalist specializing
in AI and disinformation — consider them to be a new form of an old
problem.
“One of the important things for
deepfakes is to put it into context of this broader problem of
disinformation that we have, and to understand that that is an
ecosystemic problem,” Faife explained to WSN in an interview. “There are
multiple different contributing factors, and [the technological
solutions] are no good if people won’t accept that a certain video is
false or manipulated because of their preexisting beliefs.”
This line of thought is why some are hesitant to push through legislation regarding deepfake technology. The director of the American Civil Liberties Union’s
Speech, Privacy and Technology Project, Ben Wizner, took this position
during the second panel on how legislation should evolve to deal with
deepfakes.
Since deepfakes are a means to commit illegal acts, Rob Volkert, VP of Threat Investigations at Nisos, understands his fellow panelist’s mindset. Volkert said he also struggles with pinpointing who to accuse.
“The responsibility is on the user,
not on the platform,” Volkert told WSN in an interview after explaining
how the market for deepfake software does not need to hide in the dark
web.
Deepfake technology is an ominous
cloud approaching the presidential election and that is why it was an
appropriate topic for this event, Journal of Legislation and Public Policy board member Lisa Femia said.
Facebook’s Cybersecurity Policy Lead
Saleela Khanum, who spoke during the conference, raised a point about
public trust during elections.
“There should not be a level of distrust that we therefore trust nothing,” Khanum said to the audience.
Posted by AGORACOM-JC
at 4:00 PM on Monday, March 2nd, 2020
Fake News In 2020 Election Puts Social Media Companies Under Siege
The social media giant recently unearthed hundreds of fake accounts that originated not only in Russia but Iran and Vietnam as well
Facebook says their purpose was clear: Sow confusion in the U.S. and ultimately disrupt the integrity of this year’s U.S. presidential contest
The struggle to keep the 2020 election free of fake news on social
media already is proving to be an uphill battle. Just ask the watchdogs
at Facebook (FB) who are battling more disinformation than ever, courtesy of “deepfakes” and other new weapons of deception.
The social media giant recently unearthed hundreds of fake accounts
that originated not only in Russia but Iran and Vietnam as well.
Facebook says their purpose was clear: Sow confusion in the U.S. and
ultimately disrupt the integrity of this year’s U.S. presidential
contest. Facebook purged the fake accounts in early February, and says it has heavily beefed up its safety and security team.
Halting the flood of Facebook fake news and misinformation on other
platforms is critical to social media companies. Failure on their part
runs the risk of alienating loyal users and angering lawmakers, who
could slap them with new regulations. And the scrutiny is sure to grow
after reports this week said
U.S. intelligence officials have told Congress that Russia already is
meddling in this year’s elections to boost President Donald Trump’s
reelection chances.
Clearly, U.S. election misinformation is a blossoming enterprise. In
2016 Russia established numerous fake accounts on Facebook, Twitter (TWTR) and the YouTube unit of Alphabet (GOOGL).
In 2020 these efforts continue to expand both inside and outside Russia
— and across all walks of social media. America’s enemies have put the
nation’s electoral process in the crosshairs with fake news stories on
social media and deepfakes, or doctored videos.
“What started as a Russian effort to undermine elections and cause
chaos and basically reduce faith in our democratic institutions is now
becoming a free-for-all,” said Lisa Kaplan, founder of Alethea Group, a
consulting group that helps businesses, politicians and candidates
protect themselves against disinformation.
Fake News On Social Media In The 2020 Election
Election meddling goes back decades, but the internet has greatly
amplified the disruption. Anyone with an internet connection has a
megaphone to the world. And that means governments in Russia, China,
Iran and others who are less than friendly to the U.S. are actively
using social media to influence the nation and its electorate, according
to intelligence agencies and studies.
“Lying is not a new concept but … knowing that a majority of
Americans get their news online through social media, it’s easy to
misinform and manipulate people,” Kaplan said. “It makes it much easier
for bad actors to launch these large-scale persuasion campaigns.”
Facebook fake news is a huge problem for the company. The same goes
for Twitter and YouTube. Senior executives of these social media
companies have spent considerable time over the past few years
testifying at congressional inquiries and investigations.
At the same time, they’re struggling to stop a steady flow of fake
news and disinformation planted on their platforms. Not only are the
disinformation campaigns coming from overseas but from domestic groups
as well.
FBI Director Christopher Wray says Russia continues to conduct an
“information warfare” operation against the U.S. ahead of the 2020
election. Wray on Feb. 5 told the House Judiciary Committee that Moscow
is using a covert social-media campaign.
“It is that kind of effort that is still very much ongoing,” Wray told the panel. “It’s not just an election cycle; it’s an effort to influence our republic in that regard.”
Anger Over Fake News On Social Media
The efforts by Russia and others have ushered in a new era of
scrutiny for tech giants. U.S. Sen. Elizabeth Warren, D-Mass., one of
the Democratic presidential hopefuls, has taken aim at Facebook fake
news and company Chief Executive Mark Zuckerberg. She chides Facebook
for spreading disinformation against her and other candidates.
In late January, Warren pledged that her campaign would not share
fake news or promote fraudulent accounts on social media. It’s part of
her plan to battle disinformation and hold Facebook, Google and Twitter
responsible for its spread.
“Anyone who seeks to challenge and defeat Donald Trump in the 2020
election must be fully prepared to take on the full array of
disinformation that foreign actors and people in and around the Trump
campaign will use to divide Democrats, suppress Democratic votes, and
erode the standing of the Democratic nominee,” Warren said in a written
statement on her campaign website.
She added: “And anyone who seeks to be the Democratic nominee must
condemn the use of disinformation and pledge not to knowingly use it to
benefit their own candidacy or damage others.”
More fuel to that fire came Thursday. Reports that Russia already is
actively meddling in the 2020 race drew concerns from lawmakers. The
news also angered Trump, who expressed fear Democrats would use the
information against him in the campaign. Trump dismissed Joseph Maguire,
former acting director of national intelligence, for telling the House
Intelligence Committee of the interference.
The widespread misuse of social media came to light in early 2018
during the investigation of Cambridge Analytica, a data mining and
analysis firm used by President Trump’s 2016 campaign. Through trickery
and deception, Cambridge Analytica accessed personal information
on 87 million Facebook users without their knowledge and used that data
to target specific readers with fake stories, divisive memes and other
content.
Media executives later were called before Congress to discuss what
they intended to do about disinformation for 2020. Congressional probes
revealed the ease of manipulating their platforms.
Facebook, Twitter and Google have responded with a slew of election
integrity projects such as new restrictions on postings. They also
increasingly try to root out what they call “inauthentic behavior” —
users assuming a false identity.
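The platforms have not published their detection logic, but one widely discussed signal of inauthentic behavior is many accounts posting identical text within a short window. The sketch below is a minimal, hypothetical version of that heuristic, not any platform’s actual system:

```python
from collections import defaultdict

def flag_coordinated_posts(posts, min_accounts=3, window_secs=60):
    """Flag identical messages posted by several accounts close together.

    `posts` is a list of (timestamp_secs, account, text) tuples. A text is
    flagged when at least `min_accounts` distinct accounts post it within
    `window_secs` of each other -- a crude copy-paste amplification signal.
    """
    by_text = defaultdict(list)
    for ts, account, text in posts:
        by_text[text].append((ts, account))
    flagged = []
    for text, events in by_text.items():
        events.sort()  # order by timestamp
        accounts = {account for ts, account in events}
        span = events[-1][0] - events[0][0]
        if len(accounts) >= min_accounts and span <= window_secs:
            flagged.append(text)
    return flagged

posts = [
    (0, "bot_a", "Polls are closed, stay home!"),
    (5, "bot_b", "Polls are closed, stay home!"),
    (9, "bot_c", "Polls are closed, stay home!"),
    (12, "user_1", "Remember to vote today."),
]
print(flag_coordinated_posts(posts))  # ['Polls are closed, stay home!']
```

Real systems combine many weaker signals (account age, posting cadence, shared infrastructure) precisely because exact-duplicate text is trivial for an adversary to vary.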
In response to written questions from IBD, Facebook says the size of
its teams working on safety and security matters is now 35,000, triple
its 2017 level. It also created rapid response centers to monitor
suspicious activity during the 2020 election.
“Since 2017, we’ve made large investments in teams and technologies
to better secure our elections and are deploying them where they will
have the greatest impact,” Facebook spokeswoman Katie Derkits said in a
written statement.
Twitter Bans Political Ads In 2020 Election
In late October, Twitter Chief Executive Jack Dorsey banned all
political advertising from his network. Google quickly followed suit,
putting limits on political ads across some of its properties, including
YouTube.
“As caucuses and primaries for the 2020 presidential election get
underway, we’ll build on our efforts to protect the public conversation
and enforce our policies against platform manipulation,” Carlos Monje,
Twitter’s director of public policy and philanthropy, told Investor’s
Business Daily in written remarks. “We take the learnings from every
recent election around the world and use them to improve our election
integrity work.”
In September, Twitter suspended more than 10,000 accounts
across six countries. The company said the accounts actively spread
disinformation and encouraged unrest in politically sensitive regions.
YouTube and Google plan to restrict how precisely political advertisers can target an audience on their services.
Playing Whack-A-Mole With Facebook Fake News
Will these efforts make a difference in the 2020 election?
Research suggests social media firms will play a game of
whack-a-mole. They’ve deleted thousands of inauthentic accounts with
millions of followers. But that hasn’t stopped people from finding new
ways to get back online and send out fake news.
In the most recent takedown of accounts by Facebook, Russia was the
largest target. Facebook removed 118 accounts, groups and pages that
targeted Ukrainian citizens. Other Russian operations focused on its involvement
in Syria and ethnic tensions in Crimea.
“Although the people behind this network attempted to conceal their
identities and coordination, our investigation found links to Russian
military intelligence services,” Facebook said in a blog post announcing
the slate of removals.
Facebook’s head of cybersecurity policy, Nathaniel Gleicher, said the
social media company also removed 11 accounts distributing fake news
from Iran. The accounts focused mostly on U.S.-Iran relations,
Christianity and the upcoming election.
“We are making progress rooting out this abuse, but as we’ve said before, it’s an ongoing challenge,” Gleicher wrote.
Emerging Threat Of Deepfakes In 2020 Election
In December, Facebook and Twitter disabled a global network of 900
pages, groups and accounts sending pro-Trump messages. The fake news
accounts managed to avoid detection as being inauthentic. And they used
photos generated with the aid of artificial intelligence. The campaign
was based in the U.S. and Vietnam.
“There’s no question that social media has really changed the way
that we talk about politics,” said Deen Freelon, a media professor at
the University of North Carolina at Chapel Hill. “The No. 1 example is
our president who, whether you like him or not, uses social media in
ways that are unprecedented for a president and I would say any
politician.”
The other fake news threat that social media companies face is from
deepfakes. The level of realism in deepfakes has increased vastly from
just a year ago, analysts say.
Using artificial intelligence technology, deepfake purveyors replace a
person in an existing image or video with someone else’s
likeness. Users also employ artificial intelligence tools in deepfakes
to misrepresent an event that occurred. Deepfakes can even manufacture
an event that never took place.
“Deepfakes are pretty scary to me,” said Freelon. “But I also think
the true impact of deepfakes won’t become apparent until the technology
gets developed a bit more.”
Cheapfakes: A Simpler Kind Of Fake News
Simpler versions of deepfakes get the name “cheapfakes,” or videos altered with traditional editing tools or low-end technology.
An example of a cheapfake that went viral was an altered video of
House Speaker Nancy Pelosi. The edited video slowed down her speech to
make her seem inebriated. That prompted right-wing cable news pundits to
question Pelosi’s mental health and fitness to serve in office.
YouTube removed the video. Facebook did not. Only videos generated by
artificial intelligence to depict people saying fictional things would
be removed, Facebook said. It eventually placed a warning label on the Pelosi video.
In January, Facebook took steps to ban many types of misleading
videos from its site. It was part of a push against deepfake content and
online misinformation campaigns.
Facebook said in a blog post
that these fake news videos distort reality and present a “significant
challenge” for the technology industry. The rules will not apply to
satire or parody.
In February, Twitter changed its video policies, saying it would more
aggressively scrutinize fake or altered photos and videos. Starting in
March, Twitter will add labels or take down tweets carrying manipulated
images and videos, it said in a blog post.
But are the hurdles too high to surmount? A Massachusetts Institute
of Technology study last year concluded fake news is more likely to go
viral than other news. And it showed that a false story reached 1,500
people six times quicker than a true story.
As to why falsehoods perform so well, the MIT team settled on the hypothesis that fake news is more “novel” than real news. As a result, it evokes more emotion than the average tweet or post.
Ordinary social media users play a role in spreading fake news as
well. The determining factor for whether people spread disinformation is
the number of times they see it.
People who repeatedly encounter a fake news item may feel less
unethical about sharing it on social media. That comes regardless of
whether they believe it is accurate, according to a study published in the journal Psychological Science.
“Even when they know it’s false, if they repeatedly encounter it,
they feel it’s less unethical to share and they’re less likely to
censor,” said Daniel Effron, professor of Organizational Behavior at the
London Business School and an author of the study. “It suggests that
social media companies need a different approach to combating the spread
of disinformation.”
Letting Consumers Decide On Fake News
The findings carry heavy implications for industry executives hoping to stop 2020 election fake news on social media.
“We suggest that efforts to fight disinformation should consider how
people judge the morality of spreading it, not just whether they believe
it,” Effron said.
After the Cambridge Analytica scandal, Facebook promised to do
better, and rolled out a number of reforms. But in October, Zuckerberg
delivered a strongly worded address at Georgetown University, defending
unfettered speech, including paid advertising.
Zuckerberg says he wants to avoid policing what politicians can and
cannot say to constituents. Facebook should allow its social media users
to make those decisions for themselves, he contends.
Facebook officials repeatedly warn against significant changes to its
rules for political or issue ads. Such changes could make it hard for
less well-funded groups to raise money for the 2020 election, they say.
“We face increasingly sophisticated attacks from nation states like
Russia, Iran and China,” Zuckerberg said. “But, I’m confident that we’re
more prepared now because we’ve played a role in defending against
election interference in more than 200 elections around the world since
2016.”
Posted by AGORACOM-JC
at 10:54 AM on Friday, February 21st, 2020
Latest AI could one day take over as the biggest editor of Wikipedia
“There are so many updates constantly needed to Wikipedia articles. It would be beneficial to automatically modify exact portions of the articles, with little to no human intervention,” said Darsh Shah, a PhD student in MIT’s Computer Science and AI Laboratory, who is one of the lead authors.
Researchers have developed an AI that can automatically rewrite
outdated sentences on Wikipedia, drastically reducing the need for human
editing.
Despite thousands of volunteer editors dedicating many hours towards keeping Wikipedia up to date, editing an estimated 52m articles
seems like an almost impossible task. However, researchers from MIT are
set to unveil a new AI that could be used to automatically update any
inaccuracies on the online encyclopaedia, thereby giving human editors a
robotic helping hand.
In a paper presented at the AAAI Conference on AI, the researchers
described a text-generating system that pinpoints and replaces specific
information in relevant Wikipedia sentences, while keeping the language
similar to how humans write and edit.
The idea is that humans could type an unstructured sentence with the
updated information into an interface, without the need to worry about
grammar. The AI then searches Wikipedia for the right pages and outdated
information, which it then updates in a human-like style.
The researchers are hopeful that, down the line, it could be possible
to build an AI that can do the entire process automatically. This would
mean it could scour the web for updated news on a topic and replace the
text.
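As a rough illustration of that pinpoint-and-replace step (not the MIT system itself, which uses trained neural encoders), matching the human-typed claim to its target sentence can be sketched as a similarity search over the article. The Jaccard word-overlap score and the example article below are assumptions for illustration only:

```python
import re

def tokenize(text):
    # Lowercase word tokens; a crude stand-in for the neural
    # sentence encoders the MIT system actually uses.
    return set(re.findall(r"[a-z0-9]+", text.lower()))

def update_article(sentences, claim):
    """Replace the article sentence most similar to the update claim.

    `sentences` is the article; `claim` is the unstructured,
    human-typed fact. Similarity is Jaccard overlap of word sets,
    which is purely illustrative.
    """
    claim_tokens = tokenize(claim)

    def score(sentence):
        tokens = tokenize(sentence)
        return len(tokens & claim_tokens) / len(tokens | claim_tokens)

    target = max(range(len(sentences)), key=lambda i: score(sentences[i]))
    return [claim if i == target else s for i, s in enumerate(sentences)]

article = [
    "The city has a population of 120,000.",
    "It hosts an annual film festival.",
]
updated = update_article(article, "The city has a population of 135,000.")
```

In the real system the replacement is also rewritten to blend grammatically with the surrounding text, rather than pasted in verbatim as here.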
Taking on ‘fake news’
“There are so many updates constantly needed to Wikipedia articles.
It would be beneficial to automatically modify exact portions of the
articles, with little to no human intervention,” said Darsh Shah, a PhD
student in MIT’s Computer Science and AI Laboratory, who is one of the
lead authors.
“Instead of hundreds of people working on modifying each Wikipedia
article, then you’ll only need a few, because the model is helping or
doing it automatically. That offers dramatic improvements in
efficiency.”
Looking beyond Wikipedia, the study also put forward the AI’s
potential benefits as a tool to eliminate bias when training detectors
of so-called ‘fake news’. Some of these detectors train on datasets of
agree-disagree sentence pairs to verify a claim by matching it to given
evidence.
“During training, models use some language of the human-written
claims as ‘give-away’ phrases to mark them as false, without relying
much on the corresponding evidence sentence,” Shah said. “This reduces
the model’s accuracy when evaluating real-world examples, as it does not
perform fact-checking.”
By applying their AI to the agree-disagree method of disinformation
detection, an augmented dataset used by the researchers was able to
reduce the error rate of a popular detector by 13pc.
Posted by AGORACOM-JC
at 2:45 PM on Wednesday, February 19th, 2020
The Rise of Deepfakes
Deepfakes are synthetic media in which a person in an existing image or video is replaced with someone else’s likeness
In recent months, videos of influential celebrities and politicians have surfaced displaying a false and augmented reality of their beliefs or gestures
Deepfakes leverage powerful techniques from machine learning and
artificial intelligence to manipulate and generate visual and audio
content with a high potential to deceive. The purpose of this article is
to enhance and promote efforts into research and development and not to
promote or aid in the creation of nefarious content.
Introduction
Deepfakes are synthetic media in which a person in an
existing image or video is replaced with someone else’s likeness. In
recent months, videos of influential celebrities and politicians have
surfaced displaying a false and augmented reality of their beliefs or
gestures.
While deep learning has been successfully applied to complex
problems ranging from big data analytics to computer vision, the need
to control the content it generates, alongside its availability to the
public, is crucial.
In recent months, a number of mitigation mechanisms have been
proposed, with neural networks and artificial intelligence at the heart
of them. Technologies that can automatically detect and assess the
integrity of visual media are therefore indispensable if we wish to
fight back against adversarial attacks.
(Nguyen, 2019)
Late 2017
Deepfakes as we know them first started to gain attention in December
2017, after Vice’s Samantha Cole published an article on Motherboard.
The article talks about the manipulation of celebrity faces to
recreate famous scenes and how this technology can be misused for
blackmail and illicit purposes.
The videos were significant because they marked the first notable
instance of a single person who was able to easily and quickly create
high-quality and convincing deepfakes.
Cole goes on to highlight a juxtaposition in society: corporations
make these tools freely available so that students can gain the
knowledge and key skills to advance their studies at university and
school.
Open-source machine learning tools like TensorFlow, which Google
makes freely available to researchers, graduate students, and anyone
with an interest in machine learning. — Samantha Cole
Deepfakes differ in quality from previous efforts at superimposing
faces onto other bodies. A good deepfake, created by artificial
intelligence trained on hours of high-quality footage, produces content
so convincing that humans struggle to tell whether it is real. In
turn, researchers have taken an interest in developing neural networks
that assess the authenticity of such videos and distinguish the fakes.
In general, a good deepfake is one where the insertions around
the mouth are seamless, head movements are smooth, and coloration
matches the surroundings. Gone are the days of simply superimposing a
head onto a body and animating it by hand; the errors in such work
remain noticeable, producing jarring mismatches with the surrounding
context.
Early 2018
In January 2018, a proprietary desktop application called FakeApp was
launched. This app allows users to easily create and share videos with
their faces swapped with each other. As of 2019, FakeApp has been
superseded by open-source alternatives such as Faceswap and the command
line-based DeepFaceLab. (Nguyen, 2019)
With this technology so widely available, websites such
as GitHub have sprung to life offering new methodologies for combating
such attacks. In the paper ‘Using Capsule Networks To Detect Forged Images and Videos’,
Huy goes on to talk about the ability to use forged images and videos
to bypass facial authentication systems in secure environments.
The quality of manipulated images and videos has seen significant
improvement with the development of advanced network architectures and
the use of large amounts of training data that previously wasn’t
available.
Later 2018
Platforms such as Reddit start to ban deepfakes after fake news and
videos that started circling from specific communities on their site.
Reddit took it on themself to delete these communities in a stride to
protect their own.
A few days later, BuzzFeed published a frighteningly realistic video
that went viral. The video showed Barack Obama in a deepfake. Unlike the
University of Washington video, Obama was made to say words that
weren’t his own, in turn helping to raise awareness of this technology.
BuzzFeed created the video with Jordan Peele as part of a campaign to raise awareness of this software.
Early 2019
In the last year, several manipulated videos of politicians and other
high-profile individuals have gone viral, highlighting the continued
dangers of deepfakes, and forcing large platforms to take a position.
Following BuzzFeed’s disturbingly realistic Obama deepfake, instances
of manipulated videos of other high-profile subjects began to go viral,
and seemingly fool millions of people online.
Despite most of the videos being even more crude than deepfakes —
using rudimentary film editing rather than AI — the videos sparked
sustained concern about the power of deepfakes and other forms of video
manipulation while forcing technology companies to take a stance on what
to do with such content. (Business Insider, 2019).
Posted by AGORACOM-JC
at 5:05 PM on Tuesday, February 18th, 2020
‘Wake up, Zuck’: Protesters gather outside of Facebook founder’s home, demand regulation of political ads
On Monday morning around 10 a.m., some 50 protesters gathered
outside of Facebook founder and CEO Mark Zuckerberg’s home in the
Mission District to protest the social media giant’s use of personal
data and refusal to regulate misleading political advertisements.
“We’re sick and tired of waiting for the government to regulate
Facebook,” said Tracy Rosenberg, the executive director of Media
Alliance and one of the protest’s organizers. “You’re profiting off of
us — you’re selling our information.”
The protesters chanted “Wake up Zuck!” and “fake news, real hate” and
carried signs that said “Stop the Lies, Protect our democracy” and
“break up Facebook.” In colorful chalk on the sidewalk in front of the
house, demonstrators wrote phrases like: “Facebook is a Russian asset”;
“don’t sell my private data”; and “history will write your epitaph as
the man who broke democracy.”
In November, Twitter outright banned political ads, and Google said it would limit the targeting of political ads
on its search engine and on its video streaming platform YouTube.
Facebook has resisted such changes in policy in the face of criticism.
Zuckerberg in December told CBS This Morning
that, “in a democracy,” people should “make their own judgments” about
what politicians say. “I don’t think a private company should be
censoring politicians or news,” he said.
But protesters say that very mindset is destroying democracy, rather than upholding its values.
“We like many others and the organizations that put this rally
together feel that Facebook is a dramatic threat to our democratic
systems around the world,” said Ted Lewis, an activist with Global
Exchange, an organization that advocates for human rights and
alternatives to capitalism. “Facebook needs to take responsibility for
what they’re doing — they need to get the lies off of their platform.”
Lewis said, specifically, Facebook’s hands-off policy around
political advertising is especially troubling. “Political advertising
could contain the most blatant falsehood and they refuse to do anything
about it,” Lewis said.
Zuckerberg is likely not spending his President’s Day holiday inside
of his Mission District manse — as he has some 10 places to call “home”
and mainly resides in Palo Alto.
Other protesters bemoaned Facebook’s laissez-faire approach to the
spread of misinformation, especially as the 2020 presidential election
nears. “You can say anything you want,” said Erin Fisher, an activist
with Campaign to Regulate and Break Up Big Tech. “Facebook is the most
important. They’re monetizing propaganda.”
“This is one of the pillars of the fight in 2020,” Fisher said, referring to the upcoming November election.
By around 11 a.m. protesters largely dispersed and a few police officers supervised the scene.
Photo by Loi Almeron
Tracy Rosenberg (left), the executive director of Media Alliance, holds a bullhorn.
Posted by AGORACOM-JC
at 1:00 PM on Friday, February 14th, 2020
Deepfakes and deep media: A new security battleground
In anticipation of this new reality, a coalition of academic institutions, tech firms, and nonprofits are developing ways to spot misleading AI-generated media
Their work suggests that detection tools are a viable short-term solution but that the deepfake arms race is just beginning
Deepfakes — media that takes a person in an existing image, audio
recording, or video and replaces them with someone else’s likeness using
AI — are multiplying quickly. That’s troubling not only because these
fakes might be used to sway opinions during an election or implicate a
person in a crime, but because they’ve already been abused to generate pornographic material of actors and defraud a major energy producer.
In anticipation of this new reality, a coalition of academic
institutions, tech firms, and nonprofits are developing ways to spot
misleading AI-generated media. Their work suggests that detection tools
are a viable short-term solution but that the deepfake arms race is just
beginning.
Deepfake text
The best AI-produced prose used to be closer to Mad Libs than The Grapes of Wrath, but cutting-edge language models can now write with humanlike pith and cogency. San Francisco research firm OpenAI’s GPT-2 takes seconds to craft passages in the style of a New Yorker article or brainstorm game scenarios. Of greater concern, researchers
at Middlebury Institute of International Studies’ Center on Terrorism,
Extremism, and Counterterrorism (CTEC) hypothesize that GPT-2 and others
like it could be tuned to propagate white supremacy, jihadist Islamism,
and other threatening ideologies.
Above: The frontend for GPT-2, AI research firm OpenAI’s trained language model. Image Credit: OpenAI
In pursuit of a system that can detect synthetic content, researchers
at the University of Washington’s Paul G. Allen School of Computer
Science and Engineering and the Allen Institute for Artificial
Intelligence developed Grover,
an algorithm they claim was able to pick out 92% of deepfake-written
works on a test set compiled from the open source Common Crawl corpus.
The team attributes its success to Grover’s copywriting approach, which
they say helped familiarize it with the artifacts and quirks of
AI-originated language.
A team of scientists hailing from Harvard and the MIT-IBM Watson AI Lab separately released The Giant Language Model Test Room,
a web environment that seeks to determine whether text was written by
an AI model. Given a semantic context, it predicts which words are most
likely to appear in a sentence, essentially writing its own text. If
words in a sample being evaluated match the top 10, 100, or 1,000
predicted words, an indicator turns green, yellow, or red, respectively.
In effect, it uses its own predictive text as a benchmark for spotting
artificially generated content.
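That green/yellow/red bucketing can be sketched as follows. The `predict_ranked` toy model below is a stand-in for a real language model such as GPT-2, and the thresholds mirror the top-10/100/1,000 buckets described above; everything else is an illustrative assumption:

```python
def color_tokens(tokens, predict_ranked):
    """Color each token by its rank in the model's predictions.

    `predict_ranked(context)` must return candidate next words in
    descending probability order -- here a toy stand-in for a real
    language model's per-token distribution.
    """
    colors = []
    for i, token in enumerate(tokens):
        ranked = predict_ranked(tokens[:i])
        rank = ranked.index(token) if token in ranked else len(ranked)
        if rank < 10:
            colors.append("green")      # among the model's top 10 guesses
        elif rank < 100:
            colors.append("yellow")
        elif rank < 1000:
            colors.append("red")
        else:
            colors.append("unranked")   # outside the top 1,000
    return colors

# Toy "model": always predicts the same 1,000-word ranked vocabulary.
vocab = ["the", "a", "cat", "sat"] + [f"w{i}" for i in range(996)]
colors = color_tokens(["the", "cat", "w500"], lambda ctx: vocab)
```

Long runs of green suggest text the model found highly predictable, which is the tool's signal that a machine may have written it.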
Deepfake videos
State-of-the-art video-generating AI is just as capable (and dangerous) as its natural language counterpart, if not more so. An academic paper
published by Hong Kong-based startup SenseTime, the Nanyang
Technological University, and the Chinese Academy of Sciences’ Institute
of Automation details a framework that edits footage by using audio to
synthesize realistic videos. And researchers at Seoul-based Hyperconnect
recently developed a tool — MarioNETte
— that can manipulate the facial features of a historical figure,
politician, or CEO by synthesizing a reenacted face animated by the
movements of another person.
Even the most realistic deepfakes contain artifacts that give them
away, however. “Deepfakes [produced by] generative [systems] learn a
data set of actual images in videos, to which you add new images and
then generate a new video with the new images,” Ishai Rosenberg, head of
the deep learning group at cybersecurity company Deep Instinct, told
VentureBeat via email. “The result is that the output video has subtle
differences because there are changes in the distribution of the data
that is generated artificially by the deepfake and the distribution of
the data in the original source video. These differences, which can be
referred to as ‘glimpses in the matrix,’ are what the deepfake detectors
are able to distinguish.”
Above: Two deepfake videos produced using state-of-the-art methods. Image Credit: SenseTime
Last summer, a team from the University of California, Berkeley and the University of Southern California trained a model
to look for precise “facial action units†— data points of people’s
facial movements, tics, and expressions, including when they raise their
upper lips and how their heads rotate when they frown — to identify
manipulated videos with greater than 90% accuracy. Similarly, in August
2018 members of the Media Forensics program at the U.S. Defense Advanced
Research Projects Agency (DARPA) tested systems that could detect AI-generated videos from cues like unnatural blinking, strange head movements, odd eye color, and more.
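One of the DARPA cues above, unnatural blinking, lends itself to a very simple sketch: count detected blinks and flag footage whose rate falls outside a plausible human range. The thresholds here are illustrative assumptions, not values from the Media Forensics program:

```python
def blink_rate(blink_frames, total_frames, fps=30):
    # Blinks per minute, given frame indices where a blink was detected.
    minutes = total_frames / fps / 60
    return len(blink_frames) / minutes

def unnatural_blinking(blink_frames, total_frames, fps=30, low=8, high=30):
    """Flag footage whose blink rate looks implausible for a human.

    People blink roughly 15-20 times per minute at rest, and early
    deepfakes often blinked far less; the low/high bounds here are
    illustrative, not DARPA's actual thresholds.
    """
    rate = blink_rate(blink_frames, total_frames, fps)
    return rate < low or rate > high

genuine_blinks = [i * 100 for i in range(16)]  # ~16 blinks in one minute
suspect_blinks = [100, 900]                    # only 2 blinks in one minute
```

A real detector would localize blinks with an eye-landmark model first; this sketch only covers the decision rule applied afterward.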
Several startups are in the process of commercializing comparable deepfake video detection tools. Amsterdam-based Deeptrace Labs
offers a suite of monitoring products that purport to classify
deepfakes uploaded on social media, video hosting platforms, and
disinformation networks. Dessa has proposed techniques for improving deepfake detectors trained on data sets of manipulated videos. And Truepic raised an $8 million funding round in July 2018
for its video and photo deepfake detection services. In December 2018,
the company acquired another deepfake “detection-as-a-service†startup —
Fourandsix — whose fake image detector was licensed by DARPA.
Above: Deepfake images generated by an AI system.
Beyond developing fully trained systems, a number of companies have
published corpora in the hopes that the research community will pioneer
new detection methods. To accelerate such efforts, Facebook — along with
Amazon Web Services (AWS), the Partnership on AI, and academics from a
number of universities — is spearheading the Deepfake Detection
Challenge. The Challenge includes a data set of video samples labeled to
indicate which were manipulated with AI. In September 2019, Google
released a collection
of visual deepfakes as part of the FaceForensics benchmark, which was
cocreated by the Technical University of Munich and the University
Federico II of Naples. More recently, researchers from SenseTime
partnered with Nanyang Technological University in Singapore to design DeeperForensics-1.0, a data set for face forgery detection that they claim is the largest of its kind.
Deepfake audio
AI and machine learning aren’t suited just to video and text synthesis — they can clone voices, too. Countless studies
have demonstrated that a small data set is all that’s required to
recreate the prosody of a person’s speech. Commercial systems like those
of Resemble
and Lyrebird need only minutes of audio samples, while sophisticated
models like Baidu’s latest Deep Voice implementation can copy a voice
from a 3.7-second sample.
Deepfake audio detection tools are not yet abundant, but solutions are beginning to emerge.
Several months ago, the Resemble team released an open source tool dubbed Resemblyzer,
which uses AI and machine learning to detect deepfakes by deriving
high-level representations of voice samples and predicting whether
they’re real or generated. Given an audio file of speech, it creates a
mathematical representation summarizing the characteristics of the
recorded voice. This enables developers to compare the similarity of two
voices or suss out who’s speaking at any given moment.
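At its core, that comparison step measures how close two voice embeddings are. Below is a minimal sketch with hand-made vectors; Resemblyzer's real embeddings are high-dimensional outputs of a trained network, and the 0.75 threshold is an assumption for illustration:

```python
import math

def similarity(emb_a, emb_b):
    # Cosine similarity between two voice embeddings: 1.0 means the
    # same direction, values near or below 0 mean unrelated voices.
    dot = sum(a * b for a, b in zip(emb_a, emb_b))
    mag = math.sqrt(sum(a * a for a in emb_a)) * math.sqrt(sum(b * b for b in emb_b))
    return dot / mag

def same_speaker(emb_a, emb_b, threshold=0.75):
    # The threshold is illustrative; real systems tune it on labeled data.
    return similarity(emb_a, emb_b) >= threshold
```

Detecting a generated voice works the same way in spirit: if a recording's embedding sits too far from every genuine sample of the claimed speaker, it is flagged as synthetic.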
In January 2019, as part of its Google News Initiative, Google
released a corpus of speech containing “thousands†of phrases spoken by
the company’s text-to-speech models. The samples were drawn from English
articles spoken by 68 different synthetic voices and covered a variety
of regional accents. The corpus is available to all participants of ASVspoof 2019, a competition that aims to foster countermeasures against spoofed speech.
A lot to lose
No detector has achieved perfect accuracy, and researchers haven’t
yet figured out how to determine deepfake authorship. Deep Instinct’s
Rosenberg anticipates this is emboldening bad actors intent on
distributing deepfakes. “Even if a malicious actor had their [deepfake]
caught, only the [deepfake] itself holds the risk of being busted,” he
said. “There is minimal risk to the actor of getting caught. Because the
risk is low, there is little deterrence to creating deepfake[s].”
Rosenberg’s theory is supported by a report
from Deeptrace, which found 14,698 deepfake videos online during its
most recent tally in June and July 2019 — an 84% increase within a
seven-month period. The vast majority of those (96%) consist of
pornographic content featuring women.
Considering those numbers, Rosenberg argues that companies with “a
lot to lose†from deepfakes should develop and incorporate deepfake
detection technology — which he considers akin to antimalware and
antivirus — into their products. There’s been movement on this front;
Facebook announced
in early January that it will use a combination of automated and manual
systems to detect deepfake content, and Twitter recently proposed flagging deepfakes and removing those that threaten harm.
Of course, the technologies underlying deepfake generators are merely
tools — and they have enormous potential for good. Michael Clauser,
head of the data and trust practice at consultancy Access Partnership,
points out that the technology has already been used to improve medical
diagnoses and cancer detection, fill gaps in mapping the universe, and
better train autonomous driving systems. He therefore cautions against
blanket campaigns to block generative AI.
“As leaders begin to apply existing legal principles like slander and
defamation to emerging deepfake use cases, it’s important not to throw
out the baby with the bathwater,” Clauser told VentureBeat via email.
“Ultimately, the case law and social norms around the use of this
emerging technology [haven’t] matured sufficiently to create bright red
lines on what constitutes fair use versus misuse.”