Posted by AGORACOM-JC
at 1:12 PM on Thursday, January 23rd, 2020
SPONSOR: Datametrex AI Limited
(TSX-V: DM) A revenue generating small cap A.I. company that NATO and
Canadian Defence are using to fight fake news & social media
threats. The company announced three $1M contracts in Q3-2019. Click here for more info.
BBC partnered with Google and Facebook on general election fake news
The BBC partnered with Google (GOOGL), Microsoft (MSFT), Facebook (FB), and other news outlets to stymie the spread of fake news during December’s UK general election, its director-general Tony Hall said on Thursday.
Hall said that the BBC had been working "collaboratively" with the
Wall Street Journal, the Financial Times, and India's Hindu newspaper on a
partnership with Microsoft and Google to identify misinformation, as
part of a previously announced initiative.
But Hall said on Thursday for the first time that it had been used prior to last month’s UK vote.
The BBC in September announced that it was working with the
technology companies to develop an early warning system to use during
elections or when lives may be at risk, calling the move a "crucial"
step to fight disinformation.
The plan also includes media education, voter information plans, and shared learning initiatives.
The initiative works to de-emphasise stories that are just "plain
wrong," Hall said, speaking during a panel discussion at the World
Economic Forum in Davos.
"We've tried this out on paper exercises, but we tried it for real
… in the UK election, and it worked. That combination and contact
between media that people trust and Google, Facebook, and whatever. It
worked. And we took down some stuff which was just plain wrong — in
copyright terms, but just wrong."
"By the way, we haven't talked about this anywhere yet, but why not here?"
Neither the BBC nor Google immediately responded to a request for
more information about how the initiative was used in the general
election.
Earlier in the talk, Hall noted that the BBC was still the most trusted source of news in the UK.
"For us, for the BBC, people still trust us more than any other form of media in the UK. Globally, trust is very very high."
“And why do people use us? They may use three, four, five sources of
news each day, but they come to us because they want to check whether [a
story] is right,” he said.
Posted by AGORACOM-JC
at 2:59 PM on Wednesday, January 22nd, 2020
SPONSOR: Datametrex AI Limited
(TSX-V: DM) A revenue generating small cap A.I. company that NATO and
Canadian Defence are using to fight fake news & social media
threats. The company announced three $1M contracts in Q3-2019. Click here for more info.
Building a Lie Detector for Images
A new paper from UC Berkeley and Adobe researchers declares war on fake images
Leveraging a custom dataset and fresh evaluation metric, the research team introduces a general image forensics approach that achieves high average precision in the detection of CNN-generated imagery
By: Synced
The Internet is full of fun fake images — from flying sharks and cows
on cars to a dizzying variety of celebrity mashups. Hyperrealistic
image and video fakes generated by convolutional neural networks (CNNs)
however are no laughing matter — in fact they can be downright
dangerous. Deepfake porn reared its ugly head in 2018, fake political
speeches by world leaders have cast doubt on news sources, and during
the recent Australian bushfires, manipulated images misled people
regarding the location and size of fires. Fake images and videos are
giving AI a black eye — but how can the machine learning community fight
back?
A new paper from UC Berkeley and Adobe researchers declares war on
fake images. Leveraging a custom dataset and fresh evaluation metric,
the research team introduces a general image forensics approach that
achieves high average precision in the detection of CNN-generated
imagery.
Spotting such generated images may seem to be a
relatively simple task — just train a classifier using fake images
versus real images. In fact, the challenge is far more complicated for a
number of reasons. Fake images would likely be generated from different
datasets, which would incorporate different dataset biases. Fake
features are more difficult to detect when the training dataset of the
model differs from the dataset used to generate the fake image. Also,
network architectures and loss functions can quickly evolve beyond the
abilities of a fake image detection model. Finally, images may be
pre-processed or post-processed, which increases the difficulty in
identifying common features across a set of fake images.
To
address these and other issues, the researchers built a dataset of
CNN-based generation models spanning a variety of architectures,
datasets and loss functions. Real images were then pre-processed and an
equal number of fake images generated from each model — from GANs to
deepfakes. Due to its high variety, the resulting dataset minimizes
biases from either training datasets or model architectures.
The fake image detection model was built around ProGAN, an unconditional
GAN model for random image generation with a simple CNN-based structure,
and trained on the new dataset. Evaluated on various CNN image-generation
methods, the model's average precision was significantly higher than that
of the control groups.
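To make the basic setup concrete, here is a minimal sketch of what "train a classifier using fake images versus real images" can look like in PyTorch. It is not the authors' released code; the directory layout, ResNet-50 backbone, and hyperparameters below are assumptions chosen only to illustrate fine-tuning an ImageNet-pretrained CNN as a binary real-vs-fake detector.

```python
# Illustrative sketch only (not the paper's code): fine-tune an
# ImageNet-pretrained ResNet-50 as a binary real-vs-fake image classifier.
# Assumed layout: data/train/fake/*.png and data/train/real/*.png
import torch
import torch.nn as nn
from torchvision import datasets, models, transforms

device = "cuda" if torch.cuda.is_available() else "cpu"

train_tf = transforms.Compose([
    transforms.RandomResizedCrop(224),
    transforms.ToTensor(),
])
train_set = datasets.ImageFolder("data/train", transform=train_tf)
loader = torch.utils.data.DataLoader(train_set, batch_size=32, shuffle=True)

# ImageFolder assigns labels alphabetically: "fake" -> 0, "real" -> 1,
# so the single output logit below scores how "real" an image looks.
model = models.resnet50(weights=models.ResNet50_Weights.IMAGENET1K_V1)
model.fc = nn.Linear(model.fc.in_features, 1)
model = model.to(device)

optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
loss_fn = nn.BCEWithLogitsLoss()

model.train()
for images, labels in loader:   # single pass shown; real training runs many epochs
    images = images.to(device)
    targets = labels.float().unsqueeze(1).to(device)
    optimizer.zero_grad()
    loss = loss_fn(model(images), targets)
    loss.backward()
    optimizer.step()
```

At test time the same network would be asked to score images from generators it never saw during training, which is exactly where the generalization question the paper studies comes in.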
Data augmentation is another approach the researchers used to improve
detection of fake images that had been post-processed after generation.
The training images (fake/real) underwent several additional
augmentation variants, from Gaussian blur to JPEG compression.
Researchers found that including data augmentation in the training set
significantly increased model robustness, especially when dealing with
post-processed images.
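A rough sketch of the kind of blur and JPEG augmentation described above is shown below, using Pillow; the probabilities, blur radii, and quality range are illustrative placeholders rather than the paper's exact settings.

```python
# Illustrative augmentation only; parameter ranges are placeholders.
import io
import random
from PIL import Image, ImageFilter

def augment(img: Image.Image, blur_prob: float = 0.5, jpeg_prob: float = 0.5) -> Image.Image:
    """Randomly blur and/or re-encode an image as JPEG before training."""
    if random.random() < blur_prob:
        img = img.filter(ImageFilter.GaussianBlur(radius=random.uniform(0.0, 3.0)))
    if random.random() < jpeg_prob:
        buf = io.BytesIO()
        img.save(buf, format="JPEG", quality=random.randint(30, 95))
        buf.seek(0)
        img = Image.open(buf).convert("RGB")
    return img
```

A function like this could be dropped into a training pipeline such as the sketch above (for example via transforms.Lambda(augment) before ToTensor()), so the detector sees post-processed variants of both real and fake images.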
Researchers find the "fingerprint" of CNN-generated images.
The researchers note however that even the best detector will still
have trade-offs between true detection and false-positive rates, and it
is very likely a malicious user could simply handpick a simple fake
image that passes the detection threshold. Another concern is that the
post-processing effects added to fake images may increase detection
difficulty, since the fake image fingerprints might be distorted during
the post-processing. There are also many fake images that were not
generated but rather photoshopped, and the detector won’t work on images
produced through such shallow methods.
The new study does a fine job of identifying the fingerprint of
images doctored with various CNN-based image synthesis methods. The
researchers however caution that this is one battle — the war on fake
images has only just begun.
Posted by AGORACOM-JC
at 11:00 AM on Monday, January 20th, 2020
SPONSOR: Datametrex AI Limited
(TSX-V: DM) A revenue generating small cap A.I. company that NATO and
Canadian Defence are using to fight fake news & social media
threats. The company announced three $1M contracts in Q3-2019. Click here for more info.
"Rated false": Here's the most interesting new research on fake news and fact-checking
What better way to start the new year than by learning new things about how best to battle fake news and other forms of online misinformation?
Below is a sampling of the research published in 2019 — seven journal articles that examine fake news from multiple angles, including what makes fact-checking most effective and the potential use of crowdsourcing to help detect false content on social media.
Our friends at Journalist’s Resource,
that’s who. JR is a project of the Shorenstein Center on Media,
Politics and Public Policy at the Harvard Kennedy School, and they spend
their time examining the new academic literature in media, social
science, and other fields, summarizing the high points and giving you a
point of entry.
Here, JR’s managing editor, Denise-Marie Ordway, sums up some of the
most compelling papers on fake news and fact-checking published in 2019.
(You can also read some of her other roundups focusing on research from
2018 and 2017.)
What better way to start the new year than by learning new things
about how best to battle fake news and other forms of online
misinformation? Below is a sampling of the research published in 2019 —
seven journal articles that examine fake news from multiple angles,
including what makes fact-checking most effective and the potential use
of crowdsourcing to help detect false content on social media.
Because getting good news is also a great way to start 2020, I
included a study that suggests President Donald Trump's "fake news"
tweets aimed at discrediting news coverage could actually help
journalists. The authors of that paper recommend journalists “engage in a
sort of news jujitsu, turning the negative energy of Trump’s tweets
into a force for creating additional interest in news."
This study provides several new insights about the most effective
ways to counter fake news on social media. Researchers found that when
fake news headlines were flagged with a tag that says "Rated false,"
people were less likely to accept the headline as accurate than when
headlines carried a "Disputed" tag. They also found that posting a
general warning telling readers to beware of misleading content could
backfire. After seeing a general warning, study participants were less
likely to believe both true and false headlines.
The authors note that while their sample of 2,994 U.S. adults isn’t
nationally representative, the feedback they got demonstrates that
online fake news can be countered "with some degree of success." "The
findings suggest that the specific warnings were more effective because
they reduced belief solely for false headlines and did not create
spillover effects on perceived accuracy of true news," they write.
It would be time-consuming and expensive to hire crowds of
professional fact-checkers to find and flag all the false content on
social media. But what if the laypeople who use those platforms pitched
in? Could they accurately assess the trustworthiness of news websites,
even if prior research indicates they don’t do a good job judging the
reliability of individual news articles? This research article, which
examines the results of two related experiments with almost 2,000
participants, finds the idea has promise.
"We find remarkably high agreement between fact-checkers and
laypeople," the authors write. "This agreement is largely driven by both
laypeople and fact-checkers giving very low ratings to hyper-partisan
and fake news sites."
The authors note that in order to accurately assess sites, however,
people need to be familiar with them. When news sites are new or
unfamiliar, they’re likely to be rated as unreliable, the authors
explain. Their analysis also finds that Democrats were better at gauging
the trustworthiness of media organizations than Republicans — their
ratings were more similar to those of professional fact checkers.
Republicans were more distrusting of mainstream news organizations.
When Trump turns to Twitter to accuse legitimate news outlets of
being "fake news," does the public's view of journalists change? Are
people who read his tweets less likely to believe news coverage? To
investigate such questions, researchers conducted two studies, during
which they showed some participants a sampling of the president's "fake
news" tweets and asked them to read a news story.
Here’s what the researchers learned: The more tweets people chose to
read, the greater their intent to read more news in the future. As
participants read more tweets, their assessments of news stories’ and
journalists' credibility also rose. "If anything, we can conclude that
Trump's tweets about fake news drive greater interest in news more
generally," the authors write.
The authors’ findings, however, cannot be generalized beyond the
individuals who participated in the two studies — 331 people for the
first study and then 1,588 for the second, more than half of whom were
undergraduate students.
Based on their findings, the researchers offer a few suggestions for
journalists. "In the short term," they write, "if journalists can push
out stories to social media feeds immediately after Trump or others
tweet about legitimate news as being ‘fake news,’ then practitioners may
disarm Trump’s toxic rhetoric and even enhance the perceived
credibility of and demand for their own work. Using hashtags, quickly
posting stories in response to Trump, and replying directly to him may
also tether news accounts to the tweets in social media feeds."
"Who shared it?: Deciding what news to trust on social media":
From NORC at the University of Chicago and the American Press
Institute, published in Digital Journalism. By David Sterrett, Dan
Malato, Jennifer Benz, Liz Kantor, Trevor Tompson, Tom Rosenstiel, Jeff
Sonderman, and Kevin Loker.
This study looks at whether news outlets or public figures have a
greater influence on people’s perception of a news article’s
trustworthiness. The findings suggest that when a public figure such as
Oprah Winfrey or Dr. Oz shares a news article on social media, people’s
attitude toward the article is linked to how much they trust the public
figure. A news outlet’s reputation appears to have far less impact.
In fact, researchers found mixed evidence that audiences will be more
likely to trust and engage with news if it comes from a reputable news
outlet than if it comes from a fake news website. The authors write that
"if people do not know a [news outlet] source, they approach its
information similarly to how they would a [news outlet] source they know
and trust."
The authors note that the conditions under which they conducted the
study were somewhat different from those that participants would likely
encounter in real life. Researchers asked a nationally representative
sample of 1,489 adults to read and answer questions about a simulated
Facebook post that focused on a news article, which appeared to have
been shared by one of eight public figures. In real life, these adults
might have responded differently had they spotted such a post on their
personal Facebook feeds, the authors explain.
Still, the findings provide new insights on how people interpret and
engage with news. "For news organizations who often rely on the strength
of their brands to maintain trust in their audience, this study
suggests that how people perceive their reporting on social media may
have little to do with that brand," the authors write. "A greater
presence or role for individual journalists on social networks may help
them boost trust in the content they create and share."
This paper looks at changes in the volume of misinformation
circulating on social media. The gist: Since 2016, interactions with
false content on Facebook have dropped dramatically but have risen on
Twitter. Still, lots of people continue to click on, comment on, like
and share misinformation.
The researchers looked at how often the public interacted with
stories from 569 fake news websites that appeared on Facebook and
Twitter between January 2015 and July 2018. They found that Facebook
engagements fell from about 160 million a month in late 2016 to about 60
million a month in mid-2018. On Twitter, material from fake news sites
was shared about 4 million times a month in late 2016 and grew to about 5
million shares a month in mid-2018.
The authors write that the evidence is “consistent with the view that
the overall magnitude of the misinformation problem may have declined,
possibly due to changes to the Facebook platform following the 2016
election."
This study looks at the cognitive mechanisms behind belief in fake
news by investigating whether fake news has gained traction because of
political partisanship or because some people lack strong reasoning
skills. A key finding: Adults who performed better on a cognitive test
were better able to detect fake news, regardless of their political
affiliation or education levels and whether the headlines they read were
pro-Democrat, pro-Republican or politically neutral. Across two studies
conducted with 3,446 participants, the evidence suggests that
"susceptibility to fake news is driven more by lazy thinking than it is
by partisan bias per se," the authors write.
The authors also discovered that study participants who supported
Trump had a weaker capacity for differentiating between real and fake
news than did those who supported 2016 presidential candidate Hillary
Clinton. The authors write that they are not sure why that is, but it
might explain why fake news that benefited Republicans or harmed
Democrats seemed more common before the 2016 national election.
Even as the number of fact-checking outlets continues to grow
globally, individual studies of their impact on misinformation have
provided contradictory results. To better understand whether
fact-checking is an effective means of correcting political
misinformation, scholars from three universities teamed up to synthesize
the findings of 30 studies published or released between 2013 and 2018.
Their analysis reveals that the success of fact-checking efforts varies
according to a number of factors.
The resulting paper offers numerous insights on when and how fact-checking succeeds or fails. Some of the big takeaways:
– Fact-checking messages that feature graphical elements such as
so-called "truth scales" tended to be less effective in correcting
misinformation than those that did not. The authors point out that "the
inclusion of graphical elements appears to backfire and attenuate
correction of misinformation."
– Fact-checkers were more effective when they tried to correct an
entire statement rather than parts of one. Also, according to the
analysis, "fact-checking effects were significantly weaker for
campaign-related statements."
– Fact-checking that refutes ideas that contradict someone's personal
ideology was more effective than fact-checking aimed at debunking ideas
that match someone's personal ideology.
– Simple messages were more effective. "As a whole, lexical
complexity appears to detract from fact-checking efforts," the authors
explain.
Posted by AGORACOM-JC
at 7:12 AM on Monday, January 20th, 2020
Secured an additional contract with a division of LOTTE for approximately $600,000
The contract is a renewal from last year and covers a 12-month monthly subscription.
TORONTO, Jan. 20, 2020 — Datametrex AI Limited (the "Company" or "Datametrex") (TSXV: DM) (FSE: D4G) is pleased to announce it has secured an additional contract with a division of LOTTE for approximately $600,000. The contract is a renewal from last year and covers a 12-month monthly subscription.
"I am thrilled to start the new year with a large contract from
LOTTE. Our team is doing an excellent job servicing LOTTE as they
continue to execute on our "land and expand" strategy. Generating more
SaaS business is one of our key objectives as it will help to smooth out
our lumpier government contracts," says Marshall Gunter, CEO of the
Company.
The Company also wishes to provide an update on the previously announced license sale to GreenInsightz.
Given the challenging environment in the sector, GreenInsightz and
Datametrex have agreed to rework the purchase terms as follows:
$250,000 CAD cash payment
30% of GreenInsightz equity position
About Datametrex
Datametrex AI Limited is a technology focused company with exposure
to Artificial Intelligence and Machine Learning through its wholly owned
subsidiary, Nexalogy (www.nexalogy.com).
This news release contains "forward-looking information" within
the meaning of applicable securities laws. All statements contained
herein that are not clearly historical in nature may constitute
forward-looking information. In some cases, forward-looking information
can be identified by words or phrases such as "may", "will", "expect",
"likely", "should", "would", "plan", "anticipate", "intend",
"potential", "proposed", "estimate", "believe" or the negative of these
terms, or other similar words, expressions and grammatical variations
thereof, or statements that certain events or conditions "may" or "will"
happen, or by discussions of strategy.
Readers are cautioned to consider these and other factors,
uncertainties and potential events carefully and not to put undue
reliance on forward-looking information. The forward-looking information
contained herein is made as of the date of this press release and is
based on the beliefs, estimates, expectations and opinions of management
on the date such forward-looking information is made. The Company
undertakes no obligation to update or revise any forward-looking
information, whether as a result of new information, estimates or
opinions, future events or results or otherwise or to explain any
material difference between subsequent actual events and such
forward-looking information, except as required by applicable law.
Neither the TSX Venture Exchange nor its Regulation Services
Provider (as that term is defined in the policies of the TSX Venture
Exchange) accepts responsibility for the adequacy or accuracy of this
release.
Posted by AGORACOM-JC
at 9:45 PM on Sunday, January 19th, 2020
SPONSOR: Datametrex AI Limited
(TSX-V: DM) A revenue generating small cap A.I. company that NATO and
Canadian Defence are using to fight fake news & social media
threats. The company announced three $1M contracts in Q3-2019. Click here for more info.
Why Facebook, Twitter and governments are concerned about deepfakes
Facebook recently announced it has banned deepfakes from its social media platforms ahead of the upcoming 2020 US presidential elections.
The move came days before a US House Energy and Commerce hearing on manipulated media content, titled "Americans at Risk: Manipulation and Deception in the Digital Age."
By: Giorgia Guantario
In a blog post,
Monika Bickert, Facebook’s Vice President of Global Policy Management,
explained that the ban will concern all content that "has been edited or
synthesised – beyond adjustments for clarity or quality – in ways that
aren't apparent to an average person and would likely mislead someone
into thinking that a subject of the video said words that they did not
actually say," as well as content that is "the product of artificial
intelligence or machine learning that merges, replaces or superimposes
content onto a video, making it appear to be authentic."
The move came days before a US House Energy and Commerce hearing on manipulated media content, titled "Americans at Risk: Manipulation and Deception in the Digital Age."
Twitter
has also been in the process of coming up with its own deepfake
policies, asking its community for help in drafting them, although
no final policy has been released as of yet.
But what are deepfakes? And why are social media platforms and governments so concerned about them?
Artificial Intelligence has been the hot topic of 2019 – this vast
and game-changing technology has opened new doors for what organisations
can achieve thanks to technology. However, with all the good, such as
facial recognition or automation, also came some bad.
In the decade of fake news and misinformation, there has always been a
general understanding that although social media posts, clickbait
websites, and text content in general, were not to be fully trusted,
videos and audio were safe from the rise of deception – that is, until
deepfakes entered the scene.
According to Merriam-Webster,
the term deepfake is “typically used to refer to a video that has been
edited using an algorithm to replace the person in the original video
with someone else (especially a public figure) in a way that makes the
video look authentic."
The fake in the word is pretty self-explanatory – these videos are
not real. The deep comes from deep learning, a subset of artificial
intelligence that utilises different layers of artificial neural
networks. Specifically, deepfakes employ two sets of algorithms, one to
create the video, and the second to determine if it is fake. The first
learns from the second to create a perfectly unidentifiable fake video.
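The "two sets of algorithms" described here are, in essence, a generative adversarial network (GAN): a generator and a discriminator trained against each other. The toy PyTorch loop below works on random vectors rather than video frames, and its sizes and hyperparameters are made up purely for illustration; it shows only the adversarial training signal, not a real deepfake pipeline.

```python
# Toy GAN sketch: a "creator" (generator) and a "detector" (discriminator)
# trained against each other on flat vectors instead of real video frames.
import torch
import torch.nn as nn

latent_dim, data_dim = 16, 64
G = nn.Sequential(nn.Linear(latent_dim, 128), nn.ReLU(), nn.Linear(128, data_dim))
D = nn.Sequential(nn.Linear(data_dim, 128), nn.ReLU(), nn.Linear(128, 1))

opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)
bce = nn.BCEWithLogitsLoss()

real_batch = torch.randn(32, data_dim)  # stand-in for a batch of real samples

for step in range(100):
    # 1) Train the detector to tell real samples from generated ones.
    z = torch.randn(32, latent_dim)
    fake_batch = G(z).detach()
    d_loss = bce(D(real_batch), torch.ones(32, 1)) + \
             bce(D(fake_batch), torch.zeros(32, 1))
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()

    # 2) Train the creator to fool the detector.
    z = torch.randn(32, latent_dim)
    g_loss = bce(D(G(z)), torch.ones(32, 1))
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()
```

As the article notes, it is this feedback loop between the two networks that pushes the generated output toward being hard to distinguish from the real thing.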
Although the technology behind these videos is very fascinating, the
improper use of deepfakes has raised questions and concerns, and its
newfound mainstream status is not to be underestimated.
The beginning of the new decade saw TikTok's parent company ByteDance under accusations of developing
a feature, referred to as "Face Swap", using deepfake technology.
ByteDance has denied the accusations, but the possibility of such a
feature becoming available to everyone raises concerns about how the
general public would use it.
The most famous example is the Chinese deepfake app Zao, which
superimposes a photo of the user's face onto a person in a video or GIF.
While Zao mainly faced privacy issues – the first version of the user
agreement stated that people who uploaded their photos surrendered
intellectual property rights to their face – the real concern is how
people would actually use such a controversial technology if it
were to become available to a wider audience. At the time, Chinese
online payment system Alipay responded to fears over fraudulent use of Zao
by saying that current facial swapping technology "cannot deceive
[their]
payment apps" – but this doesn't mean that the technology is not
evolving and couldn't pose a threat in the future.
Another social network to make headlines in the first week of 2020 in relation to deepfakes is Snapchat – the company also decided to invest in its own deepfake technology. The social network bought deepfake maker AI Factory for US $166M,
and the acquisition resulted in a new Snapchat feature called "Cameos"
that works in the same way deepfake videos do – users can use their
selfies to become part of a selection of videos and essentially create
content that looks real, but that has never happened.
Deepfakes have been around for a while now – the most prevalent use
of this technology is in pornography, which has seen a growing number of
women, especially celebrities, becoming the protagonists of
pornographic content without their consent. The trend started on Reddit,
where pornographic deepfakes featuring the faces of actress Gal Gadot,
singers Taylor Swift and Ariana Grande, amongst others, grew in
popularity. Last year, deepfake pornography accounted for 96 percent of
the 14,678 deepfake videos online, according to a report by Amsterdam-based company Deeptrace.
The remaining four percent, although small, could be just as
dangerous, and even change the global political and social landscape.
In response to Facebook's decision not to take down the "shallowfake"
(videos manipulated with basic editing tools or intentionally placed
out of context) video of US House Speaker Nancy Pelosi appearing to be
slurring her words, a team which included UK artist Bill Posters posted a
deepfake video of Mark Zuckerberg giving an appalling speech that
boasted his "total control of billions of people's stolen data, all
their secrets, their lives, their futures." The artists' aim, they said,
was to interrogate the power of new forms of computational propaganda.
Other examples of very credible deepfakes include Barack Obama
delivering a speech on the dangers of false information (the irony!) and, in
a much more worrying use of the technology, cybercriminals mimicking a
CEO's voice to demand a cash transfer.
There is clearly a necessity to address deepfakes on a number of
fronts to avoid them becoming a powerful tool of misinformation.
For starters, although the commodification of this technology can be
frightening, it also raises people’s level of awareness, and puts them
in a position to question the credibility of the videos and audio
they're watching or listening to. It is up to the viewer to check
whether videos are real, just as it is with fake news.
Moreover, the same technology that created the issue could be the
answer to solving it. Last month, Facebook, in cooperation with Amazon,
Microsoft and Partnership on AI, launched a competition called the "Deepfake Detection Challenge"
to create automated tools, using AI technology, that can spot
deepfakes. At the same time, the AI Foundation also announced they are
building a deepfake detection tool for the general public.
Regulators have also started moving in the right direction to avoid
the misuse of this technology. US Congress held its first hearing on
deepfakes in June 2019, due to growing concerns over the impact deepfakes
could have on the upcoming US presidential election; while, as in the
case of Facebook and Twitter, social media platforms are under more and
more pressure to take action against misinformation, which now includes
deepfake videos and audios.
Posted by AGORACOM-JC
at 3:53 PM on Thursday, January 16th, 2020
SPONSOR: Datametrex AI Limited
(TSX-V: DM) A revenue generating small cap A.I. company that NATO and
Canadian Defence are using to fight fake news & social media
threats. The company announced three $1M contracts in Q3-2019. Click here for more info.
House Intelligence Committee chairman praised Facebook policy on deepfakes
Other lawmakers not so impressed; say other social media platforms not doing enough
House Permanent Select Committee on Intelligence Chairman Rep. Adam
Schiff (D-CA) said Facebook's announcement this past week of its "new
policy which will ban intentionally misleading deepfakes from its
platforms is a sensible and responsible step, and I hope that others
like YouTube and Twitter will follow suit."
Schiff cautioned, however, that, "As with any new policy, it will be
vital to see how it is implemented, and particularly whether Facebook
can effectively detect deepfakes at the speed and scale required to
prevent them from going viral," emphasizing that "the damage done by a
convincing deepfake, or a cruder piece of misinformation, is
long-lasting, and not undone when the deception is exposed, making
speedy takedowns the utmost priority."
Schiff added he'll "also be focused on how Facebook deals with other
harmful disinformation like so-called 'cheapfakes,' which are not
covered by this new policy because they are created with less
sophisticated techniques but nonetheless purposefully and maliciously
distort an existing piece of media."
Not all lawmakers – or privacy rights advocates and groups –
concerned about this problem, though, were as impressed as Schiff with
Facebook's new policy, Enforcing Against Manipulated Media, which was
announced by Facebook Vice President for Global Policy Management
Monika Bickert only days before she testified
last week before the House Committee on Energy and Commerce
Subcommittee on Consumer Protection and Commerce hearing on "Americans
at Risk: Manipulation and Deception in the Digital Age."
Subcommittee Chairwoman Rep. Jan Schakowsky (D-IL) chastised
"Congress [for having] unfortunately taken a laissez faire approach to
regulating unfair and deceptive practices online over the past decade
and platforms have let them flourish," the result of which has been that "big
tech failed to respond to the grave threat posed by deep-fakes, as
evidenced by Facebook scrambling to announce a new policy that strikes
me as wholly inadequate, since it would have done nothing to prevent the
altered video of Speaker Pelosi that amassed millions of views and
prompted no action by the online platform."
Similarly, Democratic Presidential candidate Joe Biden's spokesman
Bill Russo stated, "Facebook's announcement is not a policy meant to fix
the very real problem of disinformation that is undermining faith in our
electoral process, but is instead an illusion of progress. Banning
deepfakes should be an incredibly low floor in combating
disinformation."
Schakowsky and other subcommittee members didn't seem much assuaged
by Bickert's or the other witnesses' testimony at the hearing
that Facebook's policy goes far enough.
She declared that, "Underlying all of this is Section 230 of the
Communications Decency Act, which provided online platforms like
Facebook a legal liability shield for 3rd party content. Many have
argued that this liability shield resulted in online platforms not
adequately policing their platforms, including online piracy and
extremist content. Thus, here we are, with big tech wholly unprepared to
tackle the challenges we face today," which she described as "a topline
concern for this subcommittee." We "must protect consumers regardless
of whether they are online or not. For too long, big tech has argued
that ecommerce and digital platforms deserved special treatment and a
light regulatory touch."
In her opening statement, Schakowsky further noted that the Federal
Trade Commission “works to protect Americans from many unfair and
deceptive practices, but a lack of resources, authority, and even a lack
of will has left many American consumers feeling helpless in the
digital world. Adding to that feeling of helplessness, new technologies
are increasing the scope and scale of the problem. Deepfakes,
manipulated video, dark patterns, bots, and other technologies are
hurting us in direct and indirect ways.â€
"People share millions of photos and videos on Facebook every day,
creating some of the most compelling and creative visuals on our
platform," Bickert said in announcing Facebook's policy, but conceded
"some of that content is manipulated, often for benign reasons, like
making a video sharper or audio more clear. But there are people who
engage in media manipulation in order to mislead," and these
"manipulations can be made through simple technology like Photoshop or
through sophisticated tools that use artificial intelligence or 'deep
learning' techniques to create videos that distort reality – usually
called deepfakes."
"While these videos are still rare on the Internet," Bickert said,
"they [nevertheless] present a significant challenge for our industry
and society as their use increases."
"As we enter 2020, the problem of disinformation, and how it can
spread rapidly on social media, is a central and continuing national
security concern, and a real threat to the health of our democracy,"
Schiff said, noting that "for more than a year, I've been pushing
government agencies and tech companies to recognize and take action
against the next wave of disinformation that could come in the form of
'deepfakes' – AI-generated video, audio, and images that are difficult
or impossible to distinguish from the real thing."
Schiff pointed to experts who testified during an open hearing of the
Intelligence Committee last year that "the technology to create
deepfakes is advancing rapidly and widely available to state and
non-state actors, and has already been used to target private
individuals …"
Schiff said in his response to Facebook's policy that he intends "to
continue to work with government agencies and the private sector to
advance policies and legislation to make sure we’re ready for the next
wave of disinformation online, including by improving detection
technologies, something which the recently passed Intelligence
Authorization Act facilitates with a new prize competition," which Biometric Update earlier reported on.
Bickert said Facebook's "approach has several components, from
investigating AI-generated content and deceptive behaviors like fake
accounts, to partnering with academia, government and industry to
exposing people behind these efforts," underscoring that "collaboration
is key. Across the world, we've been driving conversations with more
than 50 global experts with technical, policy, media, legal, civic and
academic backgrounds to inform our policy development and improve the
science of detecting manipulated media," and, "as a result of these
partnerships and discussions, we are strengthening our policy toward
misleading manipulated videos that have been identified as deepfakes."
"Going forward," she stated, Facebook "will remove misleading manipulated media" if it meets the specific detailed criteria she briefly outlined in announcing the social media giant's new policy.
She described the criteria as applying specifically to content which "has
been edited or synthesized – beyond adjustments for clarity or quality –
in ways that aren't apparent to an average person and would likely
mislead someone into thinking that a subject of the video said words
that they did not actually say," and, "it is the product of artificial
intelligence or machine learning that merges, replaces or superimposes
content onto a video, making it appear to be authentic."
However, she called attention to the fact that the new policy "does
not extend to content that is parody or satire, or video that has been
edited solely to omit or change the order of words," highlighting that,
"consistent with our existing policies, audio, photos or videos, whether
a deepfake or not, will be removed from Facebook if they violate any of
our other Community Standards including those governing nudity, graphic violence, voter suppression, and hate speech."
She further stated that "videos that don't meet these standards for removal are still eligible for review by one of our independent third-party fact-checkers,
which include over 50 partners worldwide fact-checking in over 40
languages," under the new Facebook policy. And, "If a photo or video is
rated false or partly false by a fact-checker, we significantly reduce
its distribution in News Feed, and reject it if it's being run as an
ad."
"And, critically," she stressed, "people who see it, try to share it,
or have already shared it, will see warnings alerting them that it's
false."
Bickert said the company believes that "this approach is critical to
our strategy, and one we heard specifically from our conversations with
experts," exclaiming that "if we simply removed all manipulated videos
flagged by fact-checkers as false, the videos would still be available
elsewhere on the Internet or social media ecosystem." Thus, she
expressed, "by leaving them up and labelling them as false, we're
providing people with important information and context."
"Our enforcement strategy against misleading manipulated media also
benefits from our efforts to root out the people behind these efforts,"
she continued, pointing out that, "Just last month, we identified and
removed a network using AI-generated photos to conceal their fake
accounts," and Facebook "teams continue to proactively hunt for fake
accounts and other coordinated inauthentic behavior."
"We are also engaged in the identification of manipulated content, of
which deepfakes are the most challenging to detect," she continued,
explaining "that's why last September we launched the Deep Fake
Detection Challenge, which has spurred people from all over the world to
produce more research and open source tools to detect deepfakes."
Meanwhile, in a separate effort by Facebook, the company has
"partnered with Reuters, the world's largest multimedia news provider,
to help newsrooms worldwide to identify deepfakes and manipulated media
through a free online training course," Bickert added, noting that
"news organizations increasingly rely on third parties for large volumes
of images and video, and identifying manipulated visuals is a
significant challenge. This program aims to support newsrooms trying to
do this work."
She concluded by saying that, "As these partnerships and our own
insights evolve, so too will our policies toward manipulated media. In
the meantime, we're committed to investing within Facebook and working
with other stakeholders in this area to find solutions with real
impact."
"Facebook wants you to think the problem is video-editing technology,
but the real problem is Facebook's refusal to stop the spread of
disinformation," House Speaker Nancy Pelosi Deputy Chief of Staff Drew
Hammill responded in a tweet.
Facebook was roundly chastised for seemingly being concerned only
about deepfake videos rather than all the other tech that's been used –
and admitted by Facebook – to manipulate audio and text that's also
deliberately meant to deceive viewers and readers.
"Consider the scale. Facebook has more than 2.7 billion users, more
than the number of followers of Christianity. YouTube has north of 2
billion users, more than the followers of Islam. Tech platforms arguably
have more psychological influence over two billion people's daily
thoughts and actions when considering that millions of people spend
hours per day within the social world that tech has created, checking
hundreds of times a day," the subcommittee heard from Center for Humane Technology President and Co-Founder Tristan Harris.
"In several developing countries like the Philippines, Facebook has
100 percent penetration. Philippines journalist Maria Ressa calls it the
first 'Facebook nation.' But what happens when infrastructure is left
completely unprotected, and vast harms emerge as a product of tech
companies' direct operation and profit?"
Declaring that "social organs of society [are] left open for
deception," Harris warned that "these private companies have become the
eyes, ears, and mouth by which we each navigate, communicate and make
sense of the world. Technology companies manipulate our sense of
identity, self-worth, relationships, beliefs, actions, attention,
memory, physiology and even habit-formation processes, without proper
responsibility."
"Technology," he said, "has become the filter by which we are
experiencing and making sense of the real world," and, "in so doing,
technology has directly led to the many failures and problems that we
are all seeing: fake news, addiction, polarization, social isolation,
declining teen mental health, conspiracy thinking, erosion of trust,
breakdown of truth."
"But, while social media platforms have become our cultural and
psychological infrastructure on which society works, commercial
technology companies have failed to mitigate deception on their own
platforms," Harris direly warned. "Imagine a nuclear
power industry creating the energy grid infrastructure we all rely on,
without taking responsibility for nuclear waste, grid failures, or
making sufficient investments to protect it from cyber attacks. And
then, claiming that we are personally responsible for buying radiation
kits to protect ourselves from possible nuclear meltdowns."
"By taking over more and more of the 'organs' needed for society to
function, social media has become the de facto psychological
infrastructure that has created conditions that incentivize mass
deception at industrialized scales," he said, starkly
adding, "Technology companies have covertly 'tilted' the playing field
of our individual and collective attention, beliefs and behavior to
their private commercial benefit," and that, "naturally, these tools and
capabilities tend to favor the sole pursuit of private profit far more
easily and productively than any 'dual purpose' benefits they may also
have at one time – momentarily – and occasionally had for culture or
society."
Hill staffers involved in this issue advised watching for "more
aggressive" legislation emanating from "the variety of committees and
subcommittees" with authority "to do something."
Indeed. Energy and Commerce Committee Chairman Frank Pallone, Jr. (D-NJ), said in his opening statement
that Congress needs to move forward to begin to get answers "so
that we can start to provide more transparency and tools for consumers
to fight misinformation and deceptive practices."
“While computer scientists are working on technology that can help
detect each of these deceptive techniques, we are in a technological
arms race. As detection technology improves, so does the deceptive
technology. Regulators and platforms trying to combat deception are left
playing whack-a-mole," he acknowledged.
"Unrelenting advances in these technologies and their abuse raise
significant questions for all of us," he concluded, asking, "What is the
prevalence of these deceptive techniques," and, "how are these
techniques actually affecting our actions and decisions?"
But, more importantly – from a distinctly legislative and regulatory position – he posited, "What steps are companies and regulators taking to mitigate consumer fraud and misinformation?"
Posted by AGORACOM-JC
at 1:05 PM on Wednesday, January 15th, 2020
SPONSOR: Datametrex AI Limited
(TSX-V: DM) A revenue generating small cap A.I. company that NATO and
Canadian Defence are using to fight fake news & social media
threats. The company announced three $1M contracts in Q3-2019. Click here for more info.
Instagram begins to hide retouched images
With the desire to stop the images that convey fake news on its
platform, Instagram is starting to hide pictures that have been
artistically retouched.
In order to fight fake news, Instagram announced some new features
last month. As it does with content considered offensive, Instagram now blurs
images that convey false information. The social network further limits
the reach of a suspicious post and keeps it out of the Explore
section and hashtag searches. While the war on fake news starts with
good intentions, it seems the social network's algorithm
works a little too well. Certain heavily retouched artistic photos have thus been treated as fake news and hidden on the platform.
As spotted by PetaPixel, Toby Harriman, a photographer based
in San Francisco, realized this while browsing his Instagram feed. He
explains that he came across, for the first time, the screen indicating
that a hidden post contained false information. Curious, the
photographer clicked anyway, only to find that it was a
photo of a man seen from behind, surrounded by mountains of all colors.
It is easy to see why Instagram considered that the image
conveyed false information, since it was heavily retouched to
change the color of the mountains. It is clear, however, that the author
of the photo had an artistic approach and did not seek to convey
false information. Officially, Instagram says that its fake news
detection system uses "a combination of user feedback and technology".
The flagged photo is then sent to independent fact-checkers, who
determine whether it distorts reality. If so, Instagram will
limit the reach of the post and hide it from users' news feeds.
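Purely as a schematic of the workflow the article describes (automated and user-feedback flagging, review by fact-checkers, then reduced distribution), here is a hypothetical sketch in Python; none of the class, function, or field names correspond to Instagram's real systems or APIs.

```python
# Hypothetical illustration of the moderation flow described above;
# these names are invented, not Instagram's actual API.
from dataclasses import dataclass
from typing import Optional

@dataclass
class Post:
    post_id: str
    flagged_by_model: bool                   # the "technology" signal
    user_reports: int                        # the "user feedback" signal
    fact_check_rating: Optional[str] = None  # e.g. "false", "partly_false", "true"

def needs_fact_check(post: Post, report_threshold: int = 5) -> bool:
    """Send a post to human fact-checkers if either signal fires."""
    return post.flagged_by_model or post.user_reports >= report_threshold

def apply_moderation(post: Post) -> dict:
    """Limit distribution of posts that fact-checkers rated false."""
    if post.fact_check_rating in ("false", "partly_false"):
        return {"show_in_feed": False, "show_in_explore": False, "blur_overlay": True}
    return {"show_in_feed": True, "show_in_explore": True, "blur_overlay": False}

# Example: a heavily retouched photo reported by users and rated false.
post = Post("p1", flagged_by_model=True, user_reports=12, fact_check_rating="false")
if needs_fact_check(post):
    print(apply_moderation(post))
```

A pipeline shaped like this also makes the failure mode in Harriman's case easy to see: an artistic edit that trips the automated flag is treated the same way as deliberate misinformation until a human reviewer says otherwise.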
Posted by AGORACOM-JC
at 1:01 PM on Tuesday, January 14th, 2020
SPONSOR: Datametrex AI Limited
(TSX-V: DM) A revenue generating small cap A.I. company that NATO and
Canadian Defence are using to fight fake news & social media
threats. The company announced three $1M contracts in Q3-2019. Click here for more info.
From the AI arms race to adversarial AI
The AI arms race is on, and it’s a cat and mouse game we see every day in our threat intelligence work
As new technology evolves, our lives become more convenient, but cybercriminals see new opportunities to attack users
The AI arms race is on, and it's a cat and mouse game we see every
day in our threat intelligence work. As new technology evolves, our
lives become more convenient, but cybercriminals see new opportunities
to attack users. Whether it's circumventing antivirus software, installing malware or ransomware
on a user's machine, abusing hacked devices to create a botnet, or
taking down websites and important server infrastructure, getting ahead
of the bad guys is the priority for security providers. AI has
increased the sophistication of attacks, making them increasingly
unpredictable and difficult to mitigate.
Increased Systematic Attacks
AI has reduced the manpower needed to carry out a cyber-attack. As
opposed to manually developing malware code, this process has become
automated, reducing the time, effort and expense that goes into these
attacks. The result: attacks become increasingly systematic and can be
carried out on a larger, grander scale.
Societal Change and New Norms
Along with cloud computing services,
the growth of AI has brought many tech advancements, but unless
carefully regulated it risks changing certain aspects of society. A
prime example of this is the use of facial recognition technology by the
police and local government authorities. San Francisco hit the
headlines this year when it became the first US city to ban the
technology.
This was seen as a huge victory – the technology carried far more
risks than benefits and question marks over inaccuracy and racial bias
were raised. AI technology is not perfect and is only as reliable and
accurate as the data that feeds it. As we head into a new decade,
technology companies and lawmakers need to work together to ensure
these developments are suitably regulated and used responsibly.
Changing the way we look at information
We’re now in the era of fake news, misinformation and deep fakes. AI
has made it even easier to create and spread misleading and fake
information. This problem is exacerbated by the fact that we
increasingly consume information in digital echo chambers, making it
harder to access unbiased information.
While responsibility lies with the tech companies that host and share
this content, education in data literacy will become more important in
2020 and beyond. An increasing focus on teaching the public how to
scrutinise information and data will be vital.
More Partnerships to Combat Adversarial AI
In order to combat the threat from adversarial AI, we hope to see
even greater partnerships between technology companies and academic
institutions. This is precisely why Avast has partnered with The Czech
Technical University in Prague to advance research in the field of artificial intelligence.
Avast's rich threat data from over 400 million devices globally has
been combined with the CTU's study of complex and evasive threats in
order to pre-empt and inhibit attacks from cybercriminals. The goals of
the laboratory include publishing breakthrough research in this field
and enhancing Avast's malware detection engine, including its AI-based
detection algorithms.
As we head into a new decade, AI will continue to impact and change
technology and society around us, especially with the increase in smart home devices. However, despite the negative associations, there's a lot more good to be gained from artificial intelligence than bad.
Tools are only as helpful as those who wield them. The biggest
priority in the years ahead will be cross-industry and government
collaboration, to use AI for good and prohibit those who attempt to
abuse it.
Posted by AGORACOM-JC
at 12:15 PM on Monday, January 13th, 2020
SPONSOR: Datametrex AI Limited
(TSX-V: DM) A revenue generating small cap A.I. company that NATO and
Canadian Defence are using to fight fake news & social media
threats. The company announced three $1M contracts in Q3-2019. Click here for more info.
Young people buying into ‘fake news’
By: Esther Cepeda
My son, his best friend, Dave, and I were chatting over a pizza last
weekend when Dave dropped some (absolutely incorrect) information: The
elderly are forgoing nursing homes for cruise ships, because the room
and board cost about the same, plus you get entertainment and travel.
Again — this is not a real phenomenon. A few healthy, affluent
retirees have spent a few years this way, but the cruise ship industry
is in no way prepared to offer extended care for masses of frail elderly
adults with complex medical conditions like chronic diseases and memory
problems.
When I prompted our friend for more information, he said it made
sense because cruise ships have onboard medical staff and morgues.
When further pressed — in my son’s spirited retelling, I’m described
as in a rabid state, pouncing on his innocent pal — Dave said he’d
definitely read a news story about it.
Errrrr, actually, he knew he’d definitely seen it somewhere.
Mmmmmm, maybe on Reddit?
My son acts like at this point I had fire blazing from my eyes. I’ll only admit that I was alarmed.
Dave is a bright young man who attended an excellent high school,
just completed his first semester of college at a fancy East Coast
university and is generally thoughtful and curious about the world.
But he passed on information he believed was fact because he saw
"something" on a news aggregation and message board site, or
"somewhere."
This gem about retiring to a cruise ship has been around since at
least 2003, according to the fact-checking site Snopes.com. It started
out as a bit of viral e-lore, and there have been a few
examples of real-life extended stays. But today, otherwise legitimate
news-gathering organizations post branded, sponsored-content "articles"
(these are paid advertisements) about how to plan such a retirement
alongside real news that was reported by professional journalists and
vetted by editors.
I’m not picking on a kid I care about — he’s just an example of how
incredibly ill-equipped our young people are to navigate an internet
that's loaded with fake news, junk science and other "information"
designed to fool them and everyone else.
In a 2018-19 national assessment of U.S. high school students,
researchers at Stanford University found that two-thirds couldn’t tell
the difference between reported news stories and advertisements set off
by the words "sponsored content" on the homepage of a popular news
website.
And more than one-third of middle school students in the U.S. said
that they "rarely" or "never" learned how to judge the reliability of
sources, according to an analysis of 2018 survey data from The Nation’s
Report Card by the Reboot Foundation, a Paris-based nonprofit that
promotes the teaching of evidence-based reasoning skills.
But while it’s clear that students must be taught media-literacy
skills, there are few teachers prepared to do so. Many people, not just
teachers, tend to believe that their maturity and life experience make
them naturally media literate — i.e., not likely to fall for fake news
or bad sources of information.
A small 2011 study of the effectiveness of teacher training on media
literacy found that eight hours of in-person training – quite a lot by
the common standards of professional development – prepared someone to
pass on such skills. And the study also showed that, like anyone else,
teachers need systematic, direct instruction on media literacy, and it
must be practiced over time.
The bright side is that it’s not rocket science. For the average
reader, becoming media literate is generally simple: Find some good
sources, check bold assertions and be aware of any fine print, like the
basis of an author’s expertise or their potential financial interest.
Now, no one can check every fact in every bit of text they read, but a
high level of skepticism is warranted in this time of newsy
advertisements and active disinformation campaigns. If it sounds too
good (or too bad) to be true, it probably is. And since those types of
pieces of "information" are what drive clicks, views and "reader
engagement," they've proliferated.
Do yourself and your loved ones a service, bookmark a few key
fact-checking websites and use them regularly (an extensive list can be
found in the appendix of the Reboot Foundation’s report, at
reboot-foundation.org/fighting-fake-news).
Posted by AGORACOM-JC
at 1:24 PM on Thursday, January 9th, 2020
SPONSOR: Datametrex AI Limited (TSX-V: DM) A revenue generating small cap A.I. company that NATO and Canadian Defence are using to fight fake news & social media threats. The company announced three $1M contracts in Q3-2019. Click here for more info.
New tool uses AI to flag fake news for media fact-checkers
A new artificial intelligence (AI) tool could help social media networks and news organizations weed out false stories.
The tool uses deep-learning AI algorithms to determine if claims
made in posts or stories are supported by other posts and stories on the
same subject.
By: University of Waterloo
A new artificial intelligence (AI) tool could help social media networks and news organizations weed out false stories.
The tool, developed by researchers at the University of Waterloo,
uses deep-learning AI algorithms to determine if claims made in posts or
stories are supported by other posts and stories on the same subject.
“If they are, great, it’s probably a real story,” said Alexander
Wong, a professor of systems design engineering at Waterloo. “But if
most of the other material isn’t supportive, it’s a strong indication
you’re dealing with fake news.”
Researchers were motivated to develop the tool by the proliferation
of online posts and news stories that are fabricated to deceive or
mislead readers, typically for political or economic gain.
Their system advances ongoing efforts to develop fully automated
technology capable of detecting fake news by achieving 90 per cent
accuracy in a key area of research known as stance detection.
Given a claim in one post or story and other posts and stories on the
same subject that have been collected for comparison, the system can
correctly determine if they support it or not nine out of 10 times.
That is a new benchmark for accuracy by researchers using a large
dataset created for a 2017 scientific competition called the Fake News
Challenge.
While scientists around the world continue to work towards a fully
automated system, the Waterloo technology could be used as a screening
tool by human fact-checkers at social media and news organizations.
“It augments their capabilities and flags information that doesn’t
look quite right for verification,” said Wong, a founding member of the
Waterloo Artificial Intelligence Institute. “It isn’t designed to
replace people, but to help them fact-check faster and more reliably.”
AI algorithms at the heart of the system were shown tens of thousands
of claims paired with stories that either supported or didn’t support
them. Over time, the system learned to determine support or non-support
itself when shown new claim-story pairs.
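To make the claim/story pairing concrete, here is a generic sketch using a Hugging Face sequence-pair classifier. It is not the Waterloo team's system: "bert-base-uncased" is a placeholder base model whose classification head is untrained, so in practice it would first be fine-tuned on labelled claim-article pairs such as the Fake News Challenge data mentioned above, and the example claim and article are invented for illustration.

```python
# Generic sketch of stance detection as sequence-pair classification.
# Not the Waterloo system; the model name and labels are placeholders.
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

model_name = "bert-base-uncased"  # would be fine-tuned on claim/article pairs first
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(model_name, num_labels=2)

claim = "Cruise ships now offer long-term retirement housing."
article = ("A travel industry report says no cruise line offers permanent "
           "residency or long-term medical care for elderly passengers.")

# Encode the claim and the article together so the model can compare them.
inputs = tokenizer(claim, article, return_tensors="pt", truncation=True)
with torch.no_grad():
    logits = model(**inputs).logits
probs = torch.softmax(logits, dim=-1)[0]
print({"does_not_support": float(probs[0]), "supports": float(probs[1])})
```

Aggregating scores like these across many collected articles on the same subject is what would let a screening tool flag a claim as poorly supported for a human fact-checker to review.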
“We need to empower journalists to uncover truth and keep us
informed,” said Chris Dulhanty, a graduate student who led the project.
“This represents one effort in a larger body of work to mitigate the
spread of disinformation.”