Posted by AGORACOM-JC
at 8:22 AM on Thursday, March 26th, 2020
Secured contracts for approximately $1,100,000 CAD for its services
The contracts are from governments, Hyosung Company and various divisions of Lotte, including an initial contract with Canon Korea Business Solutions
TORONTO, March 26, 2020 — Datametrex AI Limited (the “Company” or “Datametrex”) (TSXV: DM) (FSE: D4G) (OTC: DTMXF) is pleased to announce that it has secured contracts for approximately $1,100,000 CAD for its services. The contracts are from governments, Hyosung Company and various divisions of Lotte, including an initial contract with Canon Korea Business Solutions. Canon Korea Business Solutions was created in 1985 when Canon and Lotte formed a joint venture company to service the Korean market.
“I am thrilled to provide this update to our shareholders. Our sales
team is doing a fantastic job opening new doors and extending contracts
with existing clients. Our original goal of a “land and expand” strategy
is paying off nicely, and we look forward to continuing the growth
trajectory,” says Marshall Gunter, CEO of Datametrex AI.
About Datametrex AI Limited
Datametrex AI Limited is a technology focused company with exposure
to Artificial Intelligence and Machine Learning through its wholly owned
subsidiary, Nexalogy (www.nexalogy.com).
Additional information on Datametrex is available at: www.datametrex.com
For further information, please contact:
Marshall Gunter – CEO Phone: (514) 295-2300 Email: [email protected]
Neither the TSX Venture Exchange nor its Regulation Services
Provider (as that term is defined in the policies of the TSX Venture
Exchange) accepts responsibility for the adequacy or accuracy of this
release.
Forward-Looking Statements
This news release contains “forward-looking information” within
the meaning of applicable securities laws. All statements contained
herein that are not clearly historical in nature may constitute
forward-looking information. In some cases, forward-looking information
can be identified by words or phrases such as “may”, “will”, “expect”,
“likely”, “should”, “would”, “plan”, “anticipate”, “intend”,
“potential”, “proposed”, “estimate”, “believe” or the negative of these
terms, or other similar words, expressions and grammatical variations
thereof, or statements that certain events or conditions “may” or “will”
happen, or by discussions of strategy.
Readers are cautioned to consider these and other factors,
uncertainties and potential events carefully and not to put undue
reliance on forward-looking information. The forward-looking information
contained herein is made as of the date of this press release and is
based on the beliefs, estimates, expectations and opinions of management
on the date such forward-looking information is made. The Company
undertakes no obligation to update or revise any forward-looking
information, whether as a result of new information, estimates or
opinions, future events or results or otherwise or to explain any
material difference between subsequent actual events and such
forward-looking information, except as required by applicable law.
Posted by AGORACOM-JC
at 5:15 PM on Tuesday, March 24th, 2020
SPONSOR: Datametrex AI Limited
(TSX-V: DM) A revenue generating small cap A.I. company that NATO and
Canadian Defence are using to fight fake news & social media
threats. The company announced three $1M contracts in Q3-2019. Click here for more info.
This is How Malicious Deepfakes Can Be Beaten
The growth of image manipulation techniques is eroding both trust and informed decision-making.
Although it is impossible to ID and prove fakes in real time, we can ascertain which images are truthful.
Software already exists that can verify images’ provenance – the next step will be hardware-based.
Today, the world captures over 1.2 trillion digital images and
videos annually – a figure that increases by about 10% each year.
Around 85% of those images are captured using a smartphone, a device
carried by over 2.7 billion people around the world.
But as image capture rates increase, so does the rate of image
manipulation. In recent years the level and speed of audiovisual (AV)
manipulation has surprised even the most seasoned experts. The advent of
generative adversarial networks (GANs) – the technology behind ‘deepfakes’ –
has captured the majority of headlines because of their ability to completely
undermine any confidence in visual truth.
And even if deepfakes never proliferate in the public domain, the
world has nevertheless been upended by ‘cheapfakes’ – a term that refers
to more rudimentary image manipulation methods such as photoshopping,
rebroadcasting, speeding and slowing video, and other relatively
unsophisticated techniques. Cheapfakes have already been the main tool
in the proliferation of disinformation and online fraud, which have had
significant impacts on businesses and society.
The growth of image manipulation has made it more difficult to make
sound decisions based on images and videos – something businesses and
individuals are doing at an increasing rate. This includes personal
decisions, such as purchases on peer-to-peer marketplaces, meeting
people through online dating, or voting, as well as business decisions,
like fulfilling an insurance claim or executing a loan. Even globally
important decisions are affected, such as the international response to
images and videos displaying atrocities or egregious violence in
conflict zones or non-permissive areas, and much more.
Each of these very different use cases highlights two contradictory
trends: we rely on images and videos more than ever before, but we trust
them less than we ever have. This is a significant gap that is growing
by the day, and it has forced governments and technologists to invest in
image-detection technology.
Unfortunately, there is no sustainable way to detect fake images in
real time and at scale. This sobering fact will likely not change
anytime soon.
There are several reasons for this. First, almost all metadata is
lost, stripped or altered as an image travels through the internet. By
the time that image hits a detection system, it will be impossible to
reproduce lost metadata – and therefore details like the original date,
time, and location of an image will likely remain unknown.
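To make this concrete, here is a minimal sketch (an illustration, not something from the article) using Python and the Pillow imaging library; the file names are placeholders. It shows how simply re-saving an image, as upload pipelines routinely do, discards the EXIF metadata that carried its original date, time and location.

```python
# Illustrative sketch: EXIF metadata such as date/time and GPS tags is silently
# dropped when an image is re-encoded, which is one reason downstream detectors
# cannot recover it. File names are placeholders.
from PIL import Image

original = Image.open("photo_from_camera.jpg")        # hypothetical input file
print("EXIF tags in original:", len(original.getexif()))

# Re-save the image the way a web service or messaging app might.
# Pillow drops EXIF data unless it is explicitly passed back in.
original.save("reuploaded.jpg", quality=85)

reuploaded = Image.open("reuploaded.jpg")
print("EXIF tags after re-encoding:", len(reuploaded.getexif()))  # typically 0
```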
Second, almost all digital images are instantly compressed and
resized once they are shared across the internet; while some
manipulations are benign (such as recompression), others may be
significant and intended to deceive the content consumer. In either
case, the recompression and resizing of images as they are uploaded and
transmitted makes it difficult, if not impossible, to detect pixel-level
manipulations due to the loss of fidelity in the actual photo.
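A similarly minimal sketch, again assuming Pillow and NumPy with placeholder file names, illustrates the point: a single resize-and-recompress pass of the kind images undergo when shared online already changes pixel values enough to drown out subtle traces of manipulation.

```python
# Illustrative sketch (assumed workflow, not from the article): quantify how much
# pixel-level information changes after one resize-and-recompress pass, the kind
# of transformation images undergo when shared online.
import numpy as np
from PIL import Image

img = Image.open("original.png").convert("RGB")       # hypothetical input file

# Simulate what an upload pipeline might do: downscale, upscale back, JPEG-compress.
shared = img.resize((img.width // 2, img.height // 2)).resize(img.size)
shared.save("shared.jpg", quality=70)
shared = Image.open("shared.jpg").convert("RGB")

diff = np.abs(np.asarray(img, dtype=np.int16) - np.asarray(shared, dtype=np.int16))
print(f"Mean per-pixel change: {diff.mean():.2f} (out of 255)")
# Subtle pixel-level traces of manipulation can be smaller than this noise floor.
```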
Third, when an automated or machine-learning-based detection
technique is identified and democratized, bad actors will quickly
identify a workaround in order to remain undetectable.
What makes detection even more difficult is social media, which
disseminates content – fake or real – in seconds. Those intent on
deceiving can inject fake content onto social media platforms instantly.
Even successful debunking would likely be too late to stop the fake
content from spreading, and cognitive dissonance and bias would more
greatly influence consumers’ decisions.
So if detection will not work, how do we arm people, businesses and
the international community with the tools to make better decisions?
Through images’ provenance. If the world cannot prove what is fake, then
it must prove what is real.
Today, technology does exist – such as Controlled Capture, software
developed by my company, Truepic – that is able to both establish the
provenance of images and to verify critical metadata at the point of
capture. This is possible thanks to advances in smartphone tech,
cellular networks, computer vision and blockchain. However, to truly
restore trust in images on a global level, the use of verified imagery
will need to scale beyond software to hardware.
To achieve this ambitious goal, image veracity technology will need
to be embedded into the chipsets that power smartphones. Truepic is
working with Qualcomm Technologies, the largest maker of smartphone
chipsets, to demonstrate the viability of this approach. Once complete,
this integration would allow smartphone makers to include a ‘verified’
mode to each phone’s native camera app – thus putting verified image
technology into the hands of hundreds of millions of users. The end
result will be cryptographically-signed images with verified provenance,
empowering decision-makers to make smart choices on a personal,
business or global scale. This is the future of decision-making in the
era of disinformation and deepfakes.
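As a rough illustration of what a cryptographically signed image with verified provenance involves (a generic sketch using Python's hashlib and the third-party cryptography package, not Truepic's Controlled Capture or the Qualcomm integration), the capture device could hash the image bytes together with its capture metadata and sign the digest with a device-held key that verifiers later check:

```python
# Minimal provenance-by-signing sketch (an illustration of the general idea, not
# Truepic's implementation): the capture device hashes the image bytes plus its
# capture metadata and signs the digest with a device key.
import hashlib
import json
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

device_key = Ed25519PrivateKey.generate()             # would live in secure hardware
public_key = device_key.public_key()                  # shared with verifiers

def _digest(image_bytes: bytes, metadata: dict) -> bytes:
    """Hash the pixels and the capture metadata together."""
    return hashlib.sha256(image_bytes + json.dumps(metadata, sort_keys=True).encode()).digest()

def sign_capture(image_bytes: bytes, metadata: dict) -> bytes:
    """Sign the digest at the point of capture."""
    return device_key.sign(_digest(image_bytes, metadata))

def verify_capture(image_bytes: bytes, metadata: dict, signature: bytes) -> bool:
    """Return True only if neither the pixels nor the metadata were altered."""
    try:
        public_key.verify(signature, _digest(image_bytes, metadata))
        return True
    except Exception:
        return False

photo = b"...raw image bytes..."                      # placeholder
meta = {"time": "2020-03-26T08:22:00Z", "lat": 43.65, "lon": -79.38}
sig = sign_capture(photo, meta)
print(verify_capture(photo, meta, sig))               # True
print(verify_capture(photo + b"edit", meta, sig))     # False: any edit breaks the signature
```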
Posted by AGORACOM-JC
at 7:40 AM on Tuesday, March 24th, 2020
Announced that following establishment of interoperability between NexaIntelligence tech and Netanomics ORA-pro,
Nexalogy is becoming an affiliate member of the Carnegie Mellon University Center for Informed Democracy and Social Cybersecurity (IDeaS)
TORONTO, March 24, 2020 – Datametrex AI Limited (the “Company” or “Datametrex”) (TSXV: DM) (FSE: D4G) is pleased to announce that following establishment of interoperability between NexaIntelligence tech and Netanomics ORA-pro, Nexalogy is becoming an affiliate member of the Carnegie Mellon University Center for Informed Democracy and Social Cybersecurity (IDeaS).
Dr. Kathleen Carley, from IDeaS commented “We look forward to working
with Nexalogy. They provide a unique and significant
technology, NexaIntelligence, that will help us understand the spread of
information and disinformation. We are delighted that they will be
affiliates of the Informed Democracy and Social-cybersecurity center
(IDeaS).”
“Nexalogy is continuing its ‘Land and Expand’ approach to the USA
market and membership in Carnegie Mellon University IDeaS will be a key
component of networking and research collaboration in these efforts,”
says Marshall Gunter, CEO of the Company.
Datametrex AI Limited is a technology focused company with exposure
to Artificial Intelligence and Machine Learning through its wholly owned
subsidiary, Nexalogy (www.nexalogy.com).
This news release contains “forward-looking information” within
the meaning of applicable securities laws. All statements contained
herein that are not clearly historical in nature may constitute
forward-looking information. In some cases, forward-looking information
can be identified by words or phrases such as “may”, “will”, “expect”,
“likely”, “should”, “would”, “plan”, “anticipate”, “intend”,
“potential”, “proposed”, “estimate”, “believe” or the negative of these
terms, or other similar words, expressions and grammatical variations
thereof, or statements that certain events or conditions “may” or “will”
happen, or by discussions of strategy.
Readers are cautioned to consider these and other factors,
uncertainties and potential events carefully and not to put undue
reliance on forward-looking information. The forward-looking information
contained herein is made as of the date of this press release and is
based on the beliefs, estimates, expectations and opinions of management
on the date such forward-looking information is made. The Company
undertakes no obligation to update or revise any forward-looking
information, whether as a result of new information, estimates or
opinions, future events or results or otherwise or to explain any
material difference between subsequent actual events and such
forward-looking information, except as required by applicable law.
Neither the TSX Venture Exchange nor its Regulation Services
Provider (as that term is defined in the policies of the TSX Venture
Exchange) accepts responsibility for the adequacy or accuracy of this
release.
Posted by AGORACOM-JC
at 3:00 PM on Thursday, March 19th, 2020
SPONSOR: Datametrex AI Limited
(TSX-V: DM) A revenue generating small cap A.I. company that NATO and
Canadian Defence are using to fight fake news & social media
threats. The company announced three $1M contracts in Q3-2019. Click here for more info.
How Coronavirus is Impacting Cyberspace
Hackers were also strategizing to spread fake news to create further confusion
By investigating the dark web marketplace, CYFIRMA uncovered illicit groups selling organic medicine claiming to cure and eradicate the COVID-19 virus
These discussions in the hackers’ communities were carried out in Mandarin, Japanese and English
These are interesting times – the world is witnessing an
unprecedented onslaught of upheavals not just in the ‘real-world’ but
also in the cyber world. We greeted 2020 gingerly knowing the trade war
between the U.S. and China was going to bring about economic uncertainty
but little did we know a global pandemic was upon us, with the
Coronavirus having an impact even on cyberspace.
By CYFIRMA RESEARCH
While healthcare workers are battling the COVID-19 virus, countries are in lockdown mode, and the global economy hangs in the balance, another war is raging in cyberspace.
Cyber risks and threats have multiplied with many more attack
vectors, and hackers’ techniques evolving faster than ever, blending
technical prowess with sophisticated social engineering. The current
challenge with the virus pandemic is a test of nations’ and businesses’
preparedness and resiliency on all fronts.
CYFIRMA’s threat visibility and intelligence research
revealed an increase of over 600% in cyberthreat indicators
related to the Coronavirus pandemic from February to early March.
Threat indicators are made up of conversations observed and uncovered
in the dark web, hackers’ forums, and closed communities. What our
researchers have seen and heard in these communities does not bode well
for governments and businesses – hackers are hard at work, actively
planning how to leverage this climate of fear and uncertainty to attain
their political and financial objectives.
The United States Computer Emergency Readiness Team (US-CERT) has
sent out alerts on scams tricking people into revealing personal
information or donating to fraudulent charities, all under the pretext
of helping to contain and manage the coronavirus. The Federal Trade
Commission has also warned about similar scams.
CYFIRMA’s research team and
multiple security vendors have reported that threat actors have used
fear tactics to spread malware, including LokiBot, RemcosRAT, TrickBot,
and FormBook.
These hackers’ communities span far and wide, communicating in
Cantonese, Mandarin, Russian, English, and Korean, unleashing campaigns
one after another to wreak havoc on unsuspecting nations and
enterprises.
On Dark Web
forums, a group from Hong Kong hatched a plan to create a new phishing
campaign targeting the population from mainland China. The group aimed
to create distrust and incite social unrest by assigning blame to the
Chinese Communist Party.
A deeper analysis of hackers’ conversations also revealed groups from
Taiwan discussing similar phishing and spam campaigns, specifically
targeting influential persons in mainland China to cause further unrest.
Korean-speaking hackers were planning to make financial gains through
sophisticated phishing campaigns loaded with data-exfiltration malware,
including a new variant of EMOTET (a malware strain first detected in
2014 and one of the most prevalent threats of 2019). These hackers were
planning to target Japan, Australia, Singapore, and the U.S.
CYFIRMA’s researchers also observed North Korean hackers targeting
South Korean businesses. The phishing email had the Korean language
title “Coronavirus Correspondence”, tricking recipients into opening
them and launching malware into machines and networks.
With COVID-19, many hacker groups were observed to be using brand
impersonation with fake emails claiming to represent authoritative
bodies such as the Centers for Disease Control (CDC) and the World
Health Organization (WHO). The subject line and content of these emails
were very enticing, offering news updates and cures to the ailment.
We also noticed coronavirus-themed emails designed to look like
emails from the organizations’ leadership team and sent to all
employees.
Embedded with malware that would infect corporate networks, these
phishing attacks deploy social engineering tactics to steal data and
assets.
Beyond cyberattacks to steal data, we also witnessed the planning of
fake websites selling face masks and other health equipment for bitcoin
in China, Japan, and the US.
To aggravate matters, hackers were also strategizing to spread fake
news to create further confusion. By investigating the dark web
marketplace, CYFIRMA uncovered illicit groups selling organic medicine
claiming to cure and eradicate the COVID-19 virus. These discussions in
the hackers’ communities were carried out in Mandarin, Japanese and
English.
A new malware called ‘CoronaVP’ was being discussed by a Russian hacking
community; this could lead to a new ransomware or EMOTET strain,
designed to steal personal information.
Hackers leveraging the COVID-19 pandemic are motivated by a
combination of personal financial gain as well as political espionage to
cause social upheavals. Threat actors in the world of cybercrimes are
well-equipped with tools, technology, expertise and financing to further
both commercial and political agendas. In our hyper-connected digital
world, cyber-crime is a lucrative business, and we should expect attacks
to be more frequent and more sophisticated as the pandemic continues to
cast a shadow over the global economy.
What we have witnessed in the field of cyber-intelligence has taught
us the importance of staying vigilant, and frequently, the most
dangerous forces at work are those we cannot see.
The importance of relevant and timely threat intelligence cannot be
over-emphasized as early detection of cyber threats could save
organizations from hefty financial penalties and irreversible brand
damage.
Posted by AGORACOM-JC
at 12:53 PM on Wednesday, March 18th, 2020
SPONSOR: Datametrex AI Limited
(TSX-V: DM) A revenue generating small cap A.I. company that NATO and
Canadian Defence are using to fight fake news & social media
threats. The company announced three $1M contracts in Q3-2019. Click here for more info.
This stance-detecting AI will help us fact-check fake news
Fighting fake news has become a growing problem in the past few years, and one that begs for a solution involving artificial intelligence
Verifying the near-infinite amount of content being generated on news websites, video streaming services, blogs, social media, etc. is virtually impossible
Fighting fake news is a much more complicated challenge.
Fact-checking websites such as Snopes, FactCheck.org, and PolitiFact do a
decent job of impartially verifying rumors, news, and remarks made by
politicians. But they have limited reach.
It would be unreasonable to expect current artificial intelligence
technologies to fully automate the fight against fake news. But there’s
hope that the use of deep learning can help automate some of the steps of the fake news detection pipeline and augment the capabilities of human fact-checkers.
In a paper presented at the 2019 NeurIPS AI conference,
researchers at DarwinAI and Canada’s University of Waterloo presented
an AI system that uses advanced language models to automate stance
detection, an important first step toward identifying disinformation.
The automated fake-news detection pipeline
Before creating an AI system that can fight fake news, we must first
understand the requirements of verifying the veracity of a claim. In
their paper, the AI researchers break down the process into the
following steps (a stub sketch in code follows the list):
Retrieving documents that are relevant to the claim
Detecting the stance or position of those documents with respect to the claim
Calculating a reputation score for the document, based on its source and language quality
Verifying the claim based on the information obtained from the relevant documents
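The following stub sketch, in Python, mirrors that four-step structure; the function names and the simple weighted vote are assumptions for illustration, not the researchers' implementation.

```python
# A stub sketch of the four-step fact-checking pipeline described above
# (structure only; each step is a placeholder, and the names are assumed).
from typing import List

def retrieve_documents(claim: str) -> List[str]:
    """Step 1: fetch documents relevant to the claim (e.g., via a search index)."""
    return []

def detect_stance(claim: str, document: str) -> str:
    """Step 2: classify the document as agree / disagree / discuss / unrelated."""
    return "discuss"

def reputation_score(document: str) -> float:
    """Step 3: score the document by source reliability and language quality."""
    return 0.5

def verify_claim(claim: str) -> str:
    """Step 4: combine stances, weighted by reputation, into a verdict."""
    docs = retrieve_documents(claim)
    weighted = [(detect_stance(claim, d), reputation_score(d)) for d in docs]
    support = sum(w for stance, w in weighted if stance == "agree")
    refute = sum(w for stance, w in weighted if stance == "disagree")
    if not weighted:
        return "unverified"
    return "likely true" if support > refute else "likely false"

print(verify_claim("Example claim"))
```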
Instead of going for an end-to-end AI-powered fake-news detector that
takes a piece of news as input and outputs “fake” or “real”, the
researchers focused on the second step of the pipeline. They created an
AI algorithm that determines whether a certain document agrees,
disagrees, or takes no stance on a specific claim.
Using transformers to detect stance
This is not the first effort to use AI for stance detection. Previous
research has used various AI algorithms and components, including
recurrent neural networks (RNN), long short-term memory (LSTM) models,
and multi-layer perceptrons, all relevant and useful artificial neural network (ANN) architectures.
The efforts have also leveraged other research done in the field, such
as work on “word embeddings,†numerical vector representations of
relationships between words that make them understandable for neural
networks.
However, while those techniques have been efficient for some tasks
such as machine translation, they have had limited success on stance
detection. “Previous approaches to stance detection were typically
earmarked by hand-designed features or word embeddings, both of which
had limited expressiveness to represent the complexities of language,”
says Alex Wong, co-founder and chief scientist at DarwinAI.
The new technique uses a transformer, a type of deep learning
algorithm that has become popular in the past couple of years.
Transformers are used in state-of-the-art language models such as GPT-2 and Meena. Though transformers still have fundamental limitations of their own, they are much better than their predecessors at handling large corpora of text.
Rather than processing a sequence strictly in order, transformers use
attention mechanisms to find the relevant bits of information in it. This
makes them much more efficient than earlier deep learning algorithms at
handling large sequences. Transformer language models are also pre-trained in a self-supervised fashion, which means they don’t require the time- and labor-intensive data-labeling work that goes into most contemporary AI work.
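A minimal NumPy sketch of the attention operation (purely illustrative; it is not the stance-detection model itself) shows how each position in a sequence scores its relevance to every other position and mixes the values accordingly:

```python
# Minimal sketch of the attention operation at the heart of transformers: every
# position scores its relevance to every other position, letting the model find
# the relevant bits of a sequence instead of reading it strictly in order.
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    """Q, K, V: (sequence_length, dimension) arrays of queries, keys, values."""
    scores = Q @ K.T / np.sqrt(K.shape[-1])             # pairwise relevance scores
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)      # softmax over positions
    return weights @ V                                   # weighted mix of values

tokens = np.random.randn(6, 8)                           # 6 token embeddings, dim 8
out = scaled_dot_product_attention(tokens, tokens, tokens)   # self-attention
print(out.shape)                                          # (6, 8)
```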
“The beauty of bidirectional transformer language models is that they
allow very large text corpuses to be used to obtain a rich, deep
understanding of language,” Wong says. “This understanding can then be
leveraged to facilitate better decision-making when it comes to the
problem of stance detection.”
Transformers come in different flavors. The University of Waterloo
researchers used RoBERTa, a variation of BERT, also known as a deep
bidirectional transformer. RoBERTa, developed by Facebook in 2019, is an open-source language model.
Transformers still require very large compute resources in the training phase (our back-of-the-envelope calculation of Meena’s training costs amounted
to approx. $1.5 million). Not everyone has this kind of money to spare.
The advantage of using ready models like RoBERTa is that researchers
can perform transfer learning,
which means they only need to fine-tune the AI for their specific
problem domain. This saves them a lot of time and money in the training
phase.
“A significant advantage of deep bidirectional transformer language
models is that we can harness pre-trained models, which have already
been trained on very large datasets using significant computing
resources, and then fine-tune them for specific tasks such as
stance-detection,” Wong says.
Using transfer learning, the University of Waterloo researchers were
able to fine-tune RoBERTa for stance-detection with a single Nvidia
GeForce GTX 1080 Ti card (approx. $700).
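A minimal transfer-learning sketch along those lines, assuming the Hugging Face transformers and PyTorch libraries (the paper's actual training code and hyperparameters are not reproduced here), might look like this:

```python
# Illustrative fine-tuning sketch: adapt a pre-trained RoBERTa for four-way
# stance classification on headline/body pairs. Assumes `transformers` and
# `torch` are installed; hyperparameters are placeholders.
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

LABELS = ["agree", "disagree", "discuss", "unrelated"]   # FNC-1 stance classes

tokenizer = AutoTokenizer.from_pretrained("roberta-base")
model = AutoModelForSequenceClassification.from_pretrained(
    "roberta-base", num_labels=len(LABELS)
)
optimizer = torch.optim.AdamW(model.parameters(), lr=2e-5)

def training_step(headline: str, body: str, label: str) -> float:
    """One gradient update on a single (headline, body, stance) example."""
    inputs = tokenizer(headline, body, truncation=True, max_length=512,
                       return_tensors="pt")
    target = torch.tensor([LABELS.index(label)])
    loss = model(**inputs, labels=target).loss
    loss.backward()
    optimizer.step()
    optimizer.zero_grad()
    return loss.item()

loss = training_step("Example headline", "Example article body ...", "discuss")
print(f"loss: {loss:.3f}")
```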
The stance dataset
For stance detection, the researchers used the dataset used in the Fake News Challenge (FNC-1),
a competition launched in 2017 to test and expand the capabilities of
AI in detecting online disinformation. The dataset consists of 50,000
articles as training data and a 25,000-article test set. The AI takes as
input the headline and text of an article, and outputs the stance of
the text relative to the headline. The body of the article may agree or
disagree with the claim made in the headline, may discuss it without
taking a stance, or may be unrelated to the topic.
The RoBERTa-based stance-detection model presented by the University
of Waterloo researchers scored better than the AI models that won the
original FNC competition as well as other algorithms that have been
developed since.
Fake News Challenge (FNC-1) results: The first three rows are the
language models that won the original competition (2017). The next five
rows are AI models that have been developed in the following years. The
final row is the transformer-based approach proposed by researchers at
the University of Waterloo.
The organizers of FNC-1 have gone to great lengths to make the
benchmark dataset reflective of real-world scenarios. They have derived
their data from the Emergent Project, a real-time rumor tracker created
by the Tow Center for Digital Journalism at Columbia University. But
while the FNC-1 dataset has proven to be a reliable benchmark for stance
detection, it has also been criticized for not being distributed evenly enough to represent all classes of outcomes.
“The challenges of fake news are continuously evolving,” Wong says.
“Like cybersecurity, there is a tit-for-tat between those spreading
misinformation and researchers combatting the problem.”
The limits of AI-based stance detection
One of the very positive aspects of the work done by the researchers
of the University of Waterloo is that they have acknowledged the limits
of their deep learning model (a practice that I wish some large AI
research labs would adopt as well).
For one thing, the researchers stress that this AI system will be one
of the many pieces that should come together to deal with fake news.
Other tools still need to be developed for gathering documents,
verifying their reputation, and making a final decision about the claim
in question; those are active areas of research.
The researchers also stress the need to integrate AI tools into
human-controlled procedures. “Provided these elements can be developed,
the first intended end-users of an automated fact-checking system should
be journalists and fact-checkers. Validation of the system through the
lens of experts of the fact-checking process is something that the
system’s performance on benchmark datasets cannot provide,” the
researchers observe in their paper.
The researchers explicitly warn about the consequences of blindly
trusting machine learning algorithms to make decisions about truth. “A
potential unintended negative outcome of this work is for people to take
the outputs of an automated fact-checking system as the definitive
truth, without using their own judgment, or for malicious actors to
selectively promote claims that may be misclassified by the model but
adhere to their own agenda,” the researchers write.
This is one of many projects that show the benefits of combining artificial intelligence and human expertise.
“In general, we combine the experience and creativity of human beings
with the speed and meticulousness afforded by AI. To this end, AI
efforts to combat fake news are simply tools that fact-checkers and
journalists should use before they decide if a given article is
fraudulent,†Wong says. “What an AI system can do is provide some statistical assurance about
the claims in a given news piece. That is, given a headline, they can
surface that, for example, 5,000 ‘other’ articles disagree with the
claim whereas only 50 support it. Such as distinction would serve a
warning to the individual to doubt the veracity of what they are
reading.â€
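A toy tally along the lines Wong describes (the counts and threshold are made up for illustration) could look like this:

```python
# Sketch of the kind of statistical summary Wong describes: tally the stance of
# many retrieved articles toward a single claim and flag lopsided disagreement.
from collections import Counter

stances = ["disagree"] * 5000 + ["agree"] * 50 + ["discuss"] * 120  # model outputs
tally = Counter(stances)

print(tally)  # Counter({'disagree': 5000, 'discuss': 120, 'agree': 50})
if tally["disagree"] > 10 * max(tally["agree"], 1):   # illustrative threshold
    print("Warning: the claim is overwhelmingly disputed by other coverage.")
```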
One of the central efforts of DarwinAI, Wong’s company, is to tackle AI’s explainability problem.
Deep learning algorithms develop very complex representations of their
training data, and it’s often very difficult to understand the factors
behind their output. Explainable AI aims to bring transparency to deep
learning decision-making. “In the case of misinformation, our goal is to
provide journalists with an understanding of the critical factors that
led to a piece of news being classified as fake,” Wong says.
The team’s next step is to tackle reputation-assessment to validate
the truthfulness of an article through its source and linguistic
characteristics.
Posted by AGORACOM-JC
at 5:22 PM on Monday, March 16th, 2020
SPONSOR: Datametrex AI Limited
(TSX-V: DM) A revenue generating small cap A.I. company that NATO and
Canadian Defence are using to fight fake news & social media
threats. The company announced three $1M contracts in Q3-2019. Click here for more info.
Synthetic media: The real trouble with deepfakes
By M. Mitchell Waldrop
The snapshots above look like people you’d know. Your daughter’s best friend from college, maybe? That guy from human resources at work? The emergency-room doctor who took care of your sprained ankle? One of the kids from down the street?
Nope. All of these images are “deepfakes” — the nickname for
computer-generated, photorealistic media created via cutting-edge
artificial intelligence technology. They are just one example of what
this fast-evolving method can do. (You could create synthetic images
yourself at ThisPersonDoesNotExist.com.) Hobbyists, for example, have used the same AI techniques to populate YouTube with a host of startlingly lifelike video spoofs
— the kind that show real people such as Barack Obama or Vladimir Putin
doing or saying goofy things they never did or said, or that revise
famous movie scenes to give actors like Amy Adams or Sharon Stone the
face of Nicolas Cage. All the hobbyists need is a PC with a high-end
graphics chip, and maybe 48 hours of processing time.
It’s good fun, not to mention jaw-droppingly impressive. And coming
down the line are some equally remarkable applications that could make
quick work out of once-painstaking tasks: filling in gaps and scratches
in damaged images or video; turning satellite photos into maps; creating
realistic streetscape videos to train autonomous vehicles; giving a
natural-sounding voice to those who have lost their own; turning
Hollywood actors into their older or younger selves; and much more.
Deepfake artificial-intelligence methods can map the face of, say,
actor Nicolas Cage onto anyone else — in this case, actor Amy Adams in
the film Man of Steel.
Yet this technology has an obvious — and potentially enormous — dark
side. Witness the many denunciations of deepfakes as a menace,
Facebook’s decision in January to ban (some) deepfakes outright and
Twitter’s announcement a month later that it would follow suit.
“Deepfakes play to our weaknesses,” explains Jennifer Kavanagh, a political scientist at the RAND Corporation and coauthor of “Truth Decay,”
a 2018 RAND report about the diminishing role of facts and data in
public discourse. When we see a doctored video that looks utterly real,
she says, “it’s really hard for our brains to disentangle whether that’s
true or false.” And the internet being what it is, there are any number
of online scammers, partisan zealots, state-sponsored hackers and other
bad actors eager to take advantage of that fact.
“The threat here is not, ‘Oh, we have fake content!’” says Hany
Farid, a computer scientist at the University of California, Berkeley,
and author of an overview of image forensics in the 2019 Annual Review of Vision Science.
Media manipulation has been around forever. “The threat is the
democratization of Hollywood-style technology that can create really
compelling fake content.” It’s photorealism that requires no skill or
effort, he says, coupled with a social-media ecosystem that can spread
that content around the world with a mouse click.
Posted by AGORACOM-JC
at 7:00 PM on Sunday, March 15th, 2020
Until now, investor participation in Artificial Intelligence has been the domain of mega companies and those funded by Silicon Valley. Small cap investors can finally consider participating in the great future of A.I. through Datametrex AI (DM: TSXV) (Soon To Be Nexalogy), which has achieved the following over the past few months:
Q3 Revenues Of $1.6 million, an increase of 186%
9 Month Revenues Of $2.56M an increase of 37%
Repeat Contracts Of $1M and $600,000 With Korean Giant LOTTE
$954,000 Contract With Canadian Department of Defence To Fight Social Media Election Meddling
Participation In NATO Research Task Group On Social Media Threat Detection
When a small cap Artificial Intelligence company is successfully
deploying its technology with military and conglomerates, smart
investors have to take a closer look.
That look can begin with our latest interview of Datametrex CEO,
Marshall Gunter, who talks to us about the use of the Company’s
Artificial Intelligence to discover and eliminate US Presidential
election meddling. The fake news isn’t just targeting candidates
specifically; it also targets wedge issues such as abortion cases now
before the US Supreme Court and even the Coronavirus.
Watch this interview on one of your favourite screens or hit play and listen to the audio as you drive.
Posted by AGORACOM-JC
at 4:13 PM on Friday, March 13th, 2020
SPONSOR: Datametrex AI Limited
(TSX-V: DM) A revenue generating small cap A.I. company that NATO and
Canadian Defence are using to fight fake news & social media
threats. The company announced three $1M contracts in Q3-2019. Click here for more info.
As videos faked using artificial intelligence grow increasingly
sophisticated, experts in Switzerland are re-evaluating the risks their
malicious use poses to society – and finding innovative ways to stop the
perpetrators.
In a computer lab on the vast campus of the Swiss Federal Institute
of Technology Lausanne (EPFL), a small team of engineers is
contemplating the image of a smiling, bespectacled man boasting a rosy
complexion and dark curls.
“Yes, that’s a good one,” says lead researcher Touradj Ebrahimi, who
bears a passing resemblance to the man on the screen. The team has
expertly manipulated Ebrahimi’s head shot with an online image of Tesla
founder Elon Musk to create a deepfake – a digital image or
video fabricated through artificial intelligence.
It’s one of many fake illustrations – some more realistic than others – that Ebrahimi’s team has created as they develop software, together with cyber security firm Quantum Integrity (QI), which can detect doctored images, including deepfakes.
Using machine learning, the same process behind the creation of
deepfakes, the software is learning to tell the difference between the
genuine and the forged: a “creator” feeds it fake images, which a
“detector†then tries to find.
“With lots of training, machines can help to detect forgery the same
way a human would,” explains Ebrahimi. “The more it’s used, the better
it becomes.”
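In spirit, that creator-versus-detector setup resembles a standard adversarial training loop. The sketch below is a generic PyTorch illustration of the idea, not the EPFL/Quantum Integrity software, and uses random tensors in place of real photographs:

```python
# Generic creator/detector training loop in the spirit described above
# (an illustrative GAN-style sketch; real photographs are replaced by random data).
import torch
import torch.nn as nn

creator = nn.Sequential(nn.Linear(16, 64), nn.ReLU(), nn.Linear(64, 28 * 28), nn.Tanh())
detector = nn.Sequential(nn.Linear(28 * 28, 64), nn.ReLU(), nn.Linear(64, 1))

opt_c = torch.optim.Adam(creator.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(detector.parameters(), lr=2e-4)
loss_fn = nn.BCEWithLogitsLoss()

real_images = torch.rand(32, 28 * 28)                 # stand-in for genuine photos

for step in range(100):
    # Detector: learn to score real images high and created fakes low.
    fakes = creator(torch.randn(32, 16)).detach()
    d_loss = loss_fn(detector(real_images), torch.ones(32, 1)) + \
             loss_fn(detector(fakes), torch.zeros(32, 1))
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()

    # Creator: learn to produce fakes the detector mistakes for real.
    fakes = creator(torch.randn(32, 16))
    c_loss = loss_fn(detector(fakes), torch.ones(32, 1))
    opt_c.zero_grad(); c_loss.backward(); opt_c.step()

print(f"final detector loss: {d_loss.item():.3f}, creator loss: {c_loss.item():.3f}")
```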
Forged photos and videos have existed since the advent of multimedia.
But AI techniques have only recently allowed forgers to alter faces in a
video or make it appear the person is saying something they never did.
Over the last few years, deepfake technology has spread faster than most
experts anticipated.
The team at EPFL have created the image in the centre by using deep
learning techniques to alter the headshot of Ebrahimi (right) and a
low-resolution image of Elon Musk in profile found on the Internet.
(EPFL/MMSPG/swissinfo)
“Precisely because it is moving so fast, we need to map where this
could go – what sectors, groups and countries might be affected,†says
its deputy director, Aengus Collins.
Although much of the problem with malign deepfakes involves their use
in pornography, there is growing urgency to prepare for cases in which
the same techniques are used to manipulate public opinion.
A fast-moving field
When Ebrahimi first began working with QI on detection software three
years ago, deepfakes were not on the radar of most researchers. At the
time, QI’s clients were concerned about doctored pictures of accidents
used in fraudulent car and home insurance claims. By 2019, however,
deepfakes had developed to such a level of sophistication that the
project decided to dedicate much more time to the issue.
“I am surprised, as I didn’t think [the technology] would move so fast,” says Anthony Sahakian, QI chief executive.
Sahakian has seen firsthand just how far deepfake techniques have
come to achieve realistic results, most recently the swapping of faces
on a passport photo that manages to leave all the document seals intact.
Posted by AGORACOM-JC
at 7:35 AM on Thursday, March 12th, 2020
Announced that Democracy Labs successfully used Nexalogy’s technology to monitor #covid19 and #coronavirus to identify misinformation campaigns and Fake News
Democracy Labs is a US based organization providing a hub for ongoing technology and creative innovation that serves progressive campaigns and organizations at the national, state, and local levels.
TORONTO, March 12, 2020 — Datametrex AI Limited (the “Company” or “Datametrex”) (TSXV: DM) (FSE: D4G) is pleased to announce that Democracy Labs successfully used Nexalogy’s technology to monitor #covid19 and #coronavirus to identify misinformation campaigns and Fake News. Democracy Labs is a US-based organization providing a hub for ongoing technology and creative innovation that serves progressive campaigns and organizations at the national, state, and local levels. In addition to misinformation about COVID-19, DemLabs has also used Nexalogy tech to examine Islamophobia against U.S. Representative Rashida Tlaib.
Key takeaways:
450,000 tweets analyzed from March 1st through 4th using hashtag #covid19
A Russia Today story suggesting that the U.S. primaries be cancelled was promoted by bots
Results of these campaigns can be found by clicking the attached links:
Datametrex AI Limited is a technology focused company with exposure
to Artificial Intelligence and Machine Learning through its wholly owned
subsidiary, Nexalogy (www.nexalogy.com).
This news release contains “forward-looking information” within
the meaning of applicable securities laws. All statements contained
herein that are not clearly historical in nature may constitute
forward-looking information. In some cases, forward-looking information
can be identified by words or phrases such as “may”, “will”, “expect”,
“likely”, “should”, “would”, “plan”, “anticipate”, “intend”,
“potential”, “proposed”, “estimate”, “believe” or the negative of these
terms, or other similar words, expressions and grammatical variations
thereof, or statements that certain events or conditions “may” or “will”
happen, or by discussions of strategy.
Readers are cautioned to consider these and other factors,
uncertainties and potential events carefully and not to put undue
reliance on forward-looking information. The forward-looking information
contained herein is made as of the date of this press release and is
based on the beliefs, estimates, expectations and opinions of management
on the date such forward-looking information is made. The Company
undertakes no obligation to update or revise any forward-looking
information, whether as a result of new information, estimates or
opinions, future events or results or otherwise or to explain any
material difference between subsequent actual events and such
forward-looking information, except as required by applicable law.
Neither the TSX Venture Exchange nor its Regulation Services
Provider (as that term is defined in the policies of the TSX Venture
Exchange) accepts responsibility for the adequacy or accuracy of this
release.
Posted by AGORACOM-JC
at 5:01 PM on Wednesday, March 11th, 2020
Until now, investor participation in Artificial Intelligence has
been the domain of mega companies and those funded by Silicon Valley.
Small cap investors can finally consider participating in the great
future of A.I. through Datametrex AI (DM: TSXV) (Soon To Be Nexalogy),
which has achieved the following over the past few months:
Q3 Revenues Of $1.6 million, an increase of 186%
9 Month Revenues Of $2.56M an increase of 37%
Repeat Contracts Of $1M and $600,000 With Korean Giant LOTTE
$954,000 Contract With Canadian Department of Defence To Fight Social Media Election Meddling
Participation In NATO Research Task Group On Social Media Threat Detection
When a small cap Artificial Intelligence company is successfully
deploying its technology with military and conglomerates, smart
investors have to take a closer look.
That look can begin with our latest interview of Datametrex CEO,
Marshall Gunter, who talks to us about the use of the Company’s
Artificial Intelligence to discover and eliminate US Presidential
election meddling. The fake news isn’t just targeting candidates
specifically; it also targets wedge issues such as abortion cases now
before the US Supreme Court and even the Coronavirus.
Watch this interview on one of your favourite screens or hit play and listen to the audio as you drive.