
Posts Tagged ‘AI’

#Covid19 fake news hacks its way onto government blockchain website – SPONSOR: Datametrex AI Limited $DM.ca

Posted by AGORACOM-JC at 12:45 PM on Friday, March 27th, 2020

SPONSOR: Datametrex AI Limited (TSX-V: DM) A revenue-generating small cap A.I. company that NATO and Canadian Defence are using to fight fake news & social media threats. The company announced three $1M contracts in Q3-2019. Click here for more info.

Covid-19 fake news hacks its way onto government blockchain website

By: Mariana López

  • On March 14, the government in Argentina disclosed that its system had effectively been hacked.
  • Perpetrator(s) uploaded false information regarding guidelines for public officials on handling the coronavirus (Covid-19) onto the country’s official bulletin website, which just so happens to use blockchain technology. 

As a result, officials took the site temporarily offline.

Correspondingly, another issuance will be necessary to disclaim the false statements posted in its 34,239th edition.

Have the blockchain gods forsaken the government of Buenos Aires? Not exactly. Blockchain isn’t bullet-proof.

Hacked! Why Argentina’s case is a big deal

Perhaps you’re wondering, “what’s the big deal? It’s just a bulletin.”

No, it’s not just a bulletin. 

Many countries have their own official bulletin or gazette wherein laws, notifications, or other big-deal, high-level government information is formally announced.

In Argentina, it’s known as the Boletín Oficial. Mexico’s is christened the Diario Oficial de la Federación. In the US it’s called the Federal Register.

And the fact that something of such substantial importance in government communications was hacked is both alarming and interesting.

First off, blockchain-based systems are often hailed as more resistant to this type of manipulation.

And that’s because each block within the chain is supposed to have its own unique cryptographic fingerprint and use what’s known as a “consensus protocol.” Through this protocol, the nodes on the network share and record transactional history. 

Thanks to these mechanisms, in theory, not just any outsider can show up and manipulate the data.
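
To make the “fingerprint” idea concrete, here is a minimal, illustrative Python sketch of a hash-chained record list. It is not the Boletín Oficial’s actual system; it only shows why silently rewriting an old entry is detectable, because the chain of hashes breaks.

```python
# Minimal, illustrative sketch of a hash-chained record list (not the Boletin
# Oficial's actual implementation). Each block carries the SHA-256 fingerprint
# of the previous block, so silently rewriting history breaks the chain.
import hashlib
import json


def fingerprint(block: dict) -> str:
    """Unique cryptographic fingerprint of a block's contents."""
    return hashlib.sha256(json.dumps(block, sort_keys=True).encode()).hexdigest()


def append_block(chain: list, data: str) -> None:
    prev = fingerprint(chain[-1]) if chain else "0" * 64
    chain.append({"index": len(chain), "data": data, "prev_hash": prev})


def is_consistent(chain: list) -> bool:
    """Check that every block still points at the real fingerprint of its predecessor."""
    return all(chain[i]["prev_hash"] == fingerprint(chain[i - 1]) for i in range(1, len(chain)))


chain: list = []
append_block(chain, "Edition 1: routine notices")
append_block(chain, "Edition 2: COVID-19 guidance for officials")

print(is_consistent(chain))           # True
chain[0]["data"] = "Edition 1: tampered notice"
print(is_consistent(chain))           # False: the edit is detectable after the fact
```

Note what the sketch does not do: if an attacker gains access to the publishing step itself, a fraudulent entry simply becomes the newest valid block, which is broadly the failure mode the article describes.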

But with some creativity and determination, hackers can bust through blockchain’s apparently impenetrable defenses.

Secondly, everyone is well aware of how fake news can make its way onto social media. As a result, we’re consistently advised to only rely on official sources, like government websites, for more information on the pandemic. 

The hacking of a government outlet like Buenos Aires’ means that no source is 100 percent immune to being used as a platform to broadcast false statements.

That’s why we should make an effort to consult additional sources for more information. Especially for a topic as sensitive as healthcare.

And remember, if the government is hackable, so are you. So take the necessary precautions to protect your own data and systems.

Source: https://www.contxto.com/en/argentina/argentina-government-hacked-spread-fake-news/

#AI and #deepfake: #COVID19 poses new challenges for detecting deceptive tech – SPONSOR: Datametrex AI Limited $DM.ca

Posted by AGORACOM-JC at 5:53 PM on Thursday, March 26th, 2020

SPONSOR: Datametrex AI Limited (TSX-V: DM) A revenue-generating small cap A.I. company that NATO and Canadian Defence are using to fight fake news & social media threats. The company announced three $1M contracts in Q3-2019. Click here for more info.

AI and deepfake: COVID-19 poses new challenges for detecting deceptive tech

  • With the rise of artificial intelligence and deepfake technology, incidents of widespread fraud around the world are showing us how easy it is to manipulate people in times of crisis
  • Shows the dangerous potential technology has to expand the boundaries of art, at least until a control mechanism can be established

by Şule Güner

An elderly man wearing a face mask who collapsed and died lies on a street near a hospital as another man rides a bicycle past him in Wuhan, China, Jan. 30, 2020. (AFP Photo)

With the rise of artificial intelligence and deepfake technology, incidents of widespread fraud around the world are showing us how easy it is to manipulate people in times of crisis, and they show in full the dangerous potential technology has to expand the boundaries of art, at least until a control mechanism can be established.

“Have we come to a point where our recent technological developments have further enabled the spread of false information?” This question has been a hot ethical topic over the last decade, but unfortunately, examples are plentiful enough to guarantee most would answer a resounding “yes.” At the moment, it seems we live in an age where the risk of digital manipulation is, in fact, a daily one.

News covering the coronavirus, which has since December spread from China to cover almost the entire world, is perhaps the most pertinent example. Over the period between December and January, when China was caught in the midst of an intense struggle against the outbreak, several pieces of footage circulated around the country’s social media platforms, seemingly showing people having collapsed in the middle of the street. The videos left the Chinese public quite perplexed as to their authenticity, yet no sooner had the videos been shared than they had gone viral all over the world.

Meanwhile, a few weeks ago, the world’s top political and economic elites were similarly deceived by a rather old-fashioned trick.

Anthony Lasarevitsch (L) and Gilbert Chikli deceived their victims by impersonating French Foreign Minister Jean-Yves Le Drian. (AFP Photo)

Deception via masks

Four years ago, French and Israeli citizens Gilbert Chikli and Anthony Lasarevitsch called a list of highly influential people, which included King Philippe of Belgium, Gabon President Ali Bongo, the CEO of Lafarge, and various members of the clergy and charity bosses, and tricked them by posing as French Minister of Foreign Affairs Jean-Yves Le Drian. The pair even managed to squeeze money out of a number of those on their hit list at meetings arranged via Skype.

The fraudsters sported a silicone mask of Le Drian and hung a picture of former French President François Hollande on the wall behind them, demanding money for an extraordinary and urgent situation. The duo managed to pull the trick off a number of times, making a total of 55 million euros ($59.53 million) between 2016 and 2017. After fleeing to Ukraine in 2017 and later being sent back to France, they received prison terms and fines at a hearing in France in early March. This situation has proved, however, that even something as cliché as a fake mask can be enough to pull the wool over some people’s eyes.

President Nixon poses at his White House desk in this March 23, 1970 file photo. (AP Photo)

This being the case, just think what could be achieved with what is known as “deepfake technology,” which enables fraudsters to deceive hundreds or even tens of thousands of people at one time. Unfortunately, people will continue to suffer as a result of these highly potent deception tactics, at least until we manage to create new tech that can distinguish between what is real and what is fake, or related measures are taken.

Creators of deepfake videos make more and more incredibly realistic videos every day; the latest example of this was the manipulation of former U.S. President Richard Nixon’s historic moon landing speech. With AI-manipulated deepfake tech, the president, who served from 1969 to 1974, was made to say the moon landing mission had failed. The authenticity and lifelikeness of the video, which was created by the MIT Center for Advanced Virtuality together with machine learning experts from Ukraine and Israel, deeply shocked everyone who witnessed it, while also raising concerns that this technology, if it were to fall into the wrong hands, could manipulate millions.

News on the Turkish front

COVID-19 diagnostic kit gives results in 90 minutes

The pathogen kit needs only 90 minutes to determine whether a patient has contracted the coronavirus. (AA Photo)

In partnership with the Public Health General Directorate’s (HSGM) Virology Laboratory, Bioeksen, a Turkish company within the Istanbul Technical University (İTÜ) ARI Teknokent, has announced that it has developed a pathogen kit that can detect the coronavirus in just 90 minutes. The kit has already entered mass production and is expected to strengthen the hand of health personnel in the fight against the coronavirus pandemic that continues to sweep the world.

Canan Zöhre Ketre Kolukırık, the founder of Bioeksen, said when the coronavirus outbreak first broke out, HSGM labs and Bioeksen took swift action, coordinating efforts under the guidance of the World Health Organization (WHO) and developing a kit to allow for the quick detection of the coronavirus in just under two weeks. Bioeksen subsequently launched production of the kit, she added.

Stating that the group had contributed substantial amounts to the field of innovative biotechnology, Kolukırık said: “Our innovative solutions, which reduce the dayslong analysis of pathogen diagnosis to mere hours, have enabled us to progress rapidly in our sector. Bioeksen has not only remained the manufacturer of consumable products but has also become a groundbreaker for Turkey as the developer and launcher of the first 100% domestic robotic molecular analyzer.”

Source: https://www.dailysabah.com/life/ai-and-deepfake-covid-19-poses-new-challenges-for-detecting-deceptive-tech/news

Datametrex $DM.ca Executes $1.1M In New Contracts

Posted by AGORACOM-JC at 8:22 AM on Thursday, March 26th, 2020
  • Secured contracts for approximately $1,100,000 CAD for its services
  • The contracts are from governments, Hyosung Company and various divisions of Lotte, including an initial contract with Canon Korea Business Solutions

TORONTO, March 26, 2020 — Datametrex AI Limited (the “Company” or “Datametrex”) (TSXV: DM) (FSE: D4G) (OTC: DTMXF) is pleased to announce that it has secured contracts for approximately $1,100,000 CAD for its services. The contracts are from governments, Hyosung Company and various divisions of Lotte, including an initial contract with Canon Korea Business Solutions. Canon Korea Business Solutions was created in 1985 when Canon and Lotte formed a joint venture company to serve the Korean market.

“I am thrilled to provide this update to our shareholders. Our sales team is doing a fantastic job opening new doors and extending contracts with existing clients. Our original ‘land and expand’ strategy is paying off nicely and we look forward to continuing the growth trajectory,” says Marshall Gunter, CEO of Datametrex AI.

About Datametrex AI Limited

Datametrex AI Limited is a technology focused company with exposure to Artificial Intelligence and Machine Learning through its wholly owned subsidiary, Nexalogy (www.nexalogy.com).

Additional information on Datametrex is available at: www.datametrex.com

For further information, please contact:

Marshall Gunter – CEO
Phone: (514) 295-2300
Email: [email protected]

Neither the TSX Venture Exchange nor its Regulation Services Provider (as that term is defined in the policies of the TSX Venture Exchange) accepts responsibility for the adequacy or accuracy of this release.

Forward-Looking Statements

This news release contains “forward-looking information” within the meaning of applicable securities laws.  All statements contained herein that are not clearly historical in nature may constitute forward-looking information. In some cases, forward-looking information can be identified by words or phrases such as “may”, “will”, “expect”, “likely”, “should”, “would”, “plan”, “anticipate”, “intend”, “potential”, “proposed”, “estimate”, “believe” or the negative of these terms, or other similar words, expressions and grammatical variations thereof, or statements that certain events or conditions “may” or “will” happen, or by discussions of strategy.

Readers are cautioned to consider these and other factors, uncertainties and potential events carefully and not to put undue reliance on forward-looking information. The forward-looking information contained herein is made as of the date of this press release and is based on the beliefs, estimates, expectations and opinions of management on the date such forward-looking information is made. The Company undertakes no obligation to update or revise any forward-looking information, whether as a result of new information, estimates or opinions, future events or results or otherwise or to explain any material difference between subsequent actual events and such forward-looking information, except as required by applicable law.

This is How Malicious #Deepfakes Can Be Beat – SPONSOR: Datametrex AI Limited $DM.ca

Posted by AGORACOM-JC at 5:15 PM on Tuesday, March 24th, 2020

SPONSOR: Datametrex AI Limited (TSX-V: DM) A revenue-generating small cap A.I. company that NATO and Canadian Defence are using to fight fake news & social media threats. The company announced three $1M contracts in Q3-2019. Click here for more info.

This is How Malicious Deepfakes Can Be Beat

  • The growth of image manipulation techniques is eroding both trust and informed decision-making.
  • Although it is impossible to ID and prove fakes in real time, we can ascertain which images are truthful.
  • Software already exists that can verify images’ provenance – the next step will be hardware-based.

By Qrius

Today, the world captures over 1.2 trillion digital images and videos annually – a figure that increases by about 10% each year. Around 85% of those images are captured using a smartphone, a device carried by over 2.7 billion people around the world.

But as image capture rates increase, so does the rate of image manipulation. In recent years the level and speed of audio visual (AV) manipulation has surprised even the most seasoned experts. The advent of generative adversarial networks (GANs) – or ‘deepfakes’ – has captured the majority of headlines because of their ability to completely undermine any confidence in visual truth.

And even if deepfakes never proliferate in the public domain, the world has nevertheless been upended by ‘cheapfakes’ – a term that refers to more rudimentary image manipulation methods such as photoshopping, rebroadcasting, speeding and slowing video, and other relatively unsophisticated techniques. Cheapfakes have already been the main tool in the proliferation of disinformation and online fraud, which have had significant impacts on businesses and society.

The growth of image manipulation has made it more difficult to make sound decisions based on images and videos – something businesses and individuals are doing at an increasing rate. This includes personal decisions, such as making purchases on peer-to-peer marketplaces, meeting people through online dating, or voting; and business decisions, like fulfilling an insurance claim or executing a loan. Even globally important decisions are impacted, such as the international response to images and videos displaying atrocities or egregious violence in conflict zones or non-permissive areas, and much more.

Each of these very different use cases highlights two contradictory trends: we rely on images and videos more than ever before, but we trust them less than we ever have. This is a significant gap that is growing by the day and has forced governments and technologists to invest in image-detection technology.

Unfortunately, there is no sustainable way to detect fake images in real time and at scale. This sobering fact will likely not change anytime soon.

There are several reasons for this. First, almost all metadata is lost, stripped or altered as an image travels through the internet. By the time that image hits a detection system, it will be impossible to reproduce lost metadata – and therefore details like the original date, time, and location of an image will likely remain unknown.

Second, almost all digital images are instantly compressed and resized once they are shared across the internet; while some manipulations are benign (such as recompression), others may be significant and intended to deceive the content consumer. In either case, the recompression and resizing of images as they are uploaded and transmitted makes it difficult, if not impossible, to detect pixel-level manipulations due to the loss of fidelity in the actual photo.

Third, when an automated or machine-learning-based detection technique is identified and democratized, bad actors will quickly identify a workaround in order to remain undetectable.
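
The first two points are easy to demonstrate. The sketch below, an illustration using the Pillow imaging library rather than a reconstruction of any real platform’s processing, attaches a single piece of metadata to a synthetic image and then simulates a share-and-recompress cycle: the metadata disappears and the file’s byte-level fingerprint changes, which is exactly what frustrates after-the-fact detection.

```python
# Illustrative only: metadata loss and byte-level change under recompression,
# simulated with Pillow on a synthetic image (not any platform's real pipeline).
import hashlib
import io

from PIL import Image

# Build a small synthetic "photo" with one piece of metadata attached.
img = Image.new("RGB", (256, 256))
img.putdata([(x, y, (x + y) % 256) for y in range(256) for x in range(256)])

exif = Image.Exif()
exif[270] = "Taken 2020-03-14, Buenos Aires"   # tag 270 = ImageDescription

buf = io.BytesIO()
img.save(buf, format="JPEG", quality=95, exif=exif.tobytes())
original = buf.getvalue()

# What a sharing platform typically does: decode, recompress, forget the metadata.
reshared_buf = io.BytesIO()
Image.open(io.BytesIO(original)).save(reshared_buf, format="JPEG", quality=70)
reshared = reshared_buf.getvalue()

print(dict(Image.open(io.BytesIO(original)).getexif()))   # {270: 'Taken 2020-03-14, ...'}
print(dict(Image.open(io.BytesIO(reshared)).getexif()))   # {} -- metadata gone
print(hashlib.sha256(original).hexdigest()[:12],
      hashlib.sha256(reshared).hexdigest()[:12])           # different byte fingerprints
```

Pixel values also shift slightly under the lossier second encode, so even a fingerprint computed over the original pixels will no longer match after resharing.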

What makes detection even more difficult is social media, which disseminates content – fake or real – in seconds. Those intent on deceiving can inject fake content onto social media platforms instantly. Even successful debunking would likely be too late to stop the fake content from spreading, and cognitive dissonance and bias would more greatly influence consumers’ decisions.

So if detection will not work, how do we arm people, businesses and the international community with the tools to make better decisions? Through images’ provenance. If the world cannot prove what is fake, then it must prove what is real.

Today, technology does exist – such as Controlled Capture, software developed by my company, Truepic – that is able to both establish the provenance of images and to verify critical metadata at the point of capture. This is possible thanks to advances in smartphone tech, cellular networks, computer vision and blockchain. However, to truly restore trust in images on a global level, the use of verified imagery will need to scale beyond software to hardware.

To achieve this ambitious goal, image veracity technology will need to be embedded into the chipsets that power smartphones. Truepic is working with Qualcomm Technologies, the largest maker of smartphone chipsets, to demonstrate the viability of this approach. Once complete, this integration would allow smartphone makers to include a ‘verified’ mode to each phone’s native camera app – thus putting verified image technology into the hands of hundreds of millions of users. The end result will be cryptographically-signed images with verified provenance, empowering decision-makers to make smart choices on a personal, business or global scale. This is the future of decision-making in the era of disinformation and deepfakes.
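
The paragraphs above describe the goal rather than the mechanism. As a rough, hedged sketch of the general idea – not Truepic’s Controlled Capture or the Qualcomm chipset integration – point-of-capture provenance can be thought of as signing the image bytes with a key held by the capture device, so that anyone holding the public key can later check whether the file has been altered:

```python
# Hedged sketch of point-of-capture provenance: hash the image bytes at capture,
# sign the hash with a device-held key, verify later with the public key.
# This is the general idea only, not any vendor's actual implementation.
import hashlib

from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric import ed25519

device_key = ed25519.Ed25519PrivateKey.generate()   # would live in secure hardware
public_key = device_key.public_key()                # published / registered


def sign_capture(image_bytes: bytes) -> bytes:
    """Sign the SHA-256 digest of the image at the moment of capture."""
    return device_key.sign(hashlib.sha256(image_bytes).digest())


def verify_capture(image_bytes: bytes, signature: bytes) -> bool:
    """Check that the bytes we received are the bytes the device signed."""
    try:
        public_key.verify(signature, hashlib.sha256(image_bytes).digest())
        return True
    except InvalidSignature:
        return False


photo = b"\xff\xd8...original JPEG bytes..."
tag = sign_capture(photo)

print(verify_capture(photo, tag))                   # True
print(verify_capture(photo + b"tampered", tag))     # False
```

In a real deployment the signed hash would typically be anchored somewhere tamper-evident (the article mentions blockchain), and note that this pairs badly with the recompression problem described earlier: any platform that re-encodes the image also invalidates the signature, which is why end-to-end, hardware-level integration matters.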

Source: https://qrius.com/this-is-how-malicious-deepfakes-can-be-beat/

Carnegie Mellon University Ideas Uses DataMetrex $DM.ca Nexalogy To Study Disinformation

Posted by AGORACOM-JC at 7:40 AM on Tuesday, March 24th, 2020
  • Announced that, following the establishment of interoperability between NexaIntelligence tech and Netanomics ORA-pro, Nexalogy is becoming an affiliate member of the Carnegie Mellon University Center for Informed Democracy and Social Cybersecurity (IDeaS)

TORONTO, March 24, 2020 – Datametrex AI Limited (the “Company” or “Datametrex”) (TSXV: DM) (FSE: D4G) is pleased to announce that following the establishment of interoperability between NexaIntelligence tech and Netanomics ORA-pro, Nexalogy is becoming an affiliate member of the Carnegie Mellon University Center for Informed Democracy and Social Cybersecurity (IDeaS).

Dr. Kathleen Carley of IDeaS commented: “We look forward to working with Nexalogy. They provide a unique and significant technology, NexaIntelligence, that will help us understand the spread of information and disinformation. We are delighted that they will be affiliates of the Informed Democracy and Social-cybersecurity center (IDeaS).”

“Nexalogy is continuing its ‘Land and Expand’ approach to the USA market and membership in Carnegie Mellon University IDeaS will be a key component of networking and research collaboration in these efforts,” says Marshall Gunter, CEO of the Company.

The IDeaS website can be found here:

https://www.cmu.edu/ideas-social-cybersecurity/index.html

The Netanomics website can be found here:

http://netanomics.com/

About Datametrex

Datametrex AI Limited is a technology focused company with exposure to Artificial Intelligence and Machine Learning through its wholly owned subsidiary, Nexalogy (www.nexalogy.com).

For further information, please contact:

Marshall Gunter – CEO
Email: [email protected]
Phone: 514-295-2300

Forward-Looking Statements

This news release contains “forward-looking information” within the meaning of applicable securities laws.  All statements contained herein that are not clearly historical in nature may constitute forward-looking information. In some cases, forward-looking information can be identified by words or phrases such as “may”, “will”, “expect”, “likely”, “should”, “would”, “plan”, “anticipate”, “intend”, “potential”, “proposed”, “estimate”, “believe” or the negative of these terms, or other similar words, expressions and grammatical variations thereof, or statements that certain events or conditions “may” or “will” happen, or by discussions of strategy.

Readers are cautioned to consider these and other factors, uncertainties and potential events carefully and not to put undue reliance on forward-looking information. The forward-looking information contained herein is made as of the date of this press release and is based on the beliefs, estimates, expectations and opinions of management on the date such forward-looking information is made. The Company undertakes no obligation to update or revise any forward-looking information, whether as a result of new information, estimates or opinions, future events or results or otherwise or to explain any material difference between subsequent actual events and such forward-looking information, except as required by applicable law.

Neither the TSX Venture Exchange nor its Regulation Services Provider (as that term is defined in the policies of the TSX Venture Exchange) accepts responsibility for the adequacy or accuracy of this release.

You can go to jail for spreading fake news about #Covid19 – SPONSOR: Datametrex AI Limited $DM.ca

Posted by AGORACOM-JC at 12:38 PM on Monday, March 23rd, 2020

SPONSOR: Datametrex AI Limited (TSX-V: DM) A revenue-generating small cap A.I. company that NATO and Canadian Defence are using to fight fake news & social media threats. The company announced three $1M contracts in Q3-2019. Click here for more info.

You can go to jail for spreading fake news about Covid-19

  • As the coronavirus (Covid-19) spreads, so does the misinformation
  • Recently referred to by the WHO as an “infodemic”, a flood of information, both true and false, has been communicated across all platforms globally

Geraint Crwys-Williams, chief business officer, Primedia Group and acting CEO, Primedia Broadcasting says, “Now, more than ever, the role of accountable and credible media has come to the fore. Government officials and healthcare professionals are using trusted broadcast media and digital platforms of established, verified, media outlets to circulate correct information on Covid-19. There has been a particular focus also on debunking the myths and misinformation in circulation, which is an important role of accountable media as a public service.”

On Wednesday, the Minister for Cooperative Governance and Traditional Affairs, Dr Nkosazana Dlamini-Zuma, set out the Regulations in terms of Section 27 (2) of the Disaster Management Act. According to the Government Gazette, “Any person who publishes any statement, through any medium, including social media, with the intention to deceive any other person about— (a) Covid-19; (b) Covid-19 infection status of any person; or (c) any measure taken by the Government to address Covid-19, commits an offence and is liable on conviction to a fine or imprisonment for a period not exceeding six months, or both such fine and imprisonment.”

Despite this, hoaxes are still being posted on social media, and are gaining traction. The most recent fake news post came from a Facebook account purportedly belonging to President Cyril Ramaphosa, telling South Africans to stay indoors at 10am as helicopters would be spraying chemicals across the country against the coronavirus. Some 8,000 social media users spread that news onwards.

Adds Crwys-Williams, “We urge all South Africans to be mindful of the source of information that they receive. Misinformation does not just cause unnecessary panic; it also puts citizens at risk. We have a duty of care to our employees, our communities and our audience to provide accurate, informative communication to ensure we play our part in reducing, not just the spread of the virus, but of unnecessary panic too.”

He adds that simply sharing misinformation could make someone complicit in the crime, even if that was not the intention.

“We recommend that South Africans go to their trusted news sources such as credible broadcast, print and online media for updates. The South African Government is being vigilant about ensuring that correct information is being disseminated across these channels. They also have a WhatsApp group on 060 012 3456 that offers up-to-date information – simply type ‘hi’ to be included.”

Source: https://www.bizcommunity.com/Article/196/740/201880.html

How #Coronavirus is Impacting Cyberspace – SPONSOR: Datametrex AI Limited $DM.ca

Posted by AGORACOM-JC at 3:00 PM on Thursday, March 19th, 2020

SPONSOR: Datametrex AI Limited (TSX-V: DM) A revenue-generating small cap A.I. company that NATO and Canadian Defence are using to fight fake news & social media threats. The company announced three $1M contracts in Q3-2019. Click here for more info.

How Coronavirus is Impacting Cyberspace

  • Hackers were also strategizing to spread fake news to create further confusion
  • By investigating the dark web marketplace, CYFIRMA uncovered illicit groups selling organic medicine claiming to cure and eradicate the COVID-19 virus
  • These discussions in the hackers’ communities were carried out in Mandarin, Japanese and English

By CISOMAG

These are interesting times – the world is witnessing an unprecedented onslaught of upheavals not just in the ‘real-world’ but also in the cyber world. We greeted 2020 gingerly knowing the trade war between the U.S. and China was going to bring about economic uncertainty but little did we know a global pandemic was upon us, with the Coronavirus having an impact even on cyberspace.

By CYFIRMA RESEARCH

While healthcare workers are battling the COVID-19 virus, countries are in lockdown mode, and the global economy hangs in the balance, another war is raging in cyberspace.

Cyber risks and threats have multiplied with many more attack vectors, and hackers’ techniques evolving faster than ever, blending technical prowess with sophisticated social engineering. The current challenge with the virus pandemic is a test of nations’ and businesses’ preparedness and resiliency on all fronts.

CYFIRMA’s threat visibility and intelligence research revealed a massive increase of over 600% in cyberthreat indicators related to the Coronavirus pandemic from February to early March.

Threat indicators are made up of conversations observed and uncovered in the dark web, hackers’ forums, and closed communities. What our researchers have seen and heard in these communities does not bode well for governments and businesses – hackers are hard at work, actively planning how to leverage this climate of fear and uncertainty to attain their political and financial objectives.

The United States Computer Emergency Readiness Team (US-CERT) has sent out alerts on scams tricking people into revealing personal information or donating to fraudulent charities, all under the pretext of helping to contain and manage the coronavirus. The Federal Trade Commission has also warned about similar scams.

CYFIRMA’s research team and multiple security vendors have reported that threat actors have used fear tactics to spread malware, including LokiBot, RemcosRAT, TrickBot, and FormBook.

These hackers’ communities span far and wide, communicating in Cantonese, Mandarin, Russian, English, and Korean, unleashing campaigns one after another to wreak havoc on unsuspecting nations and enterprises.

On Dark Web forums, a group from Hong Kong hatched a plan to create a new phishing campaign targeting the population from mainland China. The group aimed to create distrust and incite social unrest by assigning blame to the Chinese Communist Party.

A deeper analysis of hackers’ conversations also revealed groups from Taiwan discussing similar phishing and spam campaigns, specifically targeting influential persons in mainland China to cause further unrest.

Korean-speaking hackers were planning to make financial gains using sophisticated phishing campaigns, loaded with sensitive data exfiltration malware and creating a new variant of the EMOTET virus (EMOTET is a malware strain that was first detected in 2014 and was one of the most prevalent threats of 2019). These hackers were planning to target Japan, Australia, Singapore, and the U.S.

CYFIRMA’s researchers also observed North Korean hackers targeting South Korean businesses. The phishing emails had the Korean-language title “Coronavirus Correspondence”, tricking recipients into opening them and launching malware onto machines and networks.

With COVID-19, many hacker groups were observed to be using brand impersonation with fake emails claiming to represent authoritative bodies such as the Centers for Disease Control (CDC) and the World Health Organization (WHO). The subject line and content of these emails were very enticing, offering news updates and cures to the ailment.

We also noticed coronavirus-themed emails designed to look like emails from the organizations’ leadership team and sent to all employees.

Embedded with malware that would infect corporate networks, these phishing attacks deploy social engineering tactics to steal data and assets.

Other than unleashing cyberattacks to steal data, we also witnessed the planning of fake websites to sell face masks and other health apparatus using bitcoin in China, Japan, and the US.

To aggravate matters, hackers were also strategizing to spread fake news to create further confusion. By investigating the dark web marketplace, CYFIRMA uncovered illicit groups selling organic medicine claiming to cure and eradicate the COVID-19 virus. These discussions in the hackers’ communities were carried out in Mandarin, Japanese and English.

A new malware called ‘CoronaVP’ was being discussed by a Russian hacking community; this could lead to a new ransomware or EMOTET strain, designed to steal personal information.

Hackers leveraging the COVID-19 pandemic are motivated by a combination of personal financial gain as well as political espionage to cause social upheavals. Threat actors in the world of cybercrime are well-equipped with tools, technology, expertise and financing to further both commercial and political agendas. In our hyper-connected digital world, cybercrime is a lucrative business, and we should expect attacks to be more frequent and more sophisticated as the pandemic continues to cast a shadow over the global economy.

What we have witnessed in the field of cyber-intelligence has taught us the importance of staying vigilant, and frequently, the most dangerous forces at work are those we cannot see.

The importance of relevant and timely threat intelligence cannot be over-emphasized as early detection of cyber threats could save organizations from hefty financial penalties and irreversible brand damage.

Source: https://www.cisomag.com/cyberthreats-due-to-coronavirus/

This stance-detecting #AI will help us fact-check fake news – SPONSOR: Datametrex AI Limited $DM.ca

Posted by AGORACOM-JC at 12:53 PM on Wednesday, March 18th, 2020

SPONSOR: Datametrex AI Limited (TSX-V: DM) A revenue-generating small cap A.I. company that NATO and Canadian Defence are using to fight fake news & social media threats. The company announced three $1M contracts in Q3-2019. Click here for more info.

This stance-detecting AI will help us fact-check fake news

By: Ben Dickson
  • Fighting fake news has become a growing problem in the past few years, and one that begs for a solution involving artificial intelligence
  • Verifying the near-infinite amount of content being generated on news websites, video streaming services, blogs, social media, etc. is virtually impossible

There has been a push to use machine learning in the moderation of online content, but those efforts have only had modest success in finding spam and removing adult content, and to a much lesser extent detecting hate speech.

Fighting fake news is a much more complicated challenge. Fact-checking websites such as Snopes, FactCheck.org, and PolitiFact do a decent job of impartially verifying rumors, news, and remarks made by politicians. But they have limited reach.

It would be unreasonable to expect current artificial intelligence technologies to fully automate the fight against fake news. But there’s hope that the use of deep learning can help automate some of the steps of the fake news detection pipeline and augment the capabilities of human fact-checkers.

In a paper presented at the 2019 NeurIPS AI conference, researchers at DarwinAI and Canada’s University of Waterloo presented an AI system that uses advanced language models to automate stance detection, an important first step toward identifying disinformation.

The automated fake-news detection pipeline

Before creating an AI system that can fight fake news, we must first understand the requirements of verifying the veracity of a claim. In their paper, the AI researchers break down the process into the following steps:

  • Retrieving documents that are relevant to the claim
  • Detecting the stance or position of those documents with respect to the claim
  • Calculating a reputation score for the document, based on its source and language quality
  • Verifying the claim based on the information obtained from the relevant documents

Instead of going for an end-to-end AI-powered fake-news detector that takes a piece of news as input and outputs “fake” or “real”, the researchers focused on the second step of the pipeline. They created an AI algorithm that determines whether a certain document agrees, disagrees, or takes no stance on a specific claim.
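
To keep the division of labour clear, here is a structural sketch of that four-step pipeline in Python. The function names and placeholder bodies are illustrative only; what comes from the paper is the breakdown of roles, with stance detection (step 2) being the piece the researchers actually automated (a separate sketch of that step follows further down).

```python
# Structural sketch of the four-step fact-checking pipeline described above.
# Placeholder bodies only -- the breakdown of roles is the paper's; the code is not.
from dataclasses import dataclass


@dataclass
class Evidence:
    text: str
    stance: str        # "agree" | "disagree" | "discuss" | "unrelated"
    reputation: float  # 0.0 .. 1.0


def retrieve_documents(claim: str) -> list:
    # Step 1: find documents relevant to the claim (search engines, news archives, ...).
    return ["Officials deny any such measure was announced.", "A report repeats the claim."]


def detect_stance(claim: str, document: str) -> str:
    # Step 2: the part automated in the paper (see the RoBERTa sketch further down).
    return "discuss"


def reputation_score(document: str) -> float:
    # Step 3: score the document's source and language quality.
    return 0.5


def verify_claim(claim: str) -> str:
    # Step 4: weigh the stances of reputable documents into an overall verdict.
    evidence = [Evidence(d, detect_stance(claim, d), reputation_score(d))
                for d in retrieve_documents(claim)]
    support = sum(e.reputation for e in evidence if e.stance == "agree")
    dispute = sum(e.reputation for e in evidence if e.stance == "disagree")
    return "likely true" if support > dispute else "unverified or questionable"


print(verify_claim("Helicopters will spray disinfectant over the city at 10am."))
```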

Using transformers to detect stance

This is not the first effort to use AI for stance detection. Previous research has used various AI algorithms and components, including recurrent neural networks (RNN), long short-term memory (LSTM) models, and multi-layer perceptrons, all relevant and useful artificial neural network (ANN) architectures. The efforts have also leveraged other research done in the field, such as work on “word embeddings,” numerical vector representations of relationships between words that make them understandable for neural networks.

However, while those techniques have been efficient for some tasks such as machine translation, they have had limited success on stance detection. “Previous approaches to stance detection were typically earmarked by hand-designed features or word embeddings, both of which had limited expressiveness to represent the complexities of language,” says Alex Wong, co-founder and chief scientist at DarwinAI.

The new technique uses a transformer, a type of deep learning algorithm that has become popular in the past couple of years. Transformers are used in state-of-the-art language models such as GPT-2 and Meena. Though transformers still suffer from fundamental flaws, they are much better than their predecessors at handling large corpora of text.

Transformers use special techniques to find the relevant bits of information in a sequence. This enables them to be much more memory-efficient than other deep learning algorithms when handling large sequences. Transformers are also an unsupervised machine learning algorithm, which means they don’t require the time- and labor-intensive data-labeling work that goes into most contemporary AI work.

“The beauty of bidirectional transformer language models is that they allow very large text corpuses to be used to obtain a rich, deep understanding of language,” Wong says. “This understanding can then be leveraged to facilitate better decision-making when it comes to the problem of stance detection.”
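
The “special techniques” in question are attention mechanisms. As a rough, minimal sketch in plain numpy – the building block, not the actual BERT/RoBERTa implementation – the core operation lets every position in a sequence compute a weighted mix of all the other positions, which is how the model picks out the relevant bits of a long text:

```python
# Minimal scaled dot-product attention in numpy -- the building block of
# transformers, not the full bidirectional language model discussed above.
import numpy as np


def scaled_dot_product_attention(Q, K, V):
    """Each query attends to all keys; softmax weights select the relevant values."""
    scores = Q @ K.T / np.sqrt(K.shape[-1])            # similarity of each query to each key
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)     # softmax over the sequence
    return weights @ V                                 # attention-weighted mix of values


# Toy example: a "sequence" of 4 token vectors, each 8-dimensional.
rng = np.random.default_rng(0)
tokens = rng.normal(size=(4, 8))
print(scaled_dot_product_attention(tokens, tokens, tokens).shape)   # (4, 8)
```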

Transformers come in different flavors. The University of Waterloo researchers used RoBERTa, a variation of BERT, the deep bidirectional transformer. RoBERTa, developed by Facebook in 2019, is an open-source language model.

Transformers still require very large compute resources in the training phase (our back-of-the-envelope calculation of Meena’s training costs amounted to approx. $1.5 million). Not everyone has this kind of money to spare. The advantage of using ready models like RoBERTa is that researchers can perform transfer learning, which means they only need to fine-tune the AI for their specific problem domain. This saves them a lot of time and money in the training phase.

“A significant advantage of deep bidirectional transformer language models is that we can harness pre-trained models, which have already been trained on very large datasets using significant computing resources, and then fine-tune them for specific tasks such as stance-detection,” Wong says.

Using transfer learning, the University of Waterloo researchers were able to fine-tune RoBERTa for stance-detection with a single Nvidia GeForce GTX 1080 Ti card (approx. $700).
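
As a hedged sketch of what that transfer-learning setup looks like in practice – using the open-source Hugging Face transformers library, with the checkpoint name, label order and usage here being assumptions rather than the paper’s actual code:

```python
# Hedged sketch of fine-tuning RoBERTa for stance detection via transfer learning.
# "roberta-base" and the FNC-1 label order are assumptions, not the paper's code.
import torch
from transformers import RobertaForSequenceClassification, RobertaTokenizerFast

LABELS = ["agree", "disagree", "discuss", "unrelated"]   # FNC-1 stance classes

tokenizer = RobertaTokenizerFast.from_pretrained("roberta-base")
model = RobertaForSequenceClassification.from_pretrained(
    "roberta-base", num_labels=len(LABELS)   # pre-trained weights + a fresh classifier head
)


def predict_stance(headline: str, body: str) -> str:
    """Classify the body's stance toward the headline (meaningful only after fine-tuning)."""
    inputs = tokenizer(headline, body, truncation=True, max_length=512, return_tensors="pt")
    with torch.no_grad():
        logits = model(**inputs).logits
    return LABELS[int(logits.argmax(dim=-1))]


# Fine-tuning itself would iterate over FNC-1 (headline, body, stance) pairs with a
# standard cross-entropy objective -- feasible on a single GTX 1080 Ti, per the article.
print(predict_stance("Miracle cure stops the virus in one day",
                     "Health authorities say no such treatment has been approved."))
```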

The stance dataset

For stance detection, the researchers used the dataset from the Fake News Challenge (FNC-1), a competition launched in 2017 to test and expand the capabilities of AI in detecting online disinformation. The dataset consists of 50,000 articles as training data and a 25,000-article test set. The AI takes as input the headline and text of an article, and outputs the stance of the text relative to the headline. The body of the article may agree or disagree with the claim made in the headline, may discuss it without taking a stance, or may be unrelated to the topic.

The RoBERTa-based stance-detection model presented by the University of Waterloo researchers scored better than the AI models that won the original FNC competition as well as other algorithms that have been developed since.

Fake News Challenge (FNC-1) results: The first three rows are the language models that won the original competition (2017). The next five rows are AI models that have been developed in the following years. The final row is the transformer-based approach proposed by researchers at the University of Waterloo.

To be clear, developing AI benchmarks and evaluation methods that are representative of the messiness and unpredictability of the real world is very difficult, especially when it comes to natural language processing.

The organizers of FNC-1 have gone to great lengths to make the benchmark dataset reflective of real-world scenarios. They derived their data from the Emergent Project, a real-time rumor tracker created by the Tow Center for Digital Journalism at Columbia University. But while the FNC-1 dataset has proven to be a reliable benchmark for stance detection, it has also been criticized for not being distributed evenly enough to represent all classes of outcomes.

“The challenges of fake news are continuously evolving,” Wong says. “Like cybersecurity, there is a tit-for-tat between those spreading misinformation and researchers combatting the problem.”

The limits of AI-based stance detection

One of the very positive aspects of the work done by the researchers of the University of Waterloo is that they have acknowledged the limits of their deep learning model (a practice that I wish some large AI research labs would adopt as well).

For one thing, the researchers stress that this AI system will be just one of many pieces that must come together to deal with fake news. Other tools still need to be developed for gathering documents, verifying their reputation, and making a final decision about the claim in question; those are active areas of research.

The researchers also stress the need to integrate AI tools into human-controlled procedures. “Provided these elements can be developed, the first intended end-users of an automated fact-checking system should be journalists and fact-checkers. Validation of the system through the lens of experts of the fact-checking process is something that the system’s performance on benchmark datasets cannot provide,” the researchers observe in their paper.

The researchers explicitly warn about the consequences of blindly trusting machine learning algorithms to make decisions about truth. “A potential unintended negative outcome of this work is for people to take the outputs of an automated fact-checking system as the definitive truth, without using their own judgment, or for malicious actors to selectively promote claims that may be misclassified by the model but adhere to their own agenda,” the researchers write.


This is one of many projects that show the benefits of combining artificial intelligence and human expertise. “In general, we combine the experience and creativity of human beings with the speed and meticulousness afforded by AI. To this end, AI efforts to combat fake news are simply tools that fact-checkers and journalists should use before they decide if a given article is fraudulent,” Wong says. “What an AI system can do is provide some statistical assurance about the claims in a given news piece. That is, given a headline, they can surface that, for example, 5,000 ‘other’ articles disagree with the claim whereas only 50 support it. Such a distinction would serve as a warning to the individual to doubt the veracity of what they are reading.”

One of the central efforts of DarwinAI, Wong’s company, is to tackle AI’s explainability problem. Deep learning algorithms develop very complex representations of their training data, and it’s often very difficult to understand the factors behind their output. Explainable AI aims to bring transparency to deep learning decision-making. “In the case of misinformation, our goal is to provide journalists with an understanding of the critical factors that led to a piece of news being classified as fake,” Wong says.

The team’s next step is to tackle reputation assessment to validate the truthfulness of an article through its source and linguistic characteristics.

Source: https://thenextweb.com/neural/2020/03/14/this-stance-detecting-ai-will-help-us-fact-check-fake-news-syndication/

Synthetic media: The real trouble with #deepfakes – SPONSOR: Datametrex AI Limited $DM.ca

Posted by AGORACOM-JC at 5:22 PM on Monday, March 16th, 2020

SPONSOR: Datametrex AI Limited (TSX-V: DM) A revenue-generating small cap A.I. company that NATO and Canadian Defence are using to fight fake news & social media threats. The company announced three $1M contracts in Q3-2019. Click here for more info.

Synthetic media: The real trouble with deepfakes

By M. Mitchell Waldrop

  • The snapshots above look like people you’d know. Your daughter’s best friend from college, maybe? That guy from human resources at work? The emergency-room doctor who took care of your sprained ankle? One of the kids from down the street?
  • “Deepfakes play to our weaknesses,” explains Jennifer Kavanagh, a political scientist at the RAND Corporation and coauthor of “Truth Decay”

Nope. All of these images are “deepfakes” — the nickname for computer-generated, photorealistic media created via cutting-edge artificial intelligence technology. They are just one example of what this fast-evolving method can do. (You could create synthetic images yourself at ThisPersonDoesNotExist.com.) Hobbyists, for example, have used the same AI techniques to populate YouTube with a host of startlingly lifelike video spoofs — the kind that show real people such as Barack Obama or Vladimir Putin doing or saying goofy things they never did or said, or that revise famous movie scenes to give actors like Amy Adams or Sharon Stone the face of Nicolas Cage. All the hobbyists need is a PC with a high-end graphics chip, and maybe 48 hours of processing time.

It’s good fun, not to mention jaw-droppingly impressive. And coming down the line are some equally remarkable applications that could make quick work out of once-painstaking tasks: filling in gaps and scratches in damaged images or video; turning satellite photos into maps; creating realistic streetscape videos to train autonomous vehicles; giving a natural-sounding voice to those who have lost their own; turning Hollywood actors into their older or younger selves; and much more.

Deepfake artificial-intelligence methods can map the face of, say, actor Nicolas Cage onto anyone else — in this case, actor Amy Adams in the film Man of Steel.

Yet this technology has an obvious — and potentially enormous — dark side. Witness the many denunciations of deepfakes as a menace, Facebook’s decision in January to ban (some) deepfakes outright and Twitter’s announcement a month later that it would follow suit.

“Deepfakes play to our weaknesses,” explains Jennifer Kavanagh, a political scientist at the RAND Corporation and coauthor of “Truth Decay,” a 2018 RAND report about the diminishing role of facts and data in public discourse. When we see a doctored video that looks utterly real, she says, “it’s really hard for our brains to disentangle whether that’s true or false.” And the internet being what it is, there are any number of online scammers, partisan zealots, state-sponsored hackers and other bad actors eager to take advantage of that fact.

“The threat here is not, ‘Oh, we have fake content!’” says Hany Farid, a computer scientist at the University of California, Berkeley, and author of an overview of image forensics in the 2019 Annual Review of Vision Science. Media manipulation has been around forever. “The threat is the democratization of Hollywood-style technology that can create really compelling fake content.” It’s photorealism that requires no skill or effort, he says, coupled with a social-media ecosystem that can spread that content around the world with a mouse click.

Source: https://www.knowablemagazine.org/article/technology/2020/synthetic-media-real-trouble-deepfakes

INTERVIEW: Datametrex $DM- The Small Cap #AI Company That NATO And Canadian Defence Are Using To Fight Fake News & Social Media Threats

Posted by AGORACOM-JC at 7:00 PM on Sunday, March 15th, 2020

Until now, investor participation in Artificial Intelligence has been the domain of mega companies and those funded by Silicon Valley. Small cap investors can finally consider participating in the great future of A.I. through Datametrex AI (DM: TSXV) (soon to be Nexalogy), which has achieved the following over the past few months:

  • Q3 Revenues Of $1.6 Million, An Increase Of 186%
  • 9-Month Revenues Of $2.56M, An Increase Of 37%
  • Repeat Contracts Of $1M and $600,000 With Korean Giant LOTTE   
  • $954,000 Contract With Canadian Department of Defence To Fight Social Media Election Meddling
  • Participation In NATO Research Task Group On Social Media Threat Detection 

When a small cap Artificial Intelligence company is successfully deploying its technology with militaries and conglomerates, smart investors have to take a closer look. That look can begin with our latest interview of Datametrex CEO Marshall Gunter, who talks to us about the use of the Company’s Artificial Intelligence to discover and eliminate US Presidential election meddling. The fake news isn’t just targeting candidates specifically; it also targets wedge issues such as abortion cases now before the US Supreme Court and even the Coronavirus. Watch this interview on one of your favourite screens or hit play and listen to the audio as you drive.