
Posts Tagged ‘datametrex’

Datametrex $DM.ca Obtains Rights to Import and Sell COVID-19 Test Kits From South Korea

Posted by AGORACOM-JC at 7:15 AM on Thursday, April 16th, 2020
  • Entered into an agreement on April 13, 2020 securing the rights to import iONEBIO INC.’s iLAMP Novel-CoV19 Detection Kit (a real-time Reverse Transcription LAMP-PCR assay system) into Canada
  • Under the terms of this agreement, Datametrex was also given rights to sell the tests into other countries around the globe, including the United States
  • iONEBIO Inc. claims their test kits provide results within approximately 15 to 20 minutes with 99.9% accuracy

TORONTO, April 16, 2020 — Datametrex AI Limited (the “Company” or “Datametrex”) is pleased to announce that it entered into an agreement on April 13, 2020 securing the rights to import iONEBIO INC.’s iLAMP Novel-CoV19 Detection Kit (a real-time Reverse Transcription LAMP-PCR assay system) into Canada. Under the terms of this agreement, Datametrex was also given the rights to sell the tests into other countries around the globe, including the United States. Datametrex has developed strong relationships with many large multi-national companies in South Korea. As a result of these relationships, the Canadian Embassy in Seoul contacted Datametrex to ask for help in procuring rapid test kits.

The COVID-19 test kits are manufactured by iONEBIO INC. in South Korea, and are the same test kits that have been successfully used in South Korea’s “drive-through” testing stations. This kit has also been used to test every traveller entering South Korea, and those testing positive for COVID-19 were immediately isolated. Datametrex believes a key factor that allowed South Korea to slow the spread of COVID-19 was its ability to swiftly identify and quarantine those infected by testing millions of people using these test kits. These test kits were first approved for use by the Korean Ministry of Food and Drug Safety on August 30, 2019. In addition, an independent study was conducted by Dankuk University (DKU) in Seoul, South Korea, and it was completed and approved on April 2, 2020.

Health Canada must approve these COVID-19 test kits before they can be used in Canada. The Company is currently working with Health Canada to have the approval of these kits fast-tracked. These test kits are currently in use in some European and Asian countries outside of South Korea. Work has also commenced with the FDA in the United States to obtain FDA approval and to authorize the tests under the Emergency Use Authorization program run by the US Centers for Disease Control and Prevention (CDC).

iONEBIO Inc. claims their test kits provide results within approximately 15 to 20 minutes with 99.9% accuracy. Each kit contains 288 individual tests, all of which can be completed in one hour. By utilizing these kits, screening stations can be set up almost anywhere and will allow for the early detection and swift quarantine of infected persons, which is believed to be a key factor in slowing the spread of the novel coronavirus.  Early detection using rapid tests will also provide further protection to Canada’s front-line workers, especially health care professionals.

South Korea has used this kit to test millions of its citizens and, as of April 14, 2020, there have been 10,654 cases of COVID-19 and 222 deaths reported. In North America, where the use of a rapid test has not been implemented on a large scale, there have been, as of April 14, 2020, 25,680 cases and 780 deaths in Canada and 588,465 cases and 23,711 deaths in the US. (All data collected by Datametrex’s COVID-19 dashboard, which is available at http://www.datametrex.com/covid-board.html)

“We strongly believe these kits will assist Canada in slowing the spread of Covid-19 and ultimately save lives. It’s incredibly rewarding for us to be able to help Canada combat the spread of COVID-19,” said Andrew Ryu, Chairman of the Company.

If the test kits are approved by Health Canada and the Canadian government purchases these test kits as expected, the Canadian government will be required to pay for the kits in advance. Datametrex expects that all of its costs and expenses related to the import of these tests will be satisfied out of the purchase price for the tests paid by the Canadian government.

The Company is not making any express or implied claims that it has the ability to treat the COVID-19 virus at this time.

About Datametrex AI Limited

Datametrex AI Limited is a technology-focused company with exposure to Artificial Intelligence and Machine Learning through its wholly-owned subsidiary, Nexalogy (www.nexalogy.com).
Additional information on Datametrex is available at www.datametrex.com

For further information, please contact:

Marshall Gunter – CEO
Phone: (514) 295-2300
Email: [email protected]

Jeff Stevens – Co-Founder
Phone: (647) 400-8494
Email: [email protected]

Neither the TSX Venture Exchange nor its Regulation Services Provider (as that term is defined in the policies of the TSX Venture Exchange) accepts responsibility for the adequacy or accuracy of this release.

Forward-Looking Statements

This news release contains “forward-looking information” within the meaning of applicable securities laws.  All statements contained herein that are not clearly historical in nature may constitute forward-looking information. In some cases, forward-looking information can be identified by words or phrases such as “may”, “will”, “expect”, “likely”, “should”, “would”, “plan”, “anticipate”, “intend”, “potential”, “proposed”, “estimate”, “believe” or the negative of these terms, or other similar words, expressions and grammatical variations thereof, or statements that certain events or conditions “may” or “will” happen, or by discussions of strategy.

Readers are cautioned to consider these and other factors, uncertainties and potential events carefully and not to put undue reliance on forward-looking information. The forward-looking information contained herein is made as of the date of this press release and is based on the beliefs, estimates, expectations, and opinions of management on the date such forward-looking information is made. The Company undertakes no obligation to update or revise any forward-looking information, whether as a result of new information, estimates or opinions, future events or results or otherwise or to explain any material difference between subsequent actual events and such forward-looking information, except as required by applicable law.

Deepfakes 2.0: The New Era of “Truth Decay” – SPONSOR: Datametrex AI Limited $DM.ca

Posted by AGORACOM-JC at 1:27 PM on Wednesday, April 15th, 2020

SPONSOR: Datametrex AI Limited (TSX-V: DM) A revenue-generating small-cap A.I. company that NATO and Canadian Defence are using to fight fake news & social media threats. The company is working with US Government agencies on COVID-19 and coronavirus fake news and disinformation. Click here for more info.

Deepfakes 2.0: The New Era of “Truth Decay”

  • Truth is under attack
  • In this post-truth environment, one person’s truth is no longer another’s truth, and information can be weaponized to cause financial or even reputational harm
by Brig. Gen. R. Patrick Huston and Lt. Col. M. Eric Bahm 

“An unexciting truth may be eclipsed by a thrilling lie.” — Aldous Huxley

Deepfake technology has exploded in the last few years. Deepfakes use artificial intelligence (AI) “to generate, alter or manipulate digital content in a manner that is not easily perceptible by humans.” The goal is to create digital video and audio that appears “real.” A picture used to be worth a thousand words – and a video worth a million – but deepfake technology means that “seeing” is no longer “believing.”  From fake evidence to election interference, deepfakes threaten local and global stability.

The first generation (Deepfakes 1.0) was largely used for entertainment purposes. Videos were modified or made from scratch in the pornography industry and to create spoofs of politicians and celebrities. The next generation (Deepfakes 2.0) is far more convincing and readily available. Deepfakes 2.0 are poised to have profound impacts. According to some technologists and lawyers who specialize in this area, deepfakes pose “an extraordinary threat to the sound functioning of government, foundations of commerce and social fabric.”

The Scope of the Problem

Truth is under attack. In this post-truth environment, one person’s truth is no longer another’s truth, and information can be weaponized to cause financial or even reputational harm. While the harmful use of (mis)information has been around for centuries, technology now allows this to happen at a speed and scale never before seen.  With the proliferation of technology, a teenager sitting at home can create and distribute a high-quality deepfake video on her smartphone in a single afternoon. According to Matthew Turek, a program manager for the Defense Advanced Research Projects Agency (DARPA), “We don’t know where this is going to end. We may be headed toward a zero trust environment.”

Criminals could use deepfakes to defraud victims, manipulate markets, and submit false evidence to courts. Authoritarian governments could use deepfakes to target public opinion and foreign adversaries could use them to erode trust in governments. The proliferation of Deepfake 2.0 technology allows this to be done easily, cheaply, and on a grand scale. RAND recently called this “truth decay.” In fact, the mere idea that this technology could be used to manipulate public opinion is already causing some to start questioning the validity of real events and un-doctored video.

Imagine the following possibilities:

  • Fake Evidence: Manipulated videos being used as evidence in court.
  • Sparking a war: a fake video of Israeli soldiers physically assaulting a Palestinian child could spark a new wave of violence in Israel.
  • Manipulating Markets: fake videos of a CEO used to disrupt an initial public offering.
  • Creating Political Fissures: fake videos intended to sow discord between foreign allies.
  • Influencing Elections: A doctored video of a politician looking sick designed to tip the scales of an election.

Deepfakes 2.0 pose a massive threat for the United States and other Western democracies that value truth, individual liberties, and the independence of the media.

Solutions — A Holistic Framework

How do we prepare for this new era of disruptive technology? It will take a whole-of-society approach in which government, academia, and corporations work collaboratively with international partners and individual citizens. This comprehensive method recognizes that each sector possesses unique strengths, capabilities, and limitations. Finland is widely viewed as the gold standard for this approach in confronting sophisticated disinformation efforts. In 2014, the Finnish elections were the target of a disinformation campaign widely attributed to Russia. The Finnish government took note and began to aggressively formulate a national strategy, including a national education initiative. As the Finns recognized, “[i]t’s not just a government problem, the whole society has been targeted.”

The Finnish model includes both technical and non-technical solutions. Finnish schools stress critical thinking and media literacy, teaching students of all ages to be discerning consumers of information. The Finns have also established a non-partisan journalistic fact-checking service: FaktaBaari. The Finnish model provides a useful starting point for a U.S. model tailored to our unique social, cultural, and legal considerations.

Technical Solutions

Detection. The plan to counter Deepfakes 2.0 must start with detection. Several companies are already developing algorithms using AI to detect deepfakes. For example, Facebook recently announced a partnership with Microsoft and academia to invest in AI systems that identify, flag, and remove harmful deepfakes. The Pentagon is also investing heavily in deepfake-detection technologies such as DARPA’s Media Forensics (MediFor) program to fight AI with AI.

Authentication. We need to establish a credible organization, perhaps through a public-private partnership, to report deepfake detection results. Blockchain technology can create digital fingerprints to help authenticate media. This technology allows videos and photos to be publicly verifiable.
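
Blockchain-based fingerprinting of this kind can be illustrated with a minimal Python sketch: hash the media file, record the digest, and later re-verify the file against the record. The SHA-256 choice, the in-memory ledger, and the function names are assumptions for illustration, standing in for a real blockchain-backed registry.

import hashlib
import time

def fingerprint_file(path: str) -> str:
    """Compute a SHA-256 digest of a media file, reading it in 1 MB chunks."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            digest.update(chunk)
    return digest.hexdigest()

def register(path: str, ledger: list) -> dict:
    """Append a fingerprint record to a ledger (a stand-in for a blockchain entry)."""
    record = {"file": path, "sha256": fingerprint_file(path), "timestamp": time.time()}
    ledger.append(record)
    return record

def verify(path: str, ledger: list) -> bool:
    """A file checks out only if its current digest matches a previously registered record."""
    current = fingerprint_file(path)
    return any(r["sha256"] == current for r in ledger)

In a real deployment the ledger entries would be anchored on a blockchain, so anyone could confirm that a given fingerprint was recorded at a given time and has not been altered since.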

Non-technical Solutions

Education. Over half of Generation Z gets its news and information primarily from social media and messaging apps on their smartphones. Therefore, schools must prioritize critical thinking and media literacy tailored to this new reality. In the decentralized American education system, this requires commitment and resources from federal, state and local governments.

Media Policy. Traditional and social media should assess criteria for evaluating suspicious or unverified potential deepfakes that could harm society. Some social media sites have already shown a willingness to take down accounts linked to disinformation.

Legislation. Congress is considering multiple legislative proposals, including the DEEPFAKES Accountability Act. Congress should also consider a Finnish-style independent entity that provides confidence or credibility scores for digital content. State governments also play an important role. For example, California recently passed a law restricting the use of deepfakes for political purposes.

Conclusion

There is no doubt that criminals, our adversaries, and other malign actors will use deepfakes to harm the public and manipulate their sense of reality. We need a comprehensive plan to counter this threat. It requires the government, academia, and private industry to work together on both technical and non-technical solutions. Given that it is difficult to change a person’s view once it is formed, speed is a virtue when it comes to detecting deepfakes and educating the public. As the saying goes, “A lie can travel halfway around the world before truth puts on its boots.”

Source: https://www.justsecurity.org/69677/deepfakes-2-0-the-new-era-of-truth-decay/

Healthcare communication post-COVID 19: Need for new approaches and protocols to achieve resilience – SPONSOR: Datametrex AI Limited $DM.ca

Posted by AGORACOM-JC at 12:00 PM on Monday, April 13th, 2020

SPONSOR: Datametrex AI Limited (TSX-V: DM) A revenue-generating small-cap A.I. company that NATO and Canadian Defence are using to fight fake news & social media threats. The company is working with US Government agencies on COVID-19 and coronavirus fake news and disinformation. Click here for more info.

Healthcare communication post-COVID 19: Need for new approaches and protocols to achieve resilience

– Health communication has never been as important in controlling disease outbreaks as it is in the COVID-19 pandemic

– The alarmingly increasing risk of misinformation through fake news on social media, termed an “infodemic” by the World Health Organization (WHO), and the fear of such outbreaks in the future have put communication at the core of resilience and response policies.

By: Siddheshwar Shukla

The scariest part of this pandemic, however, is the infodemic – the huge amount of misinformation and fake news on social media. “Our common enemy is #COVID19, but our enemy is also an “infodemic” of misinformation. To overcome the #coronavirus, we need to urgently promote facts & science, hope & solidarity over despair & division,” said Antonio Guterres, Secretary-General of the United Nations, effectively announcing a full-fledged, organized war against the COVID-19 infodemic. This proclamation from the top of the UN came about one and a half months after WHO Chief Dr. Tedros Adhanom declared a fight against the infodemic. “But we’re not just fighting an epidemic; we’re fighting an infodemic. Fake news spreads faster and more easily than this virus, and is just as dangerous,” said Dr. Tedros on February 14 at a press conference in Munich, Germany.

The announcement by the UN Chief was, in effect, an acknowledgement that the WHO needed reinforcement in its fight against the infodemic. That reinforcement was rushed in on a war footing as the UN and its various organizations, such as the Global Communication Team, the Cyber Security Team of the UN Office on Drugs and Crime (UNODC) and UNESCO, joined the WHO’s efforts against the infodemic. Presently, all the UN bodies have joined the fight against misinformation from their own perspectives. The pandemic has revealed the extent of misinformation and its impact on communities across age groups. In the initial stage of the COVID-19 outbreak, the misinformation was primarily unintentional, but it has now reached the level of cybercrime, posing cybersecurity threats throughout the globe.

Fake news about infection and treatment

Detection and prevention are two pillars of the response plan for disease outbreaks like COVID 19 for which no treatment is available.

However, experience during the pandemic shows that most fake news has centred on detection, diagnosis, prevention, and treatment of the disease. These items directly mislead healthy and infected people about the contagiousness of the disease and the harm the infection can cause. Misled individuals end up increasing the number of cases in society through their behaviour. Therefore, providing scientific information about the real causes of infection, the do’s and don’ts to follow during an outbreak, and appropriate preventive care becomes a top priority. ‘Myth Busters’, the online campaign launched by the WHO, is focused on combating fake news of this category, thereby supporting COVID warriors and other stakeholders throughout the world.

Starting with garlic, vitamin C, hot water and the ten-second breath-holding self-diagnosis, the fake news of this category has now extended to claims that ‘5G mobile networks spread COVID-19’. “COVID 19 is spread through respiratory droplets when an infected person coughs, sneezes or speaks. People can also be infected by touching a contaminated surface and then their eyes, mouth or nose,” the WHO has reiterated time and again. However, fake news items are designed so meticulously that they convince educated and uneducated societies alike. The arson attacks on 5G cell towers across Britain and the large-scale uprising against the rollout of 5G telecommunication networks in the Netherlands indicate the capacity of fake news to influence people even in countries with near-universal literacy that are considered highly educated societies. Therefore, all countries, developed and developing, are vulnerable to fake news, but the situation could be more dangerous in developing countries as they have fewer resources to combat the infodemic.

Conspiracy theories on Pandemic

The fake news items and unsubstantiated claims of this category are targeted against communities, personalities, nations, international bodies and UN organizations. They may not play a direct role in causing an outbreak or escalating its intensity, but unsubstantiated claims of this kind are dangerous because they create tension between communities, thereby posing law-and-order problems.

India has faced unprecedented fake news for and against the Tablighi Jamaat community after an event in Delhi, attended by hundreds of foreigners from several countries, became a nodal centre of novel coronavirus outbreaks and caused a sudden rise in new cases. At the international level, China and the USA have been blaming each other for conspiring in the outbreak of the disease. Unsubstantiated claims of this category have often been made by scientists, professionals, legislators, and even heads of state.

Besides, the credibility of national, international and UN organizations is also being questioned. US President Donald Trump and several other prominent US figures have alleged that the WHO Chief sided with China and helped conceal information, contributing to an outbreak of this magnitude.

The United Nations is yet to develop a mechanism to deal with the infodemic of this category. However, nations also need to develop mechanisms to deal with conspiracies meant to provoke one community against another, as such strategies will be crucial both in managing the current outbreak and in developing pandemic resilience policies for future outbreaks.

Cyber Security: Protecting the vulnerable

Due to the global lockdown, millions of people have been forced to stay at home. The global economy has come to a standstill, but a few businesses have developed the capacity to continue their operations online. This is where the danger to cybersecurity arises.

In fact, all the businesses operating online due to the global lockdown have been exposed to cybercrime. Cybercriminals can aggravate the problem by attacking government and hospital communication networks, hacking websites, manipulating disease-outbreak dashboards, interfering with ‘myth busters’, and sending phishing emails, fake official orders, and miscommunication of various kinds to hamper the fight against the disease outbreak. All these aspects need to be considered in the outbreak/epidemic/pandemic response plan for COVID-19 and in preparing outbreak resilience plans for nations or regions in the post-pandemic world. Not only smaller websites but also large organizations such as the coronavirus statistics site Worldometers.info and the US Department of Health and Human Services have faced cyber-attacks during the COVID-19 pandemic.

“The vast majority of cyberattacks – by some estimates, 98 percent – deploy social engineering methods. Cybercriminals are extremely creative in devising new ways to exploit users and technology to access passwords, networks, and data, often capitalizing on popular topics and trends to tempt users into unsafe online behavior,” said Algirde Pipikaite and Nicholas Davis in an analysis published by the World Economic Forum. The authors suggest three broad strategies – stepping up cyber-hygiene standards, being extra vigilant about verification, and following official updates. Throughout the world, cybercrime has increased during the pandemic.

Children: The most vulnerable

Children are always the most vulnerable to misinformation and attack by cybercriminals. They can be misguided through video games, dangerous challenges, and tasks. These activities may expose them to infections.

COVID-19 is not especially fatal to children, but there are infectious diseases that can pose risks to children as well. Besides, children could be misguided and used as vectors for spreading infection in families, schools, playgrounds, markets, and communities. In the past, several dangerous games, such as the Blue Whale game and the skull-breaker challenge, have caused the deaths of several children. Governments will have to develop robust cybersecurity systems, cyber-monitoring systems and strict cyber laws to prevent children from falling into the traps of cybercriminals during the COVID-19 pandemic, and to develop resilience and response plans for handling outbreaks of communicable diseases in the future.

COVID 19 and the dangers of terrorism

“The weaknesses and lack of preparedness exposed by this pandemic provide a window onto how a bio-terrorist attack might unfold and may increase its risks. Non-state groups could gain access to virulent strains that could pose similar devastation to societies around the globe,” said UN chief Antonio Guterres, addressing the UN Security Council meeting on April 10. Though no link to terrorist organizations has been established in the case of COVID-19, the highly contagious virus has posed a new threat to humanity. Countries will have to amend their laws to ensure that culprits receive the strictest punishment and that outbreaks are detected and contained efficiently.

Public Health Communication in the Post-COVID 19 World

Communication has always played an important role in prevention and response plans against outbreaks of infectious diseases but the experience of the COVID 19 pandemic has increased its scope and importance. The role of communication in disseminating scientific information and rejecting fake news has become a top priority.

As the COVID-19 pandemic has taken the whole world into its grip, the WHO and all the UN agencies are assisting nations in their fight against the outbreak. However, this kind of collaboration will be rare in the case of local or national-level disease outbreaks in the future. Besides, internet-based technological innovations have given cybercriminals several new weapons that they can use from any corner of the world, including through artificial intelligence. Policymakers and administrators will be required to consider several aspects of health communication that were unheard of in the pre-COVID-19 period. The existing disease-outbreak resilience policies and protocols need a thorough review to deal with the COVID-19 pandemic and the challenges to be posed in the post-COVID-19 world.

Source: https://www.devdiscourse.com/article/health/1003975-healthcare-communication-post-covid-19-need-for-new-approaches-and-protocols-to-achieve-resilience

Datametrex $DM.ca Secures Additional $250,000 Contracts

Posted by AGORACOM-JC at 7:23 AM on Monday, April 13th, 2020
  • Secured contracts for approximately $250,000 CAD with two existing clients
  • First contract is for $130,000 CAD with the Global Logistics division of one of the LOTTE Group companies
  • Second contract for approximately $120,000 CAD is with Hyosung Corp., and is a continuation and expansion of the original contract announced last year

TORONTO, April 13, 2020 — Datametrex AI Limited (the “Company” or “Datametrex”) is pleased to announce that it has secured contracts for approximately $250,000 CAD with two existing clients. The first contract is for $130,000 CAD with the Global Logistics division of one of the LOTTE Group companies. The second contract, for approximately $120,000 CAD, is with Hyosung Corp., and is a continuation and expansion of the original contract announced last year.

“This is exciting for the team as we continue to execute on our “land and expand” strategy with global conglomerates. We look forward to continuing to build on the trust we have gained with them going forward. We are proud to be able to continue to secure new business despite the global restrictions as a result of COVID-19, our team is doing a great job working remotely to add value for our stakeholders,” says Marshall Gunter, CEO of the Company.

About Datametrex AI Limited

Datametrex AI Limited is a technology focused company with exposure to Artificial Intelligence and Machine Learning through its wholly owned subsidiary, Nexalogy (www.nexalogy.com).
Additional information on Datametrex is available at: www.datametrex.com

For further information, please contact:

Marshall Gunter – CEO
Phone: (514) 295-2300
Email: [email protected]

Neither the TSX Venture Exchange nor its Regulation Services Provider (as that term is defined in the policies of the TSX Venture Exchange) accepts responsibility for the adequacy or accuracy of this release.

Forward-Looking Statements

This news release contains “forward-looking information” within the meaning of applicable securities laws.  All statements contained herein that are not clearly historical in nature may constitute forward-looking information. In some cases, forward-looking information can be identified by words or phrases such as “may”, “will”, “expect”, “likely”, “should”, “would”, “plan”, “anticipate”, “intend”, “potential”, “proposed”, “estimate”, “believe” or the negative of these terms, or other similar words, expressions and grammatical variations thereof, or statements that certain events or conditions “may” or “will” happen, or by discussions of strategy.

Readers are cautioned to consider these and other factors, uncertainties and potential events carefully and not to put undue reliance on forward-looking information. The forward-looking information contained herein is made as of the date of this press release and is based on the beliefs, estimates, expectations and opinions of management on the date such forward-looking information is made. The Company undertakes no obligation to update or revise any forward-looking information, whether as a result of new information, estimates or opinions, future events or results or otherwise or to explain any material difference between subsequent actual events and such forward-looking information, except as required by applicable law.

Global News Article Featuring Datametrex $DM.ca Work For US Government on #Covid19

Posted by AGORACOM-JC at 11:18 AM on Thursday, April 9th, 2020

TORONTO, April 09, 2020 — Datametrex AI Limited (the “Company” or “Datametrex”) is pleased to share a link to an article released by Global News on April 8, 2020 highlighting the important work the Company completed for the US Government to identify foreign involvement in the social media discussions surrounding COVID-19 and Coronavirus.

“It’s exciting for us as a company to attract mainstream media attention on the work we do. My team is very proud to be one of the pieces used to protect democracy from propaganda in social media. Since releasing the COVID-19 report for the US government, we have had many interviews with journalists and we look forward to continuing to share our technology and findings in this fashion,” says Marshall Gunter, CEO of the Company.

About Datametrex AI Limited

Datametrex AI Limited is a technology focused company with exposure to Artificial Intelligence and Machine Learning through its wholly owned subsidiary, Nexalogy (www.nexalogy.com).
Additional information on Datametrex is available at: www.datametrex.com

For further information, please contact:

Marshall Gunter – CEO
Phone: (514) 295-2300
Email: [email protected]

Jeff Stevens – Co-Founder
Phone: (647) 400-8494
Email: [email protected]

Neither the TSX Venture Exchange nor its Regulation Services Provider (as that term is defined in the policies of the TSX Venture Exchange) accepts responsibility for the adequacy or accuracy of this release.

Forward-Looking Statements

This news release contains “forward-looking information” within the meaning of applicable securities laws.  All statements contained herein that are not clearly historical in nature may constitute forward-looking information. In some cases, forward-looking information can be identified by words or phrases such as “may”, “will”, “expect”, “likely”, “should”, “would”, “plan”, “anticipate”, “intend”, “potential”, “proposed”, “estimate”, “believe” or the negative of these terms, or other similar words, expressions and grammatical variations thereof, or statements that certain events or conditions “may” or “will” happen, or by discussions of strategy.

Readers are cautioned to consider these and other factors, uncertainties and potential events carefully and not to put undue reliance on forward-looking information. The forward-looking information contained herein is made as of the date of this press release and is based on the beliefs, estimates, expectations and opinions of management on the date such forward-looking information is made. The Company undertakes no obligation to update or revise any forward-looking information, whether as a result of new information, estimates or opinions, future events or results or otherwise or to explain any material difference between subsequent actual events and such forward-looking information, except as required by applicable law.

Datametrex $DM.ca Provides Additional Information On Its Co-Bid To The Ministry of Health In South Korea Surrounding #Covid19

Posted by AGORACOM-JC at 7:29 AM on Thursday, April 9th, 2020
  • Lotte Group was one of two companies invited to bid on a contract to provide Artificial Intelligence (“AI”) solutions to the South Korean Ministry of Health- Welfare, Food, & Drug, related to the COVID-19 pandemic
  • Lotte Group has invited Datametrex, a preferred vendor of Lotte, to participate in the bid and to provide the AI technology, should Lotte Group’s bid be successful

TORONTO, April 09, 2020 – Datametrex AI Limited (the “Company” or “Datametrex”) would like to provide additional information to our shareholders and the investment community on a joint venture with Lotte Group announced in the Company’s press release on April 6, 2020.

Lotte Group was one of two companies invited to bid on a contract to provide Artificial Intelligence (“AI”) solutions to the South Korean Ministry of Health- Welfare, Food, & Drug, related to the COVID-19 pandemic. Lotte Group has invited Datametrex, a preferred vendor of Lotte, to participate in the bid and to provide the AI technology, should Lotte Group’s bid be successful. The contract is for the implementation of AI solutions to monitor search engine and SNS welfare, food, and drug beneficiaries that will allow the South Korean government to monitor potentially fraudulent activities with COVID-19 related grants. The AI solution required by the South Korean government will also filter incorrect and unreliable information allowing the Ministry of Health and Welfare to gain a better understanding of the COVID-19 pandemic and enable it to maximize efficiency of its human resources.

In the event Lotte Group is awarded the contract, a joint venture will be formed between Lotte Group, Datametrex, and KT Net, a corporation owned and controlled by the South Korean government. If Lotte Group’s bid is successful, the parties to the proposed joint venture have agreed to establish a joint venture company (“JV Co.”) with Lotte Group owning 75% of the equity in JV Co., Datametrex owning 15% and KT Net owning 10%. Under the terms agreed by the parties, Lotte Group would contribute approximately $450,000 to JV Co., Datametrex will provide its proprietary AI technology and KT Net will provide private blockchain technology. The total revenue expected to be earned from this contract is approximately $1.2M per annum, and Datametrex expects to receive its proportionate share of the revenue after payment of JV Co.’s expenses. There is no assurance that Lotte Group will be successful in its bid.
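
As a purely illustrative calculation of the split described above (Datametrex holding 15% of JV Co. and sharing in revenue after expenses), the short Python sketch below uses the release’s roughly $1.2M annual revenue figure together with an expense number invented for the example; none of the outputs are guidance from the Company.

def datametrex_share(gross_revenue: float, jv_expenses: float, equity: float = 0.15) -> float:
    """Datametrex's proportionate share of JV Co. revenue remaining after expenses."""
    return max(gross_revenue - jv_expenses, 0.0) * equity

# ~$1.2M expected annual revenue (from the release); the $400,000 expense figure
# is a made-up placeholder for illustration only.
print(datametrex_share(1_200_000, jv_expenses=400_000))  # 120000.0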

About Lotte Group

The Lotte Group is an international conglomerate consisting of over 90 business units employing 60,000 people engaged in such diverse industries as candy manufacturing, beverages, hotels, fast food, retail, financial services, heavy chemicals, electronics, IT, construction, publishing, and entertainment.

About Datametrex AI Limited

Datametrex AI Limited is a technology focused company with exposure to Artificial Intelligence and Machine Learning through its wholly owned subsidiary, Nexalogy (www.nexalogy.com). Additional information on Datametrex is available at: www.datametrex.com.

For further information, please contact:

Marshall Gunter – CEO
Phone: (514) 295-2300
Email: [email protected]

Neither the TSX Venture Exchange nor its Regulation Services Provider (as that term is defined in the policies of the TSX Venture Exchange) accepts responsibility for the adequacy or accuracy of this release.

Forward-Looking Statements

This news release contains “forward-looking information” within the meaning of applicable securities laws. All statements contained herein that are not clearly historical in nature may constitute forward-looking information. In some cases, forward-looking information can be identified by words or phrases such as “may”, “will”, “expect”, “likely”, “should”, “would”, “plan”, “anticipate”, “intend”, “potential”, “proposed”, “estimate”, “believe” or the negative of these terms, or other similar words, expressions and grammatical variations thereof, or statements that certain events or conditions “may” or “will” happen, or by discussions of strategy.

Readers are cautioned to consider these and other factors, uncertainties and potential events carefully and not to put undue reliance on forward-looking information. The forward-looking information contained herein is made as of the date of this press release and is based on the beliefs, estimates, expectations and opinions of management on the date such forward-looking information is made. The Company undertakes no obligation to update or revise any forward-looking information, whether as a result of new information, estimates or opinions, future events or results or otherwise or to explain any material difference between subsequent actual events and such forward-looking information, except as required by applicable law.

‘Fake news’ increases consumer demands for corporate action – SPONSOR: Datametrex AI Limited $DM.ca

Posted by AGORACOM-JC at 5:20 PM on Wednesday, April 8th, 2020

SPONSOR: Datametrex AI Limited (TSX-V: DM) A revenue-generating small-cap A.I. company that NATO and Canadian Defence are using to fight fake news & social media threats. The company is working with US Government agencies on COVID-19 and coronavirus fake news and disinformation. Click here for more info.

‘Fake news’ increases consumer demands for corporate action

  • “The strongest finding was that consumers expect corporations to take responsibility for combating fake news, even if the company in question was a victim of the fake news story,” Cheng says.
  • “Anyone can spread fake news on social media, and the expectation from consumers is that affected companies should play an active role in addressing it.”

by Matt Shipman, North Carolina State University

New research finds that “fake news” inspires consumers to demand corrective action from companies—even if the company is a victim of the fake news story. The study also supports the idea that most people feel they are better at detecting fake news than other people are—and found that fake news increases calls for improved digital media literacy.

“The idea that I am less influenced by fake news than you are is an example of something called the third-person effect,” says Yang Cheng, first author of the study and an assistant professor of communication at North Carolina State University.

“The third-person effect predicts that people tend to perceive that mass media messages have a greater effect on others than on themselves, and we found that this effect is pronounced among consumers who use social media. We also found that the third-person effect plays a significant role in how people respond to fake news online.”

For this study, the researchers enlisted 661 study participants from across the United States who identified as being Coca-Cola consumers. The researchers first gave the participants an example of a fake news story that circulated on Facebook in 2016, which (falsely) claimed that Coca-Cola had recalled bottles of its Dasani-brand water due to the presence of aquatic parasites. The researchers then asked study participants a range of questions designed to ascertain how the participants felt about fake news and what they felt should be done to address it.

“The strongest finding was that consumers expect corporations to take responsibility for combating fake news, even if the company in question was a victim of the fake news story,” Cheng says. “This is news that public relations professionals can use. It highlights the need for communication professionals to step up and take an active role in responding to fake news items. That could mean collaborating with reporters to provide them with accurate information, or making correct information directly available to the public, or both. But it suggests that simply being quiet and waiting for the crisis to blow over may be unwise.

“Anyone can spread fake news on social media, and the expectation from consumers is that affected companies should play an active role in addressing it.”

The study also found that consumers wanted more to be done to improve media literacy, and that media users should be taught how to evaluate media critically.

The researchers also found that the most powerful factor in triggering these responses from consumers appeared to be the third-person effect. In other words, the people who were most confident in their ability to detect fake news felt most strongly that other people would be influenced by fake news. And highly confident consumers were the most likely to call for corrective action from corporations and improved media literacy efforts.

“This is an observational study, not an experimental one, so we cannot establish causal relationships,” Cheng says. “But the demand for corporate action is clear—and it is most strongly correlated with the third-person effect.”

Source: https://phys.org/news/2020-04-fake-news-consumer-demands-corporate.html

Microsoft $MSFT claims its #AI framework spots fake news better than state-of-the-art baselines – SPONSOR: Datametrex AI Limited $DM.ca

Posted by AGORACOM-JC at 5:09 PM on Tuesday, April 7th, 2020

SPONSOR: Datametrex AI Limited (TSX-V: DM) A revenue-generating small-cap A.I. company that NATO and Canadian Defence are using to fight fake news & social media threats. The company is working with US Government agencies on COVID-19 and coronavirus fake news and disinformation. Click here for more info.

Microsoft claims its AI framework spots fake news better than state-of-the-art baselines

  • If the system’s accuracy is as claimed and it makes its way into production, it could help combat the spread of false and misleading information about U.S. presidential candidates and other controversial topics
  • A survey conducted in 2018 by the Brookings Institution found that 57% of U.S. adults saw fake news during the 2018 elections and that 19% believe it influenced their vote

Kyle Wiggers

In a study published this week on the preprint server arXiv.org, Microsoft and Arizona State University researchers propose an AI framework — Multiple sources of Weak Social Supervision (MWSS) — that leverages engagement and social media signals to detect fake news. They say that after training and testing the model on a real-world data set, it outperforms a number of state-of-the-art baselines for early fake news detection.

If the system’s accuracy is as claimed and it makes its way into production, it could help combat the spread of false and misleading information about U.S. presidential candidates and other controversial topics. A survey conducted in 2018 by the Brookings Institution found that 57% of U.S. adults saw fake news during the 2018 elections and that 19% believe it influenced their vote.

Many fake news classifiers in the academic literature rely on signals that require a long time to aggregate, making them unsuitable for early detection, the paper’s coauthors explain. Moreover, some rely solely on signals that are easily influenced by biased or inauthentic user feedback.

In contrast, the researchers’ system employs supervision from multiple sources involving users and their respective social engagements. Specifically, it taps a small amount of manually annotated data and a large amount of weakly annotated data — i.e., data with a lot of noise — for joint training in a meta-learning AI framework.

A module dubbed label weighting network (LWN) models the weight of the weak labels that regulate the learning process of the fake news classifier, taking what the researchers refer to as an instance — for example, a news piece — and its label as input. It outputs a value representing the importance weight for the pair, which determines the influence of the instance in training the fake news classifier. To allow information sharing among different weak signals, a shared feature extractor works alongside the LWN to learn a common representation and to use functions to map features to different weak label sources.
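
The weighting mechanism can be sketched roughly as follows. This is a simplified, PyTorch-style illustration based only on the description above, not the authors’ released code: a shared encoder produces a common representation, the classifier and the LWN both consume it, and the LWN’s output scales each weakly labelled instance’s loss. The layer sizes and names are assumptions, and the meta-learning step that updates the LWN against the small clean set is omitted for brevity.

import torch
import torch.nn as nn

class SharedEncoder(nn.Module):
    """Shared feature extractor: maps a news piece (here a bag-of-words vector)
    to a common representation used by both the classifier and the LWN."""
    def __init__(self, vocab_size: int, hidden: int = 128):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(vocab_size, hidden), nn.ReLU())

    def forward(self, x):
        return self.net(x)

class LabelWeightingNetwork(nn.Module):
    """Takes an instance representation and its weak label and outputs an
    importance weight in (0, 1) for that (instance, label) pair."""
    def __init__(self, hidden: int = 128):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(hidden + 1, 64), nn.ReLU(),
                                 nn.Linear(64, 1), nn.Sigmoid())

    def forward(self, rep, weak_label):
        return self.net(torch.cat([rep, weak_label.unsqueeze(1)], dim=1)).squeeze(1)

encoder = SharedEncoder(vocab_size=5000)
classifier = nn.Linear(128, 2)                 # fake vs. real
lwn = LabelWeightingNetwork()
loss_fn = nn.CrossEntropyLoss(reduction="none")

def weighted_weak_loss(x, weak_y):
    """Per-instance classification loss on weakly labelled data,
    scaled by the LWN's learned importance weight."""
    rep = encoder(x)
    losses = loss_fn(classifier(rep), weak_y)  # one loss value per instance
    weights = lwn(rep, weak_y.float())         # one importance weight per instance
    return (weights * losses).mean()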

[Figure from the source article: graphs comparing the performance of Microsoft’s AI with various baseline models.]

The Microsoft researchers tapped the open source FakeNewsNet data set to benchmark their system, which contains news content (including meta attributes like body text) with labels annotated by experts from the fact-checking websites GossipCop and PolitiFact, along with social context information such as tweets about news articles. They enhanced it with a corpus of 13 sources, including mainstream British news outlets, such as the BBC and Sky News, and English-language versions of Russian news outlets like RT and Sputnik, with content mostly related to politics.

To generate weak labels, the researchers measured the sentiment scores for users sharing pieces of news and then determined the variance between those scores, such that articles for which the sentiments widely varied were labeled as fake. They also produced sets of people with known public biases and calculated scores based on how closely a user’s interests matched with those sets, operating on the theory that news shared by biased users was more likely to be fake. Lastly, they measured credibility by clustering users based on their meta-information on social media so that users who formed big clusters (which might indicate a bot network or malicious campaign) were considered less credible.
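
The sentiment-variance signal, for example, can be approximated in a few lines of Python. This is an illustrative sketch of the idea described above, not the researchers’ implementation; the variance threshold is an assumption.

from statistics import pvariance

def sentiment_variance_label(share_sentiments: list[float], threshold: float = 0.5) -> int:
    """Weak label from one of the signals described above: if the sentiment scores
    of users sharing an article disagree strongly (high variance), weakly label the
    article as fake (1); otherwise real (0)."""
    if len(share_sentiments) < 2:
        return 0  # too little engagement to form a weak label
    return 1 if pvariance(share_sentiments) > threshold else 0

# Sharers split between strongly positive and strongly negative reactions -> weakly "fake"
print(sentiment_variance_label([0.9, -0.8, 0.7, -0.9]))  # 1
# Consistent sentiment among sharers -> weakly "real"
print(sentiment_variance_label([0.6, 0.5, 0.7]))         # 0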

In tests, the researchers say the best-performing model, which incorporated Facebook’s RoBERTa natural language processing algorithm and trained on a combination of clean and weak data, accurately detected fake news in GossipCop and PolitiFact 80% and 82% of the time, respectively. That’s upwards of 7 percentage points better than the baseline models.

The team plans to explore other techniques in future work, like label correction methods for obtaining high-quality weak labels. They also hope to extend their framework to consider other types of weak supervision signals from social networks, leveraging the timestamps of engagements.

These researchers aren’t the only ones attempting to combat the spread of fake news with AI, of course. In a recent study, MIT’s Computer Science and Artificial Intelligence Laboratory developed an AI system to spot misleading news articles. Jigsaw late last year released Assembler, an AI-powered suite of fake news-spotting tools for media organizations. AdVerif.ai, a software-as-a-service platform that launched in beta last year, parses articles for misinformation, nudity, malware, and other problematic content and cross-references a regularly updated database of thousands of fake and legitimate news items. For its part, Facebook has experimented with deploying AI tools that “identify accounts and false news.”

Source: https://venturebeat.com/2020/04/07/microsoft-ai-fake-news-better-than-state-of-the-art-baselines/

Datametrex $DM.ca Co-Bid With #Lotte To The Ministry of Health Surrounding #Covid19

Posted by AGORACOM-JC at 7:30 AM on Monday, April 6th, 2020
  • Established a joint venture (JV) with LOTTE to co-bid on contracts with the Ministry of Health in South Korea
  • The aim is to improve accuracy and transparency across media, social media, and various other data sources in the midst of the COVID-19 catastrophe

TORONTO, April 06, 2020 — Datametrex AI Limited (the “Company” or “Datametrex”) is pleased to announce that it has established a joint venture (JV) with LOTTE to co-bid on contracts with the Ministry of Health in South Korea. The aim is to improve accuracy and transparency across media, social media, and various other data sources in the midst of the COVID-19 catastrophe.

On April 2, 2020, Datametrex announced that it was awarded preferred vendor status with LOTTE. One of the many benefits of being a preferred vendor is the opportunity to create JVs and co-bid on contracts with LOTTE’s support. Datametrex and LOTTE recently agreed to create a JV using Datametrex’s AI and other relevant technologies to improve the accuracy and transparency of data, including the ability to filter disinformation, for the Ministry of Health in South Korea. This opportunity was initiated due to the coronavirus and COVID-19 catastrophe. South Korea’s health department has been a leader in screening and flattening the curve and is a role model for other countries on how to manage the COVID-19 pandemic.

The Company was successful in pitching the opportunity and creating a JV with LOTTE largely due to the work it has done for the US Federal Government on COVID-19 and the coronavirus. Datametrex released a report on April 1, 2020 which clearly identified Chinese authorities manipulating social media surrounding the coronavirus and COVID-19. Their intent was to sway public opinion to negatively impact the United States Government and President Trump while positively impacting China and President Xi.

Please click the link below to view the report:

http://datametrex.com/investor/covid19-report.html

“I am thrilled to share this update with shareholders, our team has done a fantastic job fast tracking with LOTTE. The work we have done this past year training our technology in various languages such as Korean, Chinese, French, and Russian is really starting to pay off,” says Marshall Gunter, CEO of the Company.

About Datametrex AI Limited

Datametrex AI Limited is a technology focused company with exposure to Artificial Intelligence and Machine Learning through its wholly owned subsidiary, Nexalogy (www.nexalogy.com).
Additional information on Datametrex is available at: www.datametrex.com

For further information, please contact:

Marshall Gunter – CEO
Phone: (514) 295-2300
Email: [email protected]

Jeff Stevens – Co-Founder
Phone: (647) 400-8494
Email: [email protected]

Neither the TSX Venture Exchange nor its Regulation Services Provider (as that term is defined in the policies of the TSX Venture Exchange) accepts responsibility for the adequacy or accuracy of this release.

Forward-Looking Statements

This news release contains “forward-looking information” within the meaning of applicable securities laws.  All statements contained herein that are not clearly historical in nature may constitute forward-looking information. In some cases, forward-looking information can be identified by words or phrases such as “may”, “will”, “expect”, “likely”, “should”, “would”, “plan”, “anticipate”, “intend”, “potential”, “proposed”, “estimate”, “believe” or the negative of these terms, or other similar words, expressions and grammatical variations thereof, or statements that certain events or conditions “may” or “will” happen, or by discussions of strategy.

Readers are cautioned to consider these and other factors, uncertainties and potential events carefully and not to put undue reliance on forward-looking information. The forward-looking information contained herein is made as of the date of this press release and is based on the beliefs, estimates, expectations and opinions of management on the date such forward-looking information is made. The Company undertakes no obligation to update or revise any forward-looking information, whether as a result of new information, estimates or opinions, future events or results or otherwise or to explain any material difference between subsequent actual events and such forward-looking information, except as required by applicable law.

Social Media Is Full of Bots Spreading #COVID19 Anxiety. Don’t Fall For It – SPONSOR: Datametrex AI Limited $DM.ca

Posted by AGORACOM-JC at 3:52 PM on Friday, April 3rd, 2020

SPONSOR: Datametrex AI Limited (TSX-V: DM) A revenue-generating small-cap A.I. company that NATO and Canadian Defence are using to fight fake news & social media threats. The company is working with US Government agencies on COVID-19 and coronavirus fake news and disinformation. Click here for more info.

Social Media Is Full of Bots Spreading COVID-19 Anxiety. Don’t Fall For It

  • COVID-19 is being described as the first major pandemic of the social media age.
  • In troubling times, social media helps distribute vital knowledge to the masses.
  • Unfortunately, this comes with myriad misinformation, much of which is spread through social media bots.

These fake accounts are common on Twitter, Facebook, and Instagram. They have one goal: to spread fear and fake news.

We witnessed this in the 2016 United States presidential elections, with arson rumours in the bushfire crisis, and we’re seeing it again in relation to the coronavirus pandemic.

Busy busting bots

The exact scale of misinformation is difficult to measure. But its global presence can be felt through snapshots of Twitter bot involvement in COVID-19-related hashtag activity.

Bot Sentinel is a website that uses machine learning to identify potential Twitter bots, using a score and rating. According to the site, on March 26 bot accounts were responsible for 828 counts of #coronavirus, 544 counts of #COVID19 and 255 counts of #Coronavirus hashtags within 24 hours.

These hashtags respectively took the 1st, 3rd and 7th positions of all top-trolled Twitter hashtags.

It’s important to note the actual number of coronavirus-related bot tweets is likely much higher, as Bot Sentinel only recognises hashtag terms (such as #coronavirus), and wouldn’t pick up on “coronavirus”, “COVID19” or “Coronavirus”.
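
A small Python sketch shows why such counts understate the true volume: folding case and ignoring the leading “#” merges mentions that a hashtag-only, case-sensitive counter treats as distinct. The keyword list and tokenisation below are assumptions for illustration, not a description of how Bot Sentinel actually works.

import re
from collections import Counter

KEYWORDS = {"coronavirus", "covid19", "covid-19"}

def count_mentions(tweets: list[str]) -> Counter:
    """Count keyword mentions case-insensitively, with or without a leading '#',
    so 'coronavirus', '#Coronavirus' and '#COVID19' fold into one tally each."""
    counts = Counter()
    for text in tweets:
        for token in re.findall(r"#?\w[\w-]*", text.lower()):
            if token.lstrip("#") in KEYWORDS:
                counts[token.lstrip("#")] += 1
    return counts

print(count_mentions(["#Coronavirus panic!", "covid19 update", "#COVID19 #coronavirus"]))
# Counter({'coronavirus': 2, 'covid19': 2})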

How are bots created?

Bots are usually managed by automated programs called bot “campaigns”, and these are controlled by human users.

The actual process of creating such a campaign is relatively simple. There are several websites that teach people how to do this for “marketing” purposes. In the underground hacker economy on the dark web, such services are available for hire.

While it’s difficult to attribute bots to the humans controlling them, the purpose of bot campaigns is obvious: create social disorder by spreading misinformation. This can increase public anxiety, frustration and anger against authorities in certain situations.

A 2019 report published by researchers from the Oxford Internet Institute revealed a worrying trend in organised “social media manipulation by governments and political parties”. They reported:

Evidence of organised social media manipulation campaigns which have taken place in 70 countries, up from 48 countries in 2018 and 28 countries in 2017. In each country, there is at least one political party or government agency using social media to shape public attitudes domestically.

The modus operandi of bots

Typically, in the context of COVID-19 messages, bots would spread misinformation through two main techniques.

The first involves content creation, wherein bots start new posts with pictures that validate or mirror existing worldwide trends. Examples include pictures of shopping baskets filled with food, or hoarders emptying supermarket shelves. This generates anxiety and confirms what people are reading from other sources.

The second technique involves content augmentation. In this, bots latch onto official government feeds and news sites to sow discord. They retweet alarming tweets or add false comments and information in a bid to stoke fear and anger among users. It’s common to see bots talking about a “frustrating event”, or some social injustice faced by their “loved ones”.

The example below shows a Twitter post from Queensland Health’s official Twitter page, followed by comments from accounts named “Sharon” and “Sara” which I have identified as bot accounts. Many real users reading Sara’s post would undoubtedly feel a sense of injustice on behalf of her “mum”.

The official tweet from Queensland Health and the bots’ responses.

While we can’t be 100 percent certain these are bot accounts, many factors point to this very likely being the case. Our ability to accurately identify bots will get better as machine learning algorithms in programs such as Bot Sentinel improve.

How to spot a bot

To learn the characteristics of a bot, let’s take a closer look at Sharon’s and Sara’s accounts.

Screenshots of the accounts of ‘Sharon’ and ‘Sara’.

Both profiles lack human uniqueness and display some telltale signs that they may be bots (a rough heuristic scorer based on these signals is sketched after the list):

  • they have no followers
  • they only recently joined Twitter
  • they have no last names, and have alphanumeric handles (such as Sara89629382)
  • they have only tweeted a few times
  • their posts have one theme: spreading alarmist comments
  • they mostly follow news sites, government authorities, or human users who are highly influential in a certain subject (in this case, virology and medicine).
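
A rough heuristic scorer over the mechanically checkable signals in this list might look like the Python sketch below. The thresholds and the handle regex are assumptions chosen for illustration; a high score is a hint that an account is bot-like, not proof that it is automated.

import re
from dataclasses import dataclass

@dataclass
class Profile:
    followers: int
    days_since_joining: int
    has_last_name: bool
    handle: str
    tweet_count: int

def bot_signal_score(p: Profile) -> int:
    """Count how many of the listed telltale signs a profile exhibits."""
    score = 0
    score += p.followers == 0                      # no followers
    score += p.days_since_joining < 30             # only recently joined
    score += (not p.has_last_name) or bool(re.search(r"\d{5,}$", p.handle))  # no last name / alphanumeric handle
    score += p.tweet_count < 10                    # has only tweeted a few times
    return score

# A 'Sara89629382'-style account trips every checkable signal
print(bot_signal_score(Profile(0, 5, False, "Sara89629382", 3)))  # 4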

My investigation into Sharon revealed the bot had attempted to exacerbate anger on a news article about the federal government’s coronavirus response.

The language: “Health can’t wait. Economic (sic) can” indicates a potentially non-native English speaker.

It seems Sharon was trying to stoke the flames of public anger by calling out “bad decisions”.

Looking through Sharon’s tweets, I discovered Sharon’s friend “Mel”, another bot with its own programmed agenda.

Bot ‘Mel’ spread false information about a possible delay in COVID-19 results, and retweeted hateful messages.

What was concerning was that a human user was engaging with Mel.

An account that seemed to belong to a real Twitter user began engaging with ‘Mel’.

You can help tackle misinformation

Currently, it’s simply too hard to attribute the true source of bot-driven misinformation campaigns. This can only be achieved with the full cooperation of social media companies.

The motives of a bot campaign can range from creating mischief to exercising geopolitical control. And some researchers still can’t agree on what exactly constitutes a “bot”.

But one thing is for sure: Australia needs to develop legislation and mechanisms to detect and stop these automated culprits. Organisations running legitimate social media campaigns should dedicate time to using a bot detection tool to weed out and report fake accounts.

And as a social media user in the age of the coronavirus, you can also help by reporting suspicious accounts. The last thing we need is malicious parties making an already worrying crisis worse.

Ryan Ko, Chair Professor and Director of Cyber Security, The University of Queensland.

Source: https://www.sciencealert.com/bots-are-causing-anxiety-by-spreading-coronavirus-misinformation