Posted by AGORACOM-JC
at 9:45 PM on Sunday, January 19th, 2020
SPONSOR: Datametrex AI Limited (TSX-V: DM) A revenue-generating small-cap A.I. company that NATO and Canadian Defence are using to fight fake news & social media threats. The company announced three $1M contracts in Q3-2019. Click here for more info.
Why Facebook, Twitter and governments are concerned about deepfakes
Facebook recently announced it has banned deepfakes from its social media platforms ahead of the upcoming 2020 US presidential elections.
The move came days before a US House Energy and Commerce hearing on manipulated media content, titled “Americans at Risk: Manipulation and Deception in the Digital Age.”
By: Giorgia Guantario
In a blog post, Monika Bickert, Facebook’s Vice President of Global Policy Management, explained that the ban will concern all content that “has been edited or synthesised – beyond adjustments for clarity or quality – in ways that aren’t apparent to an average person and would likely mislead someone into thinking that a subject of the video said words that they did not actually say,” as well as content that is “the product of artificial intelligence or machine learning that merges, replaces or superimposes content onto a video, making it appear to be authentic.”
Twitter has also been in the process of drawing up its own deepfake policies, asking its community for help in drafting them, although nothing has been finalised yet.
But what are deepfakes? And why are social media platforms and governments so concerned about them?
Artificial intelligence was the hot topic of 2019 – this vast and game-changing technology has opened new doors for what organisations can achieve. However, with all the good, such as facial recognition and automation, also came some bad.
In the decade of fake news and misinformation, there was always a general understanding that although social media posts, clickbait websites and text content in general were not to be fully trusted, video and audio were safe from the rise of deception – that is, until deepfakes entered the scene.
According to Merriam-Webster, the term deepfake is “typically used to refer to a video that has been edited using an algorithm to replace the person in the original video with someone else (especially a public figure) in a way that makes the video look authentic.”
The “fake” in the word is pretty self-explanatory – these videos are not real. The “deep” comes from deep learning, a subset of artificial intelligence that utilises multiple layers of artificial neural networks. Specifically, deepfakes employ two sets of algorithms: one to create the video, and a second to determine whether it is fake. The first learns from the second until it produces a fake video that is nearly impossible to identify.
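The “two sets of algorithms” described above correspond, in most deepfake systems, to the generator and discriminator of a generative adversarial network (GAN). The toy sketch below (Python/PyTorch) illustrates that training loop on random vectors rather than video frames; the model sizes, learning rates and step count are illustrative assumptions, not a description of any particular deepfake tool.

```python
# Minimal GAN sketch: a generator that "creates" samples and a discriminator
# that "determines if they are fake", with the generator learning from the
# discriminator's feedback. Toy data only; all sizes are illustrative.
import torch
import torch.nn as nn

latent_dim, data_dim = 16, 32

generator = nn.Sequential(                 # creates fake samples from noise
    nn.Linear(latent_dim, 64), nn.ReLU(), nn.Linear(64, data_dim))
discriminator = nn.Sequential(             # scores samples as real vs fake
    nn.Linear(data_dim, 64), nn.ReLU(), nn.Linear(64, 1))

g_opt = torch.optim.Adam(generator.parameters(), lr=1e-3)
d_opt = torch.optim.Adam(discriminator.parameters(), lr=1e-3)
loss_fn = nn.BCEWithLogitsLoss()

for step in range(200):
    real = torch.randn(64, data_dim)       # stand-in for real video frames
    noise = torch.randn(64, latent_dim)
    fake = generator(noise)

    # Discriminator learns to separate real samples from generated ones.
    d_loss = loss_fn(discriminator(real), torch.ones(64, 1)) + \
             loss_fn(discriminator(fake.detach()), torch.zeros(64, 1))
    d_opt.zero_grad(); d_loss.backward(); d_opt.step()

    # Generator learns from the discriminator to make its output look "real".
    g_loss = loss_fn(discriminator(fake), torch.ones(64, 1))
    g_opt.zero_grad(); g_loss.backward(); g_opt.step()
```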
Although the technology behind these videos is very fascinating, the
improper use of deepfakes has raised questions and concerns, and its
newfound mainstream status is not to be underestimated.
The beginning of the new decade saw TikTok’s parent company ByteDance under accusations of developing a feature, referred to as “Face Swap”, using deepfake technology. ByteDance has denied the accusations, but the possibility of such a feature becoming available to everyone raises concerns about how the general public would use it.
The most famous example is Chinese deepfake app Zao, which superimposes a photo of the user’s face onto a person in a video or GIF. While Zao has mainly faced privacy issues – the first version of the user agreement stated that people who uploaded their photos surrendered intellectual property rights to their faces – the real concern stems from how people would actually use such a controversial technology if it were to become available to a wider audience. At the time, Chinese online payment system Alipay responded to fears over fraudulent use of Zao by saying that current face-swapping technology “cannot deceive [their] payment apps” – but this doesn’t mean that the technology is not evolving and couldn’t pose a threat in the future.
Another social network to make headlines in the first week of 2020 in relation to deepfakes is Snapchat – the company also decided to invest in its own deepfake technology. The social network bought deepfake maker AI Factory for US$166M, and the acquisition resulted in a new Snapchat feature called “Cameos” that works in the same way deepfake videos do – users can use their selfies to become part of a selection of videos and essentially create content that looks real but never happened.
Deepfakes have been around for a while now – the most prevalent use
of this technology is in pornography, which has seen a growing number of
women, especially celebrities, becoming the protagonists of
pornographic content without their consent. The trend started on Reddit,
where pornographic deepfakes featuring the faces of actress Gal Gadot,
singers Taylor Swift and Ariana Grande, amongst others, grew in
popularity. Last year, deepfake pornography accounted for 96 percent of the 14,678 deepfake videos online, according to a report by Amsterdam-based company Deeptrace.
The remaining four percent, although small, could be just as
dangerous, and even change the global political and social landscape.
In response to Facebook’s decision not to take down the “shallowfake” (a video manipulated with basic editing tools or intentionally placed out of context) video of US House Speaker Nancy Pelosi appearing to slur her words, a team that included UK artist Bill Posters posted a deepfake video of Mark Zuckerberg giving an appalling speech that boasted of his “total control of billions of people’s stolen data, all their secrets, their lives, their futures.” The artists’ aim, they said, was to interrogate the power of new forms of computational propaganda.
Other examples of very credible deepfake videos see Barack Obama deliver a speech on the dangers of false information (the irony!), or, in a much more worrying use of the technology, cybercriminals mimicking a CEO’s voice to demand a cash transfer.
There is clearly a necessity to address deepfakes on a number of
fronts to avoid them becoming a powerful tool of misinformation.
For starters, although the commodification of this technology can be frightening, it also raises people’s level of awareness and puts them in a position to question the credibility of the videos and audio they’re watching or listening to. It is up to the viewer to check whether videos are real or not, just as it is when it comes to fake news.
Moreover, the same technology that created the issue could be the answer to solving it. Last month, Facebook, in cooperation with Amazon, Microsoft and the Partnership on AI, launched a competition called the “Deepfake Detection Challenge” to create automated tools, using AI technology, that can spot deepfakes. At the same time, the AI Foundation also announced it is building a deepfake detection tool for the general public.
Regulators have also started moving in the right direction to avoid the misuse of this technology. US Congress held its first hearing on deepfakes in June 2019, due to growing concerns over the impact deepfakes could have on the upcoming US presidential elections; while, as in the case of Facebook and Twitter, social media platforms are under more and more pressure to take action against misinformation, which now includes deepfake videos and audio.
Posted by AGORACOM-JC
at 9:30 PM on Sunday, January 19th, 2020
SPONSOR: CardioComm Solutions (EKG: TSX-V)
– The heartbeat of cardiovascular medicine and telemedicine. Patented
systems enable medical professionals, patients, and other healthcare
professionals, clinics, hospitals and call centres to access and manage
patient information in a secure and reliable environment.
Scripps Researchers Use mHealth Wearables to Track Flu Outbreaks
The study used data from Fitbit users over two years to determine
who was experiencing a flu-like illness. It shows that mHealth wearables
could be used to identify and possibly even anticipate viral outbreaks.
January 17, 2020 – Researchers have found a way to use mHealth wearables to tackle population health concerns.
Led by digital health expert Eric Topol, MD, researchers at the
Scripps Research Translational Institute used data from roughly 50,000
people wearing Fitbits between 2016 and 2018 and were able to plot
outbreaks of seasonal respiratory infections like the flu.
The first-of-its-kind study tracked sleep patterns, resting heart
rate (RHR) and activity among users in Texas, California, New York,
Illinois and Pennsylvania, and compared that data to influenza-like
illnesses (ILIs) recorded by the US Centers for Disease Control in those
states.
Researchers found they could identify and possibly even anticipate an
outbreak by the activities of Fitbit users who became sick. People who
develop the flu, they noted, tend to have an elevated RHR, sleep more
and move around less.
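To make that signal concrete, here is a minimal sketch (Python with pandas, on synthetic data) of the kind of aggregation the study describes: flag days on which a user’s resting heart rate runs above their personal baseline while sleep rises and movement falls, then track the weekly share of affected users as a proxy influenza-like-illness curve. The column names, thresholds and synthetic data are assumptions for illustration only, not the researchers’ actual method.

```python
# Sketch: per-user baselines, "flu-like" day flags, and a weekly proxy ILI curve.
import pandas as pd
import numpy as np

rng = np.random.default_rng(0)
days = pd.date_range("2017-01-01", periods=120, freq="D")
n_users = 200
df = pd.DataFrame({
    "user": np.repeat(np.arange(n_users), len(days)),
    "date": np.tile(days, n_users),
    "rhr": rng.normal(65, 5, n_users * len(days)),       # resting heart rate (bpm)
    "sleep_h": rng.normal(7, 1, n_users * len(days)),    # hours of sleep
    "steps": rng.normal(8000, 2000, n_users * len(days)),
})

# Each user's own history serves as their baseline.
base = df.groupby("user")[["rhr", "sleep_h", "steps"]].transform("mean")

# "Flu-like" day: elevated RHR, more sleep, less movement than baseline.
df["abnormal"] = ((df["rhr"] > base["rhr"] + 5)
                  & (df["sleep_h"] > base["sleep_h"])
                  & (df["steps"] < base["steps"]))

# Weekly share of users with at least one abnormal day -> compare against CDC ILI rates.
weekly = (df.set_index("date")
            .groupby([pd.Grouper(freq="W"), "user"])["abnormal"]
            .max()
            .groupby(level=0)
            .mean())
print(weekly.head())
```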
“Activity and physiological trackers are increasingly used in the USA and globally to monitor individual health,” Topol and his colleagues said in a study published this week in The Lancet. “By accessing these data, it could be possible to improve real-time and geographically refined influenza surveillance. This information could be vital to enact timely outbreak response measures to prevent further transmission of influenza cases during outbreaks.”
Joining Topol in the research were Jennifer M. Radin, PhD; Nathan E.
Wineinger, PhD, and Steve R. Steinhubl, MD, all of the San Diego-based
organization, which has conducted dozens of mHealth and telehealth
studies over the past decade.
This study, funded in part by the National Institutes of Health, aims
to improve population health management for a virus that annually
affects 20 percent of children and 7 percent of adults in the US, and
which causes as many as 650,000 deaths worldwide. Traditional
surveillance methods usually lag one to three weeks behind the outbreak,
putting healthcare providers at a disadvantage in curbing the spread of
the virus.
Topol and his colleagues are looking at mHealth to reduce that disadvantage and give providers and public health officials an opportunity to stop an outbreak earlier.
There are some challenges. While roughly 10 percent of the US population, according to a 2016 study, now uses wearables, that percentage has to be higher to make the results more meaningful. In addition, any connected health platform used to gather data should be able to draw information from a wide variety of wearables, including smart watches and smart clothing.
And finally, such a platform would need to be careful to distinguish behaviors caused by the onset of the flu from normal behaviors, and be sensitive enough to detect those changes in behavior at the earliest possible moment.
“In the future, wearables could include additional sensors to prospectively track blood pressure, temperature, electrocardiogram, and cough analysis, which could be used to further characterize an individual’s baseline and identify abnormalities,” the study concluded. “Capturing physiological and behavioral data from a growing number of wearable device users globally could greatly improve timeliness and precision of public health responses and even inform individual clinical care. It could also fill major gaps in regions where influenza surveillance data are not available.”
Posted by AGORACOM-JC
at 9:15 PM on Sunday, January 19th, 2020
SPONSOR: BetterU Education Corp.
aims to provide access to quality education from around the world.
The company plans to bridge the prevailing gap in the education and job
industry and enhance the lives of its prospective learners by developing
an integrated ecosystem. Click here for more information.
How Edtech became personalised in the 2010s
The internet is being used to reach this diverse population in the remotest corners, and advanced tech is being used to create new learning experiences
If we look at the new technology accessible to teachers and students today, then we would agree that the accepted way to teach and learn has changed
The integration of technology started with improving classroom
experiences and reached adaptive learning platforms that students can
personalise, says Toppr’s Zishaan Hayath
We are in an era where unprecedented ideas are unfolding in education, driven by technology. Digitising learning content has been imperative, keeping in mind affordability, accessibility and inclusiveness of the large trainable youth population. The internet is being used to reach this diverse population in the remotest corners, and advanced tech is being used to create new learning experiences. If we look at the new technology accessible to teachers and students today, then we would agree that the accepted way to teach and learn has changed. It is undeniable that education has evolved so much, and technology has opened up the world a lot for both students and teachers. In this article, we explore the journey of edtech through this decade that saw it evolve from smart classes to personalised learning apps on smartphones.
EDTECH SOLUTIONS WERE DESIGNED AROUND IMPROVING THE CLASSROOM EXPERIENCE AND HELPING TEACHERS
Integration of technology in the learning and education system is
evidently the greatest change in education in the past decade. The
earliest technology innovations for schools were created around
providing software and hardware to make the classroom experience better.
More emphasis was put on the use of rich multimedia content as a
teaching tool inside classrooms. We saw more and more teachers making
use of overhead projectors and videos during their lessons. This was
then considered to be a revolutionary in-classroom technology,
leveraging a large repository of digital content across virtually all
subjects from kindergarten to Class 12. This new technology helped
schools with better educational resource planning and helped teachers
with better lecture delivery. Performance management and tracking
systems enabled teachers to measure the progress of students
systematically. Such classrooms were called “smart classes”. Progress in technology, however, has led to much more.
INTERNET SHIFTED FOCUS FROM CLASSROOMS TO VIRTUAL CLASSROOMS WITH DIGITISED CONTENT
Smart class solutions faced challenges like high set-up cost,
hardware maintenance and non-payments by institutions. As a result,
edtech companies started moving to asset-light models. Digitisation of
learning material and availability on platforms, including YouTube,
followed the wave of smart classes. Internet penetration made everything
easier and faster, enabling students to access digital study material
that was informational and interactive and could be accessed anytime,
anywhere. The gap in the ability to access high quality learning
material was shrinking. This boom in digitisation of content helped
scale the concept of pre-recorded online classes in India. The
availability of fast internet connections and easy access allowed
students to be more informed and open to new avenues.
Students were able to take on-demand classes without having to attend
any physical classes. For students, this improved affordability, while
reduced travel time allowed them to study at their own pace and time.
EDTECH STARTED GROWING EXPONENTIALLY WITH LEARNING APPS
As students started accessing learning material over the internet, it
gave rise to a new opportunity. Newly introduced learning apps started
providing content at one place, which was otherwise scattered. The
content was now organised and designed around a teacher’s pedagogy.
Online courses developed by proficient tutors gave students the
experience of real-time learning while sitting in the comfort of their
homes. Edtech saw growth in many disciplines, including primary and
supplementary education, test preparation, reskilling and online
certifications, and language learning. Global institutions started
running online certification courses powered by edtech that helped in
course delivery, examinations and assessments. Indian entrepreneurs made
an impressive effort in following and customising the global trend of
digitisation of the education system. Increasing awareness and higher
disposable income boosted the edtech market and it attracted significant
investments from Indian and global investors.
PERSONALISED LEARNING MARKED THE NEW AGE OF EDTECH
The second half of the last decade saw the use of advanced
technology. Cutting edge tech, including artificial intelligence (AI)
and machine learning (ML), gave rise to education platforms that
addressed the basic problem of the education system of India—the
one-size-fits-all-approach. With a typical classroom having a
teacher-to-student ratio of 1:50, the quality is often compromised and
that’s where technology is useful. Adaptive learning platforms using AI
and ML create personalised learning paths helping students study in the
way they best understand, thus enabling them to learn as per their
needs. Gamification in learning has helped engage students in a
meaningful way, making them genuinely interested in their subject
matter.
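As a rough illustration of what “personalised learning paths” can mean in practice, the sketch below (Python; purely hypothetical topics, names and thresholds, not any particular platform’s algorithm) picks the next topic for a student from a running estimate of mastery, so weaker areas are revisited before harder material is introduced.

```python
# Hypothetical sketch of mastery-based topic selection for an adaptive
# learning path. The syllabus ordering and the 0.8 threshold are assumptions.
from collections import defaultdict

SYLLABUS = ["fractions", "decimals", "percentages", "ratios"]  # easiest -> hardest
MASTERY_THRESHOLD = 0.8

class AdaptivePath:
    def __init__(self):
        self.attempts = defaultdict(int)
        self.correct = defaultdict(int)

    def record(self, topic: str, was_correct: bool) -> None:
        self.attempts[topic] += 1
        self.correct[topic] += int(was_correct)

    def mastery(self, topic: str) -> float:
        if self.attempts[topic] == 0:
            return 0.0
        return self.correct[topic] / self.attempts[topic]

    def next_topic(self) -> str:
        # Serve the first topic the student has not yet mastered, so each
        # learner progresses at their own pace rather than a fixed schedule.
        for topic in SYLLABUS:
            if self.mastery(topic) < MASTERY_THRESHOLD:
                return topic
        return SYLLABUS[-1]  # everything mastered: keep practising the hardest topic

path = AdaptivePath()
path.record("fractions", True)
path.record("fractions", True)
path.record("fractions", False)   # mastery 2/3 < 0.8, so fractions comes up again
print(path.next_topic())          # -> "fractions"
```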
Cloud-based learning is fast emerging as the medium to make
personalised and high quality learning available to all students. Live
classes with teachers can be conducted on such platforms, along with
pre-recorded video classes, where the students can access the material
on their own time. Students can now reach out for academic help 24×7.
This is quickly changing the possibilities of delivery mediums when it
comes to affordable access to high-quality learning.
CUSTOMER ACQUISITION AND RETENTION WOULD BE KEY CHALLENGES TO FURTHER GROWTH
Availability and access to the internet are important for all of
these technologies to become relevant to end-users, i.e. students and
teachers. The number of people accessing the internet has grown manifold
over the last decade. However, for a society like India where the
culture of coaching classes is deep-rooted, it is challenging to drive
the adoption of edtech platforms as an alternative. Students, parents
and teachers need to be better informed of the benefits of edtech.
Startups are trying various business models, including free, freemium
and premium subscriptions to drive usage and trial. However, there is a
lot of ground to be covered. As this decade ends, we recognise that the
Indian education system has evolved fast, along with global trends.
Technology has also enabled streamlining of the learning experience,
improved accessibility and offered new resources to students. And there
is only more to come. With one of the largest populations in the world,
stronger implementation of AI and ML will help bring truly adaptive and
personalised platforms addressing the real learning needs of students
and professionals. Edtech is all set to give more accessible,
high-quality and personalised learning and prepare the leaders of
tomorrow.
Posted by AGORACOM-JC
at 9:00 PM on Sunday, January 19th, 2020
SPONSOR: ThreeD Capital Inc. (IDK:CSE)
Led by legendary financier, Sheldon Inwentash, ThreeD is a
Canadian-based venture capital firm that only invests in best of breed
small-cap companies which are both defensible and mass scalable. More
than just lip service, Inwentash has financed many of Canada’s biggest
small-cap exits. Click Here For More Information.
2020 has so far been particularly
positive for Bitcoin and the rest of the cryptocurrency market. Starting
the year at around $7,100, BTC currently trades at almost $9,000,
charting notable increases throughout the entire week.
In the past 24 hours alone, Bitcoin added another 3% to its value, increasing from around $8,650 to about $9,000, before retracing slightly to its current level of $8,900.
BTC/USD. Source: TradingView
Bitcoin’s total market capitalization has increased to $162 billion. However, its dominance has slipped to 66.1%, meaning that altcoins have managed to recover and to claim new ground.
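For readers unfamiliar with the dominance metric: it is simply Bitcoin’s market capitalization as a share of the total cryptocurrency market capitalization, so the two figures above imply a total market of roughly $245 billion. A quick sanity check in Python, using the article’s rounded numbers:

```python
# Back-of-the-envelope check of the dominance figure quoted above.
btc_market_cap = 162e9        # USD, from the article
btc_dominance = 0.661         # 66.1%, from the article

total_crypto_cap = btc_market_cap / btc_dominance
altcoin_cap = total_crypto_cap - btc_market_cap

print(f"Implied total crypto market cap: ${total_crypto_cap/1e9:.0f}B")  # ~ $245B
print(f"Implied altcoin market cap:      ${altcoin_cap/1e9:.0f}B")       # ~ $83B
```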
Indeed, looking at how other
cryptocurrencies besides Bitcoin performed, it’s rather clear that they
are flourishing. All of the projects from the top 20 are in the green,
charting serious gains throughout the entire week. The past 24 hours are
no exception.
Bitcoin SV is once again one of the best-performing altcoins, increasing by 10% throughout the past 24 hours. Others who marked serious gains include Binance Coin (9.14%), EOS (8.84%), Bitcoin Cash (7.8%), and so forth.
$3.2 Million ETH Stolen From UPbit Is Already Laundered: Report Claims. Following the hack of UPbit, which took place in November 2019, it has now become clear that $3.2 million of the stolen cryptocurrency has already been laundered. The report also claims that this was done through small transactions across a large number of different exchanges.
YouTube Crypto Purge Is Back: Popular YouTuber Davinci Reports He’s Been Blocked From Streaming. Despite issuing a formal apology and saying that the cryptocurrency purge was a mistake, it appears that YouTube is taking aim at content creators once again. Popular cryptocurrency YouTuber Davinci has said that his channel has been flagged and that he has been blocked from streaming.
Craig Wright’s Defamation Case Against Hodlnaut Reportedly Dismissed By UK’s High Court. Self-proclaimed Satoshi Nakamoto, Craig Wright, has reportedly seen his defamation case against popular Twitter user Hodlnaut dismissed. The grounds for the order are reportedly a lack of jurisdiction, but the case will supposedly continue in Norway.
Significant Daily Gainers and Losers
Ethereum Classic (31.45%)
Ethereum Classic (ETC) is undoubtedly
the most significant daily gainer throughout the past 24 hours, at the
time of this writing. Up 31.45% so far, ETC stands at a price of $10 and
a total market capitalization of about $1.1 billion. More
interestingly, ETC saw a surge in its 24-hour trading volume which is
now more than $3.2 billion.
MonaCoin (24.72%)
MonaCoin is another altcoin that
managed to impress in today’s trading session. It’s up about 24 percent
in the past day alone, bringing its price to $1.22 at the time of this
writing. MonaCoin now sits on a market cap of about $80 million and is
the 61st largest cryptocurrency. In terms of 24-hour trading volume,
MonaCoin stands at about $21 million.
Swipe (-11.83%)
Unfortunately, not all altcoins managed to increase with the rest of the market. Swipe is down about 11.8% and its price has fallen to $1.30. The cryptocurrency stands on a total market cap of about $79 million and saw a trading volume of $14 million in the past 24 hours.
Posted by AGORACOM-JC
at 7:02 AM on Friday, January 17th, 2020
Announced the release of the latest version of VIE.gg (https://vie.gg), the Company’s esports wagering platform
Latest upgrade delivers notable new features, including additional betting options such as Fixed Odds, Pari-mutuel, Fantasy and Pool Betting to complement our main P2P option
BIRKIRKARA, MALTA (January 17, 2020) – Esports Entertainment Group, Inc. (GMBL:OTCQB) (the “Company”), a licensed online gambling company with a focus on esports wagering and 18+ gaming, is pleased to announce the release of the latest version of VIE.gg (https://vie.gg), the Company’s esports wagering platform.
UPGRADE DELIVERS LATEST FEATURES AND FULL DEVICE ACCESSIBILITY
This latest
upgrade delivers notable new features, including additional betting options
such as Fixed Odds, Pari-mutuel, Fantasy and Pool Betting to complement our
main P2P option.
Furthermore, the
upgrade delivers significant content enhancements, including real-time
streaming and event coverage. Finally,
the upgrades now make VIE.gg (https://vie.gg)
fully compatible with all major desktop, mobile and tablet devices, as well as their respective operating systems.
Grant Johnson, CEO of Esports Entertainment Group, stated, “This is another major milestone for our Company. This is our strongest release ever, with every new feature esports gambling enthusiasts could wish for in a platform. Combined with our unsurpassed transparency as a result of our status as a fully reporting public company, we believe VIE.gg is strongly positioned for success in 2020.”
In delivering this upgrade, Esports Entertainment Group partnered with Askott Entertainment, a Vancouver-based software development company that has been building award-winning online betting and daily fantasy software since 2013.
This
press release is available on our Online Investor Relations Community for
shareholders and potential shareholders to ask questions, receive answers and
collaborate with management in a fully moderated forum https://agoracom.com/ir/EsportsEntertainmentGroup
RedChip
investor relations Esports Entertainment Group Investor Page: http://www.gmblinfo.com
ABOUT ESPORTS ENTERTAINMENT GROUP
Esports Entertainment Group, Inc. is a
licensed online gambling company with a focus on esports wagering and 18+
gaming. Esports Entertainment offers bet exchange style wagering on esports
events in a licensed, regulated and secure platform to the global esports
audience at vie.gg.
In addition, Esports Entertainment intends to offer users from around the world
the ability to participate in multi-player mobile
and PC video game tournaments for cash prizes. Esports Entertainment is led by
a team of industry professionals and technical experts from the online gambling
and the video game industries, and esports. The Company holds a license to
conduct online gambling and 18+ gaming on a global basis in Curacao, Kingdom of
the Netherlands. The Company maintains offices in Malta and Warsaw, Poland.
Esports Entertainment common stock is listed on the OTCQB under the symbol
GMBL. For more information visit www.esportsentertainmentgroup.com
FORWARD-LOOKING STATEMENTS
The information contained herein includes forward-looking statements. These
statements relate to future events or to our future financial performance, and
involve known and unknown risks, uncertainties and other factors that may cause
our actual results, levels of activity, performance, or achievements to be
materially different from any future results, levels of activity, performance
or achievements expressed or implied by these forward-looking statements. You
should not place undue reliance on forward-looking statements since they
involve known and unknown risks, uncertainties and other factors which are, in
some cases, beyond our control and which could, and likely will, materially
affect actual results, levels of activity, performance or achievements. Any
forward-looking statement reflects our current views with respect to future
events and is subject to these and other risks, uncertainties and assumptions
relating to our operations, results of operations, growth strategy and
liquidity. We assume no obligation to publicly update or revise these
forward-looking statements for any reason, or to update the reasons actual
results could differ materially from those anticipated in these forward-looking
statements, even if new information becomes available in the future. The safe
harbor for forward-looking statements contained in the Securities Litigation
Reform Act of 1995 protects companies from liability for their
forward-looking statements if they comply with the requirements of the Act.
Posted by AGORACOM-JC
at 4:58 PM on Thursday, January 16th, 2020
SPONSOR: ThreeD Capital Inc. (IDK:CSE) Led by legendary financier, Sheldon Inwentash, ThreeD is a Canadian-based venture capital firm that only invests in best of breed small-cap companies which are both defensible and mass scalable. More than just lip service, Inwentash has financed many of Canada’s biggest small-cap exits. Click Here For More Information.
75% Think Bitcoin Will Double in Price This Year: Crypto Twitter Survey
An economist ran a poll to check the pulse of the bitcoin market. An
ultra bullish atmosphere may signal that a trend reversal is incoming
The bitcoin rally is making many crypto investors euphoric.
An economist ran a poll to check on the pulse of the BTC market.
An ultra bullish atmosphere may be a sign that a trend reversal is incoming.
Over the last couple of weeks, bitcoin has been slaying bears and
disbelievers. On Tuesday, bitcoin printed a fresh 2020 high of
$8,903.20. The crypto token’s renewed bullish vigor is driving many
retail investors into euphoria.
One retail trader is already predicting that bitcoin will hit $20,000. | Source: Twitter
The ecstatic atmosphere probably drove Alex Kruger to measure
community sentiment. The trader and economist ran a poll asking Crypto
Twitter (CT) what they think would be BTC’s 2020 high.
Results reveal that CT is feeling ultra bullish. That’s bad news for bitcoin.
Nearly Half of Survey Participants Believe Bitcoin Would Breach $20,000 This Year
Kruger recently ran a poll that involved the responses of over 4,000
participants. Results show that 47.1% believe that bitcoin would trade
above $20,000 this year. Close to 30% think the coin would settle
between $14,000 and $19,999. The remaining 25% said the cryptocurrency
will trade at $13,999 or lower.
Poll results may foreshadow massive capitulation. | Source: Twitter
The survey reveals that nearly 75% of participants believe that bitcoin will print gains of over 100% this year.
Almost half see the cryptocurrency skyrocketing by over 180%. These are
ultra bullish predictions even by bitcoin’s standards. The results tell
me that it is wise to take a contrarian stance.
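The percentages follow directly from the article’s roughly $7,100 start-of-year price. A quick back-of-the-envelope check in Python (figures from the article, rounded; illustrative arithmetic only, not a forecast):

```python
# Implied upside for the poll's price brackets from the ~$7,100 starting price.
start_price = 7_100

for target in (14_000, 20_000):
    gain_pct = (target - start_price) / start_price * 100
    print(f"BTC at ${target:,} would be a gain of about {gain_pct:.0f}%")

# Output:
#   BTC at $14,000 would be a gain of about 97%
#   BTC at $20,000 would be a gain of about 182%
```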
The Wisdom of the Crowd Is Rarely Correct
When it comes to investing, the wisdom of the herd is often wrong.
This is especially true of bitcoin. The digital asset has a tendency to
mislead the crowd and burn retail investors.
We saw this happen in the 2017 bull market. Many retail investors
hopped on the bandwagon just as bitcoin was peaking around $20,000. Countless got wiped out as the cryptocurrency entered a vicious bear market.
This happened again in December 2018. At the time, bitcoin was trading at $3,000. Many capitulated as calls for a massive drop to $1,800
reverberated on social media. What did the cryptocurrency do? It left
disbelievers with their jaws on the floor as it soared to a 2019 high of $14,000.
The Crypto Dog considering the possibility of a bitcoin drop just before the cryptocurrency skyrocketed. | Source: Twitter
These examples show that it’s prudent to look at the other side of
the coin. Getting pulled by the herd is a bad trading strategy.
Posted by AGORACOM-JC
at 4:03 PM on Thursday, January 16th, 2020
SPONSOR: BetterU Education Corp.
aims to provide access to quality education from around the world.
The company plans to bridge the prevailing gap in the education and job
industry and enhance the lives of its prospective learners by developing
an integrated ecosystem. Click here for more information.
2020 vision: edtech in 2020 with John Ingram
Thursday 16th January 2020
Q. What should schools, colleges and universities be focusing on for 2020?
Certainly, from our experience working with schools, they need
to be supported more when it comes to training teachers to use
technology. We find that teachers are usually keen on the idea of using
new technologies in the classroom, but that implementation needs to be
handled with greater care. Tech in UK classrooms often goes unused,
which ultimately means that millions of pounds are potentially going to
waste. Colleges and universities are making better progress on training
teachers to use technology, so I’d like to see more improvement at
school level.
Q. What, if any, policy changes would you like to see in education this year?
It was encouraging to hear the government announce new measures
to help boost the nation’s skills and transform technical education,
such as providing up to £120m to establish up to eight more Institutes
of Technology. However, many of the measures aimed at boosting the UK’s
productivity and building a skilled workforce are targeted towards
further education, so it would be great to see some more focus given to
schools.
It would also be great to see some progress around the UK Youth Parliament’s campaign for A Curriculum for Life.
Young people are calling for the education system to do more to prepare
them for life after school and college – a critically important area
that often flies under the radar – and it’s important that they are
heard.
Q. If you could pinpoint one area of improvement for the education sector during 2020, what would it be?
If I had to choose one area, it would be improving the way we
treat and support teachers, addressing serious problem areas such as
excessive workloads and teacher retention.
Schools are often tasked with helping reduce teacher workload
and ensuring staff retention, but this can be difficult against a
backdrop of increasing budget cuts and Ofsted pressures.
I believe edtech can play a role here. There are many tools on
the market that can help with onerous non-teaching tasks such as
marking, assessment and lesson planning. The challenge is to ensure that
schools are made aware of the best of these, so that they can spend
their tight budgets wisely.
Q. Is there a particular area within edtech that you think should be the main focus for 2020?
I think adaptive learning and targeted education are set to
feature prominently in 2020 – there are many platforms out there making
big strides, but there’s still a long way to go. The end goal is for
classrooms to have adaptive learning platforms that retain the benefits
of learning in a group (social skills, motivation, etc) and combine this
with fully personalised instruction. We’re making progress towards
this, but fully moving away from ‘one-size-fits-all’ learning, and
inflexible learning pathways, will take time.
Separately, I’d also like to see more of a push towards
technology being used at earlier ages in schools, so that comfort and
familiarity with using tech amongst students and teachers is embedded
early on. Nevertheless, no matter what technologies are introduced, we
must bear in mind that not everyone is a technophile. For edtech
adoption to take off, schools and universities must work to adjust
internal cultures so that they are open to advancements.
Posted by AGORACOM-JC
at 3:53 PM on Thursday, January 16th, 2020
SPONSOR: Datametrex AI Limited (TSX-V: DM) A revenue-generating small-cap A.I. company that NATO and Canadian Defence are using to fight fake news & social media threats. The company announced three $1M contracts in Q3-2019. Click here for more info.
House Intelligence Committee chairman praised Facebook policy on deepfakes
Other lawmakers not so impressed; add that other social media platforms are not doing enough
House Permanent Select Committee on Intelligence Chairman Rep. Adam Schiff (D-CA) said Facebook’s announcement this past week of its “new policy which will ban intentionally misleading deepfakes from its platforms is a sensible and responsible step, and I hope that others like YouTube and Twitter will follow suit.”
Schiff cautioned, however, that, “As with any new policy, it will be vital to see how it is implemented, and particularly whether Facebook can effectively detect deepfakes at the speed and scale required to prevent them from going viral,” emphasizing that “the damage done by a convincing deepfake, or a cruder piece of misinformation, is long-lasting, and not undone when the deception is exposed, making speedy takedowns the utmost priority.”
Schiff added he’ll “also be focused on how Facebook deals with other harmful disinformation like so-called ‘cheapfakes,’ which are not covered by this new policy because they are created with less sophisticated techniques but nonetheless purposefully and maliciously distort an existing piece of media.”
Not all lawmakers – or privacy rights advocates and groups – concerned about this problem, though, were as impressed as Schiff with Facebook’s new policy, Enforcing Against Manipulated Media, which was announced by Facebook Vice President for Global Policy Management Monika Bickert only days before she testified last week before the House Committee on Energy and Commerce Subcommittee on Consumer Protection and Commerce hearing, “Americans at Risk: Manipulation and Deception in the Digital Age.”
Subcommittee Chairwoman Rep. Jan Schakowsky (D-IL) chastised “Congress [for having] unfortunately taken a laissez faire approach to regulating unfair and deceptive practices online over the past decade and platforms have let them flourish,” the result of which has been that “big tech failed to respond to the grave threat posed by deep-fakes, as evidenced by Facebook scrambling to announce a new policy that strikes me as wholly inadequate, since it would have done nothing to prevent the altered video of Speaker Pelosi that amassed millions of views and prompted no action by the online platform.”
Similarly, Democratic Presidential candidate Joe Biden’s spokesman Bill Russo stated, “Facebook’s announcement is not a policy meant to fix the very real problem of disinformation that is undermining faith in our electoral process, but is instead an illusion of progress. Banning deepfakes should be an incredibly low floor in combating disinformation.”
Schakowsky and other subcommittee members didn’t seem much assuaged by assurances from either Bickert or the other witnesses who testified at the hearing that Facebook’s policy goes far enough.
She declared that, “Underlying all of this is Section 230 of the Communications Decency Act, which provided online platforms like Facebook a legal liability shield for third-party content. Many have argued that this liability shield resulted in online platforms not adequately policing their platforms, including online piracy and extremist content. Thus, here we are, with big tech wholly unprepared to tackle the challenges we face today,” which she described as “a topline concern for this subcommittee.” We “must protect consumers regardless of whether they are online or not. For too long, big tech has argued that ecommerce and digital platforms deserved special treatment and a light regulatory touch.”
In her opening statement, Schakowsky further noted that the Federal Trade Commission “works to protect Americans from many unfair and deceptive practices, but a lack of resources, authority, and even a lack of will has left many American consumers feeling helpless in the digital world. Adding to that feeling of helplessness, new technologies are increasing the scope and scale of the problem. Deepfakes, manipulated video, dark patterns, bots, and other technologies are hurting us in direct and indirect ways.”
“People share millions of photos and videos on Facebook every day, creating some of the most compelling and creative visuals on our platform,” Bickert said in announcing Facebook’s policy, but conceded “some of that content is manipulated, often for benign reasons, like making a video sharper or audio more clear. But there are people who engage in media manipulation in order to mislead,” and these “manipulations can be made through simple technology like Photoshop or through sophisticated tools that use artificial intelligence or ‘deep learning’ techniques to create videos that distort reality – usually called deepfakes.”
“While these videos are still rare on the Internet,” Bickert said, “they [nevertheless] present a significant challenge for our industry and society as their use increases.”
“As we enter 2020, the problem of disinformation, and how it can spread rapidly on social media, is a central and continuing national security concern, and a real threat to the health of our democracy,” Schiff said, noting that “for more than a year, I’ve been pushing government agencies and tech companies to recognize and take action against the next wave of disinformation that could come in the form of ‘deepfakes’ — AI-generated video, audio, and images that are difficult or impossible to distinguish from the real thing.”
Schiff pointed to experts who testified during an open hearing of the Intelligence Committee last year that “the technology to create deepfakes is advancing rapidly and widely available to state and non-state actors, and has already been used to target private individuals …”
Schiff said in his response to Facebook’s policy that he intends “to continue to work with government agencies and the private sector to advance policies and legislation to make sure we’re ready for the next wave of disinformation online, including by improving detection technologies, something which the recently passed Intelligence Authorization Act facilitates with a new prize competition,” which Biometric Update earlier reported on.
Bickert said Facebook’s “approach has several components, from investigating AI-generated content and deceptive behaviors like fake accounts, to partnering with academia, government and industry to exposing people behind these efforts,” underscoring that “collaboration is key. Across the world, we’ve been driving conversations with more than 50 global experts with technical, policy, media, legal, civic and academic backgrounds to inform our policy development and improve the science of detecting manipulated media,” and, “as a result of these partnerships and discussions, we are strengthening our policy toward misleading manipulated videos that have been identified as deepfakes.”
“Going forward,” she stated, Facebook “will remove misleading manipulated media” if it meets the specific detailed criteria she briefly outlined in announcing the social media giant’s new policy.
She described the criteria as applying specifically to content which “has been edited or synthesized – beyond adjustments for clarity or quality – in ways that aren’t apparent to an average person and would likely mislead someone into thinking that a subject of the video said words that they did not actually say, and, it is the product of artificial intelligence or machine learning that merges, replaces or superimposes content onto a video, making it appear to be authentic.”
However, she called attention to the fact that the new policy “does not extend to content that is parody or satire, or video that has been edited solely to omit or change the order of words,” highlighting that, “consistent with our existing policies, audio, photos or videos, whether a deepfake or not, will be removed from Facebook if they violate any of our other Community Standards including those governing nudity, graphic violence, voter suppression, and hate speech.”
She further stated that “videos that don’t meet these standards for removal are still eligible for review by one of our independent third-party fact-checkers, which include over 50 partners worldwide fact-checking in over 40 languages,” under the new Facebook policy. And, “If a photo or video is rated false or partly false by a fact-checker, we significantly reduce its distribution in News Feed, and reject it if it’s being run as an ad.”
“And, critically,” she stressed, “people who see it, try to share it, or have already shared it, will see warnings alerting them that it’s false.”
Bickert said the company believes that “this approach is critical to our strategy, and one we heard specifically from our conversations with experts,” exclaiming that “if we simply removed all manipulated videos flagged by fact-checkers as false, the videos would still be available elsewhere on the Internet or social media ecosystem.” Thus, she expressed, “by leaving them up and labelling them as false, we’re providing people with important information and context.”
“Our enforcement strategy against misleading manipulated media also benefits from our efforts to root out the people behind these efforts,” she continued, pointing out that, “Just last month, we identified and removed a network using AI-generated photos to conceal their fake accounts,” and Facebook “teams continue to proactively hunt for fake accounts and other coordinated inauthentic behavior.”
“We are also engaged in the identification of manipulated content, of which deepfakes are the most challenging to detect,” she continued, explaining “that’s why last September we launched the Deep Fake Detection Challenge, which has spurred people from all over the world to produce more research and open source tools to detect deepfakes.”
Meanwhile, in a separate effort by Facebook, the company has “partnered with Reuters, the world’s largest multimedia news provider, to help newsrooms worldwide to identify deepfakes and manipulated media through a free online training course,” Bickert added, noting that “news organizations increasingly rely on third parties for large volumes of images and video, and identifying manipulated visuals is a significant challenge. This program aims to support newsrooms trying to do this work.”
She concluded by saying that, “As these partnerships and our own insights evolve, so too will our policies toward manipulated media. In the meantime, we’re committed to investing within Facebook and working with other stakeholders in this area to find solutions with real impact.”
“Facebook wants you to think the problem is video-editing technology, but the real problem is Facebook’s refusal to stop the spread of disinformation,” House Speaker Nancy Pelosi Deputy Chief of Staff Drew Hammill responded in a tweet.
Facebook was roundly chastised for seemingly being concerned only about deepfake videos rather than all the other technology that has been used – as Facebook has admitted – to manipulate audio and text that is also deliberately meant to deceive viewers and readers.
“Consider the scale. Facebook has more than 2.7 billion users, more than the number of followers of Christianity. YouTube has north of 2 billion users, more than the followers of Islam. Tech platforms arguably have more psychological influence over two billion people’s daily thoughts and actions when considering that millions of people spend hours per day within the social world that tech has created, checking hundreds of times a day,” the subcommittee heard from Center for Humane Technology President and Co-Founder Tristan Harris.
“In several developing countries like the Philippines, Facebook has 100 percent penetration. Philippines journalist Maria Ressa calls it the first ‘Facebook nation.’ But what happens when infrastructure is left completely unprotected, and vast harms emerge as a product of tech companies’ direct operation and profit?”
Declaring that “social organs of society [are] left open for deception,” Harris warned that “these private companies have become the eyes, ears, and mouth by which we each navigate, communicate and make sense of the world. Technology companies manipulate our sense of identity, self-worth, relationships, beliefs, actions, attention, memory, physiology and even habit-formation processes, without proper responsibility.”
“Technology,” he said, “has become the filter by which we are experiencing and making sense of the real world,” and, “in so doing, technology has directly led to the many failures and problems that we are all seeing: fake news, addiction, polarization, social isolation, declining teen mental health, conspiracy thinking, erosion of trust, breakdown of truth.”
“But, while social media platforms have become our cultural and psychological infrastructure on which society works, commercial technology companies have failed to mitigate deception on their own platforms,” Harris direly warned. “Imagine a nuclear power industry creating the energy grid infrastructure we all rely on, without taking responsibility for nuclear waste, grid failures, or making sufficient investments to protect it from cyber attacks. And then, claiming that we are personally responsible for buying radiation kits to protect ourselves from possible nuclear meltdowns.”
“By taking over more and more of the ‘organs’ needed for society to function, social media has become the de facto psychological infrastructure that has created conditions that incentivize mass deception at industrialized scales,” he said, starkly adding, “Technology companies have covertly ‘tilted’ the playing field of our individual and collective attention, beliefs and behavior to their private commercial benefit,” and that, “naturally, these tools and capabilities tend to favor the sole pursuit of private profit far more easily and productively than any ‘dual purpose’ benefits they may also have at one time — momentarily — and occasionally had for culture or society.”
Hill staffers involved in this issue advise watching for “more aggressive” legislation emanating from “the variety of committees and subcommittees” with authority “to do something.”
Indeed. Energy and Commerce Committee Chairman Frank Pallone, Jr. (D-NJ), said in his opening statement that Congress needs to move forward to begin getting answers “so that we can start to provide more transparency and tools for consumers to fight misinformation and deceptive practices.”
“While computer scientists are working on technology that can help detect each of these deceptive techniques, we are in a technological arms race. As detection technology improves, so does the deceptive technology. Regulators and platforms trying to combat deception are left playing whack-a-mole,” he acknowledged.
“Unrelenting advances in these technologies and their abuse raise significant questions for all of us,” he concluded, asking, “What is the prevalence of these deceptive techniques,” and, “how are these techniques actually affecting our actions and decisions?”
But, more importantly – from a distinctly legislatively regulatory position – he posited, “What steps are companies and regulators taking to mitigate consumer fraud and misinformation?”
House lawmakers are growing increasingly frustrated with restrictions on federal marijuana research and are putting pressure on regulators to change the rules.
While 33 states have legalized marijuana for medicinal purposes, federal research is extremely restricted.
During a House Energy and Commerce Health Subcommittee hearing
Wednesday, bipartisan lawmakers pressed officials from the Food and Drug
Administration, Drug Enforcement Administration (DEA) and National
Institute on Drug Abuse about obstacles to studying the safety and
effectiveness of cannabis products, including hemp-based cannabidiol.
“States’ laws and federal policy are a thousand miles apart. As more states allow cannabis, the federal government still strictly controls and prohibits it, even restricting legitimate medical research,” said subcommittee Chairwoman Anna Eshoo (D-Calif.).
All of the administration officials at the hearing agreed the current
studies on the benefits and health consequences of marijuana are
inadequate. However, they indicated that changes are not going to be
immediately forthcoming, as more studies are needed.
Marijuana is a Schedule I drug, meaning it is in the same category as
drugs like heroin and LSD. According to the federal government, it has a
high potential for abuse and no accepted medical value.
Drug schedules were first established by former President Nixon as
part of the 1970 Controlled Substances Act. Marijuana was put into
Schedule I at that time, and has remained there ever since.
Democrats expressed frustration at the hurdles potential researchers have to overcome.
“Federal prohibition has failed, from our criminal justice system to our health care system to our state and local governments that are forced to navigate an impossible landscape,” said Rep. Joe Kennedy III (D-Mass.).
Researchers need approval from three separate agencies, which can
sometimes take upwards of a year. Once approved, they’re only allowed to
research cannabis grown by a government-authorized farm at the
University of Mississippi.
That facility has been the sole grower of federally approved marijuana since 1968.
Researchers and lawmakers from both parties have said the single
source is too limiting, but experts said officials across multiple
administrations have not provided an adequate reason why marijuana
research is so restricted.
“Researchers are in a catch-22. They can’t conduct cannabis research until they show cannabis has a medical use, but they can’t show cannabis has a medical use until they can conduct research,” Eshoo said.
DEA senior policy adviser Matthew Strait said the agency is aware of
the limitations, and has drafted new regulations that would allow
additional marijuana growers.
The DEA in August announced it would begin taking steps to expand the
number of federally approved marijuana growers, but it first needed to
develop new regulations to evaluate the applications.
Strait said the agency has drafted those rules and submitted them to
the White House for regulatory review. Agency staff will be on a call
tomorrow to discuss them, he said.
Strait was also pressed about removing marijuana from the list of controlled substances.
The DEA has the authority to change the scheduling of marijuana, or
completely remove it from the list of controlled substances without
input from Congress, but it has yet to do so.
Advocates are pushing the House to pass the Marijuana Opportunity,
Reinvestment and Expungement Act, which would deschedule marijuana.
But some Republicans expressed concern about completely removing
marijuana from the controlled substances list. Instead, they indicated
an openness to changing its schedule to make it easier to research.
“Descheduling cannabis is a step too far and one I would not support,” said Rep. Greg Walden (Ore.), the top Republican of the full committee. “Any discussion of descheduling must be preceded by a fuller understanding of the potential risks associated with cannabis use — which we currently do not have.”
Walden added that rescheduling cannabis may help improve the research landscape.
“We need more research and better data. Americans are consuming more cannabis and policy decisions on this substance have been made in a virtual information vacuum,” Walden said.
Posted by AGORACOM-JC
at 2:56 PM on Thursday, January 16th, 2020
SPONSOR: NORTHBUD (NBUD:CSE)
Sustainable low cost, high quality cannabinoid production and
procurement focusing on both bio-pharmaceutical development and
Cannabinoid Infused Products. Learn More.
High demand: Ontario’s online Cannabis 2.0 products sell out fast
More than 2,000 people placed orders within the first hour that
cannabis-infused edibles and vape products became available for sale on
the Ontario Cannabis Store’s website, a spokesperson told BNN Bloomberg.
Beginning Thursday at 9 a.m. ET, the website listed 50 vape products
and 21 pot-infused gummies for sale, a slight increase from the number
of items available at Ontario’s brick-and-mortar cannabis retailers.
More than 3,000 people were waiting in a “digital queue” before the online sales began. Due to the high demand, the website experienced several crashes for some products, while all “soft-chew” items, or gummies, were sold out within the first 30 minutes.
OCS spokesperson Daffyd Roderick told BNN Bloomberg the government
agency is managing the website’s traffic issues and plans to replenish
any sold-out items after bricks-and-mortar stores have been allotted an
equal share of available product.
“We know the licensed producers are working hard to make more products available and we’re confident that these growing pains will be moved through in relatively short order,” Roderick said.
While some of the next-generation cannabis products on the website
have been available at physical Ontario cannabis stores since earlier
this month, the various cannabis-infused cookies, soft chews, mints, tea
and vapes for sale represent a potential new windfall for the country’s
pot producers, who have been stymied over the past year with
softer-than-expected revenue from dried flower products.
Raymond James analysts said in a recent report that cannabis
producers should report material revenue from the latest rollout of
Cannabis 2.0 products in the second-half of this year.
Cannabis Canada is
BNN Bloomberg’s in-depth series exploring the stunning formation of the
entirely new — and controversial — Canadian recreational marijuana
industry. Read more from the special series here and subscribe to our Cannabis Canada newsletter to have the latest marijuana news delivered directly to your inbox every day.