Agoracom Blog

Tiredness could be ‘human signature’ used to detect bots on Twitter – SPONSOR: Datametrex AI Limited $DM.ca

Posted by AGORACOM-JC at 1:21 PM on Wednesday, April 22nd, 2020

SPONSOR: Datametrex AI Limited (TSX-V: DM) A revenue-generating small-cap A.I. company that NATO and Canadian Defence are using to fight fake news & social media threats. The company is working with US Government agencies on COVID-19 and coronavirus fake news and disinformation. The company also obtained the rights to import and sell COVID-19 test kits from South Korea – Click here for more info.

Tiredness could be ‘human signature’ used to detect bots on Twitter

  • Concerns about the impact of deceptive bot networks spreading disinformation in order to influence democratic events, such as the 2016 US presidential election, have led to calls from lawmakers, academics and campaigners for social media companies to detect and take down these networks
  • These efforts will include human moderators and machine learning algorithms trained to detect suspicious behaviour

By E&T editorial staff

A study has identified short-term behavioural differences between humans and bots – reflecting what is likely to be increasing tiredness towards the end of a social media session – which could be used to detect and take down networks of bots on social media.

Bots – which are controlled by computers, rather than by humans – serve a wide variety of purposes, including news aggregation and customer service. Despite their benefits, bots have come under scrutiny recently in the context of being used manipulatively as part of large-scale, state-backed projects to spread disinformation on social media platforms.

Concerns about the impact of deceptive bot networks spreading disinformation in order to influence democratic events, such as the 2016 US presidential election, have led to calls from lawmakers, academics and campaigners for social media companies to detect and take down these networks. These efforts will include human moderators and machine learning algorithms trained to detect suspicious behaviour.

Now, a first-of-its-kind study published in Frontiers in Physics has identified some short-term behavioural trends seen in human-run accounts which are absent in bot accounts. This could provide a “human signature” to detect fake accounts, which are constantly adapting to fool detectors.

“Remarkably, bots continuously improve to mimic more and more of the behaviour humans typically exhibit on social media,” said Professor Emilio Ferrara, a University of Southern California computer science expert and co-author of the study. “Every time we identify a characteristic we think is the prerogative of human behaviour, such as sentiment or topics of interest, we soon discover that newly developed open-source bots can now capture those aspects.”

Ferrara and his colleagues studied how the behaviour of humans and bots changed over the course of single sessions, using a large Twitter dataset associated with recent political discussion. They monitored factors such as each account’s propensity to engage in social interactions and the volume and type of content it produced, then compared the results between humans and bots.

They found that humans showed an increase in the amount of social interaction over the course of a session (an increase in the fraction of retweets, replies and mentions in a tweet) and a decrease in the amount of content they produce (a decrease in average tweet length). The researchers suggested that this could reflect humans becoming tired towards the end of the session and being less able or willing to produce original content. This behavioural change was not seen in bots.
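The measurements described here lend themselves to a simple illustration. The sketch below is a hedged, minimal version (not the study’s code) of how such per-session quantities could be computed from a user’s timeline; the dictionary field names and the 60-minute inactivity gap used to delimit sessions are assumptions made for the example, not details from the paper.

```python
# Minimal sketch (not the authors' code) of session-level measurements like
# those described above: each tweet's position within a session, whether it
# involved a social interaction (retweet, reply or mention), and its length.
from datetime import datetime, timedelta

SESSION_GAP = timedelta(minutes=60)  # assumed gap of inactivity that ends a session

def split_sessions(tweets):
    """Group a chronologically sorted list of tweets into sessions."""
    sessions, current = [], []
    for tw in tweets:
        if current and tw["timestamp"] - current[-1]["timestamp"] > SESSION_GAP:
            sessions.append(current)
            current = []
        current.append(tw)
    if current:
        sessions.append(current)
    return sessions

def session_trend(session):
    """Per-tweet interaction flag and text length, indexed by position in the session."""
    points = []
    for position, tw in enumerate(session):
        is_interaction = tw["is_retweet"] or tw["is_reply"] or bool(tw["mentions"])
        points.append({
            "position": position,           # later positions = later in the session
            "interaction": is_interaction,  # humans tended to interact more as sessions wore on
            "length": len(tw["text"]),      # humans tended to write less as sessions wore on
        })
    return points

# Toy example (illustrative data only):
toy_timeline = [
    {"timestamp": datetime(2020, 4, 22, 13, 0), "text": "A long original thought about the election.",
     "is_retweet": False, "is_reply": False, "mentions": []},
    {"timestamp": datetime(2020, 4, 22, 13, 40), "text": "RT @someone",
     "is_retweet": True, "is_reply": False, "mentions": ["someone"]},
]
for session in split_sessions(toy_timeline):
    print(session_trend(session))
```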

The researchers used these results to inform a classification system for bot detection. They found that their model significantly outperformed a baseline model in its bot-detection accuracy, indicating that searching for short-term behavioural patterns like this could be valuable in the implementation and improvement of detection systems.
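To make the comparison concrete, the sketch below shows one plausible way (not the paper’s actual pipeline) to test whether session-trend features help: train the same generic classifier with and without them and compare cross-validated accuracy. Everything here, from the choice of scikit-learn’s RandomForestClassifier to the placeholder features and labels, is an assumption made for illustration.

```python
# Hedged sketch of adding session-trend features to a baseline bot classifier.
# The model, feature split and random placeholder data are illustrative
# assumptions; they are not the study's actual classifier or dataset.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n_accounts = 500

# Placeholder features: baseline account-level features (e.g. tweet rate,
# follower counts) plus two session-trend features (e.g. slope of the
# interaction fraction and slope of tweet length within sessions).
X_baseline = rng.normal(size=(n_accounts, 5))
X_trend = rng.normal(size=(n_accounts, 2))
y = rng.integers(0, 2, size=n_accounts)  # 1 = bot, 0 = human (placeholder labels)

model = RandomForestClassifier(n_estimators=200, random_state=0)

baseline_acc = cross_val_score(model, X_baseline, y, cv=5).mean()
combined_acc = cross_val_score(model, np.hstack([X_baseline, X_trend]), y, cv=5).mean()

print(f"baseline features only:    {baseline_acc:.3f}")
print(f"baseline + session trends: {combined_acc:.3f}")
```

With real labelled accounts in place of the random placeholders, a genuine accuracy gap between the two runs would mirror the kind of improvement over a baseline that the researchers report.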

“Bots are constantly evolving: with fast-paced advances in AI, it’s possible to create ever-increasingly realistic bots that can mimic more and more how we talk and interact in online platforms,” said Ferrara. “We are continuously trying to identify dimensions that are particular to the behaviour of humans on social media that can in turn be used to develop more sophisticated toolkits to detect bots.”

Source: https://eandt.theiet.org/content/articles/2020/04/tiredness-could-be-human-signature-used-to-detect-bots-on-twitter/
