Recently, Facebook, Reddit, Google, LinkedIn, Microsoft, Twitter and YouTube committed to removing coronavirus-related misinformation from their platforms. COVID-19 has been described as the first major pandemic of the social media age.
In troubling times, social media helps distribute vital knowledge to the masses. Unfortunately, this comes with a flood of misinformation, much of it spread through social media bots.
These fake accounts are common on Twitter, Facebook and Instagram. They have one goal: to spread fear and fake news. We witnessed this in the 2016 United States presidential election, in the arson rumours spread during the bushfire crisis, and we’re seeing it again in relation to the coronavirus pandemic.

Busy busting bots

The exact scale of misinformation is difficult to measure. But its global presence can be felt through snapshots of Twitter bot involvement in COVID-19-related hashtag activity. Bot Sentinel is a website that uses machine learning to identify potential Twitter bots, assigning each account a score and rating.
According to the site, on March 26 bot accounts were responsible for 828 counts of #coronavirus, 544 counts of #COVID19 and 255 counts of #Coronavirus within 24 hours. These hashtags respectively took the 1st, 3rd and 7th positions of all top-trolled Twitter hashtags. It’s important to note the actual number of coronavirus-related bot tweets is likely much higher, as Bot Sentinel only recognises hashtag terms (such as #coronavirus) and wouldn’t pick up the same words used without the hash symbol, such as “coronavirus”, “COVID19” or “Coronavirus”.
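To see why hashtag-only tracking undercounts, consider a minimal sketch of the difference between matching hashtags and matching plain-text mentions. This snippet is illustrative only; it assumes nothing about Bot Sentinel’s actual implementation, and the sample tweets are invented.

```python
import re

# Invented sample tweets, for illustration only.
tweets = [
    "Stock up now! #coronavirus is spreading fast",
    "My mum can't get tested for coronavirus anywhere",
    "Officials say COVID19 cases are under control",
]

hashtag_pattern = re.compile(r"#coronavirus\b", re.IGNORECASE)
# Plain-text mentions: the same words with no leading hash symbol.
plain_pattern = re.compile(r"(?<!#)\b(coronavirus|covid19)\b", re.IGNORECASE)

hashtag_hits = sum(bool(hashtag_pattern.search(t)) for t in tweets)
plain_hits = sum(bool(plain_pattern.search(t)) for t in tweets)

# A hashtag-only tracker counts 1 tweet here; the plain-text mentions
# in the other 2 tweets go unseen.
print(f"hashtag mentions: {hashtag_hits}, plain-text mentions: {plain_hits}")
```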
How are bots created?

Bots are usually managed by automated programs called bot “campaigns”, which are in turn controlled by human users. The process of creating such a campaign is relatively simple. Several websites teach people how to do this for “marketing” purposes, and in the underground hacker economy on the dark web such services are available for hire.
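To make the idea of a “campaign” concrete, here is a deliberately simplified sketch of the control loop such a program might run: one controller cycling through many fake accounts and posting small variations of the same message. The account names, message templates and post() function are all hypothetical stand-ins, not code from any real campaign tool.

```python
import random
import time

# Hypothetical stand-ins: a real campaign would hold credentials for
# each fake account and call a platform's posting API.
fake_accounts = ["Sharon_8841", "Sara_1198", "Mike_3307"]
templates = [
    "Can't believe what I saw at the supermarket today... {tag}",
    "My loved ones can't get help anywhere. {tag}",
    "Shelves completely empty again. When will this end? {tag}",
]

def post(account: str, text: str) -> None:
    """Placeholder for a platform API call made as `account`."""
    print(f"[{account}] {text}")

# The controller loop: each account posts a slightly varied message,
# spaced out in time to look less obviously automated.
for account in fake_accounts:
    message = random.choice(templates).format(tag="#coronavirus")
    post(account, message)
    time.sleep(random.uniform(1, 5))  # delay shortened for illustration
```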
Typically, in the context of COVID-19 messages, bots spread misinformation through two main techniques.
The first involves content creation: bots make new posts with pictures that validate or mirror existing worldwide trends. Examples include pictures of shopping baskets filled with food, or of hoarders emptying supermarket shelves. Such posts generate anxiety and appear to confirm what people are reading from other sources.
The second technique involves content augmentation. Here, bots latch onto official government feeds and news sites to sow discord. They retweet alarming tweets or add false comments and information in a bid to stoke fear and anger among users. It’s common to see bots referring to a “frustrating event” or some social injustice faced by their “loved ones”.
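One way such augmentation behaviour can be surfaced is to flag accounts that reply to official feeds unusually quickly and reuse near-identical emotive text across replies. The sketch below illustrates that heuristic only; the Reply record, sample data and thresholds are invented, and real detection systems are far more sophisticated.

```python
from collections import Counter
from dataclasses import dataclass

@dataclass
class Reply:
    account: str
    seconds_after_post: float  # delay between the official tweet and the reply
    text: str

# Invented sample data: replies to an official health feed.
replies = [
    Reply("Sharon_8841", 4.0, "My mum was turned away from testing! Disgraceful!"),
    Reply("Sara_1198", 6.5, "My mum was turned away from testing! Disgraceful!"),
    Reply("real_user_42", 1800.0, "Thanks for the update, stay safe everyone."),
]

FAST_REPLY_SECONDS = 30  # assumed threshold: replying within 30s is suspicious
text_counts = Counter(r.text for r in replies)

def looks_like_augmentation(reply: Reply) -> bool:
    """Flag near-instant replies whose text is duplicated across accounts."""
    return (reply.seconds_after_post < FAST_REPLY_SECONDS
            and text_counts[reply.text] > 1)

for r in replies:
    if looks_like_augmentation(r):
        print(f"Flagged {r.account}: duplicated emotive reply "
              f"{r.seconds_after_post:.0f}s after the official post")
```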
The example below shows a Twitter post from Queensland Health’s official Twitter page, followed by comments from accounts named “Sharon” and “Sara”, which I have identified as bot accounts. Many real users reading Sara’s post would undoubtedly feel a sense of injustice on behalf of her “mum”.