Bots Trending Now: Disinformation and Calculated Manipulation of the Masses

By Katina Michael on July 28, 2017, in Editorial & Opinion, Magazine Articles, Robotics

A bot (short for robot) performs highly repetitive tasks by automatically gathering or posting information based on a set of algorithms. Internet-based bots can create new content and interact with other users like any human would. Bots are not neutral: they always carry an underlying intent toward direct or indirect benefit or harm. The power always rests with the individual(s) or organization(s) unleashing the bot, which is imbued with its developer's subjectivity and bias [1].
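To make that gather-and-post cycle concrete, below is a minimal Python sketch of such an automation loop. The feed URL and the post_update() stand-in are hypothetical placeholders, not any real platform's API; an actual bot would authenticate against a service's posting endpoint.

```python
import time
import requests  # third-party HTTP client (pip install requests)

FEED_URL = "https://example.com/feed.json"  # hypothetical content source

def fetch_items(url):
    """Gather step: pull new content items from a public endpoint."""
    response = requests.get(url, timeout=10)
    response.raise_for_status()
    return response.json().get("items", [])

def post_update(text):
    """Post step: stand-in for a real platform API call."""
    print(f"POSTING: {text}")

def run_bot(poll_seconds=300):
    """Repeat the gather-then-post cycle indefinitely, as simple bots do."""
    seen = set()  # remember what was already posted to avoid duplicates
    while True:
        for item in fetch_items(FEED_URL):
            title = item.get("title", "")
            if title and title not in seen:
                seen.add(title)
                post_update(title)
        time.sleep(poll_seconds)
```

Everything interesting about a real bot lies in what replaces post_update(): the same loop can post weather updates or coordinated propaganda.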

Bots can be overt or covert to the people they engage; they can deliberately "listen" and then manipulate situations by providing real information or disinformation (also known as automated propaganda). They can target individuals or groups, alter or even disrupt group-think, and just as easily silence activists trying to bring attention to a given cause (e.g., human rights abuses by governments). On the flipside, bots can serve as counterstrategies that raise awareness of political wrongdoing (e.g., censorship), but they can also be harnessed for terrorist causes appealing to a global theatre (e.g., ISIS) [2].

Software engineers and computer programmers have developed bots that can perform sophisticated conversational analytics, bots that analyze human sentiment on social media platforms such as Facebook [3] and Twitter [4], and bots that extract value from unstructured data using a plethora of big data techniques. It won't be long before we have bots that analyze audio using natural language processing, bots that analyze and respond to uploaded videos on YouTube, and even bots that respond with humanlike speech contextually adapted for age, gender, and culture. The convergence of this suite of capabilities is what is popularly called artificial intelligence [5]. Bots can be invisible, or they can appear as a 2D embodied agent on a screen (an avatar or dialog window), as a 3D object (e.g., a toy), or as a humanoid robot (e.g., Bina [6] and Pepper).

Bots that Pass the Turing Test

Most consumers who use instant messaging chat programs to interact with their service providers might well not realize that they have likely been interacting with a chat bot, one able to crawl through the provider's public Internet pages to acquire information [7]. After 3–4 interactions with the bot, which can last anywhere between 5 and 10 minutes, a human customer service representative might intervene to provide a direct answer to a more complex problem. This is known as a hybrid delivery model, in which bot and human work together to resolve a customer inquiry. The customer may notice a slower than usual response in the chat window, but is willing to wait given the asynchronous mode of communication and the mere fact that they don't have to converse with a real person over the telephone. The benefit to the consumer is said to be bypassing a human clerk and the wait times for a representative; the benefit to the service provider is saving the cost of human resources, including ongoing training.
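As a rough illustration of this hybrid delivery model, the sketch below answers a question from a small FAQ (the kind of content a bot might have crawled from a provider's public pages) and escalates to a human when its match confidence is low or the conversation runs long. The FAQ entries, threshold, and turn limit are all invented for the example; real systems use far richer matching than string similarity.

```python
from difflib import SequenceMatcher

# Hypothetical FAQ pairs, as might be harvested from a provider's public pages.
FAQ = {
    "how do i reset my password": "Go to Account Settings and choose 'Reset password'.",
    "what are your support hours": "Phone support runs 9 a.m. to 5 p.m. on weekdays.",
}

CONFIDENCE_THRESHOLD = 0.6  # assumed cut-off; real systems tune this empirically
MAX_BOT_TURNS = 4           # mirrors the 3-4 bot interactions described above

def best_answer(question: str):
    """Return (similarity score, answer) for the closest FAQ question."""
    return max(
        (SequenceMatcher(None, question.lower(), known).ratio(), answer)
        for known, answer in FAQ.items()
    )

def handle_turn(question: str, turn: int) -> str:
    """Answer via the bot, or hand off to a human representative."""
    score, answer = best_answer(question)
    if turn > MAX_BOT_TURNS or score < CONFIDENCE_THRESHOLD:
        return "Let me connect you with a customer service representative."
    return answer

print(handle_turn("How do I reset my password?", turn=1))  # answered by the bot
print(handle_turn("My router drops packets on VLAN 2", turn=5))  # handed off
```

The hand-off rule is the essence of the hybrid model: the bot absorbs the cheap, repetitive queries, and the human is reserved for the expensive ones.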

Bots that interact with humans and go undetected as non-human are considered successful in their implementation, and are said to pass the Turing Test [8]. In 1950, English mathematician Alan M. Turing devised the "imitation game," in which a remote human interrogator must, within a fixed time frame, distinguish between a computer and a human subject based on their replies to various questions posed by the interrogator [9].

Bot Impacts Across the Globe

Bots usually have Internet/social media accounts that look like real people, generate new content like any human would, and interact with other users. Politicalbots.org reported that approximately 19 million bot accounts were tweeting in support of either Donald Trump or Hillary Clinton in the week before the 2016 U.S. presidential election [10]. Pro-Trump bots worked to sway public opinion by secretly taking over pro-Clinton hashtags like #ImWithHer and spreading fake news stories [11]; these pervasive bots are said to have swayed public opinion.

Yet bots have not been utilized in the U.S. alone; they have also appeared in the U.K. (Brexit's mood contagion [12]), Germany (fake news [1]), France (robojournalism [13]), Italy (popularity questioned [14]), and even Australia (the Coalition's fake followers [15]). Unsurprisingly, political bots have also been used in Turkey (Erdogan's 6000-strong robot army [16], [17]), Syria (Twitter spambots [18]), Ecuador (surveillance [19]), Mexico (Peñabots [20]), Brazil, Rwanda, Russia (Troll Houses [21]), China (tracking Tibetan protestors [22]), Ukraine (social bots [23]), and Venezuela (6000 bots generating anti-U.S. sentiment [24] with #ObamaYankeeGoHome [25]).

Whether it is personal attacks meant to cause a chilling effect, spamming attacks on hashtags meant to redirect trending, overinflated follower numbers meant to project political strength, or deliberate social media messaging used to perform sweeping surveillance, bots are polluting political discourse on a grand scale. So much so that some politicians themselves are now calling for action against these autobots, with everything from demands for ethical conduct in society, to calls for more structured regulation [26] of political parties, to the implementation of criminal penalties for offenders who create and deploy malicious bot strategies.

Provided below are demonstrative examples of the use of bots in Australia, the U.K., Germany, Syria, and China, with each example offering a distinct case in which bots have been used to further specific political agendas.

Fake Followers in Australia

In 2013, the Liberal Party internally investigated a surge in the Twitter followers of the then Opposition Leader Tony Abbott. On the night of August 10, 2013, Abbott's Twitter following soared from 157,000 to 198,000 [27]. In the days preceding this period, his following had been growing steadily at about 3000 per day. The Liberal Party had to declare on their Facebook page that someone had been purchasing "fake Twitter followers for Tony Abbott's Twitter account," but a spokeswoman later said it was someone neither connected with the Liberal Party nor associated with the Liberal campaign, and that the damage had been done using a spambot [27], an example of which is shown in Figure 1.

Figure 2. Paid parental leave is a winner for Tony Abbott. Twitter image taken from [31].

Fake Trends and Robo-Journalists in the U.K.

As the U.K.'s June 2016 referendum on European Union membership drew near, researchers discovered that automated social media accounts were swaying votes both for and against Britain's exit from the EU. One study found that 54% of the sampled accounts were pro-Leave, while 20% were pro-Remain [32]. And of the 1.5 million tweets with referendum-related hashtags posted between June 5 and June 12, about half a million were generated by just 1% of the accounts sampled.

As more and more of the citizenry head to social media as their primary information source, bots can sway decisions one way or the other. After the Brexit results were disclosed, many pro-Remain supporters claimed that social media had exerted an undue influence by discouraging "Remain" voters from actually going to the polls [33] (see Figure 3). While there are only 15 million Twitter users in the U.K., it is possible that robo-journalists (content-gathering bots) and human journalists who relied on fake social media content propelled the "fake news" further, affecting more than just the Twittersphere.

Figure 3. Twitter image taken from [34].

Fake News and Echo Chambers in Germany

German Chancellor Angela Merkel has expressed concern over the potential for social bots to influence this year's German national election [35]. She has brought to the fore the ways in which fake news and bots manipulate public opinion online by spreading false and malicious information. She said: "Today we have fake sites, bots, trolls – things that regenerate themselves, reinforcing opinions with certain algorithms and we have to learn to deal with them" [36]. The right-wing Alternative for Germany (AfD) already has more Facebook likes than Merkel's Christian Democrats (CDU) and the center-left Social Democrats (SPD) combined, and Merkel is worried the AfD might use Trump-like strategies on social media channels to sway the vote.

It is not just that bots are generating fake news [35]; the algorithms Facebook deploys to determine how content is shared between user accounts also create "echo chambers" and outlets for reverberation [37]. However, in Germany, Facebook, which has been criticized for failing to police hate speech, was in 2016 legally classified as a "media company," which means it can now be held accountable for the content it publishes. And while the major political parties have responded by saying they will not utilize "bots for votes," outside geopolitical forces (e.g., Russia) are now also chiming in, attempting to drive social media sentiment with their own hidden agendas [35].

Spambots and Hijacking Hashtags in Syria

During the Arab Spring, online activists were able to provide eyewitness accounts of uprisings in real time. In Syria, protesters used the hashtags #Syria, #Daraa, and #Mar15 to appeal for support from a global theatre [18]. It did not take long for government intelligence officers to target online protesters with verbal assaults and one-on-one intimidation techniques. Syrian blogger Anas Qtiesh wrote: "These accounts were believed to be manned by Syrian mokhabarat (intelligence) agents with poor command of both written Arabic and English, and an endless arsenal of bile and insults" [38]. But when protesters persisted despite the harassment, spambots created by the Bahraini company EGHNA were co-opted to create pro-regime accounts [39]. The pro-regime messages then flooded hashtags that had carried pro-revolution narratives.

This essentially drowned out protesters' voices with irrelevant information, such as photography of Syria. @LovelySyria, @SyriaBeauty, and @DNNUpdates dominated #Syria with a flood of predetermined tweets every few minutes from EGHNA's media server [40]. Figure 4 provides an example of such tweets. Others who were using Twitter to portray the realities of the conflict in Syria publicly opposed the use of the spambots (see Figure 5) [43].

Figure 4. Twitter image taken from [41].

Figure 5. Twitter is not Bashar's spam machine! Twitter image taken from [42].

Sweeping Surveillance in China

In May 2016, China was exposed for purportedly fabricating 488 million social media comments annually in an effort to distract users’ attention from bad news and politically sensitive issues [46]. A recent three-month study found 13% of messages had been deleted on Sina Weibo (Twitter’s equivalent in China) in a bid to crack down on what government officials identified as politically charged messages [47]. It is likely that bots were used to censor messages containing key terms that matched a list of banned words. Typically, this might have included words in Mandarin such as “Tibet,” “Falun Gong,” and “democracy” [48].
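A crude sketch of how such keyword-based censorship could work is shown below. The banned-term list is illustrative only, since the real lists are not public, and production systems reportedly go well beyond simple substring matching.

```python
# Illustrative banned-term list only; the actual lists are not public.
BANNED_TERMS = {"tibet", "falun gong", "democracy"}

def should_delete(post_text: str) -> bool:
    """Flag a post for removal if it mentions any banned term."""
    text = post_text.lower()
    return any(term in text for term in BANNED_TERMS)

posts = ["Great weather in Beijing today", "March for democracy tonight"]
flagged = [p for p in posts if should_delete(p)]
print(flagged)  # ['March for democracy tonight']
```

Even this toy filter shows why automated censorship scales so easily: deleting a message is a single pass over a word list, repeated millions of times a day.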

China employs a classic hybrid model of online propaganda that comes into action only after a period of social unrest or protest, when there is a surge in message volumes. Typically, the task of primary messaging is left to government officials, with backup support from bots, methodically spreading messages of positivity and ensuring political security through pro-government cheerleading. While it is believed that, on average, one in every 178 posts is curated for propaganda purposes, the posts are not continuous and appear to overwhelm dissent only at key times [49]. Distraction online, it seems, is the best way to overcome opposition. That distraction is carried out in conjunction with a cap on the number of messages that can be sent from "public accounts" that have broadcasting capabilities.

What Effect are Bots Having on Society?

The deliberate act of spreading falsehoods via the Internet, and more specifically via social media, to make people believe something that is not true is certainly a form of propaganda. While it might create short-term gains in the eyes of political leaders, it inevitably causes significant public distrust in the long term. In many ways, it is a denial of citizen service that attacks fundamental human rights. It preys on the premise that most citizens in society are like sheep: a game of "follow the leader" ensues, making a mockery of the "right to know." We are using faulty data to come to phony conclusions, to cast our votes, and to decide our futures. Disinformation on the Internet is now rife, and if the Internet has become our primary source of truth, then we might well believe anything.

ACKNOWLEDGMENT

This article is adapted from an article published in The Conversation titled "Bots without borders: how anonymous accounts hijack political debate," on January 24, 2017. Read the original article at http://theconversation.com/bots-without-borders-how-anonymous-accounts-hijack-political-debate-70347. Katina Michael would like to thank Michael Courts and Amanda Dunn from The Conversation for their editorial support, and Christiane Barro from Monash University for the inspiration to write the piece. Dr. Roba Abbas was also responsible for integrating the last draft with earlier work.

Author

Katina Michael is a professor in the Faculty of Engineering and Information Sciences at the University of Wollongong, Australia. Email: katina@uow.edu.au.