Canadian Journal of Communication Policy Portal Vol 44 (2019) pp. 27–33  ©2019 Canadian Journal of Communication Corporation http://doi.org/10.22230/cjc.2019v42n2a3511


Political Bots: Disrupting Canada’s Democracy

Elizabeth Dubois, University of Ottawa

Fenwick McKelvey, Concordia University

Elizabeth Dubois is an Assistant Professor in the Department of Communication at the University of Ottawa, 75 Laurier Ave E, Ottawa, ON K1N 6N5. Email: elizabeth.dubois@uottawa.ca. Fenwick McKelvey is an Associate Professor at Concordia University, 455 Boulevard De Maisonneuve West, Montréal, QC H3G 1M8. Email: fenwick.mckelvey@concordia.ca.


ABSTRACT 

Bots—automated online agents that mimic human behaviour—attempt to influence public opinion through social media and provide a useful case of the challenges of regulating disruptive technologies. This article articulates four problems associated with bots in politics: identification, evidence, attribution, and enforcement. It then explores policy solutions, arguing that solutions must align with the type of disruption bots contribute to.

Keywords  Political communication; New media; Cybernetics; Network policy; Democracy 

RÉSUMÉ  

Les bots sont des agents automatiques en ligne qui imitent le comportement humain. En essayant d’influencer l’opinion publique au moyen des médias sociaux, les bots offrent un exemple pertinent des défis que constitue la réglementation des technologies perturbatrices. Dans cet article, quatre problèmes relatifs aux bots dans la sphère politique sont soulevés : l’identification, la preuve, l’attribution et l’application d’une réglementation. Par la suite, des solutions politiques adaptées au type de perturbation causée par les bots sont proposées.

Mots clés  Communication politique; Nouveaux médias; Cybernétique; Politique sur les réseaux; Démocratie


Introduction

As Canada develops its digital and data strategy, which includes the development of artificial intelligence (AI) and its political impacts, key questions include “What are the emerging ethical and regulatory concerns with respect to the use of disruptive technologies? Who is best situated to resolve these and through what mechanisms?” (Government of Canada, 2018). This article argues that political bots—automated online agents that mimic human behaviour—are an important litmus test to answer these questions.

Bots are potentially disruptive because their verisimilitude to humans raises questions about democratic legitimacy and agency. Political bots are most visible on Twitter but are suspected to operate on all social media platforms (Kaufman, 2017), and they can undermine trust in public opinion by raising doubts about whether apparent support is human or robotic (Woolley & Howard, 2018). Some political bots also do politically sensitive work, such as editing Wikipedia articles and tweeting on behalf of politicians. This article describes political bots and outlines the ways in which these technologies exist within a wider media and political system. It also points to potential policy solutions responding to the possible media system failures that have popularized bots in contemporary politics.

Bot or bought? How political bots work and the problem of astroturfing 

Political bots are automated online agents that are used to intervene in political discourse online. They can be created, for free or at a cost, by anyone from journalists to political parties to average citizens. Past research on the Canadian Twittersphere identified four types of political bots: amplifiers, dampeners, transparency bots, and servant bots (Dubois & McKelvey, 2018; Gorwa & Guilbeault, 2018). These bots interact with humans, algorithms, and even other bots. The use of political bots began with the simple automation of tasks, such as pre-scheduled posting on social media, but has advanced into automated accounts that can interact with various datasets, platforms, and other accounts.1
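The pre-scheduled posting where bot use began is mechanically simple. As a minimal illustrative sketch only—no party's actual tooling, and with the schedule format and `due_posts` helper invented for this example—it amounts to releasing queued messages once their scheduled time arrives:

```python
import heapq

def due_posts(schedule, now):
    """Return the messages whose scheduled time has arrived, oldest first.

    `schedule` is a list of (unix_time, message) pairs. The actual posting
    step (calling a platform's API) is deliberately left out of this sketch.
    """
    heap = list(schedule)
    heapq.heapify(heap)  # order pending posts by scheduled time
    released = []
    while heap and heap[0][0] <= now:
        _, message = heapq.heappop(heap)
        released.append(message)
    return released

# At time 12, the posts scheduled for times 5 and 10 are released in order.
queued = [(10, "second post"), (5, "first post"), (20, "later post")]
ready = due_posts(queued, now=12)  # -> ["first post", "second post"]
```

A real scheduler would loop on a clock and hand each released message to a platform client; the point here is only that the earliest generation of "bots" required nothing more sophisticated than this.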

Today, some of the uses of these bots (and interactions with them) are quite benign or even beneficial (Ford, Dubois, & Puschmann, 2016). Journalists in Canada can use transparency bots to help them scrape public data or automatically report on routine incidents, such as the number of cyclist injuries and deaths in Toronto (@StruckTObot). Political parties use automated assistants—servant bots—such as schedulers to help them coordinate their social media rollout across various accounts. But other bots are potentially problematic.

Some of the most concerning political bots are those associated with the rise of astroturfing online, which includes both amplifier and dampener bots. Astroturfing is a term denoting fake grassroots campaigning. It takes various forms: information subsidies paid for by public relations firms to produce ads that look like regular news, people paid online and offline to pose as supporters, as well as the multitude of ways dark money conceals itself through third-party political action committees. Astroturfing online is not new in itself. Users of the Free Republic, an American conservative website, engaged in what was called “freeping,” or targeting online polls to skew the results (Kent, Harrison, & Taylor, 2006). The Yes Men, documentary activists, often used a deceptive website, pretending to be the World Trade Organization, for example, to gain access to private industry events (Reilly, 2018).

Astroturfing, associated with computational propaganda, is a perennial issue in and anxiety of democratic politics (Howard, 2006; Kim et al., 2018). Who is part of the public and who speaks on its behalf are fraught democratic questions. Now publics, politicians, and journalists have to gauge not only whether support is grassroots or fabricated but also whether it is human or bot.

Political bots, in their promotion or suppression of content, are part of the astroturfing problem. Past research has documented the use of automated Twitter agents by suspected political party members or supporters in Canada (Dubois & McKelvey, 2018; Gorwa & Guilbeault, 2018), as well as by foreign players such as the Kremlin-based Internet Research Agency (Gorwa, 2018). Political bots are used to amplify divisive political messages in Canada. This can include coordinated harassment, which pushes people to self-censor, and inflammatory messages, which spark even more emotional and extreme comments (Tenove, Tworek, & McKelvey, 2018). Crucially, these actors often do not create their own comments but amplify those of others, and those others may be Canadians expressing legitimate political opinions. The bots’ interaction with human actors makes addressing the role of political bots in democracy more challenging.

Beyond the relationships between bots and other actors, it is challenging to assess the influence of bots on public opinion, even with the influx of computer science methodologies for evaluating social phenomena. Bots might be active, but preliminary research suggests their effects are overstated (for a good introduction, see Nyhan, 2018).

Problems with identification and accountability of political bots

With the 2019 Canadian federal election months away, there has been a frenzy of activity on Parliament Hill to find solutions for political bots, but the problem goes beyond the election context. Particularly because of trends toward permanent campaigning (Elmer, Langlois, & McKelvey, 2012; Marland, Giasson, & Esselment, 2017), Canada’s digital and data strategy must consider how to ensure bots are accountable. Given that bots are a primitive form of AI, this article identifies four major challenges to bot accountability, which will likely apply as AI for political communication purposes evolves: identification, evidence, attribution, and enforcement.

Identification

It is difficult to know what is and is not a bot. Identification is typically based on communication patterns, but as bot detectors improve, bot creators make their bots’ behaviour more complex. Further, political bots are sometimes confused with human users who engage in the kinds of actions bots are typically designed to perform. Bot identification often relies on political norms about proper speech: new immigrants and non-native English speakers currently run an added risk of being identified as bots online, because many bot-detection systems are trained on content from Russian bots.
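To make the detection problem concrete, consider a toy heuristic along these lines that flags accounts whose posting intervals are suspiciously regular. The function, feature, and thresholds below are illustrative assumptions, not any platform's actual detector; the sketch shows why pattern-based identification misfires on humans with bot-like habits:

```python
from statistics import pstdev

def looks_automated(timestamps, min_posts=20, max_interval_spread=2.0):
    """Flag an account whose posts arrive at near-identical intervals.

    `timestamps` are post times in seconds, sorted ascending. Real detectors
    use far richer features (content, networks, metadata); this single-feature
    heuristic only illustrates how crude pattern-matching can be.
    """
    if len(timestamps) < min_posts:
        return False  # too little activity to judge either way
    intervals = [b - a for a, b in zip(timestamps, timestamps[1:])]
    # Human posting tends to be bursty; machine-scheduled posting is metronomic.
    return pstdev(intervals) <= max_interval_spread

# An account posting exactly every 60 seconds is flagged as automated;
# an account with irregular gaps between posts is not.
metronomic = [i * 60 for i in range(30)]
gaps = [5, 400, 3, 9000, 12, 7000, 1, 250, 30, 4000, 2, 600,
        90, 15, 8000, 3, 120, 45, 2000, 6, 300, 70, 1000, 9]
bursty = [sum(gaps[:i]) for i in range(len(gaps) + 1)]
```

Note that a human who schedules posts through a third-party tool would trip the same rule, which is precisely the misidentification risk discussed above.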

Adding to the complexity of identification, most platforms have no formal labelling for automated accounts, in contrast to verified accounts on Google or Twitter. These platforms have the most complete data and the ability to change what information is available about a given account, but they choose not to identify bots for various reasons, such as the fear of misidentification or economic incentives to minimize the apparent prominence of bots on their platforms (Gorwa, 2018; Kaufman, 2017). Bot identification may change as new laws, such as one passed in California in 2018, take effect. The California bot disclosure rules require a company to disclose when it communicates with the public through automated agents (Gershgorn, 2018).

Evidence

Related to the issue of identification is a lack of evidence. Bots can disappear quickly, and most social media platforms do little archiving. This makes it difficult to do forensic research on issues such as disinformation and misinformation (Elmer, Langlois, & Redden, 2015).

Attribution

Similar to other matters in cybersecurity, it can be difficult to attribute the creation or use of bots to particular actors. One of the first examples of political bots in Canada came from a supporter of the Coalition Avenir Québec party. A bot amplified messages from the party, but it seems the bot’s creator programmed it without consulting the party. In other words, the party benefited from the amplification without coordinating with the programmer. This creates an attribution problem, which is particularly perplexing because parties and politicians might benefit from such indirect support, or from dark money and third parties that hire bots, without anything being attributed to them.

Furthermore, some bots are created and set loose in the media environment without continued oversight from their creators. For example, in an interview, the creator of one of the first WikiEdits bots, which tweet each time anonymous edits are made from specific IP address ranges, explained that he is no longer involved with, and no longer feels responsible for, the bot he created, even though it continues to tweet based on his design (Ford et al., 2016).

Enforcement

If bots are designed to influence elections or public opinion, they can be effective even if they are caught later. Identifying nefarious bot activity does not necessarily undo its effects. Whitney Phillips (2018) warns that in debunking a lie, journalists and academics often end up amplifying the false narrative by repeating it. Put simply, enforcement might come too late, and it is unclear what mechanisms might stop nefarious bots.

Policy options for political bots 

A variety of policy options have been proposed for bots. The successful solution largely depends on what kind of disruption bots turn out to be. Conceptually, bots might be thought of as a symptom of systemic, agentic, or neo-institutional failure, a taxonomy that comes from sociologist Charles Perrow (2011). Bots might be an example of systemic failure: Perrow’s (2011) systems accidents theory describes this type of failure as arising when “some systems were so interactively complex and tightly coupled that they would have rare, but inevitable failures no matter how hard everyone tried to make them safe” (p. 310). Bots might be a phenomenon of a communication system that is too complex, with too many parts, which would call for moves to simplify, such as banning all bots. Yet bots keep Wikipedia running.

Problematic bots might instead be an agentic failure, which Perrow (2011) describes as a situation in which, “while most actors innocently accepted the norms and ideologies – and we need institutional theory to understand these – key actors used them for personal and class ends with knowledge of the damage they might cause” (p. 310). In this case, better policy, disclosure rules, or a bot registry might be in order, requiring practitioners to abide by rules or face punishment. Conceivably, if bots are thought of as advertising, then the Elections Act already offers a way to enforce conduct, though there are known challenges (Reepschlager & Dubois, 2019). Enforcement, as noted above, is difficult, and it is not clear whether the government will have the means to stop bad actors, given the low barriers to accessing bots (Kollanyi, 2016).

The best solution might be a neo-institutional approach in tandem with better enforcement. If bots are a neo-institutional failure, one that “sees agents as unwitting causes of the failure” (Perrow, 2011, p. 310), then bots might require stronger institutions. Here, the use of bots is caught up in the institutional norms of political parties, and weak tools such as codes of conduct might prompt political actors to learn and think about the proper conduct of bots. Platforms themselves, as conveners of communication with standards, might be more active in defining good practices for bots. Twitter, for example, has to be more proactive not only in banning nefarious bots but also in communicating the value and contributions of non-nefarious ones. Relatedly, there is an opportunity for enhanced digital literacy among citizens to support better responses to the use of bots. If individuals are equipped with tools to identify and critique forms of computational propaganda, its impact might be limited (Bulger & Davison, 2018; Woolley & Howard, 2018).

Conclusion 

Political bots are here to stay and conceivably could become a bigger problem as emotional analytics, ubiquitous AI, and a move to private platforms make bots harder to detect. Able to emote, adapt, and move undetected, bots unsettle a consensus that political debate involves independent citizens. Moving to address bots ultimately leads to ongoing questions about the meaning of democracy in a technological society, a question that consultations must continue to address for years to come. 

Note

  1. See Howard et al. (2018) for a detailed definition and examples of political bots and their activity.

References

Bulger, Monica, & Davison, Patrick. (2018). The promises, challenges, and futures of media literacy. New York, NY: Data & Society. URL: https://datasociety.net/output/the-promises-challenges-and-futures-of-media-literacy/ [March 18, 2019].

Dubois, Elizabeth, & McKelvey, Fenwick. (2018). Canada: Building bot typologies. In S. Woolley & P.N. Howard (Eds.), Computational propaganda: Political parties, politicians, and political manipulation on social media (pp. 64–85). New York, NY: Oxford University Press.

Elmer, Greg, Langlois, Ganaele, & McKelvey, Fenwick. (2012). The permanent campaign: New media, new politics. New York, NY: Peter Lang.

Elmer, Greg, Langlois, Ganaele, & Redden, Joanna. (Eds.). (2015). Compromised data: From social media to big data. New York, NY: Bloomsbury Academic.

Ford, Heather, Dubois, Elizabeth, & Puschmann, Cornelius. (2016). Keeping Ottawa honest, one tweet at a time? Politicians, journalists, Wikipedians and their Twitter bots. International Journal of Communication, 10, 4891–4914.

Gershgorn, D. (2018, October 3). A California law now means chatbots have to disclose they’re not human. Quartz. URL: https://qz.com/1409350/a-new-law-means-californias-bots-have-to-disclose-theyre-not-human/ [April 7, 2019].

Gorwa, Robert. (2018). Responding to digital interference in elections. Ottawa, ON: Public Policy Forum, University of British Columbia, and Concordia University.

Gorwa, Robert, & Guilbeault, Douglas. (2018). Unpacking the social media bot: A typology to guide research and policy. Policy & Internet. Advance online publication. URL: https://doi.org/10.1002/poi3.184 [April 7, 2019].

Government of Canada. (2018, July 19). Positioning Canada to lead in a digital- and data-driven economy. [Discussion paper]. URL: http://www.ic.gc.ca/eic/site/084.nsf/eng/00007.html [November 14, 2018].

Howard, Philip. N. (2006). New media campaigns and the managed citizen. Cambridge, UK: Cambridge University Press.

Howard, Philip N., Woolley, Samuel, & Calo, Ryan. (2018). Algorithms, bots, and political communication in the US 2016 election: The challenge of automated political communication for election law and administration. Journal of Information Technology & Politics, 15(2), 81–93.

Kaufman, Mark. (2017). Why Twitter is still teeming with bots. Mashable. URL: https://mashable.com/2017/10/16/twitter-bots-here-to-stay/ [March 17, 2019].

Kent, Michael L., Harrison, Tyler R., & Taylor, Maureen. (2006). A critique of internet polls as symbolic representation and pseudo-events. Communication Studies, 57(3), 299–315. doi: 10.1080/10510970600845941

Kim, Young Mie, Hsu, Jordan, Neiman, David, Kou, Colin, Bankston, Levi, Kim, Soo Yun, Heinrich, Richard, Baragwanath, Robyn, & Raskutti, Garvesh. (2018). The stealth media? Groups and targets behind divisive issue campaigns on Facebook. Political Communication, 35(4), 515–541. doi: 10.1080/10584609.2018.1476425

Kollanyi, Bence. (2016). Automation, algorithms, and politics: Where do bots come from? An analysis of bot codes shared on GitHub. International Journal of Communication, 10(20), 4932–4951.

Marland, Alexander J., Giasson, Thierry, & Esselment, Anna Lennox. (2017). Permanent campaigning in Canada. Vancouver, BC: UBC Press.

Nyhan, Brendan. (2018, February 16). Fake news and bots may be worrisome, but their political power is overblown. New York Times. URL: https://www.nytimes.com/2018/02/13/upshot/fake-news-and-bots-may-be-worrisome-but-their-political-power-is-overblown.html [March 18, 2019].

Perrow, Charles. (2011). The meltdown was not an accident. In M. Lounsbury & P.M. Hirsch (Eds.), Markets on trial: The economic sociology of the U.S. financial crisis (pp. 307–330). Bingley, UK: Emerald Group Publishing.

Phillips, Whitney. (2018). The oxygen of amplification. New York, NY: Data & Society Research Institute. URL: https://datasociety.net/output/oxygen-of-amplification/ [March 17, 2019].

Reepschlager, Anna, & Dubois, Elizabeth. (2019). New election laws are no match for the Internet. Policy Options. URL: http://policyoptions.irpp.org/magazines/january-2019/new-election-laws-no-match-internet/ [March 17, 2019].

Reilly, Ian. (2018). Media hoaxing: The Yes Men and utopian politics. Lanham, MD: Lexington Books.

Tenove, Chris, Tworek, Heidi, & McKelvey, Fenwick. (2018). Poisoning democracy: How Canada can address harmful speech online. Ottawa, ON: Public Policy Forum. URL: https://www.ppforum.ca/wp-content/uploads/2018/11/PoisoningDemocracy-PPF-1.pdf [March 17, 2019].

Woolley, Samuel, & Howard, Philip N. (Eds.). (2018). Computational propaganda: Political parties, politicians, and political manipulation on social media. New York, NY: Oxford University Press.