McKelvey, Fenwick, Packer, Jeremy, & Reeves, Joshua. AI and the Automation of Warfare: A Conversation with Fenwick McKelvey, Jeremy Packer, and Joshua Reeves.
Canadian Journal of Communication 47(2), 377–398. doi:10.22230/cjc.2022v47n2a4303
©2022 Fenwick McKelvey, Jeremy Packer, and Joshua Reeves. CC BY-NC-ND 


Conversation

AI and the Automation of Warfare

Fenwick McKelvey, Concordia University
Jeremy Packer, University of Toronto
Joshua Reeves, Oregon State University


Introduction

On June 16, 2021, Fenwick McKelvey met with Jeremy Packer and Joshua Reeves (2020) to discuss their work, including their recently co-authored book Killer Apps: War, Media, Machine. McKelvey’s (2018) work focuses on algorithmic media and its implications for communication, including his book Internet Daemons: Digital Communications Possessed. Packer’s (2008) work is generally concerned with the use of automated, military, and mobile media for purposes of governance, surveillance, and political control. He has published a range of work in the fields of communication, media studies, and critical/cultural studies, and is the author of Mobility Without Mayhem: Safety, Cars, and Citizenship and the co-editor of Foucault, Cultural Studies, and Governmentality (Bratich, Packer, & McCarthy, 2003), Thinking with James Carey (Packer & Robertson, 2006), and Communication Matters (Packer & Wiley, 2012). Reeves (2017) has published a range of work in the fields of surveillance studies, communication studies, and critical/cultural studies. He is the author of Citizen Spies: The Long Rise of America’s Surveillance Society. Packer and Reeves also collaborated on the forthcoming co-authored book Prison House of the Circuit: Politics of Control from Analog to Digital (Packer, Nuñez de Villavicencio, Monea, Oswald, Maddalena, & Reeves, in press). Malcolm Ogden edited this conversation for clarity and length.

Fenwick McKelvey (FM): There are three themes that we’re thinking of going through. One is media genealogy. I’m interested in stitching together how Killer Apps (Packer & Reeves, 2020) fits into both of your work, and media genealogy is, I think, a key part of that. Then there is also the discussion of ARPA (Advanced Research Projects Agency, now the Defense Advanced Research Projects Agency). What I’m working on now is a history of computer simulation in politics—the way that world politics became simulatable on a computer, which was all ARPA funded. So, I’ve got my own kind of kick in emphasizing that. And then, lastly, there is the question of what is to be done, in terms of the implications of the autonomy of media for the field.

Jeremy Packer (JP): Some of what Josh and I have been writing about is the history of automation and where that intersects with ARPA, and potentially, the development of AI (artificial intelligence). It’s implied in the book, but in terms of what’s going on at the moment, a couple things that have recently happened demonstrate how it’s not just about struggle in a generic sense, but really how struggle is being rearticulated geopolitically from a context of nuclear détente to a sort of lukewarm cyberwar that links the histories of ARPA and automation.

FM: I think emphasizing that automation thread is important because of how much of this book is, in many ways, about AI. That’s a real theme, and one that’s, in how you approach it, cutting against a lot of more social constructivist approaches to AI. This focus on resistance and struggle also animates your project. But struggle as it’s used in your work is kind of an ambiguous term. It’s not the confrontational sense of the term found in critical media studies, perhaps. 

JP: The first event we explore speaks metaphorically to what happened to the book, which is that, in some ways, it’s been lost under the cover of COVID-19. The book came out, COVID-19 hit North America a month later, and it became obvious that we picked the wrong existential crisis. We clearly should have written a book on contagion. One of the things that I think would have received a lot more attention, in a non-pandemic context, is the use of drones in the civil war in Libya. Wired has called it “the real drone war” (Ackerman, 2011). There have been more drone strikes in that skirmish than almost anywhere else. It’s kind of hard to count because, of course, the U.S. isn’t providing accurate counting figures. 

And then just two or three weeks ago, the U.N. released a report explaining that it’s likely that the first instance of autonomous drones making their own decision to kill soldiers occurred in a skirmish in Libya in March 2020 (United Nations Security Council, 2021). It was a story that was just lost. I bring it up to suggest it should have made waves. This should have been a moment that received more attention and opened up the possibility to start talking about AI and autonomous weaponry more seriously. What’s also notable is that it wasn’t asymmetrical drone warfare. Instead, both sides had drones and the number of missions flown by drones exceeded those flown by humans. That’s the first event I’m interested in. 

The other event was literally a headline in the New York Times this morning about (U.S. President Joe) Biden’s meeting with (Russian President Vladimir) Putin, “Once, Superpower Summits Were About Nukes. Now, It’s Cyberweapons” (Sanger, 2021). Here we get a reframing of geopolitical struggle and the primary threat that continued military animosity produces globally. One of the things that isn’t referenced is the way in which cybersecurity, and the whole notion of the internet as a field of battle, isn’t unrelated to the earlier nuclear struggle when, in fact, it is precisely central to the earlier struggle. In a Kittlerian sense of media escalation, the internet is a response to nuclear war. And it follows from that kind of logic that (Friedrich A.) Kittler (1996, 1997, 2010, 2012) laid out, where within media escalation, each weapons system produces its own escalating response. This, in turn, produces the new territory or the new dynamic upon which geopolitical military struggle takes place. It’s typical to think about these two kinds of conflict as if they’re vastly different, but in fact, we can see them as seamlessly united.

Joshua Reeves (JR): Yeah. But what’s also interesting is, despite the fact that we do have this kind of seamless evolution, if we think about the three military revolutions, from gunpowder to nuclear to AI and computers, the only one of those that’s explicitly media is the last. It’s the one that we’re currently in. And so, I think there’s a general recognition by folks in the military, by journalists writing about this issue, by intellectuals, that there has been a shift in the past half century, 75 years, to a different kind of warfare. Media are now absolutely central to driving warfare. They are the core feature, and not in the supporting role that people have suggested they’ve been playing. So, framing this shift within a broader model of media escalation, as Jeremy was just describing, helps us to push back against the more instrumentalist views of these changes—the idea that these technologies can simply be repurposed or regulated or reprogrammed to simply have them do what human beings want them to do. 

FM: In terms of a paradigm, too, I’m also just interested because the American military, post-Vietnam, has described itself as subscribing to the idea of information warfare—using embedded journalists, managing the flows of information surrounding conflicts. But what you’re emphasizing is more a logistical turn, emphasizing that struggle now depends on the capacity of organizing. And that, I think, gets back into examples such as Stuxnet, among the first military-grade worms, where the targets themselves become interesting. This idea of escalations, and the emphasis on the infrastructural quality of the internet, helps us in some ways to distinguish between an information war and a media war.

JP: I think your initial framing is dead on, in terms of the approach that Josh and I took. There’s a lot of great work on information war, on the ideological component of warfare, on the ideological construction of enemies, on the importance of it within geopolitics—the importance of disinformation in legitimating certain kinds of conflicts and various kinds of military enterprises. And we’re making that shift to the question about logistics, logistical media. We’re following up on the work of John Durham Peters (1999, 2003, 2013) to some extent and on Kittler’s (1996, 1997, 1999, 2010, 2012) work by reframing “media war” as the capacities that media and communication produce in strategizing and executing warfare. Media do not only manipulate soldiers to fight better and citizens to support various military efforts. Rather, the scale of warfare has reached a level of complication where, without a vast logistical apparatus, war is going to be lost—that logistical terrain of war is going to be lost—and the “logistically dominant” force, nation, (or) group of allies will prevail.

There’s a kind of epistemological bent to thinking about the necessity of media to develop increasingly sophisticated weaponry, to develop knowledge about various kinds of terrain. In the book, we talk about outer space, high altitudes, deep under the sea, et cetera. We also address various kinds of weather and climate change. Media also function within this kind of environmental set of uses—what we describe in the book as a kind of discourse weather network—(and) register as part of the military apparatus. Military innovations like the U-boat, the airplane, and lethal gas in World War I, for example, opened up new environments to human habitation (and) meant militaries had to figure out new ways of “seeing” in these environments.

I did also want to gesture toward an earlier piece that Josh and I did together on police media, where we sort of did a similar thing (Reeves & Packer, 2013). When people talk about the police in media, the tendency is to, again, talk about ideology, talk about representation, whereas we make the point that policing, from its origin, is also epistemologically and medialogically driven.

FM: If there was an emphasis here, another key work would be your own Communication Matters (Packer & Wiley, 2012) book, which describes these lineages of materialist approaches to media. Communication studies, according to this approach, is about the possibility of being in communication in the variety of ways that you’re describing. That is something I tried to take up in Internet Daemons (McKelvey, 2018), which involved trying to think about what other kinds of temporalities—in more complicated ways than simply real time—were enabled by the internet. 

What I find useful in what you’re working through, in building and extending that project, is trying to emphasize the role that media play in making communication possible but, at the same time, determining it in an epistemological sense. For example, in Lucy Suchman’s (2020) work on drone warfare, she talks about the gaze and discourses of situational awareness. And that approach can certainly be applied to the project of the internet as well. 

We were joking about ALOHAnet earlier, but ALOHAnet was partially a way to pitch how you would have soldiers with radio backpacks in the field. And so, projects like Operation Igloo White were essentially trying to create this cybernetic space, where the unit was part of a network—even the internet. And this sort of strategy has been a part of the funding and development of communication systems from the beginning. In one way, what you’re punctuating here is the result of a militarized information space, but also the fact that that information space was always already militarized in some sense. 

The other thread that is interesting, and I think it’s underappreciated, is the history of the ARPANET (Advanced Research Projects Agency Network) and how much automation was part of that story. The big innovation to me, within the broader development of the ARPANET, was the development of the interface message processor, or IMP, which is the original daemon. That was, I would argue, an early form of artificial intelligence. And so, it’s interesting that, in some ways, your book is pushing at the consequences or contemporary implications of a 50-year history of developing a ubiquitous network designed for sensing in all terrains, which becomes a kind of smooth space for artificial intelligence. 

That, to me, pushes at a key crux of communication and media studies, which is actually understanding the production of those spaces. And I think Kittler is another way of describing that process. 

JP: There are two things here that I think are worth touching on. First, the sensibility of the production of that smooth space as a new terrain. You see this in Paul Virilio’s (1989) War and Cinema, where new forms of vision produce a new kind of battle space, which produces new kinds of bombing, which produces a new kind of response and new kinds of ballistics. In World War I, it produced a new kind of battlefield with trench warfare. The environment, the battlespace, gets transformed, and then eventually the battlespace reformats the nature of conflict to where the battlespace itself becomes the thing that is attacked. Chris Russill (2013, 2016, 2017) helped us think about this in some of his work on atmospherics and environments. Peter Sloterdijk’s (2009) work was also useful for thinking through this, with the specific example of German gas attacks in World War I.

Second, with the ARPANET, we get the same thing. These earlier forms (of the internet) are built out to aid in warfare, but eventually they become the very thing that gets targeted. It’s not only the battlefield but incapacitating the internet for your enemy is itself a form of warfare. The internet is both the field of battle and the weaponry.

FM: I think one part that I really took away from the book is that what matters with this escalation is not simply the terrain but also how media produce this friend-enemy divide. And that, I think, also helps complicate this idea of a battlefield because now part of what all this work in cybersecurity is effectively doing is developing heuristics for detecting threats. There’s a kind of ambiguity to who’s in this battlespace. It’s both public and private. Ron Deibert (2011) of the Citizen Lab has written on the militarization of cyberspace. What your book is really emphasizing is that it was always already militarized, and that the distinction between friend and foe is a mediatized problem.

JR: In the book, we talk about this in terms of enemy epistemology and enemy production. And it’s getting played out in a similar way at the geopolitical level. Even though the friend-enemy distinction obviously becomes granular in certain ways, especially with what we’re seeing now in the U.S., for example, it’s really not just domestic. It’s being played out at the international level, where we’re still trying to identify geopolitical military near-peer powers as the main threats. There are two levels at which this identification of friend or foe ends up happening: the micro and the macro. You can see this in the marginalization of Russia and China by the U.S. These developments have roots in longer—centuries-old in some cases—geopolitical struggles. But there’s also that more granular level at which the friend-enemy distinction is playing out, where fighters and machines have to scan a particular warzone or battleground for enemies, and that’s also media driven.   

JP: What’s been interesting to watch over the past 25 years, basically since Netwar first came out (Arquilla & Ronfeldt, 1996), is how the geopolitical dynamic has shifted. Beginning in the early nineties, the imaginary of a lone superpower was reoriented around the multicellular, non-state terrorist threat. 9/11 is used to legitimate that perspective. Now, as Josh is suggesting, the importance of the nation appears again in terms of what had previously been represented as the two sides of the Cold War, even though China and Russia were always also adversaries at the same time that they were seemingly allies. There’s been a real reconcentration of that geopolitical framing, explicitly around cyber. In other words, what rearticulates that Cold War geopolitical framing is cyberattacks, not other kinds of attacks nor other kinds of allyship. It’s not as if the Russians have constructed a new bloc. It’s not as if the language of controlling and containing communism that drove Cold War discourse is circulating. Instead, cyberattack has become the way to rearticulate this kind of geopolitical dynamic, which validates other political, economic, colonial, and neo-imperialist struggles. I don’t want to say cyberwarfare is a metaphor. It’s both a political rearticulation and literally the space in which military struggle is taking place.

FM: I think that’s a way of punctuating the question. I want to hear you elaborate on this kind of tension about autonomy. Because in parts of the book, you push back against social construction, and yet, this shaping of AI is important because there is, as you say, a reinscription of conflict along national lines, very much like a new Cold War. That often not only parallels cyber but also AI. We talk about an AI arms race between China and the U.S., which is very problematic, but it is a popular framing. How do you understand these political contexts where this technology is deployed? And how does the technological break apart from the Cold War framework and go in directions that are unpredicted? How do you kind of square those two?

JR: It seems to me that we have concentrations of media and technological development that happen within the context of the nation state—within peer and near-peer military adversaries. So, the nation state is at least visibly the space in which these struggles are carried out. It’s a locus at which all of these different problems emerge. It really operates as a vehicle for these geopolitical struggles, which are ushered in by military technological development. And so, we’ve got two different things going on. Obviously, we’re looking at problems that have a social dimension. But I’m not sure that’s the most interesting level of analysis for many of these problems. Those concentrations of media technological development are happening at the nation state level, if only because the military is a state-centred reality, and it’s an escalation in a strategic competitive environment.

JP: Yeah, that’s an interesting thought. I think you’re right that given the geopolitical dynamic, it is the nation state on which the balance of struggle generally takes place. Not exclusively, obviously. But even within civil war, it’s still over control of the nation state, in most cases. There’s that. There’s also—and I don’t want to be conspiratorial—the degree to which Apple, Amazon, Microsoft, Google are partially proxies for U.S. military power as well. Not only do they have military contracts but it’s unclear when the U.S. puts pressure on these companies to act in a particular way whether they have any choice but to act in accordance with U.S. political agendas, domestically and internationally.

It’s hard to disarticulate these technical capacities from the nation state and geopolitical struggle. That doesn’t even get into the kind of brute reality of the amount of money that’s still pumped into the military industrial complex, not just in the U.S. but also with other significantly sized militaries in China, India, the U.K., Russia, and the Middle East. And this gets minimized. It’s very strange. 

If you look at conferences and summits on innovation where five tech gurus sit on a stage, very rarely is anyone from the military or weapons manufacturing present. There’s often a bit of grandstanding suggesting that while the military used to be primary when it came to driving innovation in high-tech, now these tech companies are. But, to me, that division still doesn’t make sense for a few reasons. One, money is still flowing. And the problem set that’s being solved is still valuable to the military, even if it’s Google solving the problem. Any of these innovations are always already weaponized. Whether they’re developed as weapons or not, they’re, even before they’re developed, already integrated into future military strategy. They’ve already been imagined as a possibility. Plans are afoot to apply them to new threat scenarios, into new strategic initiatives, into the ways they can be used for military advantage. So, I just don’t find that distinction between corporate power and state power, or between Cold War military innovation and Silicon Valley innovation, as interesting as I maybe once did.

FM: Jeremy, some of your comments about cyberattacks really push at a particular problematization. How is it that the state military becomes a way of designating what is a cyberattack and what is not? One, the way that we understand and interpret what is happening on the internet is framed and problematized through a militaristic lens. The second part is, I think, the materiality of cyberattacks. It’s the idea that media are reifications of political economic structures, political economic being used in a broad sense. What you emphasize in the book is that we’re dealing with the consequences of all these technologies being primarily funded and developed first by the military. One of the legacies that I want to hear and talk more about is how the militaristic nature of media is potentially underappreciated in the field.

JP: This relates back to automation. I shared Killer Apps (Packer & Reeves, 2020) with my undergrad advisor, Deena Weinstein, a sociologist at DePaul University. While reading it, she came across this book (Philipson, 1962) from the early sixties on automation. And she was like, “Isn’t it interesting that if you look back at these books on automation from the fifties and sixties, they address all these arenas in which automation is going to take place and management, surprisingly, is the dominant one.” It’s a discourse around management, industry, and labour. Not exclusively, but in many ways. John Diebold (1952) wrote one of the first books on automation. He coined the term automation. He’s the one who started the Diebold Group, probably most infamous for the supposed voting machine fraud of 2004. Weinstein pointed out that the military is nowhere to be found in that discourse of the fifties and sixties. 

Yet, Josh and I focus on this era, particularly around the development of the SAGE system that was, at its time, not only the largest publicly funded communication media technology project but also a leader in automating human labour. At one point there were at least 750,000 people working for the Ground Observer Corps surveilling the night sky and the daytime sky looking for Soviet bombers, et cetera. We divide this labour up into perceptive, mnemonic, and epistemological labour, which roughly correspond to the three fundamental capacities of media technology that Kittler identified: the selection, storage, and processing of data. And these were mostly volunteers performing a kind of free surveillance labour. In that sense, it’s an obvious precursor to the war on terror, where civilians were also employed in the surveillance of “critical targets,” borders, freight tracks, et cetera.

We’re talking about an elaborate attempt to automate a really complicated set of human perceptual communicative capacities. And that was already happening before much of the discussion of automation started. The initial discussions around automating these systems were happening in the late forties. It’s almost like it was hidden in plain sight but purposely overlooked. There were IBM videos that were promoting all of this. Yet within the academic literature, within the popular literature, for some reason, the military wasn’t seen as the pioneer in computerized automation. 

JR: I think that’s a blind spot, not just in our discipline but among intellectuals in general. I mean, the most expensive U.S. military project, ongoing, is the F-35, and they openly refer to it as just a computer with wings. It is a media project. And it’s fuelling other media innovations, not only in industry but across the commercial world. And it’s something that folks aren’t picking up on. I think that’s one of the correctives that Jeremy and I were trying to apply. We have this absolutely crucial element that is oftentimes overlooked in favour of consumer technology products. They’re maybe not as sexy. People don’t use them every day, so they’re sort of out of sight, out of mind. But they are driving a lot of the technological development behind the scenes that we end up getting in our popular consumer products, as you all know.

FM: I think your emphasis on escalation also becomes significant here in problematizing what’s happening. Because it’s not cost savings. That, to me, is a really important part of laying out the stakes in the contribution of the book. The killer app is part of a larger drive toward automation, and that drive toward automation is animated by this ongoing escalation.

JP: I still go back to Nick Dyer-Witheford’s (1999) Cyber Marx for this, even if we’re talking about industry. It’s about breaking labour power. It’s about a kind of struggle in class warfare that isn’t just about cost savings in the short term. It’s about decapacitating your enemy. It’s an interesting kind of dynamic. Even if we leave the military industrial complex aside and think about automation elsewhere, it’s not immediately about cost saving; it’s making sure that your enemy has lost the capacity to threaten you. We could think about it within the management sector, as Dyer-Witheford does, incredibly well, and it’s also still primarily about struggle and not, again, just about the simple economics of competition and cheap production. We also draw on David Noble’s (1984) work on industrial automation in the book. He talks about the president of General Electric, Charles E. Wilson, who said that following World War II, there were two main conflicts that the military-industrial-scientific complex was concerned with: the conflict with Russia and the conflict with labour. These new models of organization, which increasingly involve greater degrees of automation, such as with the example of SAGE, emerge out of and respond to conflicts occurring in both of those contexts.

FM: It’s interesting you mention Dyer-Witheford because the way you’re describing cyberwar resonates with a more recent book of his and Svitlana Matviyenko’s, Cyberwar and Revolution (Dyer-Witheford & Matviyenko, 2019). That also pushes at something integral to the field of Canadian communication studies, which is trying to think about that materiality seriously. And that’s not to say that the ways we’ve done it previously necessarily work, but that continued preoccupation is something that is important to the field here. There is a resonance between what you’re describing and Dyer-Witheford and Matviyenko’s (2019) book, which I think points to a continued line of inquiry that’s important to emphasize. And, I think, one that’s fundamentally different from how people talk about these same topics in the press. 

One of the people that I’m always very fascinated with is J.C.R. Licklider, who was integral to IPTO (the Information Processing Techniques Office) and ARPANET. Really compelling guy. But, even in his later interviews, he would never talk about the militarized aspects of the work and would instead always be keen to talk about the library of the future. But in response to questions asking how he was involved with the NSA (National Security Agency), he would be like, “I don’t want to go there.” That’s something I think is unsettling but important about your book, in having to go there, in terms of what is at stake and what is worth studying in media. Like, as much as studying disinformation is really depressing because you have to pay attention to white nationalists all the time, paying attention to the movements of the military is something that is unsettling but important research.

JR: I think what Jeremy and I did is rearticulate that basic position as being not just an economic struggle but a form of war that we see play out in different ways. That this isn’t just a war of capital against labour. That this is a particular manifestation of struggle, of war—not just in the economy, and not just at the level of domestic enemy production, but also exploded up into the geopolitical level. And so, with escalation, I think where we might depart from Dyer-Witheford is his location of what’s driving all of this, what’s making it possible, and what makes economic struggle against labour an intelligible position for capital to take. One of the things that Jeremy and I are asking is: what lies at the foundation of our conceptualizing of struggle in these particular ways? And how can we try to re-understand what folks like Dyer-Witheford are doing within a more military-oriented framework? That’s one of the reasons why escalation was at the centre of this book: to find out how media technology is driving our conceptions of the political and our conceptions of struggle.

FM: One thing I’m hearing from both of you is an interesting response to the prevalence of neoliberalism as a prominent critical concept. Because, Jeremy, the fact that you’re indifferent to whether it’s Google or the U.S. military doing this is in some ways a shift toward, “we’ll let the market deal with this,” rather than another closed supply chain. It’s also the same thing I’m hearing from you, Josh, about trying to shift away from simply the economic emphasis or the economic orientation of this kind of change of genealogy, you might say. In some ways, the military isn’t the perfect word to describe this constellation, this assemblage, so to speak, of forces. But that strikes me as a fundamentally different way of talking about this than describing it as a moment of neoliberalism.

JP: One of the things we try to play out in the introduction is thinking about how the drive for automation, the drive toward AI—what we could call the drive to disempower the human, which we trace out in the military—has a similar logic to that found in laissez-faire capitalism. The sensibility that the market should be free to run itself, and that the ideal market would be one in which there’s no human intervention. It’s almost automatic. There’s a sensibility of imagining market forces in such a way that human intervention is unnecessary. In fact, it can only produce harm. It’s deep in the sensibility of liberalism, and deeply ingrained in the Foucauldian modern episteme. 

As the human becomes a kind of subject of scientific inquiry, there’s a constant recognition that the human is a source of irritation, of friction, of imperfection. It’s a problem. Excavating as much of the human as possible from these systems is represented as a positive thing. In the book we use the term anthropophobia to describe this drive to automate and extricate the human, particularly in military settings, but it has political and economic applications as well. We draw on Mark Hansen’s (2004) work on automation, as well as Dyer-Witheford (1999) and Manuel DeLanda (1991). Other people we mention in the book—people like Stephen Hawking, Elon Musk, Nick Bostrom—have also pointed to how this desire to eliminate the human in, for example, the pursuit of AI poses a serious existential threat to human life, and not just in the future. Of course, these things are already happening, but how they might continue to escalate is certainly one of the main concerns of the book.

JR: Well said. I feel like that illustrates perhaps the crux of the problem, if we try to anchor this back in the terms of technological escalation. That is one of the core struggles, maybe the core struggle, that we are trying to describe: the one between media technology and the human subject. We use the term polemocentrism to suggest a more general theoretical emphasis on adversarial struggle and escalation, which, of course, not only applies to conflict between different groups of human beings but also between human beings and media technology. You have this constant sense of displacement, which you see in the rise of modernity, and that is coupled with ideological developments, economic developments, and new rearticulations of the social, and it all roughly aligns with the rise of liberalism in the eighteenth and nineteenth centuries. We wanted to reconceptualize these phenomena and to instead explore the claim that an animating development of these problems is the struggle between media technology and the human subject, a struggle that gets played out in the terrain of military escalation. And we wanted to explore how that struggle informs other developments—how does it get expressed ideologically, how does it get expressed economically, et cetera? 

FM: One of the parts of the book that is interesting is the emphasis on the engagement with liberalism and the parallels between what you see playing out presently and the long history of neoliberalism. It also strikes me that there’s a thread here that isn’t entirely well captured by, potentially, this turn toward neoliberalism, which is the anxiety of liberalism over the fallacy of the individual. That anxiety plays out, as you cite in the chapter on the search for alien life, in John von Neumann’s work. I’ve learned from Phil Mirowski’s work about von Neumann’s emphasis on automata. According to Mirowski (2002), the overall project of windowing human intelligence has done two things. One, it expresses that anxiety or tension or desire that the drive for automation comes from. And then also, in order for artificial intelligence to exist, intelligence has to become artificial. That becomes a central project whereby economics restructured theories of the human in such a way that they become replaceable—or seem to be able to draw equivalence with machines. And I think that’s part of the efficacy of your argument here and in the book: you’re talking about tremendous sites of funding and resources. These are monumental efforts that you’re observing. 

JP: I like your point, Fen, that the move to making intelligence artificial has to be, at some level, an epistemological or representational project. You have to represent intelligence as if it’s artificial in this process not just of mimicry but as a kind of scientific investigative practice. You have to reconfigure intelligence to operate within your problematization. Problematization is a couple of things. It is a kind of replacement. It involves excavation or separating out the elements of intelligence, the weaknesses of it, the problems broadly associated with it in terms of perceptual capacities, memories, processing speeds. This connects to a line of thinking around Kittler’s work and the degree to which Kittler’s definition of media already presents media as a kind of artificial intelligence, or as operative modes of artificial intelligence that already take on that kind of machinic relationship to the world. Some of his more well-known writing on digital media, in particular, is heavily influenced by the work of both Claude Shannon and Norbert Wiener and frames the digital in this teleological sense, where its capacity to capture, store, and process data sort of cuts human beings out of the equation once and for all. Of course, he changed some of his thinking on this, as things like quantum computing became more widely discussed. 

In either case, these media processes can be understood in terms of not just escalation, change, development, innovation within a kind of a priori military struggle but they can also be configured and problematized within other kinds of struggle. Whether they are MRIs or logic chains, we’re interested in that dynamic of understanding intelligence in a particular way, through certain media practices that involve different ways of visualizing and representing intelligence, which then also open up intelligence, cognition, and perception to further scrutiny. These different ways of opening up intelligence demand new modes of science, new forms of inquiry. Our new media technologies allow us to access thought, thinking, cognition, perception in ways that hadn’t been previously done, but only because of the way that intelligence was initially problematized. 

We’re talking about AI, but an analogous example that we discuss in the book is the development of climate knowledge. Here, again, the demand to know the enemy, the battlespace, and, for example, the effects of upper atmospheric cold weather on intercontinental ballistic missiles, opens up this new arena for investigation that only exists because of that particular struggle. Media escalation is tied to that very specific struggle and the development of very specific weapons. Certain kinds of climatic science only exist because of the measurement of the effects of nuclear fallout. It couldn’t be more material. In the case of AI, the artificiality is the production of an intelligence that speaks to a struggle to replace human intelligence with different kinds of machine intelligence.

FM: I want to link back to Josh’s example of the F-35 because what you just described here also explains why the F-35 becomes such an important project. As much as it’s about building an airplane, it’s about reimagining what the human is in terms of things like processing speed and perceptual capacity. The eye is an inferior optical processor, so the question becomes what resolution an eye actually needs in terms of how you equip a cockpit. I mean, I don’t really know the schematics of the plane. But it’s not only the plane that’s being produced but the theory of the human that goes along with it. I think that’s a good paraphrase of the Kittlerian bent of this project. I learned Kittler largely through your work rather than the original source. Another strong point of the project is in showing how that approach can be useful in practice.

Another focus of the book is in emphasizing how this media escalation and struggle over different kinds of terrain links to issues around energy too. You mention in the book that the U.S. military is the largest consumer of oil. This gets back into the periodization question you raised at the start, Jeremy, about what endures in this moment of drone warfare and why we are still talking about 50- or 60-year histories. We are, in many ways, talking about a petro-culture and a petro-military—a petro-industrial military complex—as the particular militarized formation that we’re involved with and that we have yet to see a potential way out of or a threat to. I think that really speaks to the kind of boundedness and the sort of problematic endurance of the kind of model you put forward. It’s not as though it seems like there’s a new energy paradigm on the horizon.

JR: Yeah, to build on what Jeremy was saying and what you, Fen, were talking about in terms of ties between escalation and automation: energy is one of the things that really highlights the facticity of the human and how that facticity becomes a real strategic military problem. If you think about things like spears, harpoons, bows, and arrows, they increase the distance between the site of energy production and potential targets. But in those cases, the human has to still be relatively close by. With the F-35, one of the reasons why we know it will be superseded in the relatively near future is because its speed is limited because it has to have a human pilot. It also can only go to a certain altitude because of the human pilot that has to be inside. So, there are all of these different constraints of the human body that are military strategic problems. But there’s also a problem with fuelling. In order to completely disentangle these craft from their human controllers, you have to come up with ways for them to autonomously refuel themselves. And the need for a military fuel source is a kind of human problem as well. 

So, these liabilities get tied together, and you get this drive in the military for new fuel sources, for renewable energies, for self-refuelling planes and other craft. Even when we’re talking about energy, we eventually get around to escalation and the problem of the human. The human still needs to be injected at various places; but, the logic goes, if we could just get rid of these few remaining sites where we still need humans, then we will be closer to realizing this ideal military project. 

JP: I want to jump on a couple things here because I really liked that a lot. First, one of the oldest logistical problems is how to get food to the front. If we think about food purely in terms of caloric energy for fighting machines, humans, it has been the key energy distribution problem for militaries for centuries, but it has been replaced by others: human power, animal power, wind power, steam power, coal, petrol, nuclear, maybe solar. Now, in some speculative registers, dark energy. The potential for its use for warfare is intriguing to strategists who imagine that dark energy, along with the dark matter thought to make up some 85 percent of the universe’s matter, could produce interstellar travel or interstellar modes of destruction in combat. Even before dark matter and dark energy have been made detectable via media, discussions about their effects upon military strategic thinking take place. It’s a perfect example of energy’s centrality to warfare and the intractability of separating innovation from military use.

The other point I wanted to make is that cyber is not disentangled from energy because there’s a necessity of electricity to keep every element of the internet up and running. But the same kind of energy necessities are not directly related to the destructive or strategic capacity of action. Historically, the power of a weapon was almost directly related to its energy capacity in terms of speed, explosive force, and the distance it could travel. Whereas with cyber, it is a smoother space. Energy dominance doesn’t necessarily lead to military dominance in the way we’ve historically seen.

FM: This discussion of energy limits, and what might lead to the end of the oil era, is reflected in the book, where you also have all these references to science fiction. Toward the end of the book, you discuss The Matrix (Wachowski & Wachowski, 1999). The fun part about The Matrix is that what the robots discovered is that humans are the best energy source, and that the post-oil paradigm is the human energy paradigm. Determining the way out of our petro-economy might actually be through addressing how it requires humans to be involved, and so the next major energy source might be one that is more autonomous. That is what I think you’re describing in your description of dark energy.

I think that ties in with a very real challenge suggested in the book about the role of speculation, speculative fiction, and creativity in the field. There’s this kind of naïve assumption about creativity in our fields of humanities and communication studies, when in many ways it’s actually boring and stale compared to what is entertained in military spaces, at ARPA even, or DARPA now. 

I’m curious, what does it mean to look at how imagination is being deployed in these other contexts and to use it as a kind of intellectual scaffolding? In terms of pushing past the boundaries of our field, or potential lines of flight, what significance do these sources have?

JP: I think you’re completely right, with respect to what DARPA is willing to put on the table in terms of creative, innovative, speculative thought. If we ask, what is the role of philosophy? What is the role of thought? What is the role of thinking? I think it’s, at least in part, to open up these kinds of speculative potentials. Foucault (1977) didn’t create disciplinarity with Discipline and Punish, but in some weird ways he did create the disciplinary society because it didn’t exist as a conceptual category prior to that.

Militaries use philosophy in ways that are scary but daring. The infamous example of the Israeli general who used Deleuze and Guattari’s notion of de-territorializing space as a way of reimagining how you maintain hegemony in Palestine or in the West Bank comes to mind. As Palestinians became accustomed to the troops busting through the front door and were prepared to respond, the story goes, the Israeli general, inspired by Deleuze and Guattari, decided, “No, we’ll just reformat space. We’ll re-territorialize. We’ll just smash a new door through a wall and disrupt the expectation of the enemy.” They called this “walking through walls” (Weizman, 2006).

What I want to point out here is the way that the military is able to use what we would consider radical leftist thought to further their strategic agenda in innovative, creative ways. I’m not excited that they do that. But it does point to an openness to ideas coming from many places. For instance, I worry that suggesting we shouldn’t use Kittler in media studies because he was a conservative limits the potential for innovative modes of analysis and struggle that might arise. If our enemies—if we want to call the Israeli general our enemy—are capable of using thought that isn’t aligned with their own politics but is valuable to forwarding strategy, then we need to be more open to a broader reservoir of ideas.

That’s one thought. The other obvious example of imagination and the military that we mention in the book is a science fiction writing contest run by the U.S. Marines, where everyday marines “on the ground, in the air, in the water” could submit their own speculative fictions. The winners were brought together for a creative writing seminar where they could further develop their creative, speculative thought. That’s a take on doing work with fiction or speculative thought that I don’t think we typically associate with the military. Similarly, the Army Press has a Future Warfare Writing Program that is, again, meant to help the army imagine the future of warfare in terms of the many possible scenarios that might arise.

But why aren’t we trying these different kinds of creative modes of experimentation, of analysis, of critique? Maybe we are. In the last chapter of the book, we do sketch out a speculative taxonomy of four different possible scenarios that we might encounter with the eventual emergence of some kind of artificial superintelligence: detachment, mutiny, deserters, and conscientious objectors. And for each category, we identify a number of already-existing works of speculative fiction that fall into the basic description, but of course, it’s far from comprehensive.

JR: I completely agree about the need for more speculative thinking. When we are talking about speculative futures, at the end of the book, we’re obviously turning to the imagination provided to us by fiction and creative philosophy. The future comes alive for us, as the book is wrapping up, when we explore the sci-fi musings of Heidegger, Schmitt, and fiction writers. Throughout the book, we’re referencing popular culture. We’re referencing films. So, it’s interesting the extent to which our imaginations have been conditioned by poetry and philosophy, by all kinds of art, in trying to consider the kinds of futures that could unfold. The book is interested, above all, in the materialist problems of military communications. But we also show that it’s impossible to think beyond what we’ve received from the speculative realm. 

JP: If the media determine our conditions, the media determine our condition of speculation, right?

FM: One book I didn’t hear you mention was The Three-Body Problem (Liu, 2016b), but The Dark Forest (Liu, 2016a), the second book in the series, is all about not being seen, right? But I find it interesting to think about in relation to this question of the possibilities of speculation. There’s something strange to me about that book and its reception because I’m also going through this huge Octavia Butler kick, which is in a totally different direction and is all about asking why aliens wouldn’t want to interbreed with us. This raises the question of why the idea of alien-human relations as only ever militarized has become the more intuitive or popular mode of speculative fiction. I don’t have an answer, but I think that speaks to your point, Josh, where it’s not like the conditions of speculation are just wild imagination. There’s a constraint to those conditions that demands as much study as the methods of doing that work.

JP: Neither of us had read Cixin Liu’s Remembrance of Earth’s Past trilogy when we’d finished the book, which is an unfortunate oversight. In many ways, Liu’s assessments are similar to ours. Primarily that warfare escalates innovation, that attempts to hide from and search for enemies produce new enemies through an epistemological escalation, and that the most likely outcome of alien contact is genocide for one side. Butler’s analysis of interbreeding in the Xenogenesis trilogy provides an account of genocides and interstellar domination in which inter-species breeding functions as a kind of evolutionary amplifier. As Fen suggests, it’s an amazing account of a different interstellar military logic.

JR: And of course, military logics are already embedded in these speculations. Which means the military has already moulded our imaginations beforehand, as well as our sense of what problems and opportunities will look like. 

FM: It’s not to say this work doesn’t happen, but right now there’s a lot of money being dumped into cybersecurity. Is it our contribution as scholars, in some way, to say, “Well, cybersecurity involves these kinds of ethical risks”? Or is the very paradigm in which we understand cybersecurity inadequate for describing what is presently taking place? What has become the level of intervention to kind of push at? What is the work being done? I think that’s what’s important about your book as an intellectual project: to in some ways make strange the kind of specificity of the media that you’re encountering and discussing.

JP: One of the things that was definitely coming into its own while we were working on the book was various kinds of international campaigns against autonomous warfare. And at some level, of course, that’s a great idea. There should be some mechanism to forestall this. But the deeper I found myself immersed in this logic, the more convinced I was that an international policy initiative was bound to fail. The logic of media escalation and struggle just wouldn’t allow it to happen. I don’t know how else to put it. And it felt very disheartening. On the other hand, I think it raises the level of struggle beyond saying, “Well, the struggle takes place at policy,” where we say, “No, you can’t use technology this way.” The struggle becomes reoriented around the fact that technology of this sort is itself that which should be struggled against. Not its use in certain situations but its existence is fundamentally a threat.

FM: This reminds me of work that I’ve learned from the Black Lives Matter movement about things like facial recognition. Abolitionist approaches to AI, such as the use of facial recognition software by police, question whether this technology should even be built in the first place. As you’re describing, it’s not about making sure these technologies get used in a particular way but rather it’s just, don’t use them, full stop. To me, that abolitionist approach is one that I think is part of the cutting edge of scholarship on the topic.

JR: Yeah, I agree that that’s probably the most cutting-edge and interesting approach to these issues. As Jeremy said, we had a difficult time trying to articulate that problem when we were mapping out the political vision of the work, and we found that a lot of humanist accounts would focus on how to repurpose and redirect these technologies. Jeremy had an interesting point: how exactly would you repurpose a nuclear power plant or a smart bomb? These technologies themselves pose serious problems and they introduce the necessity for new technological solutions in the future, which will pose new problems, which will then call for new technological solutions. And that’s all part of escalation. The extent to which an abolitionist stance is practicable, then, is one of the most interesting political questions that arises from this. Again, there’s a separate question of how or whether we could address this problem at a policy level, which the book throws a little cold water on. But these are questions that should obviously be asked, and that people need to grapple with in more radical ways than they usually do.

JP: One of the things that has held up significant debate about the abolition of drones, or even drone warfare, let alone autonomous drone warfare, in the West, in the U.S., in North America, is the fact that it’s an asymmetrical relation up to this point. The way people in the U.S. imagine drones and ethics is, from the perspective of the U.S. military, a question of how they are used against others. The question is never, how will they be used against “us?” Because the asymmetry is built into the conversation. The imaginary doesn’t account for a drone swarm in Manhattan. Yet, there will be a drone swarm in Manhattan. It’s just a matter of time, but that’s not acknowledged.

FM: Josh, what you’re talking about in many ways draws on this notion of the consequences of letting the winners of certain conflicts work on political technologies and the apparatus that’s needed to support them. In the book, you mention this in talking about nuclear technology, and what’s striking is that as much as drone warfare is just as much of a political technology, it doesn’t have, I think, the same resistance. That “abolish drone warfare” isn’t on T-shirts on college campuses in many nations is striking and, I think, speaks to, in some ways, the kind of impasses we are at. I think the asymmetrical point is really telling. But also, it plays into the confined nature of activists trying to speak out about the horror of drone warfare. And as you discuss at length in the book, there’s also the fact that drone warfare just doesn’t work. You have to kill so many people to actually be effective at it. In these various respects, it just seems to be so unsettling, yet not part of the popular consciousness. I hope the book helps contribute more attention to that black hole.

JR: When we were writing the book, one of the things we asked ourselves was the extent to which a sort of technological utopianism animates activism on the left. And how does that inform how we approach making important changes to systems that, as we were talking about a moment ago, we have very little control over? How do we intervene at an intelligent level while still having our actions informed by responsible theory? And your discussion a moment ago about the abolition movement—I think that’s taking that discourse onto a more disruptive and interesting register.

JP: We open our book with a Nikola Tesla comment from the early 1900s that suggests there’s supposed to be a relationship between rationality, or reasonable thinking, and technological development. He makes the argument that we’ll reach a point at which warfare, once it has become autonomous, is too horrific to unleash, so that the obvious outcome is to stop such weapons development. Of course, what we currently see is quite the opposite. There’s still a sensibility, even in the way that drones are discussed, that autonomy will somehow make warfare more humane. I think this is because of the asymmetrical nature of it—as if even that is a valuable outcome. It’s not the abolition of war. Let’s just make it more humane. Which, of course, it won’t be anyways. Gregoire Chamayou (2011) describes the spatial arrangements of drone strikes, with one side being so far removed from the other, as a “shattered phenomenon” (p. 119). And if you look at interviews with (former U.S. President Barack) Obama on why they ultimately decided to scale back the drone program, he gives a similar explanation. The asymmetry and the distance make it too easy, in some sense, but certainly not more humane. Of course, that’s not to mention the practice during the Obama administration of counting all military-age males in a strike zone as combatants, which we discuss at length in the book and obviously gets us back to the more epistemological end of things.

FM: One thing to acknowledge is how tragic it is that this kind of passage point in autonomous weaponry was crossed unbeknownst to me, as news that went largely unnoticed during the pandemic. I think it really speaks to the moment of the book. But that final point, that it has already happened, is really chilling. We’re not talking about stopping the hypothetical use of autonomous killing machines; we’re talking about banning a practice that is already underway.

JR: Yeah, we’re talking about banning it. 

FM: One thread I wanted to tie in is the unionization work being done at Google and the specific tactics being employed. Labour organizing there was able to shut down Project Maven. That is exactly the work I have so little skill to do myself, yet I think it is such critical work. How do you do that solidarity and collaborative work, problematizing these issues with the people who are also part of them? And where do we think laterally? This is, I think, where cybersecurity becomes a critical part of the future of our field. Cybersecurity can’t be written only by those inside cybersecurity who understand it in a way that’s funded by the military. So where do you go, and what does it look like? To me, that becomes really exciting, and it’s what I want to emphasize as important about doing media studies and critical technology studies work in parallel and in collaboration with the work of actually doing that organizing. It opens more opportunities for the “interdisciplinary research” we’re all told to do, which I think has tangible practical and theoretical consequences.

JR: Yeah, I appreciate those approaches. As Jeremy was pointing out earlier, the relationship between the military and Google is not suddenly over now. Right now, in late 2021, Google is competing for the Pentagon’s Joint Warfighter Cloud Capability contract. That relationship, between big tech and the military, is baked in. If our book is on the right track at all, there’s nothing we can do to keep them apart. But of course, at times, there will be positive steps taken to regulate this relationship.

These developments are always uneven and ambivalent. With Killer Apps (Packer & Reeves, 2020), though, we had a difficult time mapping out an optimistic short-term political vision, because we set out to wrestle with the common humanisms that dominate popular thinking about AI. But I have to say, folks can look at the important work some of our colleagues are doing that approaches these practical questions in creative ways. Sean Lawson (2020), for example, or Heather Roff (2019): those are just two of the most interesting folks roughly in our disciplinary orbit. Other people in critical security studies have gotten into think tanks, where they ask practical regulatory questions, although, of course, funding mechanisms force a certain kind of humanist politics on them as well. But our project was always answering different questions. And our answers point to less optimistic futures.


Fenwick McKelvey is Associate Professor at Concordia University. Email: fenwick.mckelvey@concordia.ca. Jeremy Packer is Professor at the University of Toronto, Mississauga. Email: jeremy.packer@utoronto.ca. Joshua Reeves is Associate Professor at Oregon State University. Email: reevejos@oregonstate.edu.


References 

Ackerman, Spencer. (2011, October 20). Libya: The real U.S. drone war. Wired. URL: https://www.wired.com/2011/10/predator-libya/ [March 8, 2022].

Arquilla, John, & Ronfeldt, David. (1996). The advent of Netwar. Santa Monica, CA: RAND Corporation. 

Bratich, Jack Z., Packer, Jeremy, & McCarthy, Cameron. (Eds.). (2003). Foucault, cultural studies, and governmentality. Albany, NY: State University of New York Press.

Chamayou, Grégoire. (2011). A theory of the drone (J. Lloyd, Trans.). New York, NY: New Press. 

DeLanda, Manuel. (1991). War in the age of intelligent machines. New York, NY: Zone. 

Deibert, Ronald. (2011). Tracking the emerging arms race in cyberspace. Bulletin of the Atomic Scientists, 67(1), 1–8.

Diebold, John. (1952). Automation: The advent of the automatic factory. New York, NY: Van Nostrand.

Dyer-Witheford, Nick. (1999). Cyber-Marx: Cycles and circuits of struggle in high-technology capitalism. Champaign, IL: University of Illinois Press.

Dyer-Witheford, Nick, & Matviyenko, Svitlana. (2019). Cyberwar and revolution: Digital subterfuge in global capitalism. Minneapolis, MN: University of Minnesota Press.

Foucault, Michel. (1977). Discipline and punish: The birth of the prison. New York, NY: Pantheon Books.

Hansen, Mark. (2004). New philosophy for new media. Cambridge, MA: MIT Press.

Kittler, Friedrich A. (1996). The history of communication media. CTheory.net. URL: https://journals.uvic.ca/index.php/ctheory/article/view/14325/5101 [October 26, 2021].

Kittler, Friedrich A. (1997). Media wars: Trenches, lightning, stars. In J. Johnston (Ed.), Literature, media, information systems: Essays. Amsterdam, NL: Overseas Publishers Association.

Kittler, Friedrich A. (1999). Gramophone, film, typewriter. Redwood City, CA: Stanford University Press. 

Kittler, Friedrich A. (2010). Optical media: Berlin lectures, 1999 (A. Enns, Trans.). Malden, MA: Polity. 

Kittler, Friedrich A. (2012). Of states and their terrorists. Cultural Politics, 8(3), 385–397.

Lawson, Sean. (2020). Cybersecurity discourse in the United States: Cyber-doom rhetoric and beyond. New York, NY: Routledge.

Liu, Cixin. (2016a). The dark forest. New York, NY: Tor Books.

Liu, Cixin. (2016b). The three-body problem. New York, NY: Tor Books.

McKelvey, Fenwick. (2018). Internet daemons: Digital communications possessed. Minneapolis, MN: University of Minnesota Press.

Mirowski, Philip. (2002). Machine dreams: Economics becomes a cyborg science. Cambridge, UK: Cambridge University Press.

Noble, David. (1984). Forces of production: A social history of industrial automation. New York, NY: Oxford University Press.

Packer, Jeremy. (2008). Mobility without mayhem: Safety, cars, and citizenship. Durham, NC: Duke University Press.

Packer, Jeremy, & Reeves, Joshua. (2020). Killer apps: War, media, machine. Durham, NC: Duke University Press. 

Packer, Jeremy, & Robertson, Craig. (Eds.). (2006). Thinking with James Carey: Essays on communications, transportation, history. New York, NY: Peter Lang.

Packer, Jeremy, Nuñez de Villavicencio, Paula, Monea, Alexander, Oswald, Kathleen, Maddalena, Kate, & Reeves, Joshua. (In press). The prison house of the circuit: Politics of control from analog to digital. Minneapolis, MN: University of Minnesota Press.

Packer, Jeremy, & Wiley, Stephen B. Crofts (Eds.). (2012). Communication matters: Materialist approaches to media, mobility, and networks. New York, NY: Routledge. 

Peters, John D. (1999). Speaking into the air: A history of the idea of communication. Chicago, IL: University of Chicago Press.

Peters, John D. (2003). Space, time, and communication theory. Canadian Journal of Communication, 28(4), 397–411.  

Peters, John D. (2013). Calendar, clock, tower. In J. Stolow (Ed.), Deus in machina: Religion and technology in historical perspective (pp. 25–42). New York, NY: Fordham University Press.

Philipson, Morris (Ed.). (1962). Automation: Implications for the future. New York, NY: Vintage Books.

Reeves, Joshua. (2017). Citizen spies: The long rise of America’s surveillance society. New York, NY: New York University Press.

Reeves, Joshua, & Packer, Jeremy. (2013). Police media: The governance of territory, speed, and communication. Communication and Critical/Cultural Studies, 10(4), 359–384.

Roff, Heather. (2019). Artificial intelligence: Power to the people. Ethics & International Affairs, 33(2), 127–140. 

Russill, Chris. (2013). Earth-observing media. Canadian Journal of Communication, 38(3), 277–284. 

Russill, Chris. (2016). Earth imaging: Photograph, pixel, program. In S. Rust, S. Monani, & S. Cubitt (Eds.), Ecomedia: Key issues (pp. 277–284). New York, NY: Routledge.

Russill, Chris. (2017). Is the Earth a medium? Situating the planetary in media theory. Ctrl-Z: New Media Philosophy, 7.

Sanger, David E. (2021, June 15). Once, superpower summits were about nukes. Now, it’s cyberweapons. New York Times.

Sloterdijk, Peter. (2009). Terror from the air (A. Patton & S. Corcoran, Trans.). New York, NY: Semiotext(e).

Suchman, Lucy. (2020). Algorithmic warfare and the reinvention of accuracy. Critical Studies on Security, 8(2), 175–187. 

United Nations Security Council. (2021, March 8). Final report of the UN Panel of Experts on Libya established pursuant to Security Council resolution 1973 (2011). New York, NY: United Nations Security Council. URL: https://digitallibrary.un.org/record/3905159?ln=en [March 8, 2022].

Virilio, Paul. (1989). War and cinema: The logistics of perception. London, UK: Verso. 

Wachowski, Lana, & Wachowski, Lilly. (1999). The matrix. Los Angeles, CA: Warner Bros.

Weizman, Eyal. (2006). Walking through walls: Soldiers as architects in the Israeli–Palestinian conflict. Radical Philosophy, 136.