Media futures: Premediation and the politics of performative prototypes
First Monday

by Jorgen Skageby

For centuries our interest in the future has spurred more or less spectacular ideas about potential relationships between bodies, minds and media. Today, we are, perhaps more than ever, surrounded by imaginary media technologies. Through advertising and popular culture our desires for — and fears of — the media of the future are enticed. This paper explores how imaginary media technologies are used to conceive of a relationship between failure and solution, and how this relation can be interpreted critically. Theoretically, the paper calls on the notions of performative prototypes and premediation to stress how imagined designs may influence actual technology development, use and imagined interaction. Further, based on the notion that technologies can be interpreted as policies frozen in silicon, the paper applies a form of policy analysis which treats performative prototypes as so-called problem representations (i.e., as relations between envisioned problems and imagined resolutions). Three specific cases of fictitious media futures are then used to propose an analytical dimension of speculative solutions. As a general conclusion, the paper points to how imagined technologies call for a more rigorous discussion of the intentionality and morality of (designed and imagined) machinery; the emergence of cyborg subjectivity; and the normativity perpetuated by designs that potentially limit our imaginable futures.


Imaginary media technologies
Theorizing the tacit politics of foreseen failure and speculative resolution
Three cases of speculative resolutions
Reading imagined media technologies as policies
Conclusion: Performative prototypes as problem representations




Since long before Marshall McLuhan (1964) suggested that all media technologies are extensions of man, we (as humans) have been interested in envisioning our potential media futures. Today, we are perhaps more than ever surrounded by imagined media technologies, as the marketing and hype of new gadgets (Apple products being a particularly obvious case in point) intentionally take place long before the products are actually available to buy, leaving potential customers to imagine how to use them. Taking us even further into the future, imagined media technologies also appear as increasingly sophisticated conceptions in various pop-cultural fictional narratives, both visual and textual. However, while the technologies “soon in a store near you” present themselves as the solutions to all our problems, the technologies “soon in a theatre near you” are more prone to enact failure. As such, this paper takes interest in how the imagined technologies presented to us in futuristic fiction fail, and more importantly, in what manner these failures are later resolved. The relationship between actual technological development and technology as presented in fiction has been discussed from many different angles. For example, Sobchack (2004) argues that (science) fiction mainly works as a way to actualize subjective experiences of technology rather than technological functionality in itself:

“... the essence of the SF (science fiction) film’s technological imagination is nothing technological and that, rather, the essence of the genre is the phenomenologically felt meaning of technology [...]” [1]

Rather than putting all focus on the lived experience, Dourish and Bell (2014) describe how fictional accounts can work as a lens for questioning the social and cultural contexts of technology use, as well as assumptions in actual technological research itself. Similarly, Kirby (2010) points to how imagined technologies work as so-called diegetic prototypes — illustrating both specific functionality and the cultural and social contexts of use. Following this argument, imagined media can be seen as a space of play where certain functionalities, failures and resolutions can be tested. However, they are also manifestations of political agendas when they highlight certain functionalities, failures and resolutions and obscure others. On a more theoretical level, developments in frameworks such as posthumanism, transhumanism and bioconservatism (Bostrom, 2005; Braidotti, 2013) illustrate how both current and imagined technologies now actualize complex ontological, epistemological, and ethical matrices. Consequently, the question at hand for this paper is: acting as cultural probes, what kinds of problems and solutions do imagined media technologies enact, and what are the tacit politics of such problem representations and their suggested resolutions? These questions become important because imagined media technologies have a real capacity to influence both real-world technology design and our expectations thereof. The many machines envisioned by Jules Verne, the communicator in Star Trek, or the tablet computer in Stanley Kubrick’s 2001, are but a few examples of how imagined media technologies have acted as conceptual blueprints for subsequent technical development.

This paper is structured as follows. First, imagined media technologies are introduced and discussed in terms of their connection to real-world development. Second, two theories for considering the tacit politics of imagined media are contrasted. Third, three cases from literature and film are presented. Fourth, a method for reading imagined media as policies (i.e., as problem representations and corresponding resolutions) is proposed and a comparative analysis of the three cases is conducted. Finally, the conclusion suggests that a problem-representation-oriented approach can make alternatives to the prescriptive values embedded in imagined and real-world technologies alike more visible.



Imaginary media technologies

The question of why technologies fail has of course been of interest to many disciplines and scholars. Science and technology studies have proposed a number of comprehensive theories addressing failures as they occur in humans, systems or specific machines. While a complete survey is beyond the scope of this paper, a short outline of some of the most important contributions will help set the frame for this study’s interest in technological failure. By likening technology to the mythological (and inept) creature of the golem, Collins and Pinch (1998) demonstrate how technological failure is intertwined with underlying uncertainties in scientific models. Technology thus carries with it inherent doubts and uncertainties (in values, applications and outcomes), which means that an analysis of failure must recognize that both the problem and (a clear) solution are often more complex than presented. Perrow (1999) proposes the idea of ‘normal accidents’: failures that emerge with the underlying design of a certain system — without trains, no train accidents; without computer code, no viruses — an idea also built upon by, for example, Virilio (2007) and Ihde (2008). This idea puts the neutrality of technology into question — failure becomes a matter of exploring the range of (potential) accidents that a given design makes possible. In an early study of computer-caused accidents, D. Mackenzie (1994) points to an interesting paradox in the design of safe computer systems: by designing and promoting the idea of completely reliable systems, users become overly dependent on and confident in computer outputs. D. Mackenzie thus proposes the necessity to “keep alive the belief that they [computers] are dangerous” [2] in order to retain a critical user attitude. For this paper, a relation between such ‘double-edged’ technologies in the real world and technologies as depicted in fiction must also be established.
In his 2010 paper, David Kirby outlines the relation between real-world design and popular cultural films and coins the term diegetic prototypes. The term diegetic (adopted from film studies) refers to the narrative world within a film — a ‘realistic’ world where an internal logic dictates the agency of human and non-human actants. A diegetic prototype, then, is a (technological) object which is fully functional, or performative, within that diegetic world. As such, it is also a form of imagined media technology with which people engage. In the words of Kirby:

The performative aspects of prototypes are especially evident in diegetic prototypes because a film’s narrative structure contextualizes technologies within the social sphere. Technological objects in cinema are at once both completely artificial — all aspects of their depiction are controlled in production — and normalized within the text as practical objects that function properly and which people actually use as everyday objects. [3]

One of the benefits of diegetic prototypes is that we can imagine and assess technologies that are beyond the current (material) state of the art. The more ambivalent consequence is that diegetic designs represent ordered systems of immaterial media that, much like material designs, have a capacity to change both people’s expectations of technology and actual designer practices (Meadows, 2011). Consequently, like any form of imagined world, diegetic prototypes can instigate imagination but also limit it. We could even argue that through this form of premediation (Grusin, 2004), where we emotionally prepare for the future, we allow (interpretations of) diegetic prototypes to express or even dictate hopes, fears and desires in the present. M. Fuller and Goffey (2012) go as far as to argue that impending or imagined products are even more pervasive and resounding than current ones:

[...] rather than preemption being a means by which the present captures the future, the future, that of a splendid product, mobilizes the present for its purposes. [4]

This inherently political potential for the imagined to affect the realized, or more specifically, for diegetic prototypes to influence actual technology development and use, puts new analytical demands on the circumstances under which diegetic prototypes are designed, enacted and interpreted.

Imagined media technologies can of course be studied in many ways. This paper argues that a particularly interesting point of analysis is located in the relation between failure and solution as depicted through the imagined technologies of futuristic fiction. As Dourish and Bell (2014) point out, failure is a recurring theme when it comes to fictitious descriptions of man-machine interaction. Indeed, from Metropolis to The Matrix, the fictional relations between humans and machines have faltered and broken down. However, these depictions of machine-induced failure at the same time also include speculative ways of coping with that same failure — a resolution that ‘deals with the proposed risks of technology’ in some way. Thus, the argument of this paper is that these resolutions are as critically thought-provoking as the failures themselves. In the light of the frequent reappearance of imagined failure, this paper consequently seeks to examine the extended chain of action that is:

performative prototype → foreseen failure → speculative resolution

Given this chain of action, it is worth noting an interesting overlap between the diegetic prototype as a functional design in its own right and as a ‘plot device’. Diegetic prototypes are neither plot generators only, nor functional designs only. Rather, the prototype holds a technical capacity which can be the source of interesting social, moral, and political problems and resolutions. This double role of the diegetic prototype is the focus of this paper.

To summarize, this paper argues that while foreseen failure is certainly an interesting topic in itself, the themes in how failure is dealt with are equally worthy of analysis. That is, the tacit politics of technologies, and how they are part of problem-solution complexes, need to be revealed and analysed. To echo the introduction, this paper consequently seeks to examine what kinds of problems and solutions imagined media technologies enact, and what the tacit politics of such problem representations and their suggested resolutions are.



Theorizing the tacit politics of foreseen failure and speculative resolution

Science fiction and speculative fiction contain a vast number of differing diegetic prototypes and interfaces (Shedroff and Noessel, 2012). In order to bring focus to the paper, the range of potential objects of study needs to be limited. As such, emphasis will be put on what we may call cyborg relations, and more specifically on brain-machine interfacing technologies. The reason for this is that diegetic prototypes of this kind were seen as having a greater potential to explicitly bring out issues of morality, agency and the human condition.

As one prominent example of this, the so-called posthuman condition has recently been elaborated upon by Braidotti (2013), who develops a theory of the subject in which empirical studies of the capacities of bio-technological bodies come to the fore. Of main interest to this paper, however, is a passage where she discusses the “moral intentionality” of technologies. The view presented is that the lack of intrinsic humanistic agency in technologies renders them normatively neutral.

Against claims to the in-built moral intentionality of the technology, I would claim that it is normatively neutral. [5]

Several arguments are made to support this stance. Besides the fundamental argument that machines will (likely) not be able to make independent moral decisions, Braidotti also argues that more autonomous technologies will depend on pre-defined rules, rather than proper decision-making systems (this in order to support the allocation of responsibility). Further, ethical systems should, according to Braidotti, be based on human consensus — that is, correspond to what “seem right to most people” [6].

As a counterpoint to Braidotti’s propositions, we may refer to Verbeek (2011), who argues that technologies co-shape moral decisions and should therefore be assigned a certain moral intentionality (or even agency). Designers must consequently take responsibility for the moral intentionality of the technologies they design and develop. As the reach of design — through a continuously changing relation between “the natural” and “the artificial” — expands (Margolin, 1995; Merrick, 2005), this moral obligation becomes even more pertinent. The ethics involved in design that goes ‘beyond the human’ subjects issues of agency, morality and corporeality to new scrutiny. Verbeek builds on Ihde’s (1990) classical analysis of human-technology relations and highlights how future technologies are likely to extend beyond our previous relations to technology. He proposes two new types of relations, which he refers to as cyborg and composite relations respectively. In the former, there is no separating the human from the technology: they enact one another, co-shaping experiences and intentions in the world; in the latter, technology has an agency of its own to act upon the world. Both these types of relations seem pertinent in a discussion of imagined media technologies. However, Verbeek also stresses that the specific relationship between human and technology may vary widely within these types of relations. This paper takes an interest in how these relations may fall apart and how failure can enter the human-technology relation. Imagined media technologies provide us with a kind of cultural probe that we can use to develop analytical dimensions and frameworks to address the variety of human/technology relationships.

Diegetic prototypes can be regarded as culturally circulated examples of technological imagination. The imagined failures, and their corresponding solutions, shine a light on norms of success and failure — norms which in turn become generative for future diegesis. Therefore, we may question the role of diegetic design in promoting certain norms of success and failure. Taking this thought further, scholars have argued that our current (media-augmented) model of progress may be too limited (Light, 2011), and have even suggested that we now live with “evil media” and “the evils of design” which carefully augment certain possibilities and, equally thoroughly, obscure others (M. Fuller and Goffey, 2012; Nelson and Stolterman, 2012). Admittedly, this soon develops into a gray zone, where the central question becomes from whose perspective success and failure are to be defined. For Nodder (2013) the definition of evil design comes down to “[...] that which creates purposefully designed interfaces that make users emotionally involved in doing something that benefits the designer more than them” [7]. Still, rather than pointing fingers and trying to locate ‘where evil dwells’ once and for all, it seems more productive to think of technology as (parts of) situated and augmented assemblages that also enable/disable distributed agencies. Following this argument, evil can also be modelled as distributed and augmented throughout all parts of the assemblage, technology included.

So, as with all forms of creative work, imagined media technologies are political in that they make certain opportunities obvious and others murky (even more so, perhaps, by augmenting a preferred reading through a narrative context). Another option then becomes to rethink failure in relation to the current model of success:

We can also recognize failure as a way of refusing to acquiesce to dominant logics of power and discipline and as a form of critique. As a practice, failure recognizes that alternatives are embedded already in the dominant and that power is never consistent; indeed failure can exploit the unpredictability of ideology and its indeterminate qualities. [8]

Halberstam’s notion of failure gives us reason to question what we mean by success (and failure) and which criteria we use to evaluate these notions. This point is taken to an extreme in a recent “retrospective” paper on the future development of human-computer interaction, where the authors suggest that the design agenda of today is “tirelessly focused on the improvement of technology to make it more usable, accessible and fun, while simultaneously more ubiquitous, hidden and capable of understanding and controlling the behaviour of humans” (Kirman, et al., 2013). The consequence of such an agenda, according to the authors, is the ultimate (human) failure. So, while much attention has been given to the empowering potentials of new media technologies, through such practices as remixing, appropriation, and prosumption, not enough attention has been given to how design, both intentionally and unintentionally, disempowers certain agencies and subjectivities. These gray zones of agency and morality problematize the claim that technology is normatively neutral (regardless of whether technologies possess any agency in the humanistic sense of the term or not). As such, there are a number of critical inquiries that can be made.

First, if we are to see assemblages of human and non-human actants (i.e., cyborg relations) as (more or less) coherent units of analysis, how is it possible to disentangle technology from this “mess” of human/non-human transversal relations and say that technology is “normatively neutral” while all other parts are not? The fact that technologies may rely on “pre-defined rules” only means that decisions have already been made. Further, as pre-defined rules grow more sophisticated (as is the case with complex algorithms), who is to say when, and by whom, a decision is made? Rather, many theorists now argue for an extended view of agency, in terms of, for example, secondary or distributed agency (Kitchin and Dodge, 2011; A. Mackenzie, 2006). To this, we may add algorithmic agency to emphasize the growing trust that is put, knowingly or not, into systematic processing of data and information. Thus, a strict view that equates the lack of “intrinsic humanistic agency” with a normatively neutral technology seems to fall short of including the many ways in which information is already being shaped by technology, and is thereby already part of a cyborg relation with a distributed morality and agency.

Second, designers (sometimes) deliberately shape technologies to fit certain political and economic agendas (Nodder, 2013). This seems to be the case with large tech companies such as Google, Facebook or Apple: repeated news stories reveal that functionality originally conceived and presented as improving conviviality, sociability and usefulness has effectively been turned around by the same companies and is now used for privacy-invading, ethically doubtful purposes. Were these moral consequences designed in from the beginning, or are they “just” unintended consequences? Does it matter? Should we not hold designers accountable, and thereby also politicize unintended consequences to a greater extent? In fact, much like fashion studies or film studies include the analysis of directors and designers, so should media studies comprise interaction criticism, design processes and the circumstances under which media technologies are developed.

Third and finally, seeing technology as normatively neutral seems to propose a universal and disembodied conception of technology. Is it not so, however, that a killer drone is different from Facebook, which is different from a financial algorithm, which is different from Google Glass, and so on? Conflating all these gadgets, and their surrounding practices, into Technology (with a big T) seems to overlook the media-specificity of each technology. The specific machinic or algorithmic capacities to shape information make different perceptions possible, and thereby also different moral effects.

As a consequence, it would seem that we (as humans/cyborgs) have no incentive or capacity to produce change via technology if all technologies are regarded as neutral. In fact, one important way to distinguish between two technologies is to identify one with a value or a norm that is missing in the other (Margolin, 1995). The premise of design science is thus that technology, implicitly or explicitly, supports certain needs, values and norms (Julier, 2014). Further, these (sometimes artificial) needs and (increasingly questioned) values are now so ubiquitous that it is hard for us to break, resist or fail them without considerable discomfort. We surrender to hegemonic models of being and becoming due to a fear of failing, and media technologies often catalyse this capitulation. Diegetic design failures and speculative solutions are tools that can help us construct models that identify such norms and values, but these models also need to be charged with norm-critical theories on what constitutes success and failure in a specific context. Designers, like researchers, cannot be allowed to perform a “god-trick” (Haraway, 1988) of creating purportedly aloof, apolitical and unbiased (i.e., normatively neutral) technology. As such, a normative neutrality of technology seems to have little to offer anyone interested in analysing or bringing about change through technological means. Instead, it can only acknowledge detachment.



Three cases of speculative resolutions

As mentioned, this paper has no room to include a complete survey (let alone analysis) of all relevant examples of diegetic cyborg-relation failures ever conceived of in popular culture. Instead, we will focus on three examples that can bring about an interesting tension in terms of how imagined technologies fail and how failure is dealt with. The first is from literature, namely Greg Egan’s short story “Learning to be me”; the second is the episode “The Entire History of You” from the critically acclaimed British TV series Black Mirror (Armstrong and Brooker, 2012); and the third is the Japanese anime Ghost in the Shell (Oshii, 1995). These three examples will be used to extrapolate an analytical dimension through which normativity in relation to the design of cyborg and posthuman relations can then be discussed.

Learning to be me

Egan’s short story begins with the words: “I was six years old when my parents told me there was a small, dark jewel inside my skull, learning to be me.” The jewel is an advanced technology designed to mimic the human mind. The jewel gradually improves in its capacities in a process that usually stretches over some 20 years. The purpose of this technology, as it is described in the story, is to make decaying brain tissue superfluous. So, once the jewel has “learnt to be me”, so to speak, people take part in a procedure called the switch, where the brain is physically removed from the skull, and the jewel is ‘put in charge’ instead.

However, in the story the jewel becomes self-aware some time before the switch takes place, creating a body with two conscious minds. From there the story develops into a battle for subjectivity, from which the jewel eventually emerges as the winner. Notably, the story contains some brief encounters with (heterosexual) love and, interestingly enough, the jewel also develops an ambition to get a Ph.D. As an autobiography, “Learning to be me” unfolds as an interesting story in which an increasingly conscious simulation literally becomes the narrator. From a strictly humanistic viewpoint, the solution put forward here can still be regarded as a failure (i.e., the jewel claiming subjectivity). Nevertheless, there seems to be a recurring cultural attraction to cases where a simulation lives on.

The Entire History of You

The next example of cyborg relations is taken from the episode “The Entire History of You” from the TV series Black Mirror (series 1, episode 3). This episode depicts a future where most people have installed a memory-augmenting technology called “the grain” into their brains. The grain works a bit like a photographic memory — it records video of what you see and makes it available for instant reviewing. In short, the protagonist — a white, heterosexual male — with the help of this technology comes to suspect, and eventually conclude, that his wife is cheating on him. He is thrown into a feverish socio-technical review of himself and his social history. The episode emphasizes how the abundance of ‘perfect memories’ can create an overload in terms of obsessive social control, constant self-judgment and reciprocal vindication, and how disrupting the instrumental power of perfect memory is a political action — we need to stop this technology before it becomes so ubiquitous that we will not notice or criticize it [9]. The solution for the main character, then, becomes to ‘purify’ his body of this technology. In an effort to return to how things were before, he physically removes the grain from his brain. From a critical perspective, this speculative solution can be argued to illustrate a recurring display of a very heteronormative and dichotomous imagination.

Ghost in the Shell

Ghost in the Shell (Oshii, 1995) takes place in a future megacity, where Major Motoko Kusanagi is chasing the hacker known as ‘The Puppet Master’. Most citizens are in some way technologically augmented (cyborgs), even though there are also people who carry only small implants. These modifications encompass both bodily and cognitive changes. Due to this convergence of the informational and the corporeal, the Puppet Master is afforded the possibility of ‘hacking’ people and controlling their behaviours. A central theme in the movie is Motoko’s personal crisis around what actually constitutes her being and what humanity is. Is the body just an exchangeable shell for a consciousness? While the title may at first glance express support for such a dichotomy, it is far from articulated in the movie itself (in fact, the title Ghost in the Shell was chosen by Shirow in homage to Arthur Koestler’s The Ghost in the Machine). Rather, the movie shows Motoko’s personal journey in a time when ‘the human’ is not so easily defined anymore and subjectivity emerges as a complex interplay between various agents.

Subsequently, the movie reveals the Puppet Master as an emergent form of conscious entity born “in the sea of information”:

I refer to myself as an intelligent life form because I am sentient and I am able to recognize my own existence, but in my present state I am still incomplete. I lack the most basic processes inherent in all living organisms: reproducing and dying. (the Puppet Master)

Thus, the Puppet Master seeks to merge with a physical body that can provide it with a diffracting range of material and corporeal agency. Motoko, who has also been seeking new ways to rewrite her subjectivity, allows this unification to take place. Having provided this necessary abridgment of the cases, the paper now moves on to discuss how the three examples enact the relation between performative prototype, failure and resolution.



Reading imagined media technologies as policies

Although the notion of problem-solving is only one aspect of design (Schön, 1988), this paper will use that aspect as an entry point to understanding the sequence that moves from performative prototype through foreseen failure to speculative resolution. As a methodological consequence, I argue that the relation between (diegetic) design failures and (diegetic) design resolutions can be fruitfully and critically analyzed as problem representations. Drawing on Bacchi (2009), I propose that design problems and resolutions exist in a cyclic and coordinating socio-material relationship, where the problem and resolution ‘configure’ each other. Because design is governed by problematisations, we need to study problem representations, including their premises and effects. Adhering to such an approach, we propose that design co-creates (what constitutes) problems, and that we should question how these (design) problems are conceived of, and ultimately how design comes to govern us. While Bacchi subscribes to the idea that problems are social constructions, it is also important to recognize that the resolutions to problems become meeting points for both material and discursive realities. As mentioned, even diegetic prototypes have an impact on material future products and expectations thereof. Taking inspiration from Bacchi, this paper poses four questions that I argue can be productively applied to the relation between problem and solution within a specific diegetic design space:

  1. What does the diegetic prototype do? What performative capacities does it hold?
  2. What failure(s) are envisioned as emerging from the (individual or cultural) use of the diegetic prototype?
  3. How are these problems, in turn, resolved?
  4. What norms are expressed by this resolution? What alternatives were considered (if any)? What is left unproblematic? Where are the silences?

Because design is a practice where projecting solutions to problems is a key activity (even in diegetic design), the notion of studying the politics of performative prototypes as problem representations is compelling. A critically informed analysis of diegetic prototypes consequently becomes concerned with destabilizing and assessing the limits of the problem space and the ambiguities between normative design doxa, and how these can traverse the im/material matrix. Echoing the introduction of this paper, the overarching frameworks of transhumanism, posthumanism and bioconservatism become useful for understanding this problem space and its ambiguities. Put very simply, transhumanism is concerned with scientific and technological ways to overcome (or radically transform) various human ‘bodily and cognitive failures’ (e.g., illness, age, death); bioconservatism is a more reactionary position, which emphasizes the importance of a ‘purity’ or ‘naturalness’ of being human; and posthumanism seeks to question anthropocentrism and its consequences for other (potential) agencies in the world (S. Fuller and Lipinska, 2014; Hopkins, 2008; Roden, 2015).

A dimension of speculative solutions

The first case, the Jewelhead, is what we may call a transhumanist take on failure and its corresponding solution. Without going into detail on transhumanist history, theory and practices (which are rich and varied), the Jewelhead conforms to the overall transhumanist ambition to overcome the limitations of the human body. Failure is conceptualized as a struggle over subjectivity where only one subjectivity can prevail: the brain or the jewel. The solution is thereby the termination of the subjugated subjectivity, in this case the brain. The second example, the grain in Black Mirror, is more nostalgic in its desire to return to the ‘pure body’, uninfected by technology. In the context of this particular case, it also seems important to highlight that the main reason for this resolution is the protagonist’s discovery of his wife’s infidelity — arguably a signifier of a very heteronormative imagination. Other types of relations would of course express other power differentials, but in this case, the resolution can be interpreted as fairly norm-conservative. In summary, the two examples come to form a dichotomy between the pure simulated mind (where the body is only decaying human tissue) and the pure body (where technology disrupts our ‘normal’ sense of subjectivity). The third example, Ghost in the Shell, however, is more nuanced in its depiction of the relation between failure and resolution — it is what we may call posthuman. Very briefly, posthumanism seeks to criticize, or rather problematize, the anthropocentrism that has dominated the humanities for so long: “The starting point is the illusion of a generic human, an abstraction without nationality, gender, sexual orientation, age or physical challenges” [10]. Dichotomies such as nature-culture, subject-object and man-animal are systematically questioned. Further, posthumanism holds a special relationship to bodies and technologies.
As Heinricy (2010) puts it: “Posthumanists don’t want to escape bodies, but instead want to know how body and consciousness change with various configurations of body and technology.” [11] The view expressed by Motoko in Ghost in the Shell can thus be interpreted as clearly posthumanist in essence (McBlane, 2010). She seeks to rewrite her subjectivity in a new and radical way and explores the porous borders between man and machine highlighted by her technologically modified body. In a key scene Motoko expresses the tensions and dissonances she experiences:

We have the right to resign if we choose. Provided we give the government back our cyborg shells ... and the memories they hold. Just as there are many parts needed to make a human a human, there’s a remarkable number of things needed to make an individual what they are. A face to distinguish yourself from others. A voice you aren’t aware of yourself. The hand you see when you awaken. The memories of childhood, the feelings for the future. That’s not all. There’s the expansion of the data net my cyber-brain can access. All of that goes into making me what I am. Giving rise to a consciousness that I call “me”.

Motoko questions the strict dichotomies between mind and body, technology and nature, and proposes a view of the body as modular (a megastructure of sorts), where different parts can be included, modified and exchanged (Dinello, 2010). As such, she also expresses a holistic view of body and spirit in which the collaboration between the parts generates emergent capacities that amount to more than the sum of the parts themselves. Culturally, this could be traced further back through the fascination with ghosts, demons, the supernatural and even the ‘transhumanist’, which are prominent in Japanese culture (Dillmann and Schneider, 2008). However, being a more complex account, Ghost in the Shell also shows certain drawbacks of cyborg relations. Motoko is arguably betrayed by a commercialized and corrupt state that, by controlling the technological modifications of her body, has made her into a tool in its service. Further, the movement from mnemotechniques to mnemotechnologies (Stiegler, 2010) codifies neural processes and opens them up to hacking and surveillance. Nevertheless, the resolution of merging Motoko (a cyborg) and the Puppet Master (a computational entity) becomes a non-nostalgic way of mounting resistance and demanding the right to shape her subjectivity herself (even intersubjectively, or as an appendage to another lifeform). In no way is Motoko grieving her ‘lost humanity’. Rather, she enacts an anticipatory ‘power to live’ (MacWilliams, 2008) and evolve, which indicates a fundamentally posthuman stance in Ghost in the Shell.

It should be noted that Ghost in the Shell has also been criticized for seducing the audience into thinking that it represents a subversive criticism while, more subtly, still perpetuating limiting norms, such as false hopes of (technology-driven) liberation and social justice (Silvio, 1999). These are certainly legitimate objections. However, in relation to a more intense belief in the surpassing of the human body (transhumanism) or the longing for the pure human body (nostalgic bioconservatism), they are also concerns that belong to the realm of the posthuman.


Table 1: Three types of diegetic design failures and corresponding resolutions.

  The Jewel
    Diegetic prototype: the mind can be emulated; the brain is a decaying vessel
    Failure: battle over subjectivity as the jewel reaches self-awareness before the switch
    Resolution: subjugated subjectivity terminated; “pure” simulated mind perpetuates

  The Grain
    Diegetic prototype: photographic memory
    Failure: retrospection becomes a way of life (and, specifically, reveals a threat to heteronormativity)
    Resolution: return to the “pure” body by removing the technology (break the loop)

  Cyborg bodies
    Diegetic prototype: cyborg bodies with a range of corporeal and cognitive enhancements
    Failure: hacking of minds, control, surveillance and cognitive dissonance
    Resolution: merge between cyborg and informational entity, giving birth to “a new lifeform”


Having identified the different approaches to resolution that these cases present, I, much like Chu (2010), propose that the three cases can be arranged along an analytical dimension along which opposing viewpoints on the status of technology, the body and the mind are expressed. More specifically, the two endpoints of this dimension are, first, the (hard) transhumanist perspective of the pure mind, which can be transferred to and contained within machinery, and second, the nostalgic bioconservative perspective, which holds that a technological co-construction of subjectivity necessitates a subsequent purification of the ‘normal’ body and mind. In between these two extremes lies a posthuman spectrum where the purity of both mind and body is questioned.

(Hard) Transhumanism | Posthumanism | Bioconservatism

Black Mirror and Jewelhead both display somewhat dystopian storylines of self-centeredness and power played out through cyborg relations. While both the transhumanist and the nostalgic viewpoints are persistent tropes in pop culture, the ambiguous posthuman space in between these two opposing viewpoints (Halberstam, 1991) seems more capable of providing a sustained basis from which questions of intentionality and normativity can emerge, without having to resort to entrenched ‘purities’.



Conclusion: Performative prototypes as problem representations

Arguably, we now live with imagined technologies all around us. In a world of perpetually “new” media, an overabundance of “available soon” versions and future updates are pushed onto us through commercials and technology-rhetorical rumors. Our desires and anticipations for “just-not-yet-available” technologies have indeed proven to be vivid (Skågeby, 2011). Even more futuristic, and possibly more engaging, are the books, games, films and TV series where imagined, and sophisticatedly actualized, technologies play a crucial role. As already mentioned, several researchers have underlined the importance of fiction and failure in relation to technology. Hayles (2006) suggests that fictional man-machine traumas serve “as the archetypal moment of breakdown that brings into view the extent to which our present and future are entwined with intelligent machines” [12]; Lothian (2012) explores futuristic fiction that deviates from (or fails) the recurring dominant narratives; and Thacker (2000) suggests that speculative fiction can configure the future as the conditions of possibility and constraint for social change (i.e., correcting failures) in the present. According to Thacker, science fiction can produce what is essentially a political commentary on the possibilities of imagining radical otherness and difference. The question, however, becomes to what extent radical otherness and difference are actually imagined via imagined media technologies, and more specifically, how these technologies represent and resolve problems. While we certainly need to be exposed to failure in order to remain critical (as suggested by D. MacKenzie (1994)), we must also question the frames within which problems and solutions are presented.

In conclusion, treating a specific performative prototype as a problem representation opens up new inquiries about the ideological foundations and political effects of both artefacts and vocabularies. Indeed, the strategic dimension of defining problems in specific ways may point to previously obscured power struggles, from which new forms of counter-power may emerge. Concrete designs actualize policies — they are, as Banks (2013) puts it, “politics frozen in silicon” — and provide points of reference for understanding how problem representations are materialized, negotiated and interpreted. This paper has only very briefly examined how the diegetic designs and imagined technologies around us tend to keep dichotomous tropes alive. It has questioned the non-neutrality of designed technologies and suggested that a problem-representation-oriented approach can make alternatives to the prescriptive values embedded in technologies more visible.

On a final note, failure is all around us. Several theorists of man-machine relations make the point that failure is a vent through which underlying, wider political issues can come to the surface. While this is certainly true, we must also remember to view the solutions applied to failure with the same critical eye and ask: what does socio-technical failure reveal, and what do the solutions continue to obscure?


About the author

Jörgen Skågeby is an associate professor at the Department of Media Studies, Stockholm University. His research concerns the structures, algorithms, behaviours, and designs of (combinations of) natural, cultural and artificial systems, which store, process, access, and communicate information. Theoretical frameworks include material interactions theory, media archaeology, and design fiction. Skågeby’s work is regularly published in renowned international journals, recently including for example Popular Communication, Convergence and Journal of Information Technology.
E-mail: jorgen [dot] skageby [at] ims [dot] su [dot] se



1. Sobchack, 2004, p. 145.

2. D. Mackenzie, 1994, p. 247.

3. Kirby, 2010, p. 41.

4. M. Fuller and Goffey, 2012, p. 101.

5. Braidotti, 2013, p. 44.

6. Braidotti, 2013, p. 45.

7. Nodder, 2013, p. xv.

8. Halberstam, 2011, p. 88.

9. The disadvantages of perfect memory (and its accompanying meticulous and exclusive attention to detail) have of course been explored before. See, for example, Borges’ (1942) Funes the Memorious.

10. Åsberg, et al., 2012, p. 9, own translation.

11. Heinricy, 2010, p. 39.

12. Hayles, 2006, p. 157.



C. Åsberg, M. Hultman, and F. Lee (editors), 2012. Post-humanistiska nyckeltexter. Lund: Studentlitteratur.

D.A. Banks, 2013. “The politics of communication technology,” Cyborgology (4 May), at, accessed 17 January 2016.

N. Bostrom, 2005. “A history of transhumanist thought,” Journal of Evolution & Technology, volume 14, number 1, pp. 1–25, and at, accessed 17 January 2016.

R. Braidotti, 2013. The posthuman. Malden, Mass.: Polity Press.

S.-Y. Chu, 2010. Do metaphors dream of literal sleep? A science-fictional theory of representation. Cambridge, Mass.: Harvard University Press.

H. Collins and T. Pinch, 1998. The golem at large: What you should know about technology. Cambridge: Cambridge University Press.

C. Dillmann and U. Schneider, 2008. “Grusswort,” In: M.-C. Menzel (editor). Ga-netchu! Das Manga-Anime-Syndrom. [Berlin]: Henschel, p. 8.

D. Dinello, 2010. “Cyborg goddess,” In: J. Steiff and T.D. Tamplin (editors). Anime and philosophy: Wide eyed wonder. Chicago: Open Court, pp. 275–285.

P. Dourish and G. Bell, 2014. “‘Resistance is futile’: Reading science fiction alongside ubiquitous computing,” Personal and Ubiquitous Computing, volume 18, number 4, pp. 769–778.
doi:, accessed 17 January 2016.

M. Fuller and A. Goffey, 2012. Evil media. Cambridge, Mass.: MIT Press.

S. Fuller and V. Lipińska, 2014. The proactionary imperative: A foundation for transhumanism. Basingstoke: Palgrave Macmillan.

R.A. Grusin, 2004. “Premediation,” Criticism, volume 46, number 1, pp. 17–39.
doi:, accessed 17 January 2016.

J. Halberstam, 2011. The queer art of failure. Durham, N.C.: Duke University Press.

J. Halberstam, 1991. “Automating gender: Postmodern feminism in the age of the intelligent machine,” Feminist Studies, volume 17, number 3, pp. 439–460.
doi:, accessed 17 January 2016.

D. Haraway, 1988. “Situated knowledges: The science question in feminism and the privilege of partial perspective,” Feminist Studies, volume 14, number 3, pp. 575–599.
doi:, accessed 17 January 2016.

N.K. Hayles, 2006. “Traumas of code,” Critical Inquiry, volume 33, number 1, pp. 136–157.
doi:, accessed 17 January 2016.

S. Heinricy, 2010. “Take a ride on the catbus,” In: J. Steiff and T.D. Tamplin (editors). Anime and philosophy: Wide eyed wonder. Chicago: Open Court, pp. 3–11.

P.D. Hopkins, 2008. “Is enhancement worthy of being right?” Journal of Evolution & Technology, volume 18, number 1, pp. 1–9, and at, accessed 17 January 2016.

D. Ihde, 2008. Ironic technics. [Copenhagen]: Automatic Press/VIP.

D. Ihde, 1990. Technology and the lifeworld: From garden to earth. Bloomington: Indiana University Press.

G. Julier, 2014. The culture of design. Third edition. London: Sage.

D. Kirby, 2010. “The future is now: Diegetic prototypes and the role of popular films in generating real-world technological development,” Social Studies of Science, volume 40, number 1, pp. 41–70.
doi:, accessed 17 January 2016.

B. Kirman, C. Linehan, S. Lawson, and D. O’Hara, 2013. “CHI and the future robot enslavement of humankind: A retrospective,” CHI EA ’13: CHI ’13 Extended Abstracts on Human Factors in Computing Systems, pp. 2,199–2,208.
doi:, accessed 17 January 2016.

R. Kitchin and M. Dodge, 2011. Code/space: Software and everyday life. Cambridge, Mass.: MIT Press.

A. Light, 2011. “HCI as heterodoxy: Technologies of identity and the queering of interaction with computers,” Interacting with Computers, volume 23, number 5, pp. 430–438.
doi:, accessed 17 January 2016.

A. Lothian, 2012. “Deviant futures: Queer temporality and the cultural politics of science fiction,” Ph.D. dissertation, University of Southern California, at, accessed 17 January 2016.

A. Mackenzie, 2006. Cutting code: Software and sociality. New York: Peter Lang.

D. MacKenzie, 1994. “Computer-related accidental death: An empirical exploration,” Science and Public Policy, volume 21, number 4, pp. 233–248.

M.W. MacWilliams (editor), 2008. Japanese visual culture: Explorations in the world of manga and anime. Armonk, N.Y.: M.E. Sharpe.

V. Margolin, 1995. “The politics of the artificial,” Leonardo, volume 28, number 5, pp. 349–356, and at, accessed 17 January 2016.

A. McBlane, 2010. “Just a ghost in a shell?” In: J. Steiff and T.D. Tamplin (editors). Anime and philosophy: Wide eyed wonder. Chicago: Open Court, pp. 27–38.

M. McLuhan, 1964. Understanding media: The extensions of man. New York: McGraw-Hill.

M.S. Meadows, 2011. We, robot: Skywalker’s hand, blade runners, Iron Man, slutbots, and how fiction became fact. Guilford, Conn.: Lyons Press.

H. Merrick, 2005. “Alien(ating) naturecultures: Feminist SF as creative science studies,” Reconstruction, volume 5, number 4, at, accessed 17 January 2016.

H.G. Nelson and E. Stolterman, 2012. The design way: Intentional change in an unpredictable world. Cambridge, Mass.: MIT Press.

C. Nodder, 2013. Evil by design: Interaction design that leads us into temptation. Indianapolis, Ind.: Wiley.

M. Oshii (director), 1995. Ghost in the shell (Kokaku kidotai). Tokyo: Production I.G., Inc.

C. Perrow, 1999. Normal accidents: Living with high-risk technologies. Princeton, N.J.: Princeton University Press.

D. Roden, 2015. Posthuman life: Philosophy at the edge of the human. London: Routledge.

D. Schön, 1988. “Designing: Rules, types, and worlds,” Design Studies, volume 9, number 3, pp. 181–190.

N. Shedroff and C. Noessel, 2012. Make it so: Interaction design lessons from science fiction. Brooklyn, N.Y.: Rosenfeld Media.

C. Silvio, 1999. “Refiguring the radical cyborg in Mamoru Oshii’s Ghost in the Shell,” Science Fiction Studies, volume 26, part 1, at, accessed 17 January 2016.

J. Skågeby, 2011. “Pre-produsage and the remediation of virtual products,” New Review of Hypermedia and Multimedia, volume 17, number 1, pp. 141–159.
doi:, accessed 17 January 2016.

V. Sobchack, 2004. “Science fiction film and the technological imagination,” In: M. Sturken, D. Thomas, and S. Ball-Rokeach (editors). Technological visions: The hopes and fears that shape new technologies. Philadelphia, Pa.: Temple University Press, pp. 145–158.

B. Stiegler, 2010. “Memory,” In W.J.T. Mitchell and Mark Hansen (editors). Critical terms for media studies. Chicago: University of Chicago Press, pp. 64–87.

E. Thacker, 2000. “Fakeshop: Science fiction, future memory & the technoscientific imaginary,” CTheory, at, accessed 17 January 2016.

P. Virilio, 2007. The original accident. Translated by J. Rose. Cambridge: Polity.



J. Armstrong and C. Brooker (writers); B. Welsh (director), 2012. “The entire history of you,” Black mirror (television series). London: Zeppotron.

M. Oshii (director), 1995. Ghost in the shell (Kokaku kidotai). Tokyo: Production I.G., Inc.


Editorial history

Received 16 July 2015; revised 19 January 2016; accepted 19 January 2016.

This paper is in the Public Domain.

Media futures: Premediation and the politics of performative prototypes
by Jörgen Skågeby.
First Monday, Volume 21, Number 2 - 1 February 2016

A Great Cities Initiative of the University of Illinois at Chicago University Library.

© First Monday, 1995-2017. ISSN 1396-0466.