Ordering space: Alternative views of ICT and geography
by Quinn DuPont and Yuri Takhteyev



Abstract
We analyze two ways of thinking about ICTs in the production of space. One is what we call the “mimetic” view. This view focuses on ICTs’ ability to bring representations from one locale into another. Debates about ICTs and geography have historically been driven by this “mimetic” view and continue to be constrained by it. In contrast, we discuss what we call the “algorithmic” view of ICTs, which focuses on computational re-ordering of representations and subsequent reordering of real-world entities. Recently, scholars of ICTs, communication, and geography have increasingly drawn on examples that fall under the “algorithmic” view, yet the distinction between the two views has not been clearly articulated. This paper clarifies this distinction.

Contents

Introduction
Mimesis: Visual and textual
The algorithmic view
Algorithmic geography
Conclusion

 


 

Introduction

Alexander Graham Bell’s famous demonstration of the telephone in 1877 sparked a discussion about what future forms of remote communication such technology might enable. Soon reports were spreading that Thomas Edison was building a system that would make it possible not only to hear but also to see another person over distance — a videophone. In the following years the videophone started appearing in fiction. Perhaps the most notable of such early accounts is Albert Robida’s Le Vingtième siècle. La vie électrique (The twentieth century. The electric life) trilogy, published in 1890 (see Figure 1). The trilogy describes life in the distant year of 1955 in vivid detail, including many future inventions and their place in the society of the future. One of the devices central to Robida’s futuristic vision is the “téléphonoscope” — or “Télé” for short. Robida describes the Télé as a device consisting of “a simple crystal sheet, flush with the wall or set up as a mirror above a fireplace” capable of displaying images, combined with “a simple telephone” for the transmission of sound [1]. This combination of being able to see and hear at a distance is a “complete, absolute” illusion “as if one were sitting in the front row” of a theater [2].

 

Figure 1: Cover of Albert Robida’s La vie électrique (1890).

 

While the initial introduction of the Télé presents it as a device for remote entertainment (an alternative to visiting the theater), the characters also use it for a variety of other purposes, such as remote work and education. For example, Robida depicts a young woman relying on the Télé for taking courses offered by masters in Zurich without leaving her provincial home in Lauterbrunnen (Figure 2).

 

Figure 2: In Robida’s La vie électrique, a student in Lauterbrunnen takes courses remotely via the Télé.

 

While Robida’s account of the future afforded by the Télé was at times quite enthusiastic, critical perspectives on long-distance interaction soon started to appear. E.M. Forster’s 1909 dystopian novella “The Machine Stops” presents a world almost entirely devoid of face-to-face interaction. People live individually in underground cells that they rarely leave. They communicate with others via “the Machine,” which can make images of remote people and objects appear on a hand-held device described as a round glowing plate. Forster’s characters question how “space is annihilated” by the Machine’s remote communication features. In the following decades videotelephony also started appearing in films, such as Metropolis (1927) and Modern Times (1936), finding a variety of treatments.

While the discussion of videotelephony remained almost entirely speculative in the first quarter of the twentieth century, subsequent years saw gradual progress towards actual videotelephony, culminating in the release of AT&T’s Picturephone in 1964. This transition from fiction to reality led to increasing attention from scholars, in particular in psychology and social science. Such scholarship started to provide an important critical perspective, aiming to evaluate claims made by futurists. The most notable example was perhaps Short, Williams, and Christie’s (1976) book The social psychology of telecommunications, which discussed in some depth what could be expected from telecommunication as a form of remote interaction. They focused on issues such as gaze and eye contact, as well as what the authors called “the coffee and biscuits problem”: if one side is having biscuits, they cannot pass them over to people on the other side.

Human geography and communications scholars began to problematize these new technologies in terms of transport and telecommunications. Janelle (1973) discussed the dialectic of a “shrinking world” and human extensibility, leading to time-space convergence/divergence. While Janelle optimistically foresaw an upper limit in which all major places would lie within 10 minutes of travel of one another, the lesson learned from Pool (1977) was that advances in telecommunications had not actually contributed positively to existing modes of human “linkage.” By the 1980s, geographers were framing these changes as a binary of space and place (Falk and Abler, 1980; Gatrell, 1983), with the latter concept developing into a robust theory of social meaning (e.g., Pred, 1984).

Partially fueled by new technological advances, the discussion of the future “space-annihilating” effects of ICTs resumed full force in the late 1990s, expressed most famously in Cairncross’ (1997) book The death of distance. This book argued that the decreasing cost of data transmission would soon bring about the “death of distance,” which would in turn change the world in unfathomable ways. In subsequent years, the death of distance thesis was subjected to withering critique on several fronts.

One line of attack, what might be called the “psychological” criticism, followed Short, et al. (1976) in pointing out that videoconferencing technologies simply would not match the richness of face-to-face interaction. This line of criticism was perhaps best represented by Olson and Olson (2000). They stressed the importance of physical co-presence for establishing joint references, nuanced social cues, and informal, unplanned conversations.

The second line of attack, what might be called the “spatial” criticism, argued that regardless of the efficacy of information technology, the result is hardly a placeless world (Castells, 1996; Sassen, 2006). Rather, a small number of places have grown in importance, making the world increasingly more “spiky” (Florida, 2008). These regions have grown in importance in fairly predictable capitalist ways (see, e.g., Harvey, 2006); rural areas are in decline, and the global south remains underdeveloped despite access to global telecommunications. This argument is perhaps made most clearly by Brown and Duguid (2000), who pointed out that these technologies have not stemmed the flow of engineers into the San Francisco environs.

Neither the “psychological” nor the “spatial” criticism of the death of distance thesis rejects ICTs’ effect on long-distance interaction. Rather, they demolish the most naive forms of the thesis but leave room for more nuanced arguments. The “psychological” criticism accepts that ICTs enable long-distance collaboration but shows that such collaboration cannot today, and will not in the near future, match the quality of face-to-face interaction. This criticism has, however, spurred research programs that aim to address such limitations as the technology develops. The “spatial” criticism also accepts the effect of ICTs on distance, targeting instead the assumption that effective remote communication would necessarily have a straightforward equalizing effect. The effects of communication, such authors argue, are often more subtle and, quite frequently, benefit those who already have power.

At the same time, the problem of communication at a distance was also taken up by media scholars. Early on, the “Toronto School” led by McLuhan theorized a shrinking world made possible by “prosthetic” ICTs that extend our bodies into space (McLuhan, 2003). For most of the last century such ICTs meant broadcast and mass media, such as television, radio, and film. Starting in the late 1970s, however, the configuration of media began to change as computers were networked and personalized information and communication services were developed — such as e-mail, messaging, and generalized resource sharing (Abbate, 1999). Correspondingly, forms of virtual communication changed, evolving from unitary (conversational) patterns to complex multi-pattern forms (Bordewijk and van Kaam, 1986). So-called “new media” offered a radical departure from the capabilities of mass media; nonetheless, scholars drew numerous linkages between new telecommunications and virtual environments and “old” media, such as cinema (Manovich, 2001).

This multi-disciplinary discussion naturally evolved into questions of what practical uses “distance-annihilating” ICTs could be put to, and how success and efficacy might be evaluated. Computer-mediated education — as first envisioned by Robida, with virtual classrooms — became a significant topic of study, for example: questioning the degree of collaboration possible in virtual places (Acker, 1995); or assessing the challenges of psychological bonding and socialization in a virtual environment (Haythornthwaite, et al., 2000). Similarly, the possibilities of remote work were both lauded and critiqued from a number of different perspectives. Hislop (2002) argued that the carefully crafted nature of knowledge itself makes communication and knowledge sharing difficult in virtual environments. Perceived discontinuities may also arise, challenging the synchronicities necessary for knowledge formation, which is sometimes thought to be the hallmark of modern, high-fidelity remote work (Watson-Manheim, et al., 2012). In real settings, the use of ICTs for computer-mediated communication and collaboration has proven both exciting and disappointing, useful as well as a barrier to expressed goals.

In most cases, research on the effects of ICTs on distance has focused on a particular notion of the representation of space that these media produce. We call this notion the “mimetic view.” In its simplest form, the “mimetic view” involves a focus on ICTs’ ability to function like Robida’s Télé, bringing to the human viewer audiovisual likenesses of people and objects located far away. We contrast this view with the “algorithmic” view, which focuses on the use of ICTs that deploy a range of different representational capacities to algorithmically reorder rather than just transmit digital representations. While algorithms can be used for many purposes, here we focus on their ability to reconfigure representations at a distance.

In both the mimetic and algorithmic views, perceptions of distance and, more importantly, of space itself change with the development and use of technologies that represent the world. Lefebvre (1991) calls these technologized spaces the “spaces of representation” and argues that their symbolic elements exceed their determinations in our understanding. This ability to exceed our understanding is an important characteristic of algorithmically ordered representations, and is usually where ethical choices are present. Both views are representational, but we argue that they produce space differently, and thus produce different effects.

In this paper we argue that the effects of such computed reordering on geography are often characterized in terms of the mimetic view, leading to an accidental backgrounding of important effects and characteristics. Our goal is to draw a sharp (sometimes too sharp) analytical distinction between the mimetic and algorithmic views. We identify three ways in which the algorithmic ordering of geography affects social and economic orders. The first, “routing,” is perhaps the most familiar. We describe how information technologies move people and objects to create new spaces of relationships. As in Kitchin and Dodge’s (2011) famous example of the code/space of an airport check-in lounge, algorithmic technology situates people in relations with each other — their distances, economics, and meanings become part of the complex of code itself. Amazon.com also uses this effect to move items rapidly from lean distribution centers in strategic locations to consumers around the world. Our second example, “algorithmic environments,” is somewhat more radical. Here, technologies are used to create environments that pull in the opposite direction, often quite agnostic of their geospatial grounding. In the case of the Web site GitHub (https://github.com), work is broken into discrete pieces and ordered in complex ways. Similarly, Facebook orders (or even “manages”) personal relationships according to a hidden algorithmic logic, not necessarily prioritizing spatial location. The third and final example, “computing geography,” builds on the radical reordering of “algorithmic environments,” but explicitly takes geographical location into consideration and uses it as a variable for computational ordering. Amazon, again, uses geographical location as an important datum for its cloud services offerings, giving consumers the explicit choice of where to locate data and computing resources. More controversially, the Web site Freelancer (https://www.freelancer.com) promises to do for humans what Amazon does for computing resources. Through a simple application programming interface, human labour can be bought, sold, and re-ordered automatically, in response to programmatic demands. In this dystopia, humans become like Amazon’s boxes, their labour shipped around the world, bought by computers responding to some algorithmic need.

 

++++++++++

Mimesis: Visual and textual

Robida’s Télé and today’s Skype illustrate a particular way of thinking about the role of ICTs and media in the production of space, a view that has often dominated debates about the “death of distance.” We call this the “mimetic view.” As with Robida’s Télé, a likeness of an individual and her surroundings is transported to another, distant place. Consequently, distance ceases to be an obstacle to interaction and communication. Videotelephony, however, is only the thin edge of the wedge. There are many other uses or forms of ICT that cannot be reduced to transmission of “likeness.” Nonetheless, these share important aspects (and limitations) with technologies that rely on such transmission. We thus use the broader concept of “mimesis” to capture the common features of this model.

Mimesis, from the Greek word meaning “imitation,” is a slippery concept that has enjoyed a long history of theorization (Auerbach, 2013). Mimesis is often taken to imply likeness, verisimilitude, or realism; a form of representation that in some ways conveys that there is a “truth” about the represented entity. This special kind of representational doubling has at times been seen as illusory and false, a line of thought going back to Plato. Other authors, starting with Aristotle, have stressed that mimesis produces something that is separate from the being it represents, but which nonetheless can be quite useful. Mimetic representations are often used for delegation where one thing symbolically “stands for” another, sometimes evoking a strong sense of “presence” for the viewer (Prendergast, 2000).

The power of mimesis has much to do with the flexibility of the relationship between the original and its representation, and the contextual variability of the verisimilitude needed for the representation’s ability to “stand in” for the original. The domain of visual art illustrates some of this complexity. Some forms of painting aim for a strong degree of realism, giving the viewer a strong sense of presence. Others, such as abstract art, may not give such a sense, but may still symbolize other relational qualities and be mimetic in that sense. Yet others blur the line; cubist art, for example, often portrays a recognizable subject, yet renders it quite unlike any known reality.

In our use of the term “mimetic” we do not intend to engage in the philosophical debate about the proper analysis of the linkages between representation and “likeness” or realism. Rather, we draw on the notion of “mimesis” to identify a spectrum of facilities offered by ICTs. At one end of this spectrum we have technologies that transmit some “likeness” of people and objects to remote places. This can mean high-fidelity audio and video. Alternatively, however, such technologies may take advantage of people’s tolerance for low-fidelity “likeness.” At the other end of the spectrum we have technologies that draw on other methods to create a sense of “presence” for the viewer, relying on the recipient’s ability to fill in the details through imagination. For example, a person may feel present in a different (or altogether imaginary) place through textual descriptions of such places or of actions taking place in them.

The excitement about the upcoming “death of distance” in the 1990s coincided with an intensified interest in virtual reality (VR), in large part due to the increased sophistication of display and computing technologies (two decades later, interest in VR has resumed as new commercial products enter the market). What we have learned since the 1990s, however, is that the technology needed to create convincing alternative worlds is very difficult to produce. We discovered that the human brain is extremely sensitive to phantasmagoric experiences, and while it can at times be “tricked” into thinking a fabricated reality is (more or less) real, even a small parallax error can ruin the illusion.

Yet, for all the challenges and hopes of creating a perfectly illusory experience, lower fidelity technologies can be powerfully mimetic in the right context. For example, with its poor graphics (even for the time), slow response times, and unbelievable landscapes, the virtual world Second Life briefly captured our imagination and ushered in a flood of optimistic journalism and academic study. Second Life allowed the use of avatars that varied in the degree of resemblance to the people they represented. Nonetheless, Second Life created an important sense of presence for its users, a topic explored extensively by researchers.

This continuum between high-fidelity and low-fidelity mimetic experiences highlights the ways in which mimesis is primarily perceptual, which is why “screen studies” tends to pick up on metaphors of vision and hearing (our most immediate sense perceptions). Phenomenologically, mimetic experiences tend to be powerfully hermeneutical, typically operating at the level of common, shared intuition. Screen media, such as television and film, are fundamentally similar to painting in their tendency to use certain near-universal symbols and conventions that “make sense” to a wide range of people (but not to all peoples, at all times). These conventions are so natural-seeming and powerful that we often equate representation itself with the mimetic view. We usually think of lower fidelity (less clear) representations as “less” representational simply because they run counter to these assumptions. Alternatively, we “fill in” our own beliefs to make up for such deficiencies.

While discussions of videophone-like technologies (and higher-fidelity VR) have often dominated the death of distance debates, many scholars have also noted the importance of text-based communication, including e-mail, Internet relay chat (IRC), and instant messaging (IM). Such technologies often make use of discrete media in the sense that we later discuss and posit as one of the requirements for the “algorithmic” model. Yet, in the right context textual spaces can be powerfully mimetic. Turkle’s (1995) Life on the screen and Dibbell’s (1993) “A rape in cyberspace” tell powerful stories about mimetic experiences in online textual spaces.

The distinction between space and place (Tuan, 1977) that arose from the humanistic turn in geography has usually been understood in mimetic terms. In the past, space was characterized as absolute, relative, or relational, and thus could be quantified, abstracted, and represented. With the development of theories of the social construction of place, deeply normative concerns replaced previously quantitative or rationalistic approaches. Unlike conceptions of space, place carries behavioural and cultural expectations (Harrison and Dourish, 1996). With the advent of “virtual” places, cues that represent norms in the physical world had to be invented for the new digital reality. Typically, these conventions dictate a “spatial mode of interaction” in which digital metaphors stand in for their physical counterparts. In what is now a rather dated example, Harrison and Dourish (1996) suggest that text-based multi-user dungeons (MUDs) are designed in a strongly spatial mode, but end up acquiring their norms from social, place-based elements.

There is no bright line separating mimetic textual spaces from the technologies that we assign to the “algorithmic view” below. Some environments, such as Facebook, combine important elements of mimesis with computational reordering (which we discuss later).

 

++++++++++

The algorithmic view

There is, however, another way to think about ICTs and their role in the production of space. Versions of this view have been increasingly present in recent literature, especially in media studies (Hayles, 2002; Galloway, 2004; Chun, 2011; Cheney-Lippold, 2011), but also human geography (Thrift and French, 2002; Graham and Zook, 2011; Kitchin and Dodge, 2011). Yet, the boundary between the two views has not been clearly articulated, and the analytical distinctions have not been fully explored. We call this alternative the “algorithmic” view of ICTs. It involves a focus on the use of ICTs to algorithmically reorder rather than just transmit digital representations. While this distinction is applicable to all representational technologies, here we focus on how computed orderings are used to rearrange material objects within space.

Most of the essential features of the algorithmic view can be illustrated by Herman Hollerith’s tabulating machine, first developed in the late 1880s, around the same time that Robida’s vision of the téléphonoscope was crystallizing. Compared to the various versions of the Télé, the tabulating machine does not appear to have captured much of the speculative imagination of its contemporaries — despite the fact that, unlike the imaginary Télé, the tabulating machine was real and was put to widening use from the 1890s onward (Austrian, 1982; Campbell-Kelly, 1990; Cortada, 1993; Yates, 1997). In fact, when digital computers — in many ways the descendants of the algorithmic technologies of the nineteenth century — came to be studied by social science and humanities scholars, it was largely for their ability to support mimetic uses. Yet as we will try to show, the algorithmic uses of ICTs have an impact on our experience of geography that is as significant as that of the mimetic uses, and quite possibly more so.

Hollerith’s tabulating machine was essentially a device for counting and sorting punch cards — sheets of paper marked with perforated holes, an idea inherited from textile looms, including the famous Jacquard loom. In its initial use, each card represented a household recorded by the 1890 U.S. census. It is important to stress that Hollerith’s punch cards and the virtual (imaginary) images on Robida’s Télé are both forms of representation — but of a very different kind. Unlike the Télé, punch cards represent data digitally.

Although the term “digital” is often used today for things related to electrical computers, digitality is not inherently tied to any particular technology, but rather is the conceptual underpinning of a range of “computing” technologies. Arguably the most powerful description of digitality comes from Nelson Goodman (1968) in his Languages of art. In this work Goodman establishes a complex set of syntactic and semantic criteria for “notational schemes,” equivalent to what we are here referring to as digitality. These notational schemes include familiar discrete writing systems such as binary, Morse code, and musical notation. For Goodman, a notational scheme must be syntactically disjoint (equivalent inscribed marks — tokens — must be interchangeable) and differentiated (different inscribed marks must be reliably distinguishable from one another). And, for algorithmic processing, the “marks” must also be unambiguous (each inscribed mark must symbolize only one thing).

In keeping with Goodman’s formulation, the holes in a punch card uniquely symbolize abstracted aspects of the subject or object. This form of representation specifically aims to avoid the wholeness and complexity of the subject, passing the abstracted parts through a kind of sieve to reduce them to a collection of attributes or identities. The holes in a punch card imply a rigid identity relationship between the symbolic (or ideal) and the real world. When used for the 1890 U.S. census, each coded punch card was understood to uniquely represent a household, with each punched hole identifying an abstracted attribute (such as male/female).
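
To make this concrete, the following minimal sketch (in Python) shows what such a notational scheme amounts to computationally. The category names and card layout are our own illustration, not Hollerith’s actual encoding:

SEX = ("male", "female")                 # each field admits exactly one mark
CONJUGAL = ("single", "married", "widowed", "divorced")

def punch(sex, conjugal, age):
    # Anything outside the pre-established categories is rejected:
    # the scheme simply has no hole for it.
    if sex not in SEX or conjugal not in CONJUGAL:
        raise ValueError("no hole exists for this attribute")
    return (SEX.index(sex), CONJUGAL.index(conjugal), age)

card = punch("female", "married", 34)    # -> (1, 1, 34)

The point of the sketch is the rejected value: whatever does not fit the disjoint, unambiguous categories cannot be represented at all.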

As a fundamental aspect of the process, turning complex reality into discretely punched holes produces representational “violence,” especially for those subjects and objects that do not fit neatly within the established categories. This process and its consequences were described at a social and infrastructural level by Bowker and Star (1999) and by Furner (2007). Far from being “just” a social product, however, much of this representational violence is caused by the mismatch between the semantically rich external world and the semantically sparse algorithmic world of the notational machine. In order to process notation in a deterministic fashion, the machine manipulates pre-established identities by reordering notational marks (punched holes for the Hollerith tabulator, or electrical signals for contemporary computers). This ordering process is fundamentally a form of transcription (ordering), rather than translation (transforming), which by itself does not create any additional representational violence. However, turning the external world into disjoint and differentiated categories (into a notational scheme) necessarily does cause omissions, distortions, and reductions. Of critical importance, what we choose to do with such reordering is a further ethical choice.

While the punch card purposely provides a very limited representation compared to the richness of the mimetic image or description, part of its power lies in the fact that such abstracted representations can be processed in an automated and efficient way. In the earliest tabulators, the primary form of processing involved counting holes in particular locations on the card (when an electrical circuit closed, the accumulator registered a mathematical addition), along with basic sorting, which still required a substantial amount of manual effort. For the 1890 U.S. census, the coded attributes included sex, age, race, and conjugal condition, as well as a few other aspects of the members of the household; contrasted with a free-form description of the household, the census is thus a highly restrictive form of representation. Later tabulator models streamlined the process and enabled more sophisticated manipulation of representations, including support for full-text alphabetic fields. Even when richer descriptions became possible, however, they were avoided in data collection destined for algorithmic processing, which required meaningful categories of notation (and their corresponding punch card representations). This ability to process the coded representations made the tabulator performative, in the sense of actually being able to do things.
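
Continuing the illustrative sketch above (again with our own encoding, not the historical card layout), tabulation reduces to counting and sorting over tuples of category indices:

from collections import Counter

CONJUGAL = ("single", "married", "widowed", "divorced")
cards = [(0, 1, 34), (1, 0, 19), (0, 1, 52)]   # (sex, conjugal, age)

# Counting: how many households fall under each conjugal condition?
print(Counter(CONJUGAL[card[1]] for card in cards))
# Counter({'married': 2, 'single': 1})

# Sorting: pull out the subset matching a given attribute, the operation
# the tabulator's sorter performed electromechanically, card by card.
married = [card for card in cards if CONJUGAL[card[1]] == "married"]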

The purpose for which the tabulator was built is also indicative of the later use of algorithmic systems. The U.S. census office would use the machine to order representations of the population as a step towards eventually exercising control over the actual population. In other words, the re-ordering of the cards was a step towards “re-ordering” the represented people. While in the early years the gap between the two orderings was sufficiently wide that computational reordering of people could seem like a mere metaphor, in the years that followed such systems were used to identify, gather and dislocate or exterminate specific subsets of the population (Lubar, 1992; Luebke and Milton, 1994).

While algorithmic uses of ICTs are not the only element of today’s “control society” (Deleuze, 1992; cf., Beniger, 1986), they are its essential component. When the material world is ordered it can be used for disciplining (Foucault, 1979) or controlling (Deleuze, 1992) subjects. Computers thus become ideal tools of such discipline and control. The expression of order (as discipline or control) can take the form of becoming a “slave to the algorithm” (Slater, 2013). For example, people can be socially sorted for customer, credit, and crime profiling, often for commercial or discriminatory ends (Kitchin and Dodge, 2011). Yet, while the terms “discipline” and “control” normally connote nefarious abuse of individual freedoms, many elements of algorithmic ordering are beneficial, even to those who are being ordered. The highly ordered system of air traffic control is one of many examples (Kitchin and Dodge, 2011).

Technologies that support algorithmic representations are common and influential, yet have hitherto been overshadowed by research on mimetic uses. We argue that the syntactic and semantic criteria for notational systems capture the essential aspects of how these technologies contribute to the algorithmic view. The origins of the algorithmic view can be found in early digital technologies and their use in displacing populations, but their effect is no less pronounced today. As argued previously (Kitchin and Dodge, 2011), code often shapes space.

 

++++++++++

Algorithmic geography

While the algorithmic view is applicable to representational technologies more generally, we now turn to some of the specific ways in which their use changes our experience of geography. We organize our discussion into three sections, each with a different primary example.

Routing

Perhaps the most obvious way in which algorithmic technology affects our experience of geography is through its role in facilitating the movement of material objects. One such version is described by Kitchin and Dodge (2011), who show the extent to which software underlies today’s air travel infrastructure, resulting in a fusion of software and spatiality that they term “code/space.” For example, an airport check-in area is a code/space because if the software running the check-in process fails, the space stops being a check-in area at all. Moreover, even before passengers arrive at the check-in they are screened and sorted through passenger name records. So entrenched are algorithms in modern air travel infrastructure that, even before this algorithmic classification, passengers are subject to a software-driven interpellation of their desire, producing subjects that willingly participate in the ideology and practice (Kitchin and Dodge, 2011). Kitchin and Dodge (2011) call this process “algorithmic classification,” a particular species or subset of what we are calling the algorithmic view.

It is important to note that while the mimetic use of ICTs has historically been important for managing travel and transportation, it is the algorithmic uses that enable some of the most dramatic recent changes. Many examples use “digital” technologies but remain mimetic. For example, pilots may use voice transmission technology to communicate with dispatchers, rely on digital maps to identify their location, or fly the airplane using fly-by-wire controls (the plane itself becomes a code/space); these are mimetic uses of ICTs. Nonetheless, many other elements of air travel are increasingly managed through the automated ordering of abstracted representations.

Given the complexity of modern air traffic control, it is apparent that global flight volumes would be impossible without software-assisted routing algorithms, and when this software contains an error the results can be severe. To make such an enterprise profitable, airlines must also use sophisticated back-end software to develop and maintain flight routes and keep flights full.

As an example of another use of algorithmic ordering, but one that changes space in ways quite distinct from Kitchin and Dodge’s code/space view, let us consider the case of Amazon.com, which we present in an extremely simplified, illustrative form. From the customer’s perspective, the process starts with an interest in buying, say, a book. Amazon’s Web site presents the customer with a ranked selection from a vast store of representations of books. The customer’s decision to buy a book is registered in the database and is routed to a distribution center. This routing process aims to minimize the cost of physical processing and delivery, considering the location of the customer and the items, as well as the customer’s other purchases. The order may then be split between multiple distribution centers depending on stock levels. Alternatively, execution of a shipment may be strategically delayed to make it possible to aggregate other items for shipment.
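
A much-simplified sketch of this kind of routing decision might look as follows (in Python; the center names, stock levels, and costs are invented for illustration and do not describe Amazon’s actual systems):

# Send an ordered item to the fulfillment center that minimizes the
# estimated delivery cost, subject to available stock.
centers = {
    "PHX1": {"stock": {"book-123": 4}, "cost_to": {"94110": 6.0, "60601": 9.5}},
    "ORD2": {"stock": {"book-123": 0}, "cost_to": {"94110": 11.0, "60601": 3.5}},
}

def route(item, zip_code):
    candidates = [(center["cost_to"][zip_code], name)
                  for name, center in centers.items()
                  if center["stock"].get(item, 0) > 0]
    if not candidates:
        raise LookupError("no stock anywhere: delay, split, or backorder")
    return min(candidates)[1]

print(route("book-123", "60601"))   # PHX1: the only center with stock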

Within the distribution center, orders are packed by a range of connected technologies: human employees who receive orders on handheld computers, robotic shelving, as well as old-fashioned technologies such as forklifts moving palletized goods. As items are picked, packaged, and shipped, each step is accompanied by an item identification scan to ensure that the representations are updated with the latest status information and tracked. The delivery of the shipments is normally handled by a different company, which uses its own technology for optimizing delivery and status tracking but typically integrates with Amazon’s enterprise resource planning systems.

The combined effect of algorithmic ordering in the case of Amazon.com is the dramatic reduction of delivery time and cost, to a point where getting a book via Amazon can be both cheaper and faster than going to a local bookstore. More recently, Amazon has been transitioning to the use of smaller and more localized distribution centers, which has made it possible to further reduce delivery times and to start experimenting with selling perishable items, such as groceries. Timely processing of such orders necessitates yet more complex computational ordering.

While the importance of computerized logistics has been increasingly recognized (Shaw and Sidaway, 2011), it is often described metaphorically using the notion of “flow” (see e.g., Hesse and Rodrigue, 2004; Knowles, 2009). From the perspective advanced in this paper, the “flow” metaphor is highly misleading, as it connotes uninterrupted continuity. Algorithmic technologies, however, are most powerful when dealing with discretized representations. In the case of transportation, this involves organizing items into boxes, which can then be individually represented and computationally ordered.

Algorithmic environments

Another important way in which algorithmic technologies affect our experience of geography is by shifting many activities from place-dependent material contexts to algorithmically managed computational environments that more readily accommodate remote participation. Such computational environments may resemble the mimetic systems that we have discussed above, and in practice no bright line can be drawn between mimetic and algorithmic environments in many cases. Yet the algorithmic view highlights in a novel way that there is, in principle, an important analytic difference.

For mimetic environments, supporting interaction across space or time is often one of the main design objectives. Consequently, they are often judged by how close they come to replicating the gold standard of communication — face-to-face interaction. Algorithmically driven environments, on the other hand, typically offer benefits derived from automated ordering and manipulation of representations, which cannot be achieved in face-to-face communication. Consequently, algorithmic environments do not present the dilemma of whether to include remote participants at the cost of lowering the quality of communication.

The GitHub system is an important example of an algorithmic environment, providing an assemblage of computational services that facilitate modern software development. At the core of GitHub’s services is the revision control system git. Like other similar systems, git keeps track of modifications to software code, facilitating collaborative software work. Building on ideas from earlier systems, git provides strict identity for revisions, which makes it possible to keep track of them even as they are re-ordered or moved between subprojects. Due to this careful management of revisions, git also provides powerful mechanisms for automatically merging revisions coming from different branches. Illustrating our discussion of the algorithmic view, these capabilities of identity and performativity enable git to order and control software work.
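
Git’s strict identity is content-derived: every object is named by a hash of its own bytes, so the identity travels with the data however it is copied or reordered. A minimal sketch of the rule git uses to name file contents (“blobs”):

import hashlib

def git_object_id(obj_type, content):
    # Git hashes a typed header plus the content itself, so identity
    # follows from the data rather than from a name or a location.
    header = ("%s %d" % (obj_type, len(content))).encode() + b"\0"
    return hashlib.sha1(header + content).hexdigest()

# Matches `git hash-object` for the same bytes:
print(git_object_id("blob", b"hello world\n"))
# 3b18e512dba79e4c8300dd08aeb37f8e728b8dad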

While git forms the foundation of GitHub’s services, several others are layered on top. For example, GitHub provides users with the ability to discuss revisions and bug reports. While such discussion features have certain similarities with the mimetic textual environments discussed earlier, they differ from them in an important way. Content such as comments is tightly organized around specific units of work, such as revisions and bugs. In this way, the flow of discourse is discretized — broken into containers — which can then be ordered. Such computable ordering makes it possible for a developer to find discussions that are specifically related to the issues at hand.
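
A sketch of what such discretized discourse looks like as data (our own minimal model, not GitHub’s actual schema): each comment is addressed to a unit of work, so retrieving a discussion becomes a filtering operation rather than a search through free-flowing text.

from dataclasses import dataclass

@dataclass
class Comment:
    unit: str      # e.g., a revision id or a bug number
    author: str
    body: str

comments = [Comment("3b18e51", "ana", "This breaks the build on Windows."),
            Comment("bug-42", "raj", "Reproduced on version 2.1."),
            Comment("3b18e51", "ana", "Fixed by the follow-up revision.")]

# The whole discussion of one revision, pulled out by its identity:
thread = [c for c in comments if c.unit == "3b18e51"]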

Systems such as revision control and bug tracking were not developed with the intention of supporting remote collaboration per se. Early versions of these systems were used by collocated work teams (Naur and Randell, 1969; see Ambriola, et al., 1990; Ruparelia, 2010, for an assessment of the history of software version control). Such systems were adopted for their algorithmic benefits: they made it easier to keep track of units of work, to rearrange them (automatically), and to associate bugs with the revisions that address them. At the same time, the effect of such systems has been to move the locus of software work from physical environments to computational ones, setting users and code into relations mediated by ICTs. This transition has increasingly helped put remote participants on a more equal footing with collocated ones.

While the GitHub system is algorithmic at its core with elements of mimesis, Facebook provides an example of an environment that would be better described as a hybrid of algorithmic and mimetic uses of ICTs. Many of the interactions between Facebook users (“friends”) are conducted using natural language or images and create a strong sense of presence. Such interactions establish strong markers of familiarity and sociality, creating an environment that Harrison and Dourish (1996) might call a “place.”

At the same time, however, Facebook presents a powerful and profound example of the algorithmic use of ICTs. The earliest version of Facebook functioned primarily as a database helping users to meet new people. In this sense, Facebook was algorithmic from the beginning. As mimetic features were added later, they remained embedded within the larger algorithmic context. Facebook does not merely group user content into topical or community-based sets. Rather, it presents each user with a unique ordering of content based on a proprietary model incorporating social relationships, privacy settings, and each user’s preferences (Pasquale, 2015). While the order of the content presented to the user is roughly reverse chronological, the exact ordering deviates from simple temporal logic, incorporating a range of factors.
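
The logic of such an ordering can be suggested by a toy sketch. Facebook’s actual model is proprietary; the factors and weights below are our own stand-ins, meant only to show how a computed score displaces simple chronology:

from dataclasses import dataclass

@dataclass
class Post:
    affinity: float     # how often the viewer interacts with the author
    engagement: int     # reactions and comments so far
    age_hours: float

def score(p):
    # Hypothetical weights: social closeness and engagement push a post
    # up the feed; age pushes it down, but never strictly.
    return 2.0 * p.affinity + 0.1 * p.engagement - 0.5 * p.age_hours

feed = [Post(0.9, 3, 2.0), Post(0.0, 1, 0.5), Post(0.6, 20, 8.0)]
feed.sort(key=score, reverse=True)
# The newest post (from a stranger) ranks below the older post from a
# close friend: computed order, not chronology.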

Much of the power of Facebook arises from mimetic and algorithmic elements working together. Facebook offers users a strong sense of presence: interactions feel as though they occur among real friends, privy to intimate or even mundane moments. Yet, in ways not usually made visible (unlike GitHub, the machinery of Facebook is almost completely opaque), these “intimate” moments are highly managed, controlled, and ordered. This ordering creates an experience that in some ways is more powerful than face-to-face interaction. For example, it makes it possible to maintain social contact with hundreds of people with nuanced degrees of engagement.

The algorithmic use of ICTs is also crucial to Facebook’s ability to monetize its business through advertising. While traditional advertising techniques are often mimetic, focusing on giving the viewer a sense of presence in the idealized world painted by the advertiser, modern advertising techniques used by companies such as Facebook and Google rely heavily on algorithmic matching of users with advertisers’ messages. Abstracted representations of users and ad bids are entered into automated instant auctions, conducted in the milliseconds before the Web browser loads a page.
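
The mechanics of such instant auctions can be illustrated with a simplified second-price auction, a design commonly described in the real-time bidding literature (platform details vary; the bids here are invented):

def run_auction(bids):
    # Rank advertisers by bid; the winner pays the runner-up's price,
    # a common incentive-compatible design for ad auctions.
    ranked = sorted(bids.items(), key=lambda kv: kv[1], reverse=True)
    winner = ranked[0][0]
    clearing_price = ranked[1][1] if len(ranked) > 1 else ranked[0][1]
    return winner, clearing_price

# The bids themselves are computed from abstracted representations of the user.
bids = {"shoe-ad": 2.10, "travel-ad": 1.75, "course-ad": 0.40}
print(run_auction(bids))   # ('shoe-ad', 1.75)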

Computing geography

The algorithmic environments that we have considered above are, for the most part, agnostic of geography. Their power to affect our experience of geography derives primarily from the fact that the geographic location of participants is of relatively little importance within those environments. This geographical agnosticism, however, can only go so far. There are many situations in which the geographic location of people and objects must be dealt with. Algorithmic technologies are increasingly adapted to deal with this by incorporating geography as another variable to be computed and ordered. In some ways, this involves merging the approach of GitHub with elements of Amazon.com’s delivery system.

Such geographic awareness is exemplified by the ways in which geography is factored into Amazon.com’s cloud computing system, called Amazon Web Services (AWS). AWS is a collection of computing services that users purchase and manage online. AWS allows companies to flexibly purchase computing resources “just in time” and therefore lower the costs associated with managing these resources. Like other cloud computing systems, AWS at first appears to herald the death of distance: without getting up from the couch a user can set up a data center at a scale that previously would have required leasing physical premises, installing hardware, and worrying about who has access to the building. AWS’ system is, of course, based on hardware that is situated somewhere. As it turns out, the location of that hardware matters. First, a company using AWS may be required by law to keep certain data within particular jurisdictions (for example, inside the European Union). Second, space affects data transfer across locations through latency and bandwidth (and their associated costs). In particular, companies often prefer to locate servers near customers to allow faster access.

AWS solves these problems by explicitly incorporating geographic location into its graphical user interface and application programming interfaces (APIs) (Figure 3). Importantly, AWS’ geography is discretized and abstracted. Users choose computing resources within geographic units called “regions,” identified by codes such as “us-west-1.” Regions are accompanied by loose geographic descriptions, such as “Northern California,” “Ireland,” or “Tokyo.” Users are not told (and presumably do not need to know) where exactly their data is stored within the region.

 

Figure 3: Choosing the “us-west-1” (Northern California) region in the AWS interface.

 

Regions are computable in the sense that they can be explicitly used as attributes for the automatic management of resources. For example, a user would use a command such as the following to create a virtual 50GB storage volume in “Northern California”:

vol = conn.create_volume(50, "us-west-1")

Customers can use this approach to integrate management of geographically specific computing resources into a larger system. For example, a company using AWS can monitor resource usage in different cities in real time, creating resources in appropriate regions to respond to demand.
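
As a fuller sketch, the same operation in the current boto3 library for Python might look as follows (the one-line call above follows the older boto conventions; the region and zone names are illustrative):

import boto3

# The region is bound to the client; the availability zone narrows the
# placement further, but neither reveals where within "Northern California"
# the bytes actually reside.
ec2 = boto3.client("ec2", region_name="us-west-1")
volume = ec2.create_volume(Size=50, AvailabilityZone="us-west-1a")
print(volume["VolumeId"], volume["AvailabilityZone"])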

The idea of managing “cloud” resources through a geography-aware API is taken to a further extreme by Freelancer, a Web site that aims to automate the process of hiring freelancers for contract work. Like AWS, Freelancer offers an API, though in this case for directly controlling human resources. Focusing on piecemeal, discrete work, the API aims to take the idea of algorithmic control over individual people to its extreme. As with AWS, the API specifically accounts for the location of the freelancer. This makes it possible to search for (and automatically hire) people located in specific cities or in broader areas, for example by country or time zone. The location of people available for hire is also displayed prominently on Freelancer’s Web interface.
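
To suggest the shape of such a geography-aware search, consider the following hypothetical sketch; the endpoint, parameters, and field names are invented for illustration and do not document Freelancer’s actual API:

import requests

def find_workers(skill, country):
    # Hypothetical endpoint: location is passed as just another query
    # parameter, one computable attribute of the worker among others.
    resp = requests.get("https://api.example.com/freelancers",
                        params={"skill": skill, "country": country})
    return resp.json()

# Search (and potentially auto-hire) within a specific labour geography:
candidates = find_workers("data-entry", "PH")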

Systems such as AWS and Freelancer merge elements of algorithmic environments, such as GitHub, with the geographic awareness of routing systems. They do so by making place itself another variable in the larger computational environment. The complexity of geography is reduced to digital notation (disjoint, differentiated, and unambiguous), making it possible to rapidly reorder objects of interest using abstracted physical location as one of many attributes. Under this approach, location can be either ignored or accounted for, depending on the needs of the specific ordering.

 

++++++++++

Conclusion

Our paper aimed to take a new look at the effect of ICTs on our experience of geography by drawing a distinction between what we call “mimetic” and “algorithmic” uses of ICT. Mimetic uses transmit “likeness” and create an experience of presence. Algorithmic uses provide for the automatic reordering of discrete, abstracted representations. Discussions of ICTs’ effect on geography have often been dominated by a perspective that emphasizes mimetic uses. This view has often focused on understanding the extent to which mimetic uses of ICTs are actually effective in bringing about the death of distance. The algorithmic uses of ICTs, however, also have a substantial effect on our experience of geography, and this effect appears to be growing over time. We illustrate such effects by considering how algorithmic technologies underlie the movement of goods and people, the role they play in moving work and social activities into computational environments, and the way in which geography itself can become a variable in contemporary computational environments. Scholars studying ICTs, communication, and geography need to attend more closely to such effects.

 

About the authors

Quinn DuPont is a Ph.D. candidate in the Faculty of Information at the University of Toronto.
Direct comments to: quinn [dot] dupont [at] utoronto [dot] ca

Yuri Takhteyev is a status-only faculty member in the Faculty of Information at the University of Toronto and Chief Technical Officer at Rangle.io.
E-mail: yuri [dot] takhteyev [at] utoronto [dot] ca

 

Notes

1. As cited in Willems, 1999, p. 359.

2. Ibid.

 

References

Janet Abbate, 1999. Inventing the Internet. Cambridge, Mass.: MIT Press.

Stephen R. Acker, 1995. “Space collaboration, and the credible city: Academic work in the virtual university,” Journal of Computer-Mediated Communication, volume 1, number 1.
doi: http://doi.org/10.1111/j.1083-6101.1995.tb00319.x, accessed 23 July 2016.

Vincenzo Ambriola, Lars Bendix, and Paolo Ciancarini, 1990. “The evolution of configuration management and version control,” Software Engineering Journal, volume 5, number 6, pp. 303–310.

Erich Auerbach, 2013. Mimesis: The representation of reality in Western literature. Translated by Willard R. Trask. Fiftieth anniversary edition. Princeton, N.J.: Princeton University Press.

Geoffrey D. Austrian, 1982. Herman Hollerith: Forgotten giant of information processing. New York: Columbia University Press.

James R. Beniger, 1986. The control revolution: Technological and economic origins of the information society. Cambridge, Mass.: Harvard University Press.

Jan L. Bordewijk and Ben van Kaam, 1986. “Towards a new classification of tele-information services,” InterMedia, volume 34, number 1, pp. 16–21.

John Seely Brown and Paul Duguid, 2000. The social life of information. Boston: Harvard Business School Press.

Frances Cairncross, 1997. The death of distance: How the communications revolution will change our lives. Boston: Harvard Business School Press.

Martin Campbell-Kelly, 1990. “Punched-card machinery,” In: William Aspray (editor). Computing before computers. Ames: Iowa State University Press, pp. 122–155.

Manuel Castells, 1996. The rise of the network society. Oxford: Blackwell.

John Cheney-Lippold, 2011. “A new algorithmic identity: Soft biopolitics and the modulation of control,” Theory, Culture & Society, volume 28, number 6, pp. 164–181.
doi: http://doi.org/10.1177/0263276411424420, accessed 23 July 2016.

Wendy Hui Kyong Chun, 2011. Programmed visions: Software and memory. Cambridge, Mass.: MIT Press.

James W. Cortada, 1993. Before the computer: IBM, NCR, Burroughs, and Remington Rand and the industry they created, 1865–1956. Princeton, N.J.: Princeton University Press.

Gilles Deleuze, 1992. “Postscript on the societies of control,” October, number 59, pp. 3–7.

Julian Dibbell, 1993. “A rape in cyberspace,” Village Voice (23 December), at http://www.villagevoice.com/2005-10-18/specials/a-rape-in-cyberspace/, accessed 26 April 2016.

Thomas Falk and Ronald Abler, 1980. “Intercommunications, distance, and geographical theory,” Geografiska Annaler. Series B, Human Geography, volume 62, number 2, pp. 59–67.
doi: http://doi.org/10.2307/490390, accessed 23 July 2016.

Richard L. Florida, 2008. Who’s your city? How the creative economy is making where to live the most important decision of your life. New York: Basic Books.

Michel Foucault, 1979. Discipline and punish: The birth of the prison. Translated by Alan Sheridan. New York: Vintage Books.

Alexander R. Galloway, 2004. Protocol: How control exists after decentralization. Cambridge, Mass.: MIT Press.

Anthony C. Gatrell, 1983. Distance and space: A geographical perspective. Oxford: Clarendon Press.

Nelson Goodman, 1968. Languages of art: An approach to a theory of symbols. Indianapolis, Ind.: Bobbs-Merrill.

Mark Graham and Matthew Zook, 2011. “Visualizing global cyberscapes: Mapping user-generated placemarks,” Journal of Urban Technology, volume 18, number 1, pp. 115–132.
doi: http://doi.org/10.1080/10630732.2011.578412, accessed 23 July 2016.

Steve Harrison and Paul Dourish, 1996. “Re-place-ing space: The roles of place and space in collaborative systems,” CSCW ’96: Proceedings of the 1996 ACM Conference on Computer Supported Cooperative Work, pp. 67–76.
doi: http://doi.org/10.1145/240080.240193, accessed 23 July 2016.

David Harvey, 2006. Spaces of global capitalism. London: Verso.

N. Katherine Hayles, 2002. Writing machines. Cambridge, Mass.: MIT Press.

Caroline Haythornthwaite, Michelle M. Kazmer, Jennifer Robins, and Susan Shoemaker, 2000. “Community development among distance learners: Temporal and technological dimensions,” Journal of Computer-Mediated Communication, volume 6, number 1.
doi: http://doi.org/10.1111/j.1083-6101.2000.tb00114.x, accessed 23 July 2016.

Markus Hesse and Jean-Paul Rodrigue, 2004. “The transport geography of logistics and freight distribution,” Journal of Transport Geography, volume 12, number 3, pp. 171–184.
doi: http://doi.org/10.1016/j.jtrangeo.2003.12.004, accessed 23 July 2016.

Donald Hislop, 2002. “Mission impossible? Communicating and sharing knowledge via information technology,” Journal of Information Technology, volume 17, number 3, pp. 165–177.

Donald G. Janelle, 1973. “Measuring human extensibility in a shrinking world,” Journal of Geography, volume 72, number 5, pp. 8–15.
doi: http://doi.org/10.1080/00221347308981301, accessed 23 July 2016.

Rob Kitchin and Martin Dodge, 2011. Code/space: Software and everyday life. Cambridge, Mass.: MIT Press.

Richard D. Knowles, 2009. “Transport geography,” In: Rob Kitchin and N.J. Thrift (editors). International encyclopedia of human geography. Oxford: Elsevier, pp. 441–451.

Henri Lefebvre, 1991. The production of space. Translated by Donald Nicholson-Smith. Oxford: Blackwell.

Steven Lubar, 1992. “‘Do not fold, spindle or mutilate’: A cultural history of the punch card,” Journal of American Culture, volume 15, number 4, pp. 43–55.
doi: http://doi.org/10.1111/j.1542-734X.1992.1504_43.x, accessed 23 July 2016.

David M. Luebke and Sybil Milton, 1994. “Locating the victim: An overview of census-taking, tabulation technology and persecution in Nazi Germany,” IEEE Annals of the History of Computing, volume 16, number 3, pp. 25–39.
doi: http://doi.org/10.1109/MAHC.1994.298418, accessed 23 July 2016.

Lev Manovich, 2001. The language of new media. Cambridge, Mass.: MIT Press.

Marshall McLuhan, 2003. Understanding media: The extensions of man. Edited by W. Terrence Gordon. Critical edition. Corte Madera, Calif.: Gingko Press.

Peter Naur and Brian Randell (editors), 1969. “Software engineering: Report on a conference sponsored by the NATO Science Committee, Garmisch, Germany, 7th to 11th October 1968,” at http://homepages.cs.ncl.ac.uk/brian.randell/NATO/nato1968.PDF, accessed 23 July 2016.

Gary M. Olson and Judith S. Olson, 2000. “Distance matters,” Human-Computer Interaction, volume 15, numbers 2–3, pp. 139–178.
doi: http://doi.org/10.1207/S15327051HCI1523_4, accessed 23 July 2016.

Frank Pasquale, 2015. The black box society: The secret algorithms that control money and information. Cambridge, Mass.: Harvard University Press.

Ithiel de Sola Pool (editor), 1977. The social impact of the telephone. Cambridge, Mass.: MIT Press.

Allan Pred, 1984. “Place as historically contingent process: Structuration and the time-geography of becoming places,” Annals of the Association of American Geographers, volume 74, number 2, pp. 279–297.
doi: http://doi.org/10.1111/j.1467-8306.1984.tb01453.x, accessed 23 July 2016.

Christopher Prendergast, 2000. The triangle of representation. New York: Columbia University Press.

Albert Robida, 1890. Le vingtième siècle: La vie électrique. Paris: Librairie illustrée.

Nayan B. Ruparelia, 2010. “The history of version control,” ACM SIGSOFT Software Engineering Notes, volume 35, number 1, pp. 5–9.
doi: http://doi.org/10.1145/1668862.1668876, accessed 23 July 2016.

Saskia Sassen, 2006. Cities in a world economy. Third edition. Thousand Oaks, Calif.: Pine Forge Press.

Jon Shaw and James D. Sidaway, 2011. “Making links: On (re)engaging with transport and transport geography,” Progress in Human Geography, volume 35, number 4, pp. 502–520.
doi: http://doi.org/10.1177/0309132510385740, accessed 23 July 2016.

John Short, Ederyn Williams, and Bruce Christie, 1976. The social psychology of telecommunications. London: Wiley.

Josephine Berry Slater (editor), 2013. “Slave to the algorithm,” Mute, volume 3, number 4, at http://www.metamute.org/editorial/magazine, accessed 23 July 2016.

Nigel Thrift and Shaun French, 2002. “The automatic production of space,” Transactions of the Institute of British Geographers, volume 27, number 3, pp. 309–335.
doi: http://doi.org/10.1111/1475-5661.00057, accessed 23 July 2016.

Yi-Fu Tuan, 1977. Space and place: The perspective of experience. Minneapolis: University of Minnesota Press.

Sherry Turkle, 1995. Life on the screen: Identity in the age of the Internet. New York: Simon & Schuster.

Mary Beth Watson-Manheim, Katherine M. Chudoba, and Kevin Crowston, 2012. “Perceived discontinuities and constructed continuities in virtual work,” Information Systems Journal, volume 22, number 1, pp. 29–52.
doi: http://doi.org/10.1111/j.1365-2575.2011.00371.x, accessed 23 July 2016.

Philippe Willems, 1999. “A stereoscopic vision of the future: Albert Robida’s Twentieth Century,” Science Fiction Studies, volume 26, number 3, pp. 354–378.

Joanne Yates, 1997. “Early interactions between the life insurance and computer industries: The Prudential’s Edmund C. Berkeley,” IEEE Annals of the History of Computing, volume 19, number 3, pp. 60–73.
doi: http://doi.org/10.1109/85.601736, accessed 23 July 2016.

 


Editorial history

Received 28 April 2016; accepted 22 July 2016.


Creative Commons License
This paper is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License.

Ordering space: Alternative views of ICT and geography
by Quinn DuPont and Yuri Takhteyev.
First Monday, Volume 21, Number 8 - 1 August 2016
http://www.firstmonday.org/ojs/index.php/fm/article/view/6724/5603
doi: http://dx.doi.org/10.5210/fm.v21i8.6724




