Speedism, boxism, and markism: Three ideologies of the Internet
First Monday

Speedism, boxism, and markism: Three ideologies of the Internet by Jan Nolin

The Internet is one of man’s greatest inventions. Like all transformative technologies, it leaves a stamp on society, social action and values. This is actually a case of the Internet and society mutually constructing each other. Therefore, as the Internet is in constant transformation, social values rebound and impact on further development. This paper is concerned with systems of values grouped around core ideas, here described as ideologies, which continuously renegotiate the development of the Internet. Three basic ideas are identified as underpinning the development of the packet switching system during the 1960s. It is argued that the historical development of the ARPANET, the Internet and the World Wide Web, as well as current developments, are all variations of these three ideas: the distributed network, the envelope and the identifier. It is maintained that these are translated into value systems, ideologies, held by different social groups. These three ideologies are conceptualised as speedism, boxism and markism. They are discussed in relation to various trends in past and current development of the Internet. This paper also engages with concepts articulated by Jonathan Zittrain in his book The future of the Internet and how to stop it (2008), in particular the generative Internet and tethered appliances.


Earlier thinking on the Internet shaping society
Earlier thinking on the Internet as a value system
The generative Internet
The distributed network
The development of packets
The distributed network, the envelope and the identifier
Speedism, boxism & markism and the definition of Internet
The development of the Internet and speedism, boxism & markism
The layering of the Internet
The end–to–end principle
The World Wide Web
The National Science Foundation and the privatization of the Internet
The migration to broadband
Three ideologies
The ideology of speedism
The ideology of boxism
The ideology of markism
When speedism dominates
When boxism dominates
When markism dominates
Is the generative character of the Internet threatened?
The tethered Internet: An alien idea
Closing discussion




Arguably the most severe problem in analyzing the Internet concerns the complex relationship between this artifact and society. The overarching problem addressed in this essay is how to describe a socio–technological phenomenon that fluidly responds to new societal trends and, in addition, impacts on social movements as well. I want to emphasize that I am not concerned, in this article, with the larger issue of how the Internet shapes society, but rather with how it, as a socio–technological artifact, is developed by various groups in society with partly conflicting interests.

I have found the concept of ideology, as a value system grouped around a core idea, useful for analyzing the Internet as a socio–technical artifact. In a much debated essay, Winner (1980) posed the question: “do artifacts have politics?” His answer was affirmative, arguing that distinct ideas were built into certain key technologies, restricting available opportunities for action. Similar perspectives have been pursued within actor network theory through its articulation of the concept of the actant, in which the technology is seen as an active agent in social processes [1].

Within the tradition of social shaping of technology (SST) social groups are seen to shape the artifact while it in turn influences the structure of social action — often in the form of restrictions (Bijker, 1995). This is a process of mutual construction of society and technology. It would be difficult to find a better example of this than the Internet. This is a technology that has fundamental effects on social processes, but is also continuously shaped by our interaction and development of it.

In this paper I focus on the way that different social groups have shaped and continue to shape the Internet while it simultaneously shapes us. One dimension of this complex process is that the Internet has always remained flexible enough to be developed in different directions. To this day, and for the foreseeable future, there has been no “closure”, a point where the artifact is sufficiently stabilized. Instead, there has been a continued influx of ideas and opportunities, stimulating the formation of distinctly different systems of values. Once established, these value systems have striven to pull the further development of the Internet in a direction consistent with their individual value base, inevitably coming into conflict with other systems of values. As a result, there is a continuous “tug of war” in which different groups with conflicting “Internet ideologies” attempt to embed the artifact with their ideological values.

I regard this competition as favorable for society at large, as it tends to supply a dynamic and rich development. However, I see dangers in any single ideology dominating future development. It is therefore important to understand the ideological force field involved. It is the purpose of this article to sketch three ideological drivers in Internet development.

There is no rulebook on how the Internet will continue to develop, even though the World Wide Web Consortium (W3C) and the Internet Corporation for Assigned Names and Numbers (ICANN) have certain jurisdictions. As the Internet is continuously renegotiated in character, different groups of actors with various interests and value systems will attempt to push in separate directions. If successful, these value systems, ideologies, will be built into the artifact and thereafter, once again, influence social values.

I use the concept of ideology in the following essay in a simple sense: a system of interconnected values built on a core idea. I aim to identify the foundational ideas behind the invention of the Internet. I will argue that these ideas are so basic and so plastic that practically all further developments can be seen as variations of how the three ideas are put together, much as different types of DNA code are varied. The three ideas not only constitute the Internet, they can also be used to define it. The Internet is then developed in a continued tension between these three ideas, which propel it in different directions. I will identify ideologies connected to the three basic ideas of the Internet. These are systems of values that have shaped the Internet. They serve to generate different patterns of interacting with the Internet, which in turn renegotiate the character of the Internet. It is an endless loop of the technology and various social groups involved in mutual construction.

I will also be concerned with relating my ideas to some of those suggested by Zittrain (2008), particularly those of the generative Internet and tethered appliances.



Earlier thinking on the Internet shaping society

There have been a number of earlier contributions to the idea of connecting different value systems to the Internet. However, the main tendency in the literature has been to analyze how the Internet changes society (not the other way around). Most analysts identify fundamental societal changes, while a few, such as Mattelart (2000) and Webster (2006), maintain that the network society is an old story. Consequently, for these thinkers, the basic rules of society remain the same. It is the political and economic structures that shape technology, not the other way around.

While the role of politics is certainly important in shaping societies, historically there has been an obvious absence of heavy–handed Internet regulation. This seems to have several causes. Most essentially, the Internet is a global phenomenon that disrupts the relation between geographic location and power (Johnson and Post, 1996; 1997). This makes it difficult for the nation–state to claim legitimacy in regulating, and to assemble the necessary resources for creating local restrictions on, a global network. Geographical location tends to become a non–issue on the Internet. Furthermore, contrary to the development of other media such as television or radio, Internet service providers have not needed government licenses in most industrial countries.

The lack of regulation can also be explained by the Internet being such a quickly developing and changing technology. The “hands off” approach can also be the result of ideology. As early as 1977, Langdon Winner argued that politicians are susceptible to the idea of a technological development out of control, and are happy to leave this area to the business community. However, certain commercial ventures have highlighted the need for Internet regulation and found a favorable response from politicians. Lessig (1999; 2002; 2006; Lessig, et al., 1999) argues that U.S. Internet politics has rhetorically aimed to allow the Internet to take care of itself; in the area of intellectual property rights, however, there has actually been a massive expansion. Such tendencies were also visible in a study of various national policies on the information society in which Moore (1998) finds two contrasting strategies, one relying on the business community and the other attempting state intervention. Barney (2000) argues that the political reluctance to recognize the role of Internet politics opens the door for big business, in turn circumventing democratic potential.

Even though it is not specifically aimed at the Internet, Castells’ (2000) The Rise of the Network Society, the opening volume of his Information Age trilogy, introduces a classical duality between commercial agenda setting promoted by the business community and the resistance identity of the people. This is an iteration of a mainstream idea within Marxist theory in general and critical theory in particular. The dystopic image of a dwindling public sphere sketched by Habermas (1989) could actually be used as a kind of narrative for the history of the Internet: a new kind of public sphere that, with time, is increasingly taken over by commercial interests in collaboration with the corporatist state. Castells (2001) specifically addresses the Internet and society, positioning it as the technological basis for a new kind of network society in the information age. Castells is here concerned with how the market paradigm renegotiates society, corrupting the “freedom paradigm” of the early days.

Kedzie (1997) is much more optimistic, finding a significant statistical correlation between the third wave of ICT (Toffler, 1980) and the third wave of democracy (Huntington, 1991). Similarly, in the works of Dyson (1998; 1997), the classic duality analyzed by Castells is overcome by identifying a new private sector of social entrepreneurs and commercial activists. While Dyson would probably agree with Castells [2] that the “Internet is the fabric of our lives”, she stresses that the Internet serves to undermine all kinds of central authorities. She identifies the decentralizing and fragmenting character of the Internet as a destabilizing force and therefore as a revolutionary tool for redefining local and global citizenship.

The analysis that Dyson performs is in some ways similar to Winner (1997), but his work carries a more negative outlook. The ideology of cyber libertarianism is seen as promoted by the business community as a way of translating the idea of a “totally free Internet” into one that is freely available for purchase and ownership. In the same vein, Haywood (1998) and Holderness (1998) maintain that the metaphor of the network as a marketplace of ideas is fundamentally flawed as it will allow the powerful to grow stronger and the weak to become increasingly marginalized. Similarly, Sunstein (2001; 2007) discusses the Internet as an echo chamber in which people tend to seek out the same kind of opinions and values that they already possess. In this way, the Internet would serve to strengthen, rather than enrich, existing mindsets. Mattelart (2000) takes a related notion to another level in his criticism of globalism. As transboundary logics undermine the nation–states, it becomes important to control the rules of the global network. Therefore, the Earth’s major cultures will join in battle for control over the global monoculture.



Earlier thinking on the Internet as a value system

It is “how we think about the Internet that matters,” states TCP/IP inventor Vinton Cerf [3] and adds that we tend to become confused since any thinking on the Internet is bound to become either restricted or a strange mix of ideas on technical and social levels. One way of discussing the Internet is in terms of metaphors. Stefik (1996) attempted to identify four metaphors: the digital library, electronic mail, the electronic marketplace and digital worlds. This is an interesting strategy for capturing several dimensions of the Internet that are actually rooted in the complexity of the artifact itself. My own approach builds on a similar idea.

One of the most ambitious attempts at capturing the complexities of the Internet in social context is made by Jordan (1999). He identifies a number of different perspectives relating to Internet and power, arranged at three distinct levels: the individual, the social and the imaginary. As Jordan shifts focus between levels, different complex relations are revealed.

Another way of talking about the Internet is to discuss different ideologies. As earlier mentioned, Winner (1997) introduces the concept of cyber libertarianism to characterise an ideology that holds the free expansion of the net as a deterministic idea/ideal. Winner builds this ideology mostly on authors on the far right that emphasise private ownership of the Internet. I reject cyber libertarianism as a useful concept for analyses of Internet ideologies for two reasons. First, it mingles what I see as aspects of the three different ideologies in my framework. Second, it is a dated idea as corporate actors have migrated into another ideology. I will return to this later in the paper.

Apart from the contribution by Winner (1997), Birdsall (1996) on the Internet as a political ideology and the work of Sarikakis (2004) on ideologies that shape Internet policy, the concept of “ideology” has not been much favored by analysts. One reason for this absence might be the fear of being labeled a “technological determinist”, someone who believes that there are essential qualities within the technology that decide societal transformation. Lessig [4] explicitly warns against “the fallacy of ‘is–ism’ — the mistake of confusing how something is with how it must be”. Lessig prefers to talk about “architecture” in a manner akin to my use of “ideology”. The main difference is that I position the value systems, ideologies, within social groups attempting to influence the Internet, while Lessig is concerned with “code”. On the other hand, I see these value systems as being developed in connection with technological developments of the core ideas of the technology. I share with Lessig an attempt to depict the Internet as highly plastic and possible to develop in many different directions. Our strategies for modelling this plasticity are, however, different.

Most writers discussing values and the Internet have avoided analyzing this specific technology as plastic enough to enable different value systems. An exception is Froomkin (1998), who identifies three technologies that make it difficult to control the Internet: the packet switching network, user access to cryptographic tools and tools for creating anonymity. In this way, different architectural dimensions of the artifact can underpin alternative strategies for avoiding control.

In a fascinating attempt at managing technological and sociological ideas, Internet architects Clark, et al. (2005) use actor network theory to revitalize the technological perspective on Internet development. Similar to my own approach, they argue that different stakeholders have varied interests and will “tussle” on core issues of Internet evolution. The authors avoid identifying particular stakeholder groups and interests, referring instead to three “tussle spaces”: economics, trust and openness.

The Internet analyst who most clearly pushes the idea that there are values and societal consequences embedded in Internet technology is Lawrence Lessig (1999; 2001; 2006; Lessig, et al., 1999). The most persistent theme in Lessig’s writing is a criticism of the heavy–handed Digital Millennium Copyright Act of 1998 and its legislative offspring. I share this concern. Lessig argues that the (U.S.) constitutional right of ownership of ideas enables a closing of the open society in the name of property. He refutes the popular conception that “the net has a nature, and that its nature is liberty” [5]. His perspective, that there are a number of possible variations to the architecture of the Internet, is also one that informs this paper. Lessig argues that there is a need to counter the tendency of transforming the Internet according to commercial architecture. Therefore, it becomes important to establish a powerful intellectual commons.

Another vital theme in the writings of Lessig concerns the consequences of politics and ownership being gradually built into the essential foundation of the Internet. As “code is law”, restrictions in programming systematically disallow certain types of usage within the following layers.

I differ with Lessig on how to view the development of the Internet since the late 1990s. According to Lessig (2002) there has been a steady corruption of the values of the early Internet. I maintain, instead, that the Internet has been enriched by more values. Lessig (2002) contains a detailed criticism of what can go wrong when broadband becomes standard. It is possible, as Hass (2007) does, to claim that most of these misgivings have turned out to be unfounded. However, the negative trajectory of a strictly controlled Internet through broadband technology may have been avoided in part because of the discussion that Lessig and others have pursued.

Zittrain (2008) can be seen to develop some of these ideas connected to the notion of “generativity”.



The generative Internet

Zittrain (2006; 2008) makes an ambitious attempt to break new ground in analysing the Internet as a whole. Although he does not use the concept of ideology, he identifies key aspects which lie close to what I have alluded to above. In line with Lessig, he identifies the growth of the Internet as a battle between centralised corporate networks using powerful privately owned applications (networks as appliances), on the one hand, and the more organic and innovative generative Internet on the other. In a sense, this is conceptualised as a trade–off between security/commercialization and flexibility/creativity.

The basic idea behind the appliancized network is central control of content and access. While earlier research has seen the development of appliances as non–threatening to the layered character of the Internet (Gillett, et al., 2001), Zittrain sees them as undermining the generative character of the Internet. The argument is that there is a risk that the generative foundation of both the PC and the Internet will be lost as sturdy and privately owned appliances appear as safe havens against security leaks, identity theft, spamming and virus attacks. The Internet’s “generative characteristics primed it for extraordinary success — and now position it for failure” [6]. As the openness of the generative Internet creates possibilities for unfair usage, it is quite possible that the tradition of the generative net is losing power. Of vital importance will be how Internet users react to misuse: through continued openness or by closing down and policing the Internet.

While agreeing with the main points of these analyses, I will in this article supply a complementary ideological analysis. I will argue that the Internet and the World Wide Web essentially build on a series of innovations that can nevertheless be seen as variations of three distinctly different ideas: the distributed network, the envelope and the identifier. As these ideas foster a number of ideals, norms and values in the ways social groups relate to the Internet, I see them stimulating three different value systems: speedism, boxism and markism. I will characterise these as ideologies. I maintain that an awareness of these supplies a complementary perspective on the trade–off discussed by Zittrain. I will argue that Zittrain has identified important problems relating to a development where boxism plays the dominant role, but there are nevertheless just as vital difficulties involved in a powerful evolution toward speedism or markism.

While Zittrain works with a historical perspective, there is an emphasis on corporate history, the development of “bad code” and our response to it. My complementary history emphasises the ideas underpinning the different innovations. In the following, I will briefly outline three different ideas and thereafter identify the roles that these have played in the historical development of the Internet and World Wide Web. Following this, I will discuss these ideas as ideologies that need to be balanced in a well functioning Internet. It is possible to identify three different scenarios in which the development of Internet is characterised by the dominance of one constitutive innovation/idea. I argue that it would be highly problematic if one ideology dominates over the others. I will also relate my findings to the ideas concerning the threatened generative Internet. I will conclude the article by arguing that for the future of the Internet there is a need to balance the ideologies in order to counter the three negative scenarios.



The distributed network

In the early 1960s, Paul Baran, an employee of the Rand Corporation, was given a task by the U.S. Air Force: to build a communication system that would survive a nuclear strike, so that simple commands of “fire” and “ceasefire” would nevertheless reach their destination (Naughton, 2000; Abbate, 1999). Similar projects with a non–military dimension were independently initiated in the same period by Leonard Kleinrock (1961) at MIT and by Donald Davies in the U.K. Of fundamental importance was also the work of Licklider, in the form of an early paper (Licklider, 1960) and his leadership of the ARPANET vision following his arrival at ARPA in 1962. What was exceptional in the work of Baran was his development of the form of the network. He made a distinction between three types of networks: centralised, decentralised and distributed [7].

The centralised network contained one main node and was therefore extremely vulnerable to attack. The decentralised network held much more potential, with several centralised nodes connected to each other. Still, even if the vulnerability was considerably lessened, it remained: two or three strategically placed strikes would be just as devastating to the decentralised network as a single strike to the centralised one. Baran therefore constructed a third alternative that did not have any centre at all. Different nodes in the network would be connected to neighbouring nodes so that dead ends were eliminated. Essentially, every node had contact with every other node through a great number of possible routes.

The distributed network appeared exceptionally robust and simple and remains the basic design for the Internet today.
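Baran’s distinction can be illustrated with a toy connectivity check (the topologies and node names below are my own illustration, not Baran’s actual diagrams): knocking out the single hub of a centralised network isolates every other node, while a distributed mesh survives the loss of any one node.

```python
from collections import deque

def reachable(adj, start, removed):
    """Return the set of nodes reachable from start via breadth-first
    search, skipping any nodes in the removed (struck) set."""
    if start in removed:
        return set()
    seen, queue = {start}, deque([start])
    while queue:
        node = queue.popleft()
        for neighbour in adj[node]:
            if neighbour not in removed and neighbour not in seen:
                seen.add(neighbour)
                queue.append(neighbour)
    return seen

# Centralised: every other node talks only through the single hub.
centralised = {"hub": {"a", "b", "c"}, "a": {"hub"}, "b": {"hub"}, "c": {"hub"}}

# Distributed: each node linked to several neighbours, no centre at all.
distributed = {
    "a": {"b", "c", "d"}, "b": {"a", "c", "d"},
    "c": {"a", "b", "d"}, "d": {"a", "b", "c"},
}

# A single strike on the hub disconnects the centralised network...
print(len(reachable(centralised, "a", removed={"hub"})))  # 1: "a" is isolated
# ...while the distributed network survives the loss of any one node.
print(len(reachable(distributed, "a", removed={"d"})))    # 3: a, b, c still connected
```

The same check shows why two or three strikes on the hubs of a decentralised network would be equally devastating: its robustness depends entirely on a handful of nodes.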



The development of packets

The distributed network cannot function as an Internet by itself. Baran had developed an idea for how messages should be communicated on such a network, inspired by the message switching system of the telegraph companies, itself a refinement of the circuit switching telephone system, which essentially meant one circuit, and one user, at a time. However, since Baran’s innovation was not communicated beyond restricted military circles, it was Donald Davies and associates in the U.K. who developed a somewhat more detailed blueprint for a packet switching network (Davies, 1965; Davies, et al., 1967; Campbell–Kelly, 1988; Naughton, 2000). Davies was concerned with the way information would queue up in certain nodes: shorter messages would be stalled behind long messages, and a number of nodes situated later in the chain of communication would sit passive, waiting for the next input. This made for an inefficient system.

The end result was switching not at the level of whole messages but at the level of uniform sized packets, envelopes. All messages would be chopped into these small packets, each one given a sequence number and address information (source and destination). The system enabled each node to process a large number of packets quickly, switching between packets from different messages. A node needed to read nothing but the destination address in the header before passing the packet on to the next node, somewhat closer to the destination.

The packet idea mirrored the robustness of the distributed network. Different packets could arrive in non–sequential order. They could also take different routes. It did not matter, as long as all of them arrived intact at the destination.



The distributed network, the envelope and the identifier

Obviously, the packets could not function without an address which clearly stated an identity. The packet can be seen as an envelope or a box with unknown contents, and it must therefore, crucially, be marked up with some kind of identification tag. The ideas of the envelope and the identifier are therefore tightly connected. The identifier in the form of an address is, of course, of ancient origin and had been used in computers from the start. The identifier in the packet switching system builds on metadata: a few pieces of information that describe the contents of the envelope.

There are a series of distinct processes involved in connecting and separating the envelope and the identifier. These processes are as follows:

  • Naming (Markup)
  • Disassembling
  • Identifying
  • Matching
  • Sorting
  • Reassembling

In this way, the envelope idea and the identifier idea are fluidly intertwined in order to create a specific kind of flow over the Internet. It is characterised by strict neutrality concerning the anonymous content of messages. This effect is generated by a lean system of identification which is only concerned with indiscriminately passing along great numbers of packets without loss of data quality.
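The six processes can be sketched as a toy packet cycle (the field names and the eight–character packet size are my own simplifications, not any real protocol): a message is marked up and disassembled into uniform packets, which are then identified, matched, sorted and reassembled at the destination, without the content itself ever being inspected.

```python
import random

PACKET_SIZE = 8  # toy payload size; real packets are far larger

def disassemble(message, source, destination):
    """Naming/markup and disassembling: chop the message into uniform
    packets, each tagged with metadata (source, destination, sequence)."""
    return [
        {"src": source, "dst": destination, "seq": i // PACKET_SIZE,
         "payload": message[i:i + PACKET_SIZE]}
        for i in range(0, len(message), PACKET_SIZE)
    ]

def reassemble(packets, destination):
    """Identifying, matching, sorting and reassembling at the endpoint.
    Only the metadata is read: the payload stays anonymous."""
    matched = [p for p in packets if p["dst"] == destination]  # identify/match
    matched.sort(key=lambda p: p["seq"])                       # sort
    return "".join(p["payload"] for p in matched)              # reassemble

packets = disassemble("packets can arrive in any order", "A", "B")
random.shuffle(packets)  # packets take different routes, arrive out of order
print(reassemble(packets, "B"))  # the message arrives intact
```

Note that the nodes in between would run only the identifying and matching steps against the destination address; disassembly and reassembly happen at the two ends.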

For the sake of clarity, let me briefly define the three fundamental ideas:

The distributed network: different ideas aimed at increasing extension, speed and efficiency of the Internet. These features shape the creation of a certain value system within different social groups and an ideology that I call speedism.

The envelope: different ideas aimed at separating and placing sets of information in distinct enclosed areas (envelopes, boxes, folders, applications, layers) with specific functions within larger systems. As long as the specific function works in the larger system, there is expressed neutrality concerning the content. This idea can be emphasised systematically within different social groups in a way that I call the ideology of boxism.

The identifier: different ideas aimed at identifying marked up metadata and through processing (identifying, sorting, matching) facilitate both surfing experiences and the identification of individual surfing patterns. This idea can also produce a social value system, the ideology of markism (an obvious wordplay with Marxism).

The idea of the identifier was, as will be shown, the most underdeveloped of the three ideas in the 1960s, 1970s and 1980s. The World Wide Web was mostly a revolution in the development of the identifier.



Speedism, boxism & markism and the definition of Internet

As the three ideas can be said to constitute the Internet as an innovation, it can be defined with reference to them: “The Internet is a global distributed network that processes metadata and connected uniform sized packets of information through a packet switching system.”

This is a definition that is fairly congruent with other common definitions. However, I would maintain that it is more precise than other definitions, since it emphasises the three innovations. The definition clarifies the existence of metadata and packets, as well as their relationship. Most essentially, no definition I have seen mentions the fundamental idea of the “distributed network”, being content to talk about a network or a global network. In this way, most definitions fail to communicate an understanding of the robustness of the Internet. In my opinion, the only way to define the Internet is by its most basic functions. More elaborate definitions will tend to mirror a particular stage in the development of the technology. Chadwick (2006), who struggles for four and a half pages on the issue of defining the Internet, ends up warning against brief definitions since the Internet is constantly evolving, making definitions obsolete.

From my perspective, the development of the Internet from ARPANET and forward is a series of creative variations of the same theme: highlighting one of the three ideas while making innovative use of the two others. In order to demonstrate this, I will briefly discuss the main developments of the Internet during the 1970s as well as the dramatic evolution of the World Wide Web in the early 1990s.



The development of the Internet and speedism, boxism & markism

Radically new innovations on the Internet have usually meant an emphasis on one of the three ideas, while still employing all of them. E–mail, developed in 1971, mainly took the envelope idea further by using the traditional postal system as a model.

The file transfer protocol (FTP), created in 1972, was a refinement of the packet switching system, which most basically served to make the distributed network more efficient. Many of the innovations of the 1970s had the same effect, making the distributed network more efficient or more extensive. When Vinton G. Cerf and Robert E. Kahn constructed a way of connecting different networks, they did it by putting the packets in an additional envelope with metadata that the gateways between different networks could understand (Cerf and Kahn, 2005). This new protocol was called Transmission Control Protocol (TCP) and made the distributed network global.

Cerf and associates would later in the 1970s split TCP into two parts: TCP and the Internet protocol (IP). TCP/IP is actually a collection of more than a hundred protocols and has served as the foundation for the four basic layers of the Internet, each dealing with a different cluster of problems.
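The envelope–within–an–envelope move that made the network global can be sketched as nested wrapping (the layer names loosely follow TCP/IP; the dictionary representation is my own simplification): each layer adds only its own metadata header and treats everything it encloses as opaque.

```python
def encapsulate(payload, layers):
    """Wrap a payload in successive envelopes, one per layer. Each layer
    adds its own header and never inspects what it encloses."""
    envelope = payload
    for layer, header in layers:
        envelope = {"layer": layer, "header": header, "body": envelope}
    return envelope

def decapsulate(envelope):
    """Peel the envelopes off one at a time until the payload is reached."""
    while isinstance(envelope, dict):
        envelope = envelope["body"]
    return envelope

msg = encapsulate("fire", [
    ("transport", {"seq": 0}),               # TCP: ordering and delivery
    ("internet", {"src": "A", "dst": "B"}),  # IP: addressing between networks
    ("link", {"frame": 1}),                  # link layer: the local hop
])
print(msg["layer"])      # outermost envelope: "link"
print(decapsulate(msg))  # payload recovered intact: "fire"
```

A gateway between networks need only open the outer envelopes down to the one it understands, which is exactly what let TCP connect otherwise incompatible networks.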



The layering of the Internet

The layer is a fundamental envelope idea. Each layer becomes specialised, and information professionals working on one level need not have an understanding of the other layers. As specialists only need to know how the different layers are sorted in relation to each other, these are actually boxes piled upon each other. The layering of the Internet has been the main vehicle for expansion through the ideology of boxism. At the same time, as long as these layers are generative, they actually serve to expand on the idea of the distributed network (speedism). In that way, layers are utilised to increase the size of the Internet and to improve speed and efficiency. At the top layer of TCP/IP, the application layer, it is possible to keep adding layers of different kinds. For instance, in building the semantic Web, functional layers of software are built in “the semantic Web layer cake”, where each layer deals with different problems (Volz, et al., 2003).

What would happen if new layers were to fundamentally privilege either the envelope idea or the identifier idea? As localised layers affecting geographical areas (such as China) or specific applications, there would seemingly be no fundamental threat to the basic ideas of the Internet. However, if such layers were to be institutionalised as standardised features of a top level, this may indeed be the end of the Internet. It is this scenario that Zittrain focuses on, and to which I will return presently.

However, predicting the development of the Internet is a tricky business. The introduction of the World Wide Web as a fundamental layer clearly privileged the identifier idea and it did not destroy the distributed network. The effect was quite the contrary. As I see it, it was a question of ideas relating to the identifier catching up after two decades of neglect.



The end–to–end principle

While the layering of the Internet was an expression of the envelope idea, the articulation and implementation of the end–to–end principle in the mid–1980s served to strengthen the distributed network. Internet architects Saltzer, et al. (1984) primarily saw this as the most efficient way of handling a heterogeneous network. The end–to–end principle was a design norm that stipulated that control mechanisms should, whenever possible, be placed with the sender and receiver of messages, rather than as built–in services in the network itself. In other words, the network could only supply a general service and be open for all kinds of applications. In this way, control was distributed to the user.

The principle actually deals with the issue of how strong the layering idea would be allowed to be in the development of the Internet. This can be characterized as a conflict between the values of boxism and speedism. Should the higher layers be allowed to have centralized and controlling functions, or should that power be distributed to the endpoints of the system? In a recollection, Reed (2000) states that this strategy was highly controversial when it was initially outlined in the late 1970s. At the time, the norm was to allow every point in the system to register and process the purpose and meaning of every packet. It would seem rational to continue solving each problem with a new autonomous layer. Saltzer, et al. [8] argued that the designers should not attempt to “help” the user by assigning more functions to layers, since such layers would often misjudge the variety of problems appearing on the heterogeneous network. These problems would therefore be better handled at the endpoints with the most appropriate software.
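The division of labour implied by this design norm can be caricatured in a few lines of code. In this hedged sketch (the drop rate, chunking and function names are all invented for illustration), the network function knows nothing about what it carries and may silently drop packets, while all reliability logic sits with the endpoints:

```python
import random

def unreliable_forward(packet, drop_rate=0.5):
    """The network blindly forwards or drops packets; it never inspects them."""
    return None if random.random() < drop_rate else packet

def reliable_transfer(chunks):
    """Reliability is an endpoint concern: the sender retransmits each
    chunk until the receiver acknowledges it."""
    received = []
    for chunk in chunks:
        delivered = None
        while delivered is None:          # no acknowledgement: retransmit
            delivered = unreliable_forward(chunk)
        received.append(delivered)
    return received

print(reliable_transfer(["dis", "tribu", "ted"]))
```

The network stays a "dumb" general-purpose carrier, which is precisely what keeps it open for all kinds of applications.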

This principle actually weakens both the idea of the identifier and the idea of the envelope, while fundamentally strengthening the speed and flow of Internet communication. Having said this, it can be argued that the end–to–end principle has historically been balanced by identifier strategies. There has been a tradition of identifying and giving different priorities to data transfers of different types (Hass, 2007).



The World Wide Web

When Tim Berners–Lee started the construction of what would become the World Wide Web, his aim was a more efficient system of retrieving information, more specifically research data and publications (Berners–Lee and Fischetti, 1999). He made a series of innovations of which the most fundamental were the Uniform Resource Locator (URL), the Hypertext Transfer Protocol (HTTP) and the Hypertext Markup Language (HTML).

These are the same ideas as earlier. The URL is the identifier, specifying the address of a particular Web page. HTTP is the vehicle for skipping between pages, making navigation of the distributed Internet much more efficient. In order to function, HTTP relied on some kind of envelope layer, most importantly TCP/IP. HTML is another identifier and a technique for structuring documents that brought the markup of metadata to a completely new level.
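The identifier role of the URL becomes visible when one is taken apart. A minimal sketch using Python's standard library (the address is a made-up example):

```python
from urllib.parse import urlparse

# One string identifies how to fetch a resource (scheme), where it lives
# (host) and which resource it is (path): the URL is the Web's identifier.
url = urlparse("http://www.example.org/research/data.html")

print(url.scheme)   # the protocol used to retrieve the page
print(url.netloc)   # the host holding the resource
print(url.path)     # the specific document identified
```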

As I mentioned earlier, development of ideas relating to the identifier had been lagging behind. However, when the World Wide Web was launched in 1991, it signalled a dramatic turn of events and a steadily increasing growth of identifier ideas and technologies. To this day, this is an ongoing development. When the Mosaic browser was launched in 1993, it considerably expanded the identifying elements, most importantly by including images.

The World Wide Web is essentially a new layer (envelope) placed upon the earlier layers of the old Internet. However, while earlier layers through the end–to–end principle seem to have privileged expansion, speed and efficiency of the distributed network, this layer emphasised ideas relating to the identifier. The Web accelerated the growth of content on the Internet. Still, as it became more attractive, considerably more users gravitated toward it and the effect became a continuous dramatic expansion of the distributed network.

In the struggle for market share in the mid–1990s, Netscape and Microsoft’s Internet Explorer continuously developed HTML in different directions. Eventually, this created a chaotic situation where markup displayed differently, or even not at all, in certain flavours of Web browsers. It was also during this time that work commenced on the remarkably flexible standard of Extensible Markup Language (XML). This became part of a clear trend in which it became increasingly important to mark up all kinds of content. The anonymous idea of blindly transporting all kinds of content still ruled on the level of TCP/IP. However, Web content was described (or “marked up”) not only on the basis of the presentation of text, but also on the basis of content structure.

With the development of XML as a major force in most types of Internet development, there is a return to an old familiar idea: separating the envelope from the identifier. One of the fundamentals of XML is to separate content structure from presentation. This means that documents are marked up in two different ways. One relates to the identifier: what is the content? The other relates to the envelope: what is the form?

What does this mean? It is a strengthening of the idea of the identifier. The content is not restricted to one uniform envelope, but can freely be moved to different kinds of envelopes. There is clearly a tug of war between ideas relating to the envelope and the identifier. With an injection of ideas concerning applications and other types of boxes with anonymous content, the identifier perspective is weakened. Similarly, when, as with WWW, identifier ideas take centre stage, ideas relating to the envelope are weakened.
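The separation XML enables can be sketched as follows: the content is marked up once, and different "envelopes" (presentations) are applied to it afterwards. The element names and renderers below are invented for illustration:

```python
import xml.etree.ElementTree as ET

# Content structure is marked up once (the identifier side)...
doc = ET.fromstring(
    "<article><title>Packet switching</title>"
    "<author>P. Baran</author></article>"
)

# ...while presentation (the envelope side) is applied separately,
# so the same content can move freely between different forms.
def render_html(article):
    return f"<h1>{article.findtext('title')}</h1><p>{article.findtext('author')}</p>"

def render_plain(article):
    return f"{article.findtext('title')}, by {article.findtext('author')}"

print(render_html(doc))
print(render_plain(doc))
```

The same identified content is here poured into two different envelopes without being rewritten, which is the strengthening of the identifier idea described above.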



The National Science Foundation and the privatization of the Internet

Many of the ideals connected to the distributed network were developed when the Internet in the 1980s was separated from ARPANET (and the military) and came into the care of the National Science Foundation (NSF). The National Science Foundation Network created the Internet backbone, focusing on growth of the network as well as speed. When it went online in 1986, its speed was 56 kb/s. Five years later it had been raised to 45 Mb/s. The National Science Foundation worked to increase connectivity and open the Internet up to the world.

NSF was so successful that both the government and the private sector recognized that privatization of the Internet was inevitable (Kesan and Shah, 2001). The process of privatization, however, included a series of fundamental mistakes that inhibited competition, creating opportunities for unfair exploitation of information resources.

It became possible to charge additional fees for backbone access, video access, acceptable server use and other services [9]. In addition, it became possible to restrict home networks and filter controversial data, such as those coming from file sharing connections. All of these strategies required a development of identifier ideas. Methods for identifying users and categorizing their behaviour became necessary in order to commercially exploit Internet usage.



The migration to broadband

Following the privatization process, old telephone modem connections for home users were migrated to broadband. This process can be seen as a major step forward for values connected to the distributed network. Yoo (2004) identified four important shifts as the Internet went from narrowband to broadband.

The first shift was from institutional to mass market users. This required an influx of layering ideas, enabling a smarter network that could handle such an increase in the number of users with such an extensive variation of needs.

Second, there was a shift into more network intensive applications, such as audio downloads, online gaming, Internet telephone and streaming video. This, again, required the creation of suitable layers. In addition, identifier ideas were used in order to make a distinction between different users, allowing them to purchase the bandwidth that suited their specific needs.

The third shift concerned a development of security strategies. This, also, led to protection strategies building on envelope (layering) and identifier ideas.

The fourth shift was from the anonymous structure to one where law enforcement became possible. Naturally, this process was enabled through identifier ideas.

All of these shifts were therefore processed with the help of envelope and identifier ideas. This is important to note, as they counterbalanced a major advance for values relating to the distributed network. Without these strategies, a series of problems would have evolved as a result of the values of the distributed network becoming too dominant.



Three ideologies

The basic ideas of the Internet have stimulated the development of three value systems intent on developing the Internet in different directions, implying a series of technological choices. This perspective is congruent with those Internet analysts, reviewed at the outset of this paper, who saw the Internet as a metaphor or as architecture. I differ from most of them in two ways. First, I place a stronger emphasis on the importance of the technology developing through interaction. The technology changes the users (and society), but use and design initiatives will also transform the artifact. Second, the Internet is developed dynamically through a fundamental conflict between different value systems. As a consequence, different groups with different interests will use and renegotiate this technology in different ways.

The three basic ideas of Internet technology supply us with different opportunities and quite often there is a conflict between what seem to be implied by one system of ideas and another.




The ideology of speedism

The distributed network as an ideology is all about expansion, speed and efficiency: speedism.

With speedism, the Internet is developed through an endless creative retransformation of technological instruments to further the free flow of information.

With this value system, there is a concern with how information is efficiently carried from one point to another. This ideology is about a particular kind of flow, one that is free in its choice of transportation. This value system is both proactive and reactive. It strives for increased speed and efficiency. There is also a need to react against strategies, built on other value systems, that serve to restrict the free flow of information. The distributed network is built on the idea of a flow of information that cannot be stopped. According to this vision, the Internet is a dynamic web with a multitude of different forms of interaction. All forms of centralized control are rejected, and indeed, actively undermined. Thereby, creative hotspots are created in order to stay one step ahead of various control mechanisms, such as legislation. For the followers of speedism, it becomes crucial to conserve the different ideas that are associated with the end–to–end principle, as these serve to lessen the influence of layering and identifying.

With speedism, innovations are constantly transforming the character of the Internet, allowing technological change to develop at its own pace: exceedingly quick. In fact, from this ideological standpoint it can be argued that any attempt at regulation will only serve to spur on and accelerate technological transformation.

For instance, as the centralised file sharing institution of Napster was neutralised, Gnutella and similar distribution systems were built on the idea of the distributed network (Alderman, 2002). In other words, the regulation of the centralised technology served to accelerate transformation into a technology built on the distributed network that could not be controlled.

Following this development, new instruments of regulation were developed in the form of detection software. However, simultaneously BitTorrent technology was being developed that made legal detection work much more difficult. This is a file sharing concept that expands even further on the idea of the distributed network, as well as adding another layer to the envelope idea. As the same envelope can be retrieved from a number of different users, the queuing problem is avoided according to the same model once developed for the packet switching system.
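The queue-avoiding logic can be sketched in a few lines. The peer names and piece contents below are invented, and real BitTorrent adds hashing, trackers and piece-selection strategies; the sketch only shows that the same file can be assembled from envelopes held by different users:

```python
# A toy sketch of multi-source retrieval: pieces of one file are spread
# across several peers, so no single sender becomes a queuing bottleneck.
peers = {
    "peer_a": {0: b"dis", 2: b"ted"},
    "peer_b": {1: b"tribu", 3: b" network"},
}

def assemble(peers, n_pieces):
    pieces = {}
    for holdings in peers.values():          # conceptually, ask all peers at once
        for index, data in holdings.items():
            pieces.setdefault(index, data)   # take each piece from whoever has it
    return b"".join(pieces[i] for i in range(n_pieces))

print(assemble(peers, 4))
```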

The response from those attempting to control file sharing became the development of indirect detection aimed at torrent file sharing. This, once again, led to accelerated technological transformation and the development of social networking systems such as OneSwarm. With this technology, envelopes are constantly reidentified as they travel in the network, making legal detection work exceedingly difficult. This development shows how speedism can be strengthened as an influential value system within Internet development through a creative manipulation of the envelope and the identifier. These devices are used to improve speed, accessibility and privacy.



The ideology of boxism

The ideology of boxism has several characteristics that are at odds with speedism.

With boxism, the Internet is developed through layering and aggregation of resources to portals and appliances.

Boxism is about control and uniformity. By placing all data in the same kind of uniform package, it becomes possible to organize information in an effective manner. In addition, as every piece of information is wrapped in the same kind of package, they also lose identity. The envelope idea is everywhere on the Internet: layers, applications, social networking sites, folders, etc. Boxism is a move toward sameness and anonymity. There is an interesting tension in this ideology since it stands for both control and absence of control. There is no control or interest in the content of the envelope. All aspects of control relate to the form of the envelope. According to this vision, the Internet is square, reliable and predictable. People interact with the Internet through modules situated at a well controlled layer. At this module, the users can personalise according to their tastes and interests. Interaction is controlled and regulated outside the modules, much less so inside them.

Appliances are boxes, envelopes, containing other boxes. Traditionally, there is the computer desktop with different folders that contain more folders. More recently, there has been a surge of integrated technologies such as smartphones containing a wealth of appliances (apps). Another example is the portal technology of, for instance, IBM’s WebSphere or Microsoft’s SharePoint, in which the user’s personal home page can be designed with different applications, boxes, called portlets. Various social media such as Facebook, MySpace, YouTube, Flickr, etc. are all expressions of boxism. Regardless of the platform, the appliances are usually locked in a certain way. While Zittrain (2008) sees this as a corruption of the generative character of the Internet, Lessig (2008) signals some optimism that these “sharing economies” and “hybrid economies” will revitalize the “Read/Write” cultures at the expense of “Read/Only” cultures.

As alluded to earlier, these appliance–making technologies become a fundamental threat to the innovation of the Internet if they are transformed into a standardised top layer of non–generative or semi–generative character.

Boxism fits well with the needs of corporate actors and policy–makers. By adding a module landscape as a layer, the Internet becomes more predictable and regulated. With more transparent patterns of Internet behaviour, commercial exploitation becomes much easier. With this ideology, the chaotic and creative character of the distributed Internet is something that needs to be tamed. Portals and appliances create safe areas, guided tours of the wild wild Internet. As Zittrain (2008) suggests, people have started to move away from risks associated with the Internet toward appliances with guaranteed functionality that are continuously updated to guarantee maintained security. He is particularly concerned with tethered appliances that keep the user bound (on a leash) to a vendor. The right to updates is exclusive to the vendor and user privileges are kept low for security reasons. In a sense, these are the purified artifacts of the very thing that Lessig (1999) warns of, enclosures facilitated by the coding of business lobbyists, politicians, lawyers and programmers. At the same time, they can be seen as platforms for free flow of information, for the sharing and hybrid economies that Lessig (2008) welcomes.

With boxism, the distributed network and the identifier can be used for purposes of control. Zittrain [10] examined the way that tethered appliances can be used for mass surveillance as well as targeted eavesdropping. The democratic potential of the distributed Internet, in which everyone can publish as well as have access to all other publications, is thus turned on its head: everybody active on the Internet can be monitored. With boxism it is a particular surveillance that is of interest, monitoring patterns of usage, specifically concerning appliances. In other words, one wants to know how the packets are moved (not what is in them). This information is of use both for the development of new appliances and for regulation. There is less interest in identifying the content of information accessed and which individuals access what. For that, I have to turn to the ideology of markism.



The ideology of markism

The ideology of markism is a contrast to boxism. To point to something is to identify it and give it a specific identity. Associated with this ideology is also the idea of sorting: placing information in different orders and matching different specific sets of information to each other.

According to markism, the Internet is developed with the help of increasingly sophisticated software built on the principles of marking, identifying and processing data packets, ultimately with the help of artificial intelligence.

Markism presents us with a seductive set of values that serves to make the surfing experience safer, more fun and more effective. The introduction of the World Wide Web certainly entailed a decisive push in this direction. As such, the balance between what the human does and what the computer does is shifted. The key question is: how much work should a computer do for us? Markism also stands for visualizing, emphasizing and positioning something in relation to something else. It is as much concerned with showing the uniqueness of something as boxism is about making something uniform. Important is also the idea of fixating, putting a piece of information in a specific place. This can be seen as the contrast to the ideology of speedism, which emphasises flow. According to the ideology of markism, most information can (and should) be processed with the help of markup languages. Computers mediate the Internet experience for us as users. In the near future, advanced artificial intelligence may in many ways appear superior to humans when processing, comparing, measuring, revising and evaluating information in a wide variety of different ways. There is undeniable power involved in owning these processes. It would seem to be an essential human and political dilemma to decide how much of this power is to be given away to machines. The ideology of markism promotes far–reaching powers for smart computers.

In the following sections, I will discuss some of the basic conflicts concerning the three ideologies and certain problems that could occur if any one of these would dominate the future development of the Internet.



When speedism dominates

Speedism, taken to its extreme, rejects the controlling ambitions of boxism and the personalisation of markism. The vision of the Internet becomes one of a boundless network, ever expanding and impossible to control. A given user will continuously be able to access new technology that will circumvent attempts at regulating and identifying Internet communication. Attempts at regulation would only serve to initiate more rapid technological change. Participation would be totally anonymous and new creative functions would be continuously layered onto the ever expanding Internet.

While there certainly are many positive connotations to be found in this vision, there are many drawbacks. This is an Internet impossible to police. Criminal networks would be able to utilise the Internet both as an instrument for organising scams, thefts and other illegal activities and for coordinating resources. Users of child pornography, corporate spies and terrorists could create their own sheltered communication networks. Copyrighted material would be routinely swapped with guaranteed impunity. As the other ideologies are reduced in importance, it would also be an Internet that would be difficult to navigate. Controlling institutions such as W3C and ICANN would have grave difficulties in upholding standards, with the Internet ultimately losing coherence. Search engines would have difficulties in producing relevance rankings. Knowledgeable hackers would hold positions of power as they would know where the backdoors of dominating systems could be found. Control is at the endpoint, but many users will be unable to assemble the resources they need for safe and efficient Internet experiences.

Vastly different interest groups can emphasise different dimensions of this ideology. I will make a distinction between four different versions of this ideology.

First, the ideology of the generative network, as formulated by Zittrain. Included in this perspective are the ideals associated with the end–to–end principle. Layering should contain a minimum of functionality and instead distribute control to the endpoints. The Internet should remain open for all perspectives and applications as long as they do not weaken the basic generative character of the Internet. This is also an image of an ever evolving Internet that never settles down into a finished, polished product. There is much to admire about this position. I do, however, feel that it must be seen to be a perspective within speedism, thus ignoring certain needs, arguments and claims that are vital for the followers of boxism and markism. Quite possibly, this ideology needs to be tempered by boxism and markism to survive. Indeed, Zittrain himself introduces the idea of “the generative pattern”, that the innovation that is successful through openness is doomed to exploitation and closure.

Second, the ideology of independent cyberspace. This influential set of ideas has been most eloquently formulated by John Perry Barlow in his famous “A declaration of the independence of cyberspace” (Barlow, 1996). He states:

“On behalf of the future, I ask you of the past to leave us alone. You are not welcome among us. You have no sovereignty where we gather … You have no moral right to rule us nor do you possess any methods of enforcement we have true reason to fear … Cyberspace does not lie within your borders … We are creating a world that all may enter without privilege or prejudice accorded by race, economic power, military force, or station of birth … Your legal concepts of property, expression, identity, movement, and context do not apply to us. They are all based on matter, and there is no matter here.” (Barlow, 1996)

With this ideology, cyberspace becomes a global nation with its own rules.

Third, the ideology of piracy. This is closely related to the ideology of independent cyberspace and could perhaps be regarded as a subset of it. This is a vision in which it is taken for granted that new forms of digital technology make copyright legislation meaningless. As information wants to be free, regulation is seen as outdated and market mechanisms as out of sync with the information society. For instance, it is possible to track the development of a youth oriented piracy culture which has moved into the realm of politics. Sweden hosts the infamous file sharing site The Pirate Bay, as well as an ideological think tank called the Pirate Bureau and a political party: The Pirate Party. The latter has a political platform which proposes that file sharing be made legal (and encouraged), the patent system abolished and privacy guaranteed (Li, 2009). In the Swedish election to the European Parliament in June 2009, the Pirate Party received a staggering 7.1 percent of the electoral vote. This is a fascinating example of how the technology in a concrete way can make politics.

Fourth, the ideology of the corporate network. Historically, it is possible to position the cyber libertarianism discussed by Winner (1997), entailing commercial freedom, as a corporate variation of this ideology. However, as unrestricted freedom has turned out to enable massive instances of copyright intrusion, most mainstream corporate actors strive for increased regulation in order to transform the Internet into a controlled marketplace. Corporate actors are thus more likely to push some kind of boxism (see below). Still, there are a number of corporate actors, mostly smaller, that have strategically utilised the spectacular free flow of the distributed Internet in order to spread their products for free or almost for free. With this strategy, an added goodwill value, economic or promotional, is generated with new creative methods. For instance, if you have produced an amateur video with five million views on YouTube, there are ways of cashing in on substantial social capital. Some of the ideas associated with the financial model of the long tail (Anderson, 2004) — creating huge profit through small individual sales of a large catalogue of products — can also be placed in this category.



When boxism dominates

With this ideology, the dominating vision is one of a controlled and well regulated Internet. New layers have been introduced that carry central power, weakening the architectural idea of endpoint–to–endpoint free–flowing communication. As speedism has been tempered, the pace of technological change has slowed down. The dominance of boxism over speedism can be summarized in the words of Lessig [11]: “The pressure to protect the controlled is increasingly undermining the scope for the free.” The Internet has settled down into stable and predictable functions or platforms that can be utilised for flourishing commercial ventures. As big businesses develop their individual boxes, the universalised ambitions of markism take a back seat.

With boxism, there are clear criteria on what you can and cannot do. The Internet consists of a great number of modules and for different kinds of information practices, people move from module to module. Each module can be personalised, adapted to the preferences of the user and at times it becomes a fluid process to move between different modules. While there are other ways of experiencing the Internet in a more raw fashion, these powerful and personalised modules have seduced people into almost exclusively utilising them in their consumption of the Internet. Still, as the content of the traffic is anonymous in character, there is still great freedom for the individual within certain boundaries. Individuals send envelopes to each other with invitations to experience interaction in each other’s modules. “Visit me tonight, there is a party in my module.”

Building on Zittrain (2008), this is a first step into the “tethered” Internet, discussed further below. In any case, it is the favoured ideology within the corporate Internet culture. “Cloud computing” (running the program from the Web rather than on the PC), social media, portals and eLearning platforms are all manifestations of this ideology. Users become seduced into exclusively utilising the Internet through these boxes of aggregated content that they can personalise to their hearts’ content.

Another idea for weakening the end–to–end principle is the so–called “active network architecture” (Tennenhouse and Wetherall, 2007). The idea is to replace the anonymous and passive packets with active capsules that contain miniaturized programs that interact with each network point. This would enable “a means of implementing fine–grained application–specific functions at strategic points within the network” [12]. Active networking quickly created a design controversy as a possible challenge to the end–to–end principle (Reed, 2000).
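The contrast between passive packets and active capsules can be caricatured in a toy sketch. The node names and the capsule's program are invented for illustration; real active networking proposals involve safe execution environments at routers:

```python
# A toy sketch of the "active capsule" idea: the packet carries a small
# program that each network point executes, instead of opaque data the
# node merely forwards.

def make_capsule(payload):
    trace = []
    def on_node(node_name):
        """Application-specific code run at each network point."""
        trace.append(node_name)       # e.g. record or transform state per hop
        return payload, trace
    return on_node

capsule = make_capsule("data")
for node in ["gateway", "core", "edge"]:
    payload, trace = capsule(node)

print(payload, trace)
```

The capsule thus implements fine-grained functions inside the network itself, which is exactly why it was seen as a challenge to the end-to-end principle.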



When markism dominates

It is possible to identify four different variations of markism as a dominant Internet ideology.

The first of these is a computer–driven markism. The idea is to utilise the research traditions within artificial intelligence (on describing information for the processing of machines rather than humans) and digital libraries (on storing and cataloguing information) (Ossenbruggen, et al., 2002).

The second is a judicially driven markism. This system of ideas can benefit by computers that work intimately with metadata. However, the real purpose is to counteract the anonymity of boxism and the creativity of speedism in order to link real–life persons to specific virtual events in a legally satisfactory manner. In other words, the policing of the Internet requires sophisticated instruments for identification.

The third is a surveillance driven markism. This is a value system intent on monitoring online behavior in order to control off–line behaviour.

The fourth is a search engine driven markism. Major search engines such as Google and data mining corporations such as DoubleClick constantly strive to aggregate information on Internet usage. This is a value system which actually carries traits of all of the three above.

I will, below, deal with these in turn.

Computer–driven markism can be represented by Berners–Lee, et al. (2001) and their notion of the semantic Web. In this future scenario, everyone has their own custom–made semantic Web agent that navigates the Internet in our place. I see these as elaborations of the personalised modules within boxism.

In order for the semantic Web to function, Internet content will be marked up for computer processing, rather than for human reading. Not only do these agents surf in our stead, they also interact with each other, thereby saving us time. In an example given by Berners–Lee, et al. (2001) communication between a brother and sister can be kept to a minimum in planning the medical treatment of their mother, since their respective agents can negotiate the essentials together with the agent of the doctor.

This is an image of an imminent future in which the Internet is so exceptionally well marked up and the computers are so good at identifying language patterns that the computer can take care of a large part of our tedious information practices. It will be able to interpret our knowledge needs, translate them to a strategy for searching, retrieve relevant documents and then process this information into a tidy summary. This is essentially what this version of markism taken to its extreme would do. It is an Internet where there is so much metadata that our manual information practices are insignificant compared to that of a computer.

Returning once again to Zittrain (2008) and the idea of the generative Internet, I actually detect a substantial challenge to the innovation of the Internet in the development of the semantic Web [13]. The semantic Web is intended, just like appliances, to be layered upon the current top layer. The bottom layer is XML and is already in place. All of the other layers of the semantic Web are placed on top of XML (Ossenbruggen, et al., 2002). In the end, these layers are designed for the semantic Web agents rather than for humans.

It is important to point out that the semantic Web agents are not intended to go into the envelope and read the actual content. Instead, they navigate extended presentations of metadata. The quality of their performance is therefore dependent on the establishment and continued development (including frequent updates) of XML based metadata.
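To illustrate the kind of machine-oriented markup this presupposes, consider a minimal sketch. All element names, namespaces and values below are hypothetical, invented only to show how an agent could query structured metadata (here, appointment slots of the kind the brother-and-sister example involves) without ever reading the human-oriented content:

```python
import xml.etree.ElementTree as ET

# Hypothetical machine-readable markup for a doctor's appointment page.
# A human-oriented page would render this as prose; a semantic Web agent
# would consult only the structured metadata.
DOCUMENT = """
<appointment xmlns:med="http://example.org/medical#">
  <med:doctor>Dr. Hartman</med:doctor>
  <med:specialty>physical therapy</med:specialty>
  <med:slot day="Tuesday" time="14:00" open="true"/>
  <med:slot day="Thursday" time="09:30" open="false"/>
</appointment>
"""

NS = {"med": "http://example.org/medical#"}

def open_slots(xml_text):
    """Return (day, time) pairs for slots marked open -- the sort of query
    two agents could negotiate over without human mediation."""
    root = ET.fromstring(xml_text)
    return [
        (slot.get("day"), slot.get("time"))
        for slot in root.findall("med:slot", NS)
        if slot.get("open") == "true"
    ]

print(open_slots(DOCUMENT))  # [('Tuesday', '14:00')]
```

The point of the sketch is exactly the one made above: the agent's answer is only as good as the metadata, and the metadata must be created and kept up to date by someone.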

While there are obviously many advantages to having personal semantic Web agents, I would argue that this is actually not a system that empowers us as knowledge seekers. Attaining knowledge is not only about reading a document that is placed before us. Rather, it is a whole process in which the knowledge seeker articulates an interest in connection with the activity of scanning different types of documents. If we, the humans, lazily give this away to the computers, it is we who become the stupid machines.

The second variation, the judicially driven markism, suggests an extensive system of identification and markup in order to control and police the Internet. Lessig (1999) discusses this variation as “architectures of identification”, including the technologies of password systems, cookies and digital certificates. Lessig describes these techniques as “layer architectures of identity onto the existing identity–ignorant architectures of TCP/IP” [14].
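The cookie, the simplest of these identification technologies, shows concretely how identity is layered onto an identity-ignorant transport. A minimal sketch (all names and values hypothetical), using Python's standard cookie machinery:

```python
from http.cookies import SimpleCookie

# TCP/IP itself carries no notion of who the user is; an identity layer is
# typically added on top, e.g. via a session cookie issued by a server.
issued = SimpleCookie()
issued["session_id"] = "u-48213"          # hypothetical identifier
issued["session_id"]["httponly"] = True
header = issued["session_id"].OutputString()  # what a server would send

# On each subsequent request the client echoes the cookie back,
# re-identifying itself across otherwise anonymous packets.
returned = SimpleCookie()
returned.load("session_id=" + issued["session_id"].value)
print(returned["session_id"].value)  # u-48213
```

Password systems and digital certificates follow the same pattern at greater strength: identification is not in the network but bolted on above it.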

The judicially driven markism can also be seen as an ideology promoted by the U.S. Patriot Act. Furthermore, it is connected to regulations against copyright theft such as the U.S. Digital Millennium Copyright Act, the trade agreement TRIPS, and the European attempt at file sharing regulation, IPRED. As the creative and evasive movements of copyright piracy can build on the endless flexibility of the distributed network, counteractions must intrude on individual privacy to be effective. Without an elaborate system of identification, the legal system will only catch the least knowledgeable file sharers. This would be unsettling from a democratic viewpoint.

Naturally, semantic Web agents can hunt for these kinds of lawbreakers considerably more effectively.

The third variation can be called the surveillance driven markism. It is distinctly different from the second variation, which works to restrict the distributed network. The surveillance driven markism exploits the distributed network for routine mass surveillance. Zittrain [15] discusses this as “privacy 2.0” following the development of cheap sensors and Web cameras. While the judicially driven markism consists of a value system intent on monitoring online behaviour, the surveillance driven markism is concerned with offline social processes. Sun Microsystems CEO Scott McNealy signalled as early as 1999: “You have no privacy. Get over it” [16]. Video clips available on the Internet can be utilised as a resource for tracking individuals through the evolution of facial recognition software such as Google–owned Neven Vision. Both publicly accessible Web cameras, e.g., those placed at the U.S.–Mexican border, and cellphone video clips distributed on YouTube are instruments that actively utilise the Internet as a peep show. The result is an army of citizens documenting each other for various purposes, all together creating a vast resource for privacy infringement.

The value systems of the powerful search engines and third–party data mining companies constitute a specific kind of markism which I have termed search engine driven markism.

The most powerful actor within this type of markism is Google. Even though it strives to “do no evil”, it leads a strong trend of targeting and analyzing individual surfing behaviors through traffic analysis (Conti, 2009). In defining the Google vision, cofounder Larry Page states: “The ultimate search engine would understand everything in the world. It would understand everything you asked it and give you back the exact right thing instantly” [17]. Naturally, in order for this to work, the search engine must either generalize the specific needs of the user in a mainstream fashion (which is no good) or have surveilled the individual user extensively. It requires the kind of “hyper scrutinized reality” that Zittrain [18] warns against. It is also a vision strangely similar to that of the semantic Web.

The most damaging aspect of a dominating search engine driven markism is the obvious threat to sensitive information such as innovations being developed, pending patents, organization memberships, anonymous sources to journalists, personal illness, sexual preferences, abortions, and corporate strategies and policies. While most companies aggregating search and surfing patterns have restrictive privacy policies, this is clearly far too weak a protection. As Internet security expert Greg Conti puts it: “Information is a slippery thing” [19].



Is the generative character of the Internet threatened?

Now that I have identified the basic ideas underpinning the Internet as an invention, as well as the three ideologies, it is useful to return to the basic argument of Zittrain (2008): that the generative character of the Internet is threatened. From my vantage point, I see this as three separate arguments:

  • A threat against the open character of the Internet (an extension of a line of thought developed by Lessig);
  • A threat against the layered character of the Internet; and,
  • A threat against both the open and layered character of the net through tethered semi–generative layers.

Concerning the first point, the Internet as a balanced artifact would seem resistant to any interest group controlling or closing down functionality. Zittrain does not really conceptualise the Internet as a distributed network. Instead, he is concerned with the generative aspect: “a certain incompleteness in design, and corresponding openness to outside innovation” [20]. He also identifies a trend of appliances that are complete rather than incomplete: iPods, BlackBerries, game consoles, etc. While this remains an important analytical point, it is important to keep in mind that the distributed network can probably resist multiple corporate attempts to bind it. It is, of course, possible to appliancize a great number of nodes in the network, closing them off from generative layers of innovative content. However, one could ask: is it possible to close them all down? In addition, the distributed network is also far more efficient than a decentralised one. To compromise the distributed network entails fundamentally lowering the degree of efficiency. For these reasons, I have another perspective on the growth of appliances such as Facebook and iPhones. For Zittrain, the emergence of appliances on the Internet is a migration of an idea that belongs on the PC [21]. Still, these can also be seen as elaborations of the envelope idea, one of the constitutive ideas of the Internet. There should be room for that. In other words, this argument does not worry me.
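The resilience argument can be made concrete with a toy sketch (the graphs and node names are invented purely for illustration): removing the hub disconnects a decentralised, star-shaped network, while a distributed mesh with redundant routes survives the loss of any single node.

```python
from collections import deque

def connected_after_removal(adjacency, removed):
    """Breadth-first search over the graph with `removed` deleted;
    returns True if the surviving nodes remain one connected network."""
    nodes = [n for n in adjacency if n != removed]
    if not nodes:
        return True
    seen, queue = {nodes[0]}, deque([nodes[0]])
    while queue:
        for nxt in adjacency[queue.popleft()]:
            if nxt != removed and nxt not in seen:
                seen.add(nxt)
                queue.append(nxt)
    return len(seen) == len(nodes)

# Decentralised: every node hangs off a single hub.
star = {"hub": ["a", "b", "c"], "a": ["hub"], "b": ["hub"], "c": ["hub"]}

# Distributed: multiple redundant routes between nodes.
mesh = {"a": ["b", "d"], "b": ["a", "c", "d"],
        "c": ["b", "d"], "d": ["a", "b", "c"]}

print(connected_after_removal(star, "hub"))                 # False
print(all(connected_after_removal(mesh, n) for n in mesh))  # True
```

This is, of course, only Baran's classic point in miniature; the argument in the text is that the same redundancy makes the network hard for any single corporate actor to capture.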

The second argument concerns the layered character of the Internet. Zittrain maintains that the current Internet is constructed so that lower layers are incomplete, structured not to constrict or disfavour any kind of activities that will be layered upon them. Against this, he identifies a trend toward a new kind of layer consisting of appliances. In itself, I do not see this as damaging to the innovation. Layers and appliances spring from the envelope idea and are therefore quite natural for the Internet. As earlier layers have almost exclusively served to extend the idea of the distributed Internet, it becomes increasingly difficult to corrupt this dimension of the Internet through additional layering. Seemingly, the robustness of the distributed network is rather safe at the top layers. There would be much more cause for concern if there were a threat against the lower layers. So far, I am not worried.

The third argument is a kind of twist on the second. Appliances can be built to be seemingly generative, yet actually be strictly controlled by an individual company. That would mean that the original designer would not only own the bottom layer, but also implicitly control all the layers above. With time, standardised Web browsing software would accommodate the upper layers and the Internet would stabilise in a situation where a series of fundamental lower layers are controlled by specific corporate actors. In my analysis, this is a profoundly disturbing scenario. Hopefully, the debate spurred by Zittrain can serve to create an awareness of this particular threat to generative Internet technology.

I will below take a closer look at this particular threat to the Internet.



The tethered Internet: An alien idea

The identification of appliances using the tethered strategy is, from the vantage point of this article, both interesting and disturbing. It is an idea which does not naturally follow from any of the three basic ideas. I would characterise it as an attempt to place a layer upon the Internet which is not naturally connected to the Internet.

So, this is how it becomes possible to destroy the Internet, through infiltration and entrapment in a pattern familiar from the old frontier days of the Wild West. Corporations buy the most important patches of land and then control everything built on it as well as all traffic going through it. In this case, crucial appliances are created that entice the user to gravitate toward them. In the second step, generative layers are built upon these appliances. In the third step, Web browsers are adapted to what have now become essential and natural elements of the Internet. Finally, owners of the original appliances can exercise control in a variety of ways.

While the distributed net would in theory seem more or less indestructible, here is a flaw: it can be destroyed by introducing the decentralised network idea on another level. If this corrupted layer of appliances becomes standardised and fundamental for Web browsing, then it has created a decentralised network that controls the distributed network. In a further step, one could imagine a centralised network connecting to the decentralised network, in the form of a coalition of the key owners of the bottom appliances or policy–makers with the aim to regulate. As Zittrain argues:

“Generative networks like the Internet can be partially controlled, and there is important work to be done to enumerate the ways in which governments try to censor the Net. But the key move to watch is a sea change in control over the endpoint: lock down the device, and network censorship and control can be extraordinarily reinforced. The prospect of tethered appliances and software as service permits major regulatory intrusions to be implemented as minor technical adjustments to code or requests to service providers.” [22]

The tethered strategy aggregates a number of negative aspects of the envelope ideology and the identifier ideology. Together, they can conspire to destroy the distributed Internet. Since the Internet is fundamentally defined by the distributed network, this would be the end of the Internet. Something else, and exceptionally different, would come in its place.



Closing discussion

The Internet can be seen as an artifact that is shaped by social action and also, in return, influences the structure of social action. This paper has been concerned with the different value systems that are involved when various social groups continuously shape the Internet. I have identified three constitutive ideas of the Internet: the distributed network, the envelope and the identifier. I discussed the key contributions in the development of the Internet and World Wide Web as variations of these different ideas.

Building on the three constitutive ideas, I constructed a definition of the Internet. I also suggested that the constitutive ideas could be seen as different value systems, each built on a core idea. I discussed the three ideologies of speedism, boxism and markism.

It is quite natural that different social groups with different ideologies continuously renegotiate the shape of the Internet. If the Internet is to be developed as a rich and flexible innovation, it is important that representatives of different ideologies continue to make their impression on the Internet in order to counterbalance each other. I identified negative scenarios mainly when one ideology would be allowed to dominate. A heavy emphasis on speedism supplies us with an ever–developing anarchist Internet over which societies have lost control. Similarly, with the dominance of boxism, the Internet becomes too controlled. In a scenario where markism takes a leading role, there may be too much space for artificial intelligence to navigate the Web instead of humans. Other versions of markism entail the creation of efficient instruments for the legal system as well as corporate actors to police any kind of major or minor Internet misdemeanour or indiscretion, leaving elementary privacy challenged.

The historical review that I have presented seems to suggest that the presence of different value systems has a soothing and balancing effect on the Internet. The development of identifier ideas serves to supply some kind of regulation that can counter some of the freewheeling force generated by speedism. At the same time, the envelope idea of adding more layers has the potential of facilitating both privacy and external control. The creative development of extending the distributed Internet counters the controlling ambitions visible both within boxism and markism.

There are a number of possible scenarios in which this balance can be dramatically shifted. Some of these are difficult to track and identify as specific threats to the balance, since trends are introduced slowly and may become distinctly visible only when it is too late. Zittrain discusses one such scenario in which strong corporate actors can actually corrupt the idea of the Internet itself. It is important for Internet researchers, legislators and Web developers to be aware of the potential of other kinds of threats that dramatically shift the balance of ideas. Regardless, Zittrain’s suggestion of transforming Internet users into participants who share the responsibility of shaping the Internet is valuable. For the future of the Internet, we must all become “netizens”. End of article


About the author

Jan Nolin has a background in science and technology studies as well as policy studies. His current research interests are within Internet policy studies. He is a professor of library and information science at the Swedish School of Library and Information Science, University of Borås, Sweden.



1. Latour, 2007, pp. 54–55.

2. Castells, 2001, p. 1.

3. Cerf, 1996, p. ix.

4. Lessig, 2006, p. 32.

5. Lessig, 1999, p. 30.

6. Zittrain, 2008, p. 149.

7. See figure 1, Baran, 1964, p. 2.

8. Saltzer, et al., 1984, p. 287.

9. Lessig, 2002, pp. 156–158.

10. Zittrain, 2008, pp. 108–110.

11. Lessig, 2002, p. 177.

12. Tennenhouse and Wetherall, 2007, p. 81.

13. Although this is a discussion that is missing in Zittrain (2008).

14. Lessig, 1999, p. 34.

15. Zittrain, 2008, pp. 205–231.

16. Quoted from Chadwick, 2006, p. 257.

17. http://www.google.com/corporate/tenthings.html.

18. Zittrain, 2008, p. 212.

19. Conti, 2009, p. 17.

20. Zittrain, 2008, p. 101.

21. Zittrain, 2008, p. 102.

22. Zittrain, 2008, p. 125.



Janet Abbate, 1999. Inventing the Internet. Cambridge, Mass.: MIT Press.

John Alderman, 2002. Sonic boom: Napster, MP3, and the new pioneers of music. Cambridge, Mass.: Perseus.

Chris Anderson, 2004. “The long tail,” Wired, volume 12, number 10, at http://www.wired.com/wired/archive/12.10/tail.html, accessed 19 September 2010.

Paul Baran, 1964. “On distributed communications networks,” IEEE Transactions on Communications Systems, volume 12, number 1, pp. 1–9, and at http://www.rand.org/pubs/papers/2005/P2626.pdf, accessed 19 September 2010.

Darin Barney, 2000. Prometheus wired: The hope for democracy in the age of network technology. Chicago: University of Chicago Press.

John Perry Barlow, 1996. “A declaration of the independence of cyberspace,” at http://www.buscalegis.ufsc.br/revistas/index.php/buscalegis/article/viewFile/27624/27182, accessed 7 October 2010.

Tim Berners–Lee and Mark Fischetti, 1999. Weaving the Web: The original design and ultimate destiny of the World Wide Web by its inventor. San Francisco: HarperSanFrancisco.

Tim Berners–Lee, James Hendler and Ora Lassila, 2001. “The semantic Web: A new form of Web content that is meaningful to computers will unleash a revolution of new possibilities,” Scientific American, volume 284, number 5, pp. 34–43, and at http://kill.devc.at/system/files/scientific-american_0.pdf, accessed 19 September 2010.

Wiebe E. Bijker, 1995. Of bicycles, bakelites, and bulbs: Toward a theory of social technical change. Cambridge, Mass.: MIT Press.

William F. Birdsall, 1996. “The Internet and the ideology of information technology,” The Internet: Transforming our society now, INET 96. Proceedings of the annual meeting of the Internet Society (25–28 June, Montreal), at http://www.isoc.org/inet96/proceedings/e3/e3_2.htm, accessed 19 September 2010.

Martin Campbell–Kelly, 1988. “Data communications at the National Physical Laboratory (1965–1975),” IEEE Annals of the History of Computing, volume 9, numbers 3–4, pp. 221–247, and at http://www.archive.org/details/DataCommunicationsAtTheNationalPhysicalLaboratory, accessed 19 September 2010.

Manuel Castells, 2001. The Internet galaxy: Reflections on the Internet, business and society. Oxford: Oxford University Press.

Manuel Castells, 2000. The rise of the network society. Second edition. Oxford: Blackwell.

Vinton C. Cerf, 1996. “Introduction,” In: Mark Stefik. Internet dreams: Archetypes, myths and metaphors. Cambridge, Mass.: MIT Press, pp. ix–x.

Vinton G. Cerf and Robert E. Kahn, 2005. “A protocol for packet network interconnection,” ACM SIGCOMM Computer Communication Review, volume 35, number 2, pp. 71–82. http://dx.doi.org/10.1145/1064413.1064423

Andrew Chadwick, 2006. Internet politics: States, citizens, and new communication technologies. Oxford: Oxford University Press.

David D. Clark, John Wroclawski, Karen R. Sollins and Robert Braden, 2005. “Tussle in cyberspace: Defining tomorrow’s Internet,” IEEE/ACM Transactions on Networking, volume 13, number 3, pp. 462–475, and at http://groups.csail.mit.edu/ana/Publications/PubPDFs/Tussle2002.pdf, accessed 19 September 2010.

Greg Conti, 2009. Googling security: How much does Google know about you? Upper Saddle River, N.J.: Addison–Wesley.

Donald W. Davies, 1965. “Proposal for the development of a national communication service for on–line data processing,” unpublished memorandum (15 December), at http://epubs.cclrc.ac.uk/work-details?w=33613, accessed 19 September 2010.

Donald W. Davies, Keith A. Bartlett, Roger A. Scantlebury, and Peter T. Wilkinson, 1967. “A digital communication network for computers giving rapid response at remote terminals,” Proceedings of the First ACM Symposium on Operating System Principles, pp. 2.1–2.17.

Esther Dyson, 1998. Release 2.1: A design for living in the digital age. New York: Broadway Books.

Esther Dyson, 1997. Release 2.0: A design for living in the digital age. New York: Broadway Books.

A. Michael Froomkin, 1998. “The Internet as a source of regulatory arbitrage,” In: Brian D. Loader (editor). Cyberspace divide: Equality, agency and policy in the information society. London: Routledge, pp. 129–163.

Sharon E. Gillett, William H. Lehr, John T. Wroclawski and David D. Clark, 2001. “Do appliances threaten Internet invention?” IEEE Communications Magazine, volume 39, number 10, pp. 46–51. http://dx.doi.org/10.1109/35.956112

Jürgen Habermas, 1989. The structural transformation of the public sphere: An inquiry into a category of bourgeois society. Translated by Thomas Burger with the assistance of Frederick Lawrence. Cambridge: Polity.

Douglas A. Hass, 2007. “The never–was–neutral net and why informed end users can end the net neutrality debates,” bepress Legal Series, at http://www.kentlaw.edu/faculty/rwarner/classes/privacy/materials/network_neutrality/hess_neverwasneutralnet.pdf, accessed 19 September 2010.

Trevor Haywood, 1998. “Global networks and the myth of equality: Trickle down or trickle away?” In: Brian D. Loader (editor). Cyberspace divide: Equality, agency and policy in the information society. London: Routledge, pp. 19–34.

Mike Holderness, 1998. “Who are the world’s information poor?” In: Brian D. Loader (editor). Cyberspace divide: Equality, agency and policy in the information society. London: Routledge, pp. 35–56.

Samuel P. Huntington, 1991. The third wave: Democratization in the late twentieth century. Norman: University of Oklahoma Press.

Tim Jordan, 1999. Cyberpower: The culture and politics of cyberspace and the Internet. New York: Routledge.

David R. Johnson and David G. Post, 1997. “The rise of law on the global network,” In: Brian Kahin and Charles Nesson (editors). Borders in cyberspace: Information policy and the global information infrastructure. Cambridge, Mass.: MIT Press, pp. 3–47.

David R. Johnson and David G. Post, 1996. “Law and borders: The rise of law in cyberspace,” First Monday, volume 1, number 1, at http://firstmonday.org/htbin/cgiwrap/bin/ojs/index.php/fm/article/view/468/389, accessed 7 October 2010.

Christopher R. Kedzie, 1997. “The third waves,” In: Brian Kahin and Charles Nesson (editors). Borders in cyberspace: Information policy and the global information infrastructure. Cambridge, Mass.: MIT Press, pp. 106–128.

Jay P. Kesan and Rajiv C. Shah, 2001. “Fool us once shame on you — fool us twice shame on us: what we can learn from the privatizations of the Internet backbone network and the domain name system,” Washington University Law Quarterly, volume 79, pp. 89–220.

Leonard Kleinrock, 1961. “Information flow in large communication nets,” proposal for a Ph.D. thesis (31 May), at http://www.cs.ucla.edu/~lk/LK/Bib/REPORT/PhD/, accessed 19 September 2010.

Bruno Latour, 2007. Reassembling the social: An introduction to actor–network–theory. New York: Oxford University Press.

Lawrence Lessig, 2008. Remix: Making art and commerce thrive in the hybrid economy. London: Bloomsbury Academic.

Lawrence Lessig, 2006. Code: Version 2.0. New York: Basic Books.

Lawrence Lessig, 2002. The future of ideas: The fate of the commons in a connected world. New York: Random House.

Lawrence Lessig, 1999. Code and other laws of cyberspace. New York: Basic Books.

Lawrence Lessig, Charles Nesson and Jonathan Zittrain, 1999. “Open code/open content/open law,” Strategic planning session: session paper, Harvard Law School, Cambridge, Mass., at http://cyber.law.harvard.edu/sites/cyber.law.harvard.edu/files/opencode.session.pdf, accessed 19 September 2010.

Miaoran Li, 2009. “The pirate party and the pirate bay: How the pirate bay influences Sweden and international copyright relations,” Pace International Law Review, volume 21, number 1, pp. 281–307, and at http://digitalcommons.pace.edu/intlaw/290/, accessed 19 September 2010.

J.C.R. Licklider, 1960. “Man–computer symbiosis,” IRE Transactions on Human Factors in Electronics, volume HFE–1, pp. 4–11, and at http://www.internet-didactica.es/descargas/man-computer_symbiosis.pdf, accessed 19 September 2010.

Armand Mattelart, 2000. Networking the world, 1794–2000. Translated by Liz Carey–Libbrecht and James A. Cohen. Minneapolis: University of Minnesota Press.

Nick Moore, 1998. “Confucius or capitalism? Policies for an information society,” In: Brian D. Loader (editor). Cyberspace divide: Equality, agency and policy in the information society. London: Routledge, pp. 149–160.

John Naughton, 2000. A brief history of the future: The origins of the Internet. London: Phoenix.

Jacco van Ossenbruggen, Lynda Hardman and Lloyd Rutledge, 2002. “Hypermedia and the semantic Web: A research agenda,” Journal of Digital information, volume 3, number 1, at http://journals.tdl.org/jodi/article/viewArticle/78/77, accessed 19 September 2010.

David P. Reed, 2000. “The end of the end–to–end argument,” at http://www.cs.sfu.ca/~vaughan/teaching/431/papers/ReedEndOfTheEndToEnd.pdf, accessed 19 September 2010.

Jerome H. Saltzer, David P. Reed, and David D. Clark, 1984. “End–to–end arguments in system design,” ACM Transactions on Computer Systems, volume 2, number 4, pp. 277–288. http://dx.doi.org/10.1145/357401.357402

Katharine Sarikakis, 2004. “Ideology and policy: Notes on the shaping of the Internet,” First Monday, volume 9, number 8, at http://firstmonday.org/htbin/cgiwrap/bin/ojs/index.php/fm/article/view/1167/1087, accessed 19 September 2010.

Mark Stefik, 1996. Internet dreams: Archetypes, myths and metaphors. Cambridge, Mass.: MIT Press.

Cass Sunstein, 2007. Republic.com 2.0. Princeton: Princeton University Press.

Cass Sunstein, 2001. Republic.com. Princeton: Princeton University Press.

David L. Tennenhouse and David J. Wetherall, 2007. “Towards an active network architecture,” ACM SIGCOMM Computer Communication Review, volume 37, number 5, pp. 81–94, and at http://ccr.sigcomm.org/archive/1996/apr96/ccr-9604-tennenhouse.pdf, accessed 19 September 2010.

Alvin Toffler, 1980. The third wave. New York: Morrow.

Raphael Volz, Daniel Oberle, Steffen Staab, and Boris Motik, 2003. “KAON SERVER — A semantic Web management system,” Twelfth International World Wide Web Conference, Budapest (Alternate paper tracks), at http://www2003.org/cdrom/papers/alternate/P029/p29-volz.html, accessed 19 September 2010.

Frank Webster, 2006. Theories of the information society. Third edition. London: Routledge.

Langdon Winner, 1997. “Cyberlibertarian myths and the prospects for community,” ACM SIGCAS Computers and Society, volume 27, number 3, pp. 14–19. http://dx.doi.org/10.1145/270858.270864

Langdon Winner, 1980. “Do artifacts have politics?” Daedalus, volume 109, number 1, pp. 121–136, and at http://zaphod.mindlab.umd.edu/docSeminar/pdfs/Winner.pdf, accessed 19 September 2010.

Christopher S. Yoo, 2004. “Would mandating broadband network neutrality help or hurt competition? A comment on the end–to–end debate,” Journal on Telecommunication and High Technology Law, volume 3, pp. 23–68, and at http://www.jthtl.org/content/articles/V3I1/JTHTLv3i1_Yoo.PDF, accessed 19 September 2010.

Jonathan Zittrain, 2008. The future of the Internet and how to stop it. London: Yale University Press.

Jonathan Zittrain, 2006. “The generative Internet,” Harvard Law Review, volume 119, pp. 1,975–2,005, and at http://www.harvardlawreview.org/issues/119/may06/zittrain.shtml, accessed 19 September 2010.


Editorial history

Received 9 June 2009; revised 19 September 2010; accepted 30 September 2010.

Creative Commons Licence
“Speedism, boxism and markism: Three ideologies of the Internet” by Jan Nolin is licensed under a Creative Commons Attribution 3.0 Unported License.

Speedism, boxism, and markism: Three ideologies of the Internet
by Jan Nolin.
First Monday, Volume 15, Number 10 - 4 October 2010

A Great Cities Initiative of the University of Illinois at Chicago University Library.

© First Monday, 1995-2019. ISSN 1396-0466.