First Monday

The tensions of securing cyberspace: the Internet, state power & the National Strategy to Secure Cyberspace

Abstract
The tensions of securing cyberspace: the Internet, state power & the National Strategy to Secure Cyberspace by Michael T. Zimmer

The National Strategy to Secure Cyberspace exposes a growing tension between the nature of the Internet and the regulatory powers of the traditional nation–state. The National Strategy declares, with all the strength and authority of the United States government, the desire to secure a space many consider, by its very nature, chaotic and beyond the reach of any organized or central control. This paper will argue that both the structural architecture of the Internet and the substantive values codified within it clash with governmental efforts to "secure cyberspace."

Contents

Introduction
A brief history of the National Strategy to Secure Cyberspace
The architecture of the Internet
The structural tensions with state power
The substantive tensions with state power
Conclusion

 


 

++++++++++

Introduction

In October 2002, while millions of people all over the world were working, shopping and surfing online, the Internet sustained a crippling "denial of service" attack. For close to one hour, the thirteen root servers that manage the Internet’s addressing system were bombarded by millions of bogus requests for information, overwhelming them with data until the servers failed. Seven of the thirteen root servers failed that day, and two others failed intermittently during the attack. Thanks to the distributed nature of the Internet’s architecture, ordinary users experienced no slowdowns or outages. Such denial of service attacks are common and easy to perpetrate, but the size and scope of this event make it unique. While similar attacks in February 2000 disrupted Amazon.com, eBay, Yahoo and other e–commerce sites for several hours, the coordinated attack in October 2002 — on all the root servers at once — was a rarity. This coordinated attack took place — perhaps by coincidence, perhaps not — only one month after the U.S. Department of Homeland Security released its initial draft of the National Strategy to Secure Cyberspace.

Initial reactions to this policy document focused on the boldness of the intent expressed within its title: securing cyberspace. The primary goal of the National Strategy is not simply to secure computers in the U.S., or to secure the nation’s vital infrastructures, or even to secure the physical networks. Instead, the American government intends to build strategies and enact plans to secure cyberspace, the ephemeral space that exists only in relation to the medium of the Internet. This is more than just an exercise in semantics. By declaring, with all the strength and authority of the U.S. government, that it hopes to secure a space most of us consider, by its very nature, chaotic and beyond the reach of any organized or central control, the National Strategy exposes a growing tension between the nature of the Internet and the regulatory powers of the traditional nation–state. This paper will argue that both the structural architecture of the Internet and the substantive values codified within it clash with governmental efforts to "secure cyberspace."

 

++++++++++

A brief history of the National Strategy to Secure Cyberspace

As the Internet has become an increasingly important part of daily life, businesses, governments and individuals alike have begun to realize how it might be exploited to threaten their interests. As early as 1997, scientists and government officials recognized the growing threat, as reported in the Proceedings from the Carnegie Mellon Workshop on Network Security:

"The United States Information Infrastructure faces a continuous barrage of attacks from hackers with an assortment of tools available to them. ... A plethora of easily accessed "tools" puts the capability for sophisticated, computer–based information warfare into anyone’s hands, regardless of geographical location, nationality, or motivation. ... The rapid growth rate of the Internet increases the feasibility of computer–based attacks while decreasing the chance of detection." [1]

For many, this threat has been taken more seriously since the devastating events of 11 September 2001, exacerbating fears that the Internet could be used as a weapon against American assets. As President Bush’s message in the opening pages of the National Strategy demonstrates, the threat of a future cyber–attack is taken seriously at the highest levels of the government: "The policy of the United States is to protect against the debilitating disruption of the operation of information systems for critical infrastructures and, thereby, help to protect the people, economy, and national security of the United States" [2]. Steeped in civic idealism and putting forth the ever–optimistic "call to action," the National Strategy provides a blueprint for the way that government, corporations and individuals need to view Internet security.

The National Strategy to Secure Cyberspace identifies three strategic objectives: (1) prevent cyber–attacks against America’s critical infrastructures; (2) reduce national vulnerability to cyber–attacks; and (3) minimize damage and recovery time from cyber–attacks that do occur. To meet these objectives, the National Strategy outlines five national priorities. The first priority, the creation of a National Cyberspace Security Response System, focuses on improving the government’s response to cyberspace security incidents and reducing the potential damage from such events. The second, third, and fourth priorities (the development of a National Cyberspace Security Threat and Vulnerability Reduction Program, the creation of a National Cyberspace Security Awareness and Training Program, and the necessity of Securing Governments' Cyberspace) aim to reduce threats from, and vulnerabilities to, cyber–attacks. The fifth priority, the establishment of a system of National Security and International Cyberspace Security Cooperation, intends to prevent cyber–attacks that could impact national security assets and to improve the international management of and response to such attacks. Ultimately, the National Strategy encourages companies to regularly review their technology security plans, and individuals who use the Internet to add firewalls and anti–virus software to their systems. It calls for a single federal center to help detect, monitor and analyze attacks, and for expanded cyber–security research and improved government–industry cooperation.

A national strategy is certainly both necessary and appropriate to effectively deal with the many problems of computer network security. However, despite the apparent relevance of such a plan amid the current administration’s "war on terrorism," the National Strategy seems to have slipped in importance for both the Bush administration and the information technology industry. One obvious indication was the dramatic decrease in the visibility of the National Strategy. Original plans called for the final version of the National Strategy to be released on 19 September 2002, complete with a presidential signing ceremony at Stanford University amid technology luminaries like Microsoft chairman Bill Gates. The White House decided to hold back the final plan, and instead released a draft to seek further comment from the industry. After a few months of sporadic coverage and debate — mostly in industry publications — the U.S. Department of Homeland Security unceremoniously released the final version of the National Strategy on Valentine’s Day, 14 February 2003 (Krebs, 2003).

The non–controversial nature of the final National Strategy drew sharp and immediate criticism (Krim, 2003; Fitzgerald, 2003; Fisher, 2003; Lemos and McCullagh, 2002; Forno, 2002). Many information security experts shared the criticism of Richard Forno (2002), who noted that the National Strategy simply "‘addresses’ various security ‘issues’ instead of directing the ‘resolution’ of security ‘problems’ — tiptoeing around the problems instead of dealing with them head–on and demanding results." Rather than targeting specific industry segments with tough new laws and regulations requiring that they secure themselves, the National Strategy instead recommends that industry and individuals simply take greater care. Unlike earlier drafts that asked the private sector to take concrete steps to protect their systems, the majority of the final document directs the government to lead by example by tightening the security of federal information systems. Omitted from the final plan were proposals to require technology companies to contribute to a security research fund and for Internet service providers to bundle firewall and other security technology with their service. Adding to the National Strategy’s perceived weaknesses, the White House cyber–security czar, Richard Clarke, resigned from his post only two weeks prior to its release (Krebs, 2003). Without the continued support of Clarke — or someone else with equivalent political clout and technical knowledge — the National Strategy very well may "languish as just another policy document with plenty of good ideas but few teeth" (Fisher, 2003).

Such critiques are reasonable. The National Strategy is short on regulations and long on recommendations. True, more rigorous steps could be taken. The government could take steps to transform the architecture of the Internet to make it more regulable, thereby increasing national security. The government could require that all forms of encryption have a "back–door" for government to enter and examine the data. It could force Internet service providers to install security technologies that would require the use of government–issued personal digital IDs, effectively preventing anonymous access to the Internet. More radically, the government could mimic efforts in China to restrict and funnel access to the global Internet through State–controlled nodes, effectively creating a national intranet (Deibert, 2002b; Kalathil and Boas, 2001). While critics of the current National Strategy to Secure Cyberspace likely are not suggesting the government strengthen the National Strategy by taking such drastic measures, similar steps could be justified in the name of "national security," and would likely provide increased protection for the nation’s vital infrastructures.

Nevertheless, such efforts — indeed, any effort to "secure cyberspace" — conflict with the prevailing "nature" of the Internet. What most critics of the National Strategy fail to recognize is the rising tension between the nature of the Internet and the controlling tendencies of State power. This tension has both a structural and substantive element. Structurally, the Internet is a global, distributed network governed by open and interoperable protocols, resulting in a non–hierarchical, end–to–end and anarchic network. These structural features of the Internet are obstacles for the exertion of State power. Richard Clarke acknowledged this structural tension when the National Strategy was first announced: "The government cannot dictate. The government cannot mandate. The government cannot alone secure cyberspace" (Lemos and McCullagh, 2002). Thus, the National Strategy, much to the consternation of its critics, stresses that primary responsibility for Internet security must come from its community of users, rather than the government.

The structural explanation for the tension between the nature of the Internet and the government is only half of the story. There also is a substantive tension, that is, a tension between the very essence of the Internet, its biases and values, and the predilection of the government to exert control. For many, the Internet embodies a new libertarian utopia where freedom from State control reigns. Building from the structural nature of the Internet, we discover that the architecture of the Internet codifies certain substantive values — values which clash with governmental efforts to "secure cyberspace."

My intent is not to debate the benefits to national security (or the threats to civil liberties) should the provisions of the National Strategy be implemented. Nor am I suggesting whether stricter or looser recommendations are appropriate. But by placing their critique of the National Strategy squarely on its lack of "teeth," its detractors overlook the underlying tensions between the architecture of the Internet and the fundamental nature of State power. In this paper, I aim to illuminate these structural and substantive tensions, and reflect on the fact that any attempt to reconcile these tensions impacts both the national security efforts of the government and the very nature of the Internet as we now know it. As a first step to understanding these tensions, however, an overview of the Internet’s architecture is required.

 

++++++++++

The architecture of the Internet

Before we can understand the tensions that exist between the core values of the Internet and the government’s attempt to secure cyberspace, we first have to understand the architecture of the Internet. In his essay on the architectural principles of the Internet, Carpenter wrote, "the goal is connectivity, the tool is the Internet Protocol, and the intelligence is end to end rather than hidden in the network" (Network Working Group, 1996a). This section will provide a foundation for understanding three key architectural principles of the Internet: the development of packet–switching protocols, its end–to–end design, and its decentralized standard–setting process, within which Carpenter’s essay was published and distributed.

The Internet, of course, is not a thing; it is the interconnection of many things — the (potential) interconnection between any of millions of computers located around the world. Each of these computers is independently managed by persons who have chosen to adhere to common communications protocols, particularly a fundamental protocol suite known as TCP/IP, which makes it practical for computers to share data even if they are far apart and have no direct line of communication (Hall, 2000; Stevens, 1994). The TCP/IP protocol suite makes the Internet possible. Its most important feature is that it defines a packet–switching network, a method by which data can be broken up into standardized packets that are then routed to their destinations via an indeterminate number of intermediaries. Under TCP/IP, as each intermediary receives a packet intended for a party further away, the packet is forwarded along whatever route is most convenient at the nanosecond the data arrives. By analogy, rather than telephoning a friend, one would tape record a message, cut it up into several pieces, and hand the pieces to people heading in the general direction of the intended recipient. Each time a person carrying tape met anyone going in the right direction, he or she could hand over as many pieces of tape as the recipient could comfortably carry. Eventually the message would get where it needed to go.
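The fragmentation and reassembly described above can be sketched in a few lines of code. This is a toy illustration only — real TCP/IP involves headers, checksums, retransmission and much more — and all names here (packet size, function names) are invented for the sketch:

```python
import random

PACKET_SIZE = 8  # bytes of payload per packet (arbitrary for this sketch)

def fragment(message: bytes) -> list[tuple[int, bytes]]:
    """Split a message into (sequence_number, payload) packets."""
    return [
        (seq, message[i:i + PACKET_SIZE])
        for seq, i in enumerate(range(0, len(message), PACKET_SIZE))
    ]

def unreliable_network(packets):
    """Deliver packets in whatever order the independent routes produce,
    simulated here by shuffling the delivery order."""
    delivered = list(packets)
    random.shuffle(delivered)
    return delivered

def reassemble(packets) -> bytes:
    """The receiver restores order using the sequence numbers alone."""
    return b"".join(payload for _, payload in sorted(packets))

message = b"Neither sender nor receiver cares about the route."
received = unreliable_network(fragment(message))
assert reassemble(received) == message  # the message survives any routing order
```

The point the sketch makes is the one in the tape–recording analogy: no single piece needs to travel the same path as any other, and the whole is recovered only at the destination.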

Neither sender nor receiver need know or care about the route that their data takes, and there is no particular reason to expect that data will follow the same route twice. More importantly from a technical standpoint, the computers in the network can all communicate without knowing anything about the network technology carrying their messages. As Carpenter outlines, "The Internet level protocol must be independent of the hardware medium and hardware addressing" [3]. Because of the way the protocols were designed, any computer on the network can talk to any other computer on the network, resulting in a robust, interoperable peer–to–peer relationship.

The second key aspect of the Internet’s architecture is a design principle called "end–to–end" (Saltzer et al., 1997). With end–to–end design, the network does not choose how the network itself will be used. Control, or intelligence, is placed at the "ends," the computers used to access the network. Computers within the network are only required to provide the most basic level of service — data transport via the TCP/IP protocols. The network itself is kept simple, incapable of discrimination. Without intelligence embedded in the network, all packets that conform to the protocol are transmitted, regardless of content, regardless of intent, and without any knowledge (or care) of what types of applications or people are utilizing the packets on the ends of the network.
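The network’s indifference to content can be made concrete with another minimal sketch (all names invented for illustration): the "router" below consults only the destination header when forwarding, and never examines the payload, so a benign packet and a hostile one are treated identically.

```python
def route(packet: dict, forwarding_table: dict) -> str:
    """Forward a packet using only its destination header.

    The router does not (and cannot, by design) inspect packet["payload"];
    it merely looks up the next hop for the destination. Interpretation of
    the payload happens only at the endpoints.
    """
    return forwarding_table[packet["dst"]]

forwarding_table = {"host-b": "hop-2"}  # simplistic next-hop table

benign = {"dst": "host-b", "payload": b"hello"}
hostile = {"dst": "host-b", "payload": b"exploit attempt"}

# Both packets conform to the protocol, so both are forwarded identically;
# the network itself has no basis for discriminating between them.
assert route(benign, forwarding_table) == route(hostile, forwarding_table) == "hop-2"
```

This is precisely the property that, as argued later, frustrates any attempt to identify "harmful" packets from within the network.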

The TCP/IP protocol suite and the end–to–end design of the Internet have become standard practices among the Internet community. The decentralized nature of the standard–setting process represents a third key element of the Internet’s architecture. The Internet is not controlled by a single company or agency. Instead, the Internet is administered, if that even is the word, by an international, unincorporated, non–governmental organization known as the Internet Engineering Task Force (IETF), which allows unlimited grassroots participation and operates under a large, open agenda (Network Working Group, 2001; Borsook, 1995). In marked contrast to more traditional standards organizations (ANSI, ISO), the IETF has no strict bylaws, no board of directors, nothing so much as official "membership" — anyone can register for and attend any meeting. "The closest thing there is to being an IETF member is being on the IETF mailing lists" [4]. The culture of the IETF invokes open and democratic participation. As long–time IETF member and MIT professor Dave Clark remarked, "We reject: kings, presidents, and voting. We believe in: rough consensus and running code" (Clark in Borsook, 1995).

A primary activity of the IETF is Internet standard–setting. The Internet Standards Process is concerned with all protocols, procedures, and conventions that are used in or by the Internet, including the TCP/IP protocol suite (Network Working Group, 1996b). While the process is somewhat complex, it is "designed to be fair, open, and objective; to reflect existing (proven) practice; and to be flexible" [5]. The process is gradual, deliberate and negotiated; it provides ample opportunity for participation and comment by all interested parties (Galloway, 2004). At each stage of the standardization process, a specification is repeatedly discussed and its merits debated in open meetings and/or public electronic mailing lists, and it is made available for review via worldwide on–line directories. This is accomplished through the extensive use of "Request for Comments" (RFC) documents. RFCs cover a wide range of topics in addition to Internet standards, from early discussion of new research concepts to status memos about the Internet to philosophical and historical treatments of the Internet. RFCs have become the principal means of open expression in the computer networking community, the accepted way of recommending, reviewing and adopting new technical standards. As Galloway notes, the RFC process "is a peculiar type of anti–federalism through universalism — strange as it sounds — whereby universal techniques are levied in such a way as ultimately to revert much decision–making back to the local level" (Galloway, in press). The result was a working anarchy.

Combining these three features of the Internet — packet–switching protocols, end–to–end network design, and its standard–setting process — results in a distributed, and essentially un–intelligent, computer network rooted in an anarchic ethos. Borrowing from Vaidhyanathan, this anarchy is not necessarily chaotic and dangerous: "Anarchy is organization through disorganization. ... anarchy is a process, a set of behaviors, and a mode of organization and communication" (Vaidhyanathan, in press). It is this sense of anarchy, what Vaidhyanathan labels "information anarchy," that becomes the core value of the architecture of the Internet. In his outline of the architectural principles of the Internet, Carpenter recognizes its anarchic nature outright: "In search for Internet architectural principles, we must remember that technical change is continuous in the information technology industry. The Internet reflects this. ... The principle of constant change is perhaps the only principle of the Internet that should survive indefinitely" [6]. This anarchic nature of the Internet inevitably — perhaps automatically — challenges any principle of stability or authority within its reach. Unsurprisingly, tensions emerge when the "information anarchy" is confronted with government attempts to "secure cyberspace."

 

++++++++++

The structural tensions with state power

The previous section detailed the architecture of the Internet, illustrating the common description of the Internet as a distributed, anarchic network. These structural features of the Internet are obstacles for the exertion of State power. As the network spreads and as communication flows become both more opaque and swift, three key structural tensions rise to the surface in relation to the government’s national security efforts.

First, the Internet lacks any locus of control from which to monitor and/or block potentially harmful actions. By design, the Internet’s architecture includes no central nodes through which all information passes [7]. Nor is there any single route through which particular messages travel, as the packet–switching protocols partition and distribute data across numerous independent trajectories along the network. The distributed nature of the Internet makes any form of central monitoring or control almost impossible, complicating any governmental attempt to monitor network traffic for potential threats.

Even if centralized monitoring were possible, however, the protocols and end–to–end design of the Internet present a second tension vis–à–vis governmental control: the network is indiscriminate as to its content. As noted above, data is routed throughout the Internet without any knowledge or prejudice of what is being transmitted. With the intelligence of the Internet existing on the ends, the network itself is unable to determine the content or purpose of any particular packet. The architecture of the Internet makes it impossible to distinguish between packets that have malicious intent and those that are normal. This constrains any desire by the government to identify potentially harmful packets as they pass through the network [8].

Finally, a third structural tension emerges due to the supra–national character of the Internet. It becomes increasingly difficult, if not impossible, for a State, restricted within its artificial borders, to enforce its rules and laws over a medium that is oblivious to geography. Regarded from the perspective of national security, the concept of national borders gradually becomes less clear with the worldwide expansion of networks. The Internet — with its end–to–end design and non–discriminatory protocols allowing unfettered international access and use — is a "new continent that knows neither borders nor treaties" [9]. In this way, the supra–national quality of the Internet poses not only a structural constraint on a State’s ability to govern the medium beyond its borders, but also substantively changes the very notion of territorial sovereignty.

 

++++++++++

The substantive tensions with state power

The rise of information technologies, including the Internet, impacts the way governance is organized and power is exercised in our society. As Castells notes, "Networks constitute the new social morphology of our societies, and the diffusion of networking logic substantially modifies the operation and outcomes in processes of production, experience, power and culture" [10]. This poses immense constraints on any government’s attempt to secure cyberspace. While the structural tensions noted above seem clear, more abstract constraints to State power lurk just below the surface, exposing deep substantive tensions. These include challenges to the hierarchical structures of the nation–state, the blurring of territorial boundaries, and general resistance to power in a society increasingly focused on control.

Information technology networks contribute to the departure from traditional hierarchical authoritative contexts privileging nation–states. As Arquilla and Ronfeldt explain, the rise of global information networks sets in motion forces that challenge the hierarchical design of many institutions:

"It disrupts and erodes the hierarchies around which institutions are normally designed. It diffuses and redistributes power, often to the benefit of what may be considered weaker, smaller actors. It crosses borders, and redraws the boundaries of offices and responsibilities. It expands the spatial and temporal horizons that actors should take into account. And thus, it generally compels closed systems to open up." [11]

As a consequence of the Internet’s capacity for anarchic global communication, new global institutions are being formed that are preponderantly sustained by network rather than hierarchical structures — examples include peer–based networks such as Slashdot.org, or even the IETF itself. Such global, interconnected networks help to flatten hierarchies, often transforming them altogether, into new types of spaces where traditional sovereign territoriality itself faces extinction.

The substantive tension between the traditional territoriality of the State and the push to control the supra–national Internet is immense. A State is conventionally described by its border, the arbitrary transition from one State to the next, the boundary of defense, one of the natural constraints on the scope of State control. The supra–national character of the Internet stands in direct opposition to this conventional State–model. The Internet introduces an "information frontier" whose boundary lacks either form or rules to separate the "information territories" of individual states from one another. As Weiguang explains, "An ‘information territory’ cannot be divided up according to traditional geopolitical concepts such as sovereign territory, air–space, territorial waters, or even territories claimed in ‘outer space’" [12]. The lack of territorial boundaries in cyberspace, then, leads to much more than the structural constraint on State power — it threatens the very basis of territorial power itself. As Deibert (2002a) warns,

"As [the] flows of information [on the Internet] accelerate, and the networked web of communications becomes increasingly dense, the structural pressures on states will increase accordingly. The internationalization of the state, and correspondingly paralysis of state autonomy and power, will continue, and even magnify." [13]

The world of cyberspace challenges the idea of territorial space as the only kind of space, especially as defined by nation–states.

The third substantive tension between the architecture of the Internet and the government’s efforts to secure cyberspace focuses on the rejection of power in a society focused increasingly on control. Deleuze (1990) provides the theoretical framework for the recent cultural shift from a disciplinary society to a society of control. He paints a bold picture which identifies a historical movement away from technologies of discipline (based on confinement, analogical control, and the factory) toward technologies of control, which work more on the interiors of subjects and commodify the entire social space, where individuals live in a corporation of perpetually changing social forms. As we move through history, Deleuze maintains, elements of control become less and less visible. The National Strategy to Secure Cyberspace, then, becomes a shining example of the shift away from discipline and towards control. Rather than mandating technologies of confinement (firewalls, content filters, etc.), the National Strategy instead calls for cooperation, public–private partnerships, and recommends that industry and individuals simply take "greater care." As mentioned above, these "soft" recommendations were criticized by many. Yet, from a Deleuzian perspective, the non–confrontational character of the National Strategy merely reflects an increasingly hidden form of State power and control.

The nature of the Internet threatens this hidden form of control. In "Constituents of a Theory of the Media," Enzensberger (1996) expands upon Deleuze’s concern with the changing shape of power and social control. He recognizes how the "consciousness industry" possesses enormous control over the population, and his essay is obsessed with creating awareness so the masses can understand the modes of persuasion and manipulation inherent in the interaction between media technology and consciousness, and ultimately transform such mechanisms into emancipatory tools. Working from Enzensberger’s typology [14], the decentralized and anarchic nature of the Internet bolsters its potential as an emancipatory medium. Tensions will inevitably arise as this new emancipatory medium confronts a society of control.

 

++++++++++

Conclusion

Discussing the economic sphere of the Internet, Daniel Burstein and David Kline associate the Internet with a series of values: "Free. Egalitarian. Decentralized. Ad hoc. Open and peer–to–peer. Experimental. Autonomous. Anarchic." They contrast these traits with the personality of modern business organizations: "Hierarchical. Systematized. Planned. Proprietary. Pragmatic. Accountable. Organized" [15]. Keeping with the spirit of this paper, it is not unreasonable to replace "modern business organizations" in their dichotomy with "sovereign government." A clash of values is apparent between the architecture of the Internet and the nature of State power. In this paper, I have outlined the structural and substantive tensions between the Internet and the government’s desire to control cyberspace. A crucial consideration in my argument is that the values ascribed to the Internet by Burstein and Kline are not fixed — nor are they inevitable. While these values reflect the architecture of the Internet, its particular design was a choice. As Lessig (2002) notes, "how a system is designed will affect the freedoms and control the system enables" [16]. Architecture defines the true parameters of freedom in cyberspace, and the question of what the architecture of cyberspace should be is not a neutral question. Put another way, "Architecture is politics" [17].

In its pursuit of security, the U.S. government challenges the very architecture of the Internet, and by challenging its architecture, the very nature of the Internet itself is threatened. The Internet will continue to be transformed by this protracted struggle, this clash of values. The Internet faces both predictable and unpredictable challenges. It is a fragile representation of the social landscape — a balance between anarchic freedom and state control — that could come undone at any time. Again, the principle of constant change is perhaps the only principle of the Internet that will survive indefinitely. What must be acknowledged by all parties involved — the government agencies, the information technology industry, and the millions of users — is that changing the architecture of the Internet will also impact its core values.

 

About the Author

Michael T. Zimmer is a doctoral student in Media Ecology in the Department of Culture and Communication at New York University.
E–mail: mtz206@nyu.edu.

 

Acknowledgements

The author is grateful to Professors Helen Nissenbaum and Alex Galloway for their feedback and guidance on this paper.

 

Notes

1. Carnegie Mellon Workshop, 1997, p. 1.

2. Department of Homeland Security, 2002, p. iii.

3. Network Working Group, 1996a, p. 3.

4. Network Working Group, 2001, p. 4.

5. Network Working Group, 1996b, p. 4.

6. Network Working Group, 1996a, p. 1.

7. The lack of central nodes is a defining feature of the distributed network that makes up the Internet. It is important to note, however, that the majority of users access the Internet via Internet service providers, and this form of access creates a centralized location where traffic from a user’s computer could, indeed, be monitored. For the purposes of this paper, however, I treat this as a consequence of how the Internet is accessed; the Internet per se remains free of central nodes.

8. Intended as a tongue–in–cheek acknowledgement of this constraint, RFC 3514: The Security Flag in the IPv4 Header, issued on 1 April 2003, proposes the adoption of a security flag, known as the "evil" bit: "Benign packets have this bit set to 0; those that are used for an attack will have the bit set to 1."

9. Weiguang, 1998, p. 76.

10. Castells, 1996, p. 469.

11. Arquilla and Ronfeldt, 1998, p. 27.

12. Weiguang, 1998, p. 77.

13. Deibert, 2002a, p. 134.

14. Enzensberger (p. 74) presents the following table to summarize the social dichotomy of the media, providing a model to show what changes are necessary for a repressive medium to become an emancipatory one.

 

Table 1: The social dichotomy of the media.

Repressive use of media                         Emancipatory use of media
Centrally controlled program                    Decentralized program
One transmitter, many receivers                 Each receiver a potential transmitter
Immobilization of isolated individuals          Mobilization of the masses
Passive consumer behavior                       Interaction of those involved, feedback
Depoliticization                                A political learning process
Production by specialists                       Collective production
Control by property owners or bureaucracy       Social control by self–organization

15. Burstein and Kline, 1995, p. 104.

16. Lessig, 2002, p. 35.

17. Kapor in Lessig, 2002, p. 35.

 

References

John Arquilla and David Ronfeldt, 1998. "Cyberwar is Coming!" In: Gerfried Stocker and Christine Schöpf (editors). InfoWar. New York: Springer–Verlag, pp. 24–50.

Paulina Borsook, 1995. "How Anarchy Works," Wired, volume 3, issue 10 (October), at http://www.wired.com/wired/archive/3.10/ietf.html, accessed 4 April 2003.

Daniel Burstein and David Kline, 1995. Road Warriors: Dreams and Nightmares along the Information Highway, New York: Dutton.

Carnegie Mellon Workshop on Network Security, 1997. Scientific and Technical Intelligence Committee: Proceedings from the Carnegie Mellon Workshop on Network Security, STIC97–001. Washington, D.C.: Library of Congress.

Manuel Castells, 1996. The Information Age: Economy, Society and Culture. Volume 1: The Rise of the Network Society. Oxford: Blackwell.

Department of Homeland Security, 2003. The National Strategy to Secure Cyberspace, at http://www.whitehouse.gov/pcipb/.

Ronald Deibert, 2002a. "Circuits of Power: Security in the Internet Environment," In: James Rosenau and J.P. Singh (editors). Information Technologies and Global Politics: The Changing Scope of Power and Governance. Albany, N.Y.: State University of New York Press, pp. 115–142.

Ronald Deibert, 2002b. "Dark Guests and Great Firewalls: The Internet and Chinese Security Policy," Journal of Social Issues, volume 58, number 1, pp. 143–159. http://dx.doi.org/10.1111/1540-4560.00253

Gilles Deleuze, 1990. "Postscript on Control Societies," In: Gilles Deleuze. Negotiations: 1972–1990. Translated by Martin Joughin. New York: Columbia University Press, pp. 177–182.

Hans Magnus Enzensberger, 1996. "Constituents of a Theory of the Media," In: Tim Druckrey (editor). Electronic Culture: Technology and Visual Representation. New York: Aperture, pp. 62–85.

Dennis Fisher, 2003. "Cyber Plan’s Future Bleak," eWeek (24 February), at http://www.eweek.com/print_article/0,3668,a=37497,00.asp, accessed 26 April 2003.

Michael Fitzgerald, 2003. "Homeland Cybersecurity Efforts Doubted," SecurityFocus Online (11 March), at http://www.securityfocus.com/news/3043, accessed 15 March 2003.

Richard Forno, 2002. "America’s National Cybersecurity Strategy: Same Stuff, Different Administration," Infowarrior.org, at http://www.infowarrior.org/articles/2002–11.html, accessed 23 April 2003.

Alex Galloway, in press. Protocol: How Control Exists After Decentralization. Cambridge, Mass.: MIT Press.

Eric Hall, 2000. Internet Core Protocols: The Definitive Guide. Sebastopol, Calif.: O'Reilly.

Shanthi Kalathil and Taylor Boas, 2001. "The Internet and State Control in Authoritarian Regimes: China, Cuba, and the Counterrevolution," Carnegie Endowment, Information and World Politics Project, at http://www.ceip.org/files/Publications/wp21.asp.

Brian Krebs, 2003. "White House Releases Cybersecurity Plan," WashingtonPost.com (14 February), at http://www.washingtonpost.com/ac2/wp–dyn/A7970–2003Feb14?language=printer, accessed 15 March 2003.

Jonathan Krim, 2003. "Cyber–Security Strategy Depends on Power of Suggestion," Washington Post (15 February), p. E1.

Robert Lemos and Declan McCullagh, 2002. "Cybersecurity Plan Lacks Muscle," CNET News.com (19 September), at http://news.com.com/2100–1023–958545.html, accessed 23 April 2003.

Lawrence Lessig, 2002. The Future of Ideas. New York: Vintage Books.

David McGuire and Brian Krebs, 2002. "Large–Scale Attack Cripples Internet Backbone," Washington Post (23 October), p. E5.

Network Working Group, 1996a. "Request for Comments: 1958, Architectural Principles of the Internet," Brian Carpenter (editor), at http://ietf.org/rfc/rfc1958.txt.

Network Working Group, 1996b. "Request for Comments: 2026, The Internet Standards Process — Revision 3," S. Bradner (editor), at http://ietf.org/rfc/rfc2026.txt.

Network Working Group, 2001. "Request for Comments: 3160, The Tao of IETF — A Novice's Guide to the Internet Engineering Task Force," S. Harris (editor), at http://ietf.org/rfc/rfc3160.txt.

Jerome Saltzer, David Reed and David Clark, 1997. "End–to–End Arguments in System Design," at http://web.mit.edu/Saltzer/www/publications/.

W. Richard Stevens, 1994. "The Protocols," TCP/IP Illustrated. Volume 1. Reading, Mass.: Addison–Wesley, pp. 1–20.

Siva Vaidhyanathan, in press. The Anarchist in the Library. New York: Basic Books.

Shen Weiguang, 1998. "Information Warfare," In: Gerfried Stocker and Christine Schöpf (editors). InfoWar. New York: Springer–Verlag, pp. 60–83.

 

Editorial history

Paper received 11 January 2004; accepted 20 February 2004.


Copyright ©2004, First Monday

Copyright ©2004, Michael T. Zimmer

The tensions of securing cyberspace: The Internet, state power & the National Strategy to Secure Cyberspace by Michael T. Zimmer
First Monday, Volume 9, Number 3 - 1 March 2004
http://www.firstmonday.org/ojs/index.php/fm/article/view/1125/1045






© First Monday, 1995-2017. ISSN 1396-0466.