The World Wide Web (WWW) was initially intended as a means to share distributed information amongst individuals. Now the WWW has become the preferred environment for a multitude of e-services: e-commerce, e-banking, e-voting, e-government, etc. Security for these applications is an important enabler. This article gives a thorough overview of the different security issues regarding the WWW, and provides insight into the current state of the art and evolution of the proposed and deployed solutions.
When the World Wide Web (WWW) was established in the early 1990s, it was initially intended as a means to share distributed information amongst individuals. The core technology of the Web consisted of Uniform Resource Locators (URLs), a means to identify the location of information; the HyperText Markup Language (HTML), the language with which information on the Web is represented; and finally the HyperText Transfer Protocol (HTTP), the language spoken between Web servers and Web browsers.
All these different technologies have made the WWW very attractive for many e-services. Therefore, the WWW is currently a commonly used platform for e-commerce, e-banking, e-auctioning, e-government, e-voting, e-healthcare, e-insurance, e-anything.
Not surprisingly, security is a very important, if not the most important, issue in all these applications. However, security has not always been considered during the development of many Web technologies. This has the unfortunate consequence that a number of potential security problems can be identified. It is possible for an attacker to eavesdrop on the communication between a user's browser and a Web server; sensitive information, such as a credit card number, or any other confidential data, could thus be obtained. An attacker could try to impersonate entities in order to get information which is normally not disclosed without authorization; for example, an attacker could spoof a Web banking application, thereby gathering users' PIN codes. A substantial amount of confidential information is made available via the WWW; unauthorized access to this information should be prevented. Note that relying on the secrecy of a URL is not a good idea, as URLs will leak in some way or another, and are easily picked up by search engines. Web sites have become an organization's or company's public face; "defacement" by hackers is clearly not desired. Web pages and e-mail can contain executable content, some of which might be malicious; for example, an attacker could lure a user into surfing to a Web page which contains a program that installs a Trojan horse. As more and more people go online, their privacy is at stake; the Web includes ideal technologies with which user profiles can be maintained. Finally, the Web provides an excellent means for exchanging (any) information, including illicitly distributing copyright-protected or explicit material.
Scope of the paper

This paper is certainly not the first survey on Web security; see, for example, Rubin. Since the WWW is evolving very quickly, this paper intends to provide an up-to-date and in-depth overview of the current state of the art regarding Web security. Instead of addressing one or more particular issues, we try to identify and discuss a broad range of security issues which are all relevant to the WWW. We start with the issue of secure communications, probably the first issue that comes to mind when thinking about "Web security". Secure communications can be provided at several layers of the network protocol stack, and requires, at least as it is currently provided on the WWW, a properly deployed public key infrastructure. Although client authentication can be provided by solutions for secure communications, it is mostly performed on top of these solutions; the section on User Authentication is therefore devoted to the different authentication mechanisms that are currently used. The problem of mobile code and the different approaches to tackling it are then discussed in the section on Mobile Code. Privacy concerns and issues form the topic of the section on Anonymity and Privacy. The WWW is a common exchange medium for copyrighted, illegal and/or unwanted content; the section on Content investigates the efforts being undertaken to address this problem. Thereafter, the section on Payment describes a central issue in e-commerce applications, payment. Finally, the sections on Implementation and 'Environmental' Issues and on Legal Issues discuss the implementation and environmental issues and the legal issues, respectively.
Except perhaps for the communications issue, the order in which the different issues are discussed in this paper is rather arbitrary, and certainly does not intend to reflect relative importance. All of these issues are equally important. As always, the weakest link in the chain determines the overall security of a system. Moreover, it is hard to consider the different security issues separately, as they are all entangled with each other. The main contribution of this paper is therefore to bring the different issues together and discuss the security of the WWW in the broad sense.
The HTTP protocol does not provide any service for securing communications. Desired security services include: entity authentication, that is, a user/browser and Web server prove their identity to each other; data authentication, so that entities can verify the origin, integrity and freshness of data; data confidentiality, so that no party besides the sending and receiving ones is able to access the cleartext data; and non-repudiation, which ensures that entities cannot deny having sent certain data or having performed some action. There are more security services related to communications, such as anonymity. Anonymous communication is addressed in the section on Anonymity and Privacy. These extra security services have not been considered during the development of the currently deployed secure protocols for communications on the Web. We will see below why they should perhaps have been taken into account.
Solutions have been proposed and are deployed that offer the 'normal' security services. The currently widely deployed standard is SSL/TLS. It will be described in more detail below. Alternatives have been proposed but have not become popular, and are either not used at all, used on a much smaller scale, or used within a closed environment. S-HTTP is a protocol situated at the application layer (later we will examine the consequences of providing security services at certain layers) and is specifically intended for HTTP. It secures HTTP messages in a way very similar to the protocols for secure e-mail, and provides all four security services. S-HTTP was, however, not a success. Microsoft proposed Private Communications Technology (PCT) [10, 93], a protocol that is very similar to SSL/TLS. Although PCT is implemented in Microsoft's Web browser and server, it is not frequently used. Web browsers and Web servers can be integrated into existing security architectures, such as Kerberos and SESAME. This can be done in several ways. It is especially useful in closed environments (e.g., a company's Intranet) that already have other applications running within this security architecture, or that want to enable additional security features not provided by SSL/TLS (e.g., authorization and secure audit).
Security at the transport layer
As explained above, the communication between a Web browser and a Web server is typically secured by the SSL/TLS protocol. Historically, Secure Sockets Layer (SSL) was an initiative of Netscape Communications. SSL 2.0 contains a number of security flaws which are solved in SSL 3.0. SSL 3.0 was adopted by the IETF Transport Layer Security (TLS) Working Group, which made some small improvements and published the TLS 1.0 standard. SSL 3.0 is still widely used today, though. "SSL/TLS" is used in this paper, as "SSL" is an acronym everyone is quite familiar with; however, the use of TLS in applications is certainly preferred to the use of the SSL protocols.
SSL/TLS in detail
Within the protocol stack, SSL/TLS is situated underneath the application layer. Although the name might indicate it is situated within the transport layer, SSL/TLS should be seen as a protocol on top of it, and therefore sits in between the transport layer and the application layer. It can be used to secure the communication of any application, and not only between a Web browser and server. It is in principle completely transparent to the application. However, interaction with the end-user is needed, for example, to check with whom a secure session has been established, or to explicitly request the client to authenticate to the server. In practice, SSL/TLS is therefore often integrated into the application to a large extent (see also Bellovin).
SSL/TLS provides entity authentication, data authentication, and data confidentiality. In short, SSL/TLS works as follows. A connection between a browser and a Web server is divided into two phases, the handshake and the data transfer. The purpose of the handshake is threefold: first, the Web browser and server agree on a set of cryptographic algorithms (a ciphersuite) that will be used to protect the data; second, they establish the set of cryptographic keys with which the data will be protected; and third, the Web server authenticates to the browser and, optionally, the user/browser authenticates to the server. Public-key cryptography is used for entity authentication and key establishment. The latter is mostly performed by having the client send an initial random secret to the server, encrypted with the server's public key. Once the handshake has been completed, data transfer can take place. In the 'record layer', data is broken up into fragments and transmitted as a series of protected records. To provide data integrity, a Message Authentication Code (MAC) is computed over a data fragment; fragment and MAC are then encrypted. Symmetric-key cryptography is used to provide data confidentiality and data authentication. Note that digital signatures are not used during data transfer. The standard also foresees compression of the fragments before they are encrypted; however, as no algorithm has been defined, this is not implemented in current products.
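The record-layer processing just described can be sketched as follows. This is a simplified illustration, not an actual TLS ciphersuite: the toy XOR keystream, the key names, and the use of only the sequence number as MAC'ed header data are assumptions made for the sake of the example; what mirrors the protocol is the authenticate-then-encrypt order and the MAC verification on receipt.

```python
import hashlib
import hmac

def toy_keystream(key: bytes):
    # Toy hash-based XOR keystream, for illustration only; real SSL/TLS
    # uses negotiated ciphersuites (e.g., RC4, 3DES or AES in CBC mode).
    counter = 0
    while True:
        block = hashlib.sha256(key + counter.to_bytes(8, "big")).digest()
        yield from block
        counter += 1

def protect_record(fragment: bytes, mac_key: bytes, enc_key: bytes, seq: int) -> bytes:
    # Step 1: authenticate -- MAC over the sequence number and the fragment.
    mac = hmac.new(mac_key, seq.to_bytes(8, "big") + fragment, hashlib.sha1).digest()
    # Step 2: then encrypt fragment || MAC (the authenticate-then-encrypt order).
    plaintext = fragment + mac
    return bytes(p ^ k for p, k in zip(plaintext, toy_keystream(enc_key)))

def unprotect_record(record: bytes, mac_key: bytes, enc_key: bytes, seq: int) -> bytes:
    plaintext = bytes(c ^ k for c, k in zip(record, toy_keystream(enc_key)))
    fragment, mac = plaintext[:-20], plaintext[-20:]  # SHA-1 MAC is 20 bytes
    expected = hmac.new(mac_key, seq.to_bytes(8, "big") + fragment, hashlib.sha1).digest()
    if not hmac.compare_digest(mac, expected):
        raise ValueError("bad_record_mac")
    return fragment
```

Including the sequence number under the MAC is what lets the receiver detect replayed or reordered records, even though the records themselves carry no explicit numbering.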
In summary, the client and server start with a handshake in order to setup the necessary security parameters which will be used in the subsequent data communication. SSL/TLS's resumption feature allows multiple connections in one session, i.e., the client and server have the opportunity to re-use security parameters so that they do not always have to perform a new full handshake with each new connection.
The usage of SSL/TLS imposes a significant overhead compared to ordinary HTTP. Coarfa et al. analyzed this overhead in detail, and estimated the relative costs of each component within SSL/TLS. They show that hardware accelerators (discussed later in the section on Implementation and 'Environmental' Issues) can improve performance, but only to some extent.
Security of SSL/TLS
So what is the security of the SSL/TLS protocol? Note that we discuss security here from a cryptographic (protocol) point of view. That is, we evaluate the security of the SSL/TLS protocol, assuming that the necessary trust anchors are in place and that the end-points are secure. These assumptions cannot, however, always be made in practice, as will be seen later in this paper. Also note that SSL/TLS has been designed to provide a confidentiality and an authenticity service; one should therefore not expect it to automatically provide other security services as well (e.g., anonymity, see the section on Anonymity and Privacy).
SSL 2.0 contains a number of security flaws, and should not be used anymore (it is enabled by default in most standard browsers though!). Wagner and Schneier's security analysis of the SSL 3.0 protocol also applies to TLS 1.0. Mitchell et al. and Paulson analyzed the SSL 3.0 and TLS protocols, respectively, in a more formal way. These analyses show that the most important security weakness of the TLS protocol is related to downgrade attacks. Both weak (export) and strong ciphersuites are supported by the SSL/TLS protocols. SSL/TLS allows the Web browser and server to negotiate the desired ciphersuite. An entity-in-the-middle can influence this negotiation and force the use of a weak ciphersuite if it is supported by both sides and if the entity is able to break (or has already broken) it before the handshake is completed; the latter is necessary for the entity-in-the-middle in order to successfully present proofs of authenticity of the handshake messages to both browser and server.
Worse, the same applies to the usage of different versions of the SSL/TLS protocols. If multiple versions are supported by browser and server, an entity-in-the-middle can, for example, force the usage of SSL 2.0 and exploit some of the known weaknesses of this protocol (especially the fact that the integrity of the handshake messages is not protected). TLS includes some protection against version rollback attacks: in addition to the protocol version indicated in the cleartext handshake messages, which can be modified by an attacker, the version number is also included inside the encoding of the initial secret sent by the client. The bottom line is that the SSL/TLS protocol suite is as secure as the weakest option supported by browser and server, whether or not stronger options are supported as well. Weak options should explicitly be disabled when they are not needed. In practice, unfortunately, these options tend to be enabled by default, and are needed for certain Web sites.
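The rollback protection just described can be sketched as follows, assuming the TLS 1.0 layout of the 48-byte premaster secret (two version bytes followed by 46 random bytes); the function names are hypothetical. Because the embedded version travels under the server's RSA encryption, an attacker who downgrades the cleartext hello messages cannot make the two copies agree.

```python
import os

def make_premaster_secret(client_version: tuple) -> bytes:
    # 48-byte premaster secret: 2 version bytes + 46 random bytes (TLS 1.0 layout).
    # The client puts the HIGHEST version it offered, not the negotiated one.
    major, minor = client_version
    return bytes([major, minor]) + os.urandom(46)

def server_check_version(premaster: bytes, hello_version: tuple) -> bool:
    # The version embedded under the RSA encryption must match the version
    # the client offered in the cleartext ClientHello; a mismatch signals
    # that an entity-in-the-middle tampered with the version negotiation.
    return premaster[:2] == bytes(hello_version)
```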
In addition to the downgrade weakness of the SSL/TLS protocol, there are two noteworthy security issues with respect to certain underlying mechanisms used in the protocol. As explained above, SSL/TLS first adds a MAC and then encrypts communication. Krawczyk recently showed that authenticate-then-encrypt is generally not secure, and that it should be done in the reverse order, encrypt-then-authenticate. Fortunately, authenticate-then-encrypt is shown to be secure with the modes of encryption used in SSL/TLS (i.e., block ciphers in CBC mode, and XOR-based stream ciphers; however, some of the usage conditions of these modes are not met in SSL/TLS, and an attack has been found by Vaudenay). Bleichenbacher discovered an attack on SSL/TLS servers which support PKCS#1 v1.5 encoded RSA ciphersuites. For RSA ciphersuites, an initial secret is randomly generated by the client and forwarded to the server encrypted with the public RSA key of the server (as already indicated, this is the most common way of key establishment in SSL/TLS). PKCS#1 describes how data should be formatted before it is RSA encrypted. Let the encrypted message be the ciphertext. An attacker can then send many carefully adapted versions of this ciphertext to the server. If the server acts as an oracle by decrypting these messages and revealing whether the formatting is right or wrong, the attacker can in the end decrypt the original ciphertext and thus find the initial secret from which the cryptographic keys were derived to secure the communications session. There is a tendency in practical cryptographic protocols to define many different error messages, so that a server often acts as an oracle and gives detailed feedback to clients. TLS therefore solves the problem in a pragmatic way by instructing that a server should just proceed even if the formatting is incorrect, without informing the client.
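A minimal sketch of the PKCS#1 v1.5 formatting check, and of the pragmatic TLS countermeasure, might look as follows. The function names are hypothetical and the check is simplified; the point is that a server revealing the boolean result of this check to clients becomes the oracle the Bleichenbacher attack needs.

```python
import os

def pkcs1_v15_pad(message: bytes, k: int) -> bytes:
    # PKCS#1 v1.5 encryption block: 00 || 02 || PS || 00 || message,
    # where PS is at least 8 nonzero random bytes and k is the RSA modulus size.
    ps_len = k - 3 - len(message)
    ps = bytes(b or 1 for b in os.urandom(ps_len))  # force padding bytes nonzero
    return b"\x00\x02" + ps + b"\x00" + message

def pkcs1_v15_check(block: bytes) -> bool:
    # The formatting check whose outcome a careless server leaks to clients.
    if len(block) < 11 or block[0] != 0x00 or block[1] != 0x02:
        return False
    try:
        sep = block.index(0, 2)  # first zero byte after the padding
    except ValueError:
        return False
    return sep >= 10             # at least 8 nonzero padding bytes

def server_decrypt_countermeasure(block: bytes, k: int) -> bytes:
    # TLS countermeasure: never reveal a formatting failure. On a bad block,
    # substitute a random premaster secret and let the handshake fail later
    # with an indistinguishable error.
    if pkcs1_v15_check(block):
        return block[block.index(0, 2) + 1:]
    return os.urandom(48)
```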
From a cryptographic point of view, though, more secure encoding schemes should be used in the future.
SSL/TLS now and in the future
The TLS protocol is only in its first version and is still evolving. At the time of writing of this article, the IETF TLS Working Group is developing a second version of the TLS 1.0 specification, and creating several enhancements of the TLS protocol, including a number of (wireless and other) extensions, and new ciphersuites incorporating Rijndael, the Advanced Encryption Standard. Besides directly within the IETF, some enhancements to TLS have also been proposed by the academic community. Shacham and Boneh propose a system for batching to improve the performance of the handshake; in particular, they propose a system in which a server can handle multiple requests in one batch, so that the total amount of cryptographic computation is decreased. Fast-track session establishment is another performance improvement proposed by Shacham and Boneh. Here, a client caches a server's public parameters and negotiated parameters, in order to use them in subsequent handshakes (note that this is different from the session resumption feature). Persiano and Visconti discuss user privacy issues regarding SSL/TLS, and propose a modification of the protocol in which the privacy of the user is protected, i.e., the client certificate of the user is only exchanged when the link is already encrypted. To counter denial-of-service attacks, i.e., exhausting Web servers by flooding them with connection requests, Dean and Stubblefield propose client puzzles to be solved by the browser before a connection can be requested. Hess et al. describe how to incorporate trust negotiation as an extension to the TLS protocol.
Today's popular browsers implement the SSL/TLS protocol by default. Netscape 4.7x only supports SSL, while Netscape 6.x and Microsoft Internet Explorer 5 and 6 support both SSL and TLS. Instead of relying on the browser's implementation of SSL/TLS, many e-services, in particular e-banking, use an independent implementation of SSL/TLS executed from within the browser (e.g., through a Java applet) or via a stand-alone application (e.g., a Web proxy on the user's machine; a proxy is an application that acts as an intermediate server and transparently forwards communication to the requested server). The main reason behind this was the U.S. export restrictions. These export restrictions, and some of the solutions for strong cryptography, will be discussed later in the section on Legal Issues. More detailed information on SSL/TLS, the security flaws in SSL 2.0, and the differences between SSL 3.0 and TLS 1.0 can be found in Rescorla.
Security at the application layer
SSL/TLS provides an end-to-end secure channel, but nothing more. Data is only protected while in transit. Moreover, exchanged messages are not digitally signed (note that this applies to data messages; if client authentication is required, the client applies a digital signature once, to one specific message during the handshake). SSL/TLS therefore does not provide non-repudiation. Both customers and merchants can always later deny having sent or received requests or confirmations.
Trust anchors and Public Key Infrastructure
For SSL/TLS to work, there must exist a meaningful Public Key Infrastructure (PKI) [1, 47]. Participating entities should have a public/private key pair. A PKI ensures a correct mapping of public keys to entities. An entity's name together with the corresponding public key is put into an X.509 certificate which is signed by a Certification Authority (CA).
Firstly, the authentic advance distribution of 'root' certificates, the certificates of the CAs, is very important. They are used by the Web server to verify entity certificates (if SSL/TLS client authentication is used). The browser needs them in order to verify the Web server's certificate during SSL/TLS authentication, and in order to verify digitally signed applets (see the section on Mobile Code). Authentication cannot be performed without authentic root certificates. Today's popular browsers are shipped with many root certificates included. It is however easy to add more or even replace root certificates (see also the section on Implementation and 'Environmental' Issues). A secure update of expired root certificates is not foreseen. Moreover, the browser trust model causes a server certificate to be trusted if it is successfully verified with any of the root certificates (since there is usually no central policy management, this might easily include an attacker's root certificate). An adversary can thus relatively easily impersonate a secure Web server without the user noticing it (see Hayes).
Secondly, entities must properly protect their private key. If an entity's private key gets compromised, an adversary can impersonate this entity. The corresponding certificate should in that case be revoked as soon as possible, i.e., it should for example be added to a Certificate Revocation List (CRL). Browsers however generally do not yet check by default whether a server certificate has been revoked.
Thirdly, certification authorities must be trusted to issue certificates to the right entities, including the right information. Certificates should not be issued without rigorously checking the identity of the individual who requests the certificate. Unfortunately, this is not always the case in practice: even VeriSign, one of the major CAs, recently issued certificates to individuals fraudulently claiming to be Microsoft employees. One can question whether certification authorities are really authorities. For example, a CA that issues certificates for Web servers is usually not the party that authorized the use of these Web servers' domain names. The organization behind the store.palm.com Web site, for instance, is Modus Media International and not Palm; this is mentioned in the certificate, but is not immediately reflected on the Web site itself. How can the user be sure that Modus Media is authorized to do business in Palm's name?
Finally, the user can only trust the correct execution and interface of a secure Web session if he has a genuine copy of the browser. Standard browsers allow a user to configure the different SSL/TLS versions, choose particular ciphersuites, and manage certificates. Users must be able to check fingerprints of certificates (the hash value of a certificate), i.e., the fingerprints should be distributed out-of-band (e.g., published in newspapers, or printed on official documents related to the e-service). Users must be able to recognize when they have a secure session with the server. However, in today's browsers there are only limited visual indications, i.e., a closed lock, and an inexperienced user is easily fooled by a spoofed Web site, as demonstrated by Felten et al. and more recently by Yuan et al.
Authentication of the user is of paramount importance in many e-services. Authentication should be performed before authorization can take place, that is, the process of determining whether a user is allowed to perform a particular (trans)action.
Client authentication is optional in the SSL/TLS protocols. When setting up a secure channel between the browser and the server, the user can also be authenticated explicitly using a digital signature: during the handshake, the client signs a hash of all the previously exchanged handshake messages. Note that usually, the private signing key is stored on the user's computer and is only protected with a password.
However, users are mostly authenticated in another way. The different possible mechanisms are described below. These mechanisms can be used independently from SSL/TLS. However, using them on top of SSL/TLS often substantially increases security. Note that some authentication mechanisms authenticate the browser or the machine instead of the actual user.
Host name or IP address
Users are often authenticated via the host name or IP address of their machine. From the security point of view, this technique is vulnerable to IP spoofing. From the functionality point of view, it does not necessarily offer a user mobility. While this authentication mechanism could be deployed on a small scale in a closed system, it is just impractical in an open system. Note that proxies and firewalls prevent the mapping of users to particular machines, and only allow a coarse-grained mapping of specific groups of users to specific networks.
User authentication is very often performed with fixed passwords. This can be done based on the HTTP/1.0 Basic Authentication standard, i.e., the password is provided via a particular pop-up browser window and is included in a dedicated HTTP header (present in each subsequent request). As the password is transmitted in the clear to the server, SSL/TLS should thus be used underneath this mechanism. Note that fixed passwords remain vulnerable to guessing, dictionary attacks and social engineering, as already indicated by Morris and Thompson.
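For illustration, constructing the Basic Authentication header is straightforward, which also shows why the mechanism offers no protection by itself: the "encoding" is trivially reversible.

```python
import base64

def basic_auth_header(username: str, password: str) -> dict:
    # Basic scheme: base64("username:password") in the Authorization header.
    # Base64 is an encoding, not encryption -- anyone observing the request
    # recovers the password, hence the need for SSL/TLS underneath.
    token = base64.b64encode(f"{username}:{password}".encode()).decode()
    return {"Authorization": f"Basic {token}"}
```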
Despite their inherent weaknesses, fixed passwords are still widely used, as they provide transparent mobility, and as they are very easy to implement and use. Most Web sites do not rely on the HTTP/1.0 Basic Authentication standard, but implement their own fixed password scheme: the user must provide the password through an HTML form; upon receipt of a valid password, the server creates a session authenticator (in its simplest form this is just the cleartext password, which leads to a system equivalent to HTTP/1.0 Basic Authentication). This authenticator should be included by the client in subsequent requests within the session with the server; this is done automatically when the authenticator is included as a cookie in the initial reply of the server (see also the section on Anonymity and Privacy for more on cookies). Various schemes appear to be deployed on the WWW. Fu et al. present an interesting discussion of the security of such schemes. As it turns out, many Web sites have authentication mechanisms that are not secure against so-called interrogative adversaries; ordinary users are, for example, able to forge authenticators of other users based on their own authenticator. As a side note, Microsoft Passport also relies on cookies to provide a single sign-on service across different Web sites. Slemko identified some practical flaws in this system, which result in the ability to steal authenticators or to reuse authenticators for purposes other than those intended.
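A sound session authenticator binds the username and an expiry time to a server-side secret with a keyed MAC, so that an interrogative adversary cannot derive another user's authenticator from his own. The sketch below is an assumed minimal design in this spirit, not any particular deployed scheme; all names are hypothetical.

```python
import hashlib
import hmac
import time

SERVER_KEY = b"server-side secret key"  # known only to the server

def make_authenticator(username: str, lifetime_s: int = 3600) -> str:
    # Authenticator: username | expiry | MAC(key, username | expiry).
    expiry = int(time.time()) + lifetime_s
    payload = f"{username}|{expiry}"
    tag = hmac.new(SERVER_KEY, payload.encode(), hashlib.sha256).hexdigest()
    return f"{payload}|{tag}"

def verify_authenticator(cookie: str):
    # Returns the authenticated username, or None if forged/altered/expired.
    try:
        username, expiry, tag = cookie.split("|")
    except ValueError:
        return None
    payload = f"{username}|{expiry}"
    expected = hmac.new(SERVER_KEY, payload.encode(), hashlib.sha256).hexdigest()
    if not hmac.compare_digest(tag, expected):
        return None          # forged or altered authenticator
    if int(expiry) < time.time():
        return None          # expired session
    return username
```

Note that the MAC only prevents forgery; an authenticator stolen in transit can still be replayed, which is one more reason to carry such cookies over SSL/TLS.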
Passwords that are used only once are not frequently deployed. They would be more secure, but certainly less convenient. A list of independent random one-time passwords can be issued to users. As such a list is very difficult to learn by heart, users will be forced to keep it somewhere, either on paper or, worse, in a (cleartext) file on their machine. It is also possible to generate a chain of dependent one-time passwords; see for example the system described by Haller et al., which is based on an idea described by Lamport. This usually requires extra software at the client side. Note that, although passwords cannot be replayed, SSL/TLS is still needed to provide authentication of the server.
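A chain of dependent one-time passwords in the style of Lamport can be sketched as follows; the class and function names are hypothetical. The client derives the i-th password by hashing a secret seed n-i times; the server stores only the last accepted value and verifies each new password by hashing it once, so a captured password cannot be replayed and does not reveal the next one.

```python
import hashlib

def hash_chain(seed: bytes, n: int) -> bytes:
    # Apply the one-way function n times: h^n(seed).
    value = seed
    for _ in range(n):
        value = hashlib.sha1(value).digest()
    return value

class OtpServer:
    # The server is initialized with h^n(seed); the i-th password is h^(n-i)(seed).
    def __init__(self, initial: bytes):
        self.last = initial

    def verify(self, otp: bytes) -> bool:
        # Hashing the submitted password once must yield the last accepted value.
        if hashlib.sha1(otp).digest() == self.last:
            self.last = otp  # move one step down the chain
            return True
        return False
```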
The idea of a challenge/response scheme is that the user proves his identity to the server by demonstrating knowledge of a secret, not by just sending this secret to the server, but by producing the proper response to a random challenge using this secret. There are symmetric and asymmetric challenge/response schemes. In a symmetric scheme, for example, the response consists of a MAC on the time or on a random challenge of the server. A digital signature on a random challenge message is an example of an asymmetric scheme. These challenge/response schemes are often implemented with hardware tokens. The HTTP/1.1 Digest Authentication standard (which is implemented in Microsoft Internet Explorer) is an example of a software-based challenge/response scheme: the response computed by the browser is the hash of the concatenation of the username, the password, and the challenge of the server. Challenge/response authentication mechanisms are designed to resist replay attacks, i.e., an adversary should not be able to re-use a particular response to authenticate in another session with another challenge. Again, SSL/TLS is still needed to authenticate the server.
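The digest-style scheme can be sketched as follows. This follows the simplified description given above (a hash of username, password and challenge concatenated), not the exact Digest Authentication computation, and the function names are assumptions for the example.

```python
import hashlib
import os

def issue_challenge() -> bytes:
    # The server issues a fresh, unpredictable nonce per authentication attempt.
    return os.urandom(16)

def response(username: str, password: str, challenge: bytes) -> str:
    # Simplified digest-style response: the password itself never travels;
    # only a hash bound to this particular challenge does.
    return hashlib.md5(username.encode() + password.encode() + challenge).hexdigest()

def server_verify(username: str, stored_password: str, challenge: bytes, resp: str) -> bool:
    # The server recomputes the expected response from its copy of the password.
    return response(username, stored_password, challenge) == resp
```

Because the challenge changes on every attempt, a captured response is useless in a later session; this is exactly the replay resistance the text describes.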
Several of the previously discussed mechanisms can be (more securely) implemented using hardware tokens. Users can easily carry these hardware tokens with them, and can use them on different machines. Hardware tokens thus offer mobility, in contrast to software tokens that are installed on a particular machine.
Hardware tokens which display a response to the current time interval (e.g., SecurID) or to an unpredictable challenge given by the server via the computer screen and typed in by the user on the token (e.g., Digipass) are used for authentication. These hardware tokens can sometimes calculate MACs on arbitrary data, which can be used to authenticate particular transactions in the e-service.
Mobile devices such as PDAs or special-purpose wireless wallets can enhance a WWW-based system's security. These devices are considered personal, and can also perform cryptographic protocols. The communication between the device and the PC is realized manually (i.e., the user copies what is displayed on the device), via Bluetooth, or via an infrared interface.
The compromise of one token could lead to the disastrous scenario in which the site has to issue new tokens to all users. To prevent this, all tokens should contain a different cryptographic key. When using public-key cryptosystems this is not a problem. However, in the case of symmetric keys, it requires the maintenance of a secure database containing all the secret keys the site shares with its users. Symmetric keys are therefore often cryptographically derived from a unique serial number of the token and a master key which is the same for all (or a subset of all) the tokens. In this way, each user shares a different key with the site, without the problem of the secure database.
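Key diversification of this kind can be sketched as follows, assuming HMAC as the derivation function; the names are hypothetical. The site keeps only the master key and re-derives each token's key on the fly from the serial number, so no per-user key database is needed.

```python
import hashlib
import hmac

MASTER_KEY = b"issuer master key"  # held only by the site (ideally in an HSM)

def derive_token_key(serial: str) -> bytes:
    # Each token's symmetric key is derived from its unique serial number
    # and the site-wide master key; compromising one token key does not
    # reveal the master key or any other token's key.
    return hmac.new(MASTER_KEY, serial.encode(), hashlib.sha256).digest()

def site_verify(serial: str, challenge: bytes, token_mac: bytes) -> bool:
    # On an authentication attempt, the site re-derives the token's key from
    # the presented serial number and checks the token's MAC on the challenge.
    key = derive_token_key(serial)
    expected = hmac.new(key, challenge, hashlib.sha256).digest()
    return hmac.compare_digest(expected, token_mac)
```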
Security of Java
Java is a platform-independent programming language. Java code consists of generic byte code which runs on a Java Virtual Machine (JVM). The JVM translates the byte code into platform-specific code. Most browsers incorporate their own implementation of a JVM. It is also possible to install a stand-alone JVM. Java includes specific security features in order to counter the risks associated with mobile code. Background information on Java security can be found in McGraw and Felten. Here is a brief overview.
Java applets run in a sandbox. That is, they run in a limited environment which prevents them, for example, from reading or writing files, creating or listening to network connections to/from any computer other than the originating server, executing local programs, overwriting or spoofing local (system) code, etc. To enforce this sandbox model, Java applets are controlled by three consecutive processes. The Byte Code Verifier checks that the byte code conforms to the rules of the language. The (Applet) Class Loader makes sure important parts of the Java runtime environment are not replaced by impostor code, preventing class spoofing; it retrieves the byte code from the local machine or from the net, and defines a namespace. The Security Manager performs runtime checks on dangerous methods.
The sandbox model was introduced in the first version of Java, JDK 1.0. (Local) Java applications are completely trusted, while Java applets run in a sandbox. JDK 1.1 added the concept of code signing. If an applet is digitally signed, and if the signer is trusted, the applet is treated as a local application and does not run in the sandbox. In JDK 1.2, or Java 2, there is no distinction anymore between system code, applications, or applets. All code runs in a controlled environment. However, instead of one limited sandbox environment, each piece of code runs in a customized sandbox, i.e., the privileges each piece of code gets can be configured according to particular security policies. The location and/or signer of the code determine which security policy should be used. Thus, Java 2 gives the ability to grant exactly those privileges that are needed, and only when they are needed.
Mobile code in browsers
Both Netscape and Microsoft have implemented their own approach, which can be situated somewhere in between JDK 1.1 and 1.2. Netscape refers to code signing as 'object signing'. Extra privileges can be requested from the user when they are needed, through Netscape-specific Java methods, which have to be explicitly added to the source code. Microsoft uses the term 'Authenticode' for digitally signing Java applets or ActiveX controls. Requests for extra permissions are included in the signature, and the user is asked to grant them when the applet is started. Regarding a configurable security policy, only the identity of the signer is relevant for Netscape, while Microsoft Internet Explorer has different 'Security Zones' and is thus also able to distinguish between different locations. Many browsers can now also rely on an external, stand-alone, and full implementation of a Java 2 virtual machine, hereby providing the complete and browser-independent security functionality of Java 2. Note that these security features only apply to Java; Web pages can contain other forms of mobile code. VBScript, for example, does not run in a sandbox, and thus has all the privileges of the user. Anupam and Mayer analyzed some flaws in Web browser scripting languages.
Mobile code, friend or enemy?
Despite these security measures, attacks have still been shown to be possible. The main causes are bugs and implementation errors in particular virtual machines. More conceptually, however, applets can get out of their limited environment by simply asking the user for more privileges. This is dangerous, as users are very easily lured into granting them. Such trust decisions are not trivial. The signer is not necessarily the author of the code; the fact that an applet is digitally signed does not imply that it can be trusted. Moreover, for what purposes should it be trusted? Even if the signer is honest, the code might not be secure.
Extra defense strategies are therefore required. Mobile code can simply be disabled, but this might be too restrictive. A company firewall or a personal firewall can be configured to scan for known malicious applets; this is, however, not possible if these are downloaded over an encrypted channel (SSL/TLS). An organization could also enforce an overall security policy and technically prevent users from making their own trust decisions (Netscape Mission Control Desktop is an example of a software product that implements this).
While on the one hand, mobile code constitutes a danger on the WWW, on the other hand, it can be and is used as a security enhancement in particular e-services. For example, electronic banking systems often rely on a secure applet that implements a (non-export version of) SSL/TLS, and that might provide additional security features. Such applets might require extra privileges too, for example to access cryptographic keys stored on the hard disk, or to connect to a smart card reader. This clearly shows that just disabling mobile code is not always a solution, even (or especially) not for security-related applications. Moreover, depending on mobile code for security-related applications makes trust decisions even more important: granting extra privileges to the wrong applet might compromise your cryptographic keys, and allow an attacker to impersonate you.
Anonymity and Privacy
When surfing the WWW, users are also subjected to many potential privacy threats. Although many are concerned about these threats, some users are still not aware of them, and there are those that do not seem to care about online privacy.
Information about a user's identity can be revealed in two ways: at the application level, that is, through the content itself that is exchanged, and at the network level, that is, via the network address of the machine from which the user is surfing. A third subsection will address the anonymity of an information publisher.
Anonymity at the application level
A user's browser discloses a lot of information within the HTTP headers that can help identify a particular user, and that can help build a complete profile of the user's interests and behaviour. For example, the Referer header informs the server about the page that contained the link the user is requesting at that moment (e.g., a banner server knows on which pages a banner is placed). The User-Agent header informs the server about which browser on which platform is being used. Last but not least, cookies constitute the ideal tool for maintaining user profiles. Cookies are little pieces of information a Web server can ask the browser to store on the user's machine. When the user returns to that server, the cookie is retrieved and transmitted again to the server. Cookies are mainly used to establish (authenticated) sessions in an e-service, or to store a user's personal preferences. However, they are easily 'abused' to build detailed user profiles, as they can be sent together with banners which are included on many different Web pages, but which all originate from the same server (e.g., DoubleClick).
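The tracking mechanism described above can be sketched as a toy model (the server and page names below are made up):

```python
import uuid

# Toy model of a third-party banner server (e.g., an ad network) that
# builds per-user browsing profiles from its tracking cookie.
class BannerServer:
    def __init__(self):
        self.profiles = {}  # cookie id -> list of pages the user visited

    def serve_banner(self, cookies, referer):
        # Issue a tracking cookie on the first visit; reuse it afterwards.
        uid = cookies.get("trackid") or uuid.uuid4().hex
        # The Referer header tells the banner server which page embeds it.
        self.profiles.setdefault(uid, []).append(referer)
        return {"trackid": uid}  # the Set-Cookie sent back to the browser

ads = BannerServer()
jar = {}  # the browser's cookie jar
# The same banner server is embedded on many unrelated sites.
for page in ["http://news.example/politics", "http://shop.example/shoes"]:
    jar.update(ads.serve_banner(jar, page))

(uid,) = ads.profiles  # one user, one tracking id
print(ads.profiles[uid])  # both visits linked to the same id
```

Although neither site shared anything with the other, the banner server now holds one profile linking the user's visits across both.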
Besides data that is transparently sent without the user really noticing it, personal information is often explicitly requested from the user. The user should be confident that the Web server takes proper care of this personal information. The W3C therefore developed the Platform for Privacy Preferences (P3P). P3P provides an automated way for users to gain more control over the use of the personal information they send to Web sites. In particular, P3P is a standardized way of formulating privacy policies. These privacy policies are presented by servers, and can be understood by browsers or interpreted directly by the user.
Many Web sites require users to provide a username, password, and an e-mail address. This allows the Web site to offer a personalized service. Unfortunately, users will mostly choose easy-to-remember usernames that can be associated with their real identity. The same e-mail address will mostly be used, making it very easy to link different usernames to each other. Note that users will probably choose the same passwords too, so if one password is compromised, this potentially gives an adversary access to other (more sensitive) services as well. One cannot expect ordinary users to have different usernames, passwords, and e-mail addresses for all of the Web sites that they visit. The Lucent Personalized Web Assistant (LPWA) offers a solution to this problem. The LPWA provides privacy-concerned users with a different, anonymous, and unlinkable username/password and e-mail address for each Web site, while users only have to remember one secret. Before browsing the WWW, users log in to the LPWA by giving their identity and their secret. From then on, the LPWA is used as an intermediate Web proxy. The LPWA transforms the identity, the secret, and the URL of the Web site into a username, a password, and an e-mail address that will be used for that Web site.
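A minimal sketch of the idea behind the LPWA, deriving per-site aliases from a single secret with HMAC (the real LPWA uses its own keyed functions, and the relay domain here is made up):

```python
import base64
import hashlib
import hmac

def lpwa_alias(master_secret: bytes, identity: str, site: str):
    """Derive a per-site username/password/e-mail from one master secret.
    Illustrative only; the actual LPWA constructions differ."""
    def derive(tag: str) -> str:
        mac = hmac.new(master_secret, f"{tag}|{identity}|{site}".encode(),
                       hashlib.sha256).digest()
        return base64.b32encode(mac[:10]).decode().lower()
    user = derive("user")
    return user, derive("pass"), f"{user}@relay.example"

u1, p1, e1 = lpwa_alias(b"my secret", "alice", "shop.example")
u2, p2, e2 = lpwa_alias(b"my secret", "alice", "news.example")
# Aliases are deterministic per site, but unlinkable across sites.
assert lpwa_alias(b"my secret", "alice", "shop.example") == (u1, p1, e1)
assert (u1, p1, e1) != (u2, p2, e2)
```

The user remembers only the master secret; the proxy recomputes each site's credentials on the fly, and e-mail sent to the relay address can be forwarded to the real mailbox.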
Besides the browser itself, third-party browser add-ons (e.g., those that show comments posted by other users of the Web site the user is currently visiting, or that show an updated list of related sites) are potentially dangerous for the privacy of a user. As more users become concerned about privacy, many products are being developed: personal firewalls, password managers, form fillers, cookie managers, banner managers, keyword alerts, etc. Most of these products also increase security. The latest browsers include part of this functionality as well.
Anonymity at the network level
More fundamentally, at the network level, IP addresses are required for establishing communication between browser and Web server. In many cases, these addresses can be linked to a limited set of persons, if not one person.
A Web proxy can hide the user's IP address from the Web server. In addition, a Web proxy can provide anonymity services at the application level too, by for example stripping identifying information and/or providing secure management of sensitive information such as cookies and usernames/passwords. Examples of such Web proxies are the Anonymizer , and the LPWA (which was commercially available as ProxyMate in the past).
A Web proxy is only a basic solution that protects against local observers (e.g., the Web server itself). More advanced solutions have to protect against powerful observers who are able to oversee the global network. These solutions should protect against eavesdropping, not for confidentiality purposes, but to prevent the content of messages from being traced from destination to source. They should also protect against traffic analysis, to prevent messages from being traced based on size and timing measurements. Note that some Web proxies use SSL/TLS in order to provide an enhanced level of anonymity. However, Danezis demonstrated that SSL/TLS does not resist traffic analysis (and thus that SSL/TLS should not simply be used for security services other than those it has been designed for; note that while SSL 3.0 does not allow arbitrary-length padding, TLS does, and therefore TLS could in theory be securely deployed in this situation).
Chaum's mix forms the basis for the more advanced solutions. The messages of all parties wanting to communicate anonymously are sent through the mix. The mix hides the correspondence between the messages in its input and those in its output; it hides the order of arrival of the messages by reordering, delaying, and padding traffic. As Web traffic is real-time and bidirectional, delaying, for example, is not really possible. Practical solutions therefore require a chain of mixes in order to provide an adequate level of anonymity. There are two categories of solutions. In one set of solutions, i.e., Crowds and Hordes, the users themselves are the mixes, and Web requests are randomly forwarded among the 'crowd' of users before they are sent to the Web servers. In the other solutions, i.e., Onion Routing, Pipenet, Freedom, and Web MIXes, an anonymous connection through a number of chosen routers or mix entities is set up, over which the Web requests and replies are sent.
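The layered encryption used over such a chain of mixes can be sketched as follows; the toy stream cipher below (SHA-256 in counter mode) merely stands in for the real cryptography and is not secure:

```python
import hashlib

def keystream(key: bytes, n: int) -> bytes:
    # Toy stream cipher: SHA-256 in counter mode (illustration only).
    out = b""
    ctr = 0
    while len(out) < n:
        out += hashlib.sha256(key + ctr.to_bytes(8, "big")).digest()
        ctr += 1
    return out[:n]

def xor_layer(data: bytes, key: bytes) -> bytes:
    # XOR with the keystream both adds and removes a layer.
    return bytes(a ^ b for a, b in zip(data, keystream(key, len(data))))

# The sender wraps the request in one layer per mix, innermost layer last.
mix_keys = [b"mix-A", b"mix-B", b"mix-C"]
request = b"GET http://server/ HTTP/1.0"
packet = request
for key in reversed(mix_keys):
    packet = xor_layer(packet, key)

# Each mix peels off exactly one layer; only the last mix sees the
# request, and only the first mix sees the sender's address.
for key in mix_keys:
    packet = xor_layer(packet, key)
print(packet)  # b'GET http://server/ HTTP/1.0'
```

No single mix can link sender to request; an observer would have to compromise or monitor every mix on the chain.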
There is a need to anonymize all IP traffic, and not only HTTP-related communication: Web pages can contain links to non-HTTP sources. Moreover, even DNS requests should be anonymized, as these requests might reveal the intended recipient, even though the actual connection from the originator to the recipient is not traceable. Care should also be taken with mobile code, which could undermine anonymity measures by including identifiable information in anonymous connections, by checking whether certain Web pages are in the browser's cache, as demonstrated by Felten and Schneider, or by circumventing and totally undermining network anonymization mechanisms in various other ways, as described by Martin and Schulman.
The practical mix-based solutions have a substantial impact on the performance (i.e., decrease in bandwidth) that users will experience. In any case, anonymity at the network level has not yet gained as much interest on a large scale as anonymity at the application level.
Anonymous publishing
Up to this point, we have only addressed anonymous browsing on the WWW. In some situations, users wish to be able to anonymously publish information on the WWW.
Goldberg and Wagner proposed Rewebbers and TAZ servers. Rewebbers are Web proxies that understand nested URLs. Nested URLs are constructed by the anonymous publishers, and hide the address of the server they are actually referring to. They point to 'pages' at intermediate rewebbers. These pages are in fact encrypted (with the public key of that rewebber) nested URLs, which point to the next rewebber. An example of a nested URL is http://A/KA(http://B/KB(http://C/KC(http://server/))); the real URL http://server/ is here reached after sending the request through three rewebbers A, B, and C. It is very difficult for the user requesting the page to trace which server the content is originating from.
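The nesting and peeling of such URLs can be sketched as follows, with base64 encoding standing in for the public-key encryption a real rewebber would use:

```python
import base64

def wrap(url: str, rewebbers: list) -> str:
    # The publisher builds the nested URL from the inside out; each
    # layer would really be encrypted with that rewebber's public key.
    for hop in reversed(rewebbers):
        blob = base64.urlsafe_b64encode(url.encode()).decode()
        url = f"http://{hop}/{blob}"
    return url

def unwrap_one(url: str) -> str:
    # A rewebber recovers the next hop from the path of the request.
    blob = url.split("/", 3)[3]
    return base64.urlsafe_b64decode(blob).decode()

nested = wrap("http://server/", ["A", "B", "C"])
assert nested.startswith("http://A/")  # the reader only sees hop A
hop = nested
for _ in range(3):  # each rewebber peels exactly one layer
    hop = unwrap_one(hop)
print(hop)  # http://server/
```

Because each layer is (in the real scheme) encrypted for one specific rewebber, the requesting user and the intermediate hops each learn only the next hop, never the final server.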
The system of Goldberg and Wagner only hides the location of a server; it does not prevent information from being tampered with or removed once the location of the server is known. A number of systems have therefore been developed that offer a censorship-resistant and anonymous publication mechanism. An example is the Publius system developed by Waldman et al.
Content
The World Wide Web is a means to share information amongst individuals, whether or not on a commercial basis. However, not all information on the Web is meant to be shared.
A great deal of intellectual property and copyrighted material, such as books and music, is available in digital form. Distributing this material on a commercial basis via the WWW opens up new perspectives for businesses. However, it brings a new and very difficult problem: unlike data in an analog medium, digital information can be copied without any loss of quality. Thus, secure mechanisms are needed to prevent, or at least detect, this duplication. Preventing duplication or restricting usage is, in theory, impossible to achieve, unless a tamper-resistant viewer/listener with cryptographic keys is used. Detection of illegal copying can be performed through watermarking. A watermark is a piece of information that is added to the digital data in such a way that it cannot be detected by the human eye or ear; it is, however, very difficult to remove while maintaining the quality of the original. In what is called fingerprinting, the watermark contains the identity of the user who bought the digital piece of data. Users then cannot simply give the data away, as copies could later be traced back to them.
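As a toy illustration of fingerprinting, the buyer's identity can be hidden in the least significant bits of the media samples; real watermarking schemes are far more robust against removal and re-encoding:

```python
def embed(samples: list, mark: str) -> list:
    # Hide the fingerprint in the least significant bit of each sample;
    # each sample changes by at most 1, imperceptible to the eye or ear.
    bits = [(byte >> i) & 1 for byte in mark.encode() for i in range(8)]
    out = samples[:]
    for i, bit in enumerate(bits):
        out[i] = (out[i] & ~1) | bit
    return out

def extract(samples: list, length: int) -> str:
    bits = [s & 1 for s in samples[: length * 8]]
    data = bytes(sum(bits[i * 8 + j] << j for j in range(8))
                 for i in range(length))
    return data.decode()

original = list(range(200))        # stand-in for image or audio samples
marked = embed(original, "user42")  # the copy sold to this buyer
print(extract(marked, 6))           # user42
```

If this copy later turns up on the Web, extracting the mark traces it back to the buyer, which is exactly the deterrent the text describes.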
Some of the content available on the WWW might not be suitable for all individuals. For example, parents may not want their children to surf pornographic sites. W3C's Platform for Internet Content Selection (PICS) was originally designed to help parents and teachers control what children access on the Internet. It should now be seen as a more general specification that allows labels to be associated with Internet content. PICS can be the basis for rating or filtering services, but it can also be used for code signing and privacy. Note that Microsoft Internet Explorer supports PICS.
Payments
Although numerous electronic payment systems have been proposed that can be or are used on the WWW, including micro-payment systems and cash-like systems, most payments are made using credit cards. In most cases, customers just have to send their credit card number and expiry date to the merchant's Web server. This is normally done 'securely' over SSL/TLS, but some serious problems can still be identified. Users have to disclose their credit card number to each merchant. This is quite contradictory, as the credit card number is actually the secret on which the whole payment system is based (note that there is no electronic equivalent of the additional security mechanisms present in real-world credit card transactions, such as face-to-face interaction, physical cards, and handwritten signatures). Even if the merchant is trusted and honest, this is risky, as one can obtain huge lists of credit card numbers by hacking into (trustworthy, but less protected) merchants' Web servers. Moreover, it is possible to generate fake but valid credit card numbers, which is of great concern for online merchants. Thus, merchants bear the risk in card-not-present transactions.
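The 'fake but valid' numbers mentioned above are numbers that satisfy the Luhn check digit that card numbers carry; since the check is public, anyone can enumerate numbers that pass it:

```python
def luhn_valid(number: str) -> bool:
    # Luhn checksum: double every second digit from the right, subtract
    # 9 from results above 9, and require the total to be 0 mod 10.
    total = 0
    for i, ch in enumerate(reversed(number)):
        d = int(ch)
        if i % 2 == 1:
            d *= 2
            if d > 9:
                d -= 9
        total += d
    return total % 10 == 0

# A well-known test number passes; changing one digit makes it fail.
print(luhn_valid("4111111111111111"))  # True
print(luhn_valid("4111111111111112"))  # False
```

The check only catches typing errors; it provides no authentication whatsoever, which is why a syntactically valid number is worthless as a secret.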
Secure Electronic Transaction (SET) is a more advanced standard for credit card-based payments, completed in 1997. One of its core features is that merchants only see encrypted credit card numbers, which can only be decrypted by the issuers. Moreover, the number is cryptographically bound to the transaction by a digital signature. This system is conceptually much better, but until now it has not become popular, due to its complexity. In contrast to credit card payments over SSL/TLS, SET requires special "wallet" software on the user's side.
Recently, Visa published the specifications of its 3-D Secure Authenticated Payment Program. This system is mainly based on SSL/TLS. During an online transaction with the merchant, cardholders are requested to authenticate themselves to the issuer. The authentication mechanism can be chosen by the issuer, and ranges from fixed password authentication to smart card-based digital signatures. The result of the authentication is automatically sent to the merchant. If authentication was successful, the transaction is completed in the standard way, by sending the credit card number and expiry date to the merchant. The goal of 3-D Secure is to reduce the number of disputed online transactions, and therefore to reduce the merchant's risk. Similarly, the goal of MasterCard's Secure Payment Application (SPA) is to authenticate a cardholder through the Universal Cardholder Authentication Field (UCAF), a uniform method for passing authentication data of any security scheme.
American Express offers a 'one-time credit card' solution . Customers can request a temporary credit card number with their normal credit card number. This temporary number will only be valid for a short period or even only for one transaction. As the system's name, 'Private Payments', indicates, it allows users to protect their privacy. The system also solves some of the above mentioned security problems in a pragmatic way.
Alternatively, systems like InternetCash  and others exist, in which customers can obtain some pre-paid value identified and protected with a number and PIN, and use it online in cooperation with a central server. These systems are not technically sophisticated but seem to be getting popular.
Another simple but very popular system is PayPal. PayPal is a consumer-to-consumer payment system, often used at auction sites. PayPal links users' credit card numbers or bank accounts with their e-mail addresses. One PayPal user can send money to another PayPal user by logging into the central PayPal Web site and submitting the e-mail address of the user acting as a 'merchant' and the amount of money due. PayPal then bills the first user's account and adds the money to the latter's account. Similarly, PayPal users can request money from other PayPal users. Technically, PayPal is no more secure than the ordinary credit card payments mentioned at the beginning of this section. However, users 'only' have to entrust their sensitive information to the central PayPal server, and not to every merchant or other user.
Real-life electronic payment means, such as electronic purses (e.g., Proton ), and debit cards, are also starting to be deployed on the WWW. Software and card readers are now available (e.g., Banxafe ) that provide the necessary hooks to browsers and Web servers to connect these real-life electronic payment systems to the WWW.
Users can nowadays also perform payments via their mobile phones, and numerous GSM-based payment systems exist. GiSMo was a system intended for the Internet in which customers receive a random code through SMS via a central server; this random code is then entered via the computer in order to complete a transaction. Mint is a system in which each terminal/shop has a unique phone number which the customer simply calls at the time of payment. Similar alternatives are Jalda and Paybox. All these schemes are based on customer authentication through the mobile network. Users register with a payment service that is often run by the mobile operator itself, allowing expenses to be added to the monthly phone bill. Mobile devices and protocols are advancing far beyond the ordinary GSM phone. In particular, future mobile devices with built-in smart card readers will be very interesting, as they allow the integration of real-life smart card-based electronic payment tools into the WWW.
Up to now, this section has only described payment systems based on authenticated transfers from a user's account to a merchant's account. (Anonymous) electronic cash systems, like ecash, have not become popular. The W3C has worked on a specification for a 'Common Markup for Micropayment per-fee-links'. This specification deals with the embedding of various micropayment schemes (see for example Micali and Rivest) into Web pages. Micropayments are intended for usage-based pricing. However, such schemes might never become popular, as noted by Shirky and especially by Odlyzko [72, 73]. Merchants have strategic reasons to bundle several products or services and offer them for a single price, and consumers also seem to have a strong preference for simple flat-rate pricing.
Implementation and 'Environmental' Issues
While a system can be conceptually secure, it might be insecure in practice. This section discusses practical issues that are of importance at client and/or server site.
Protection of private keys
Participating entities on the WWW have cryptographic (private) keys for different purposes. A Web server needs an SSL/TLS server private key. Users might need a private key for SSL/TLS client authentication, for SET, or for digitally signing documents. Private keys often reside on the hard disk of the machine, protected only by a fixed pass phrase. Shamir and van Someren have shown that cryptographic keys are very vulnerable in software: keys can relatively easily be spotted in memory or on the hard disk, by Trojan horses, or by an adversary who has access to the machine. This has been successfully verified by Janssens et al. on both browsers and Web servers.
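The observation of Shamir and van Someren rests on the fact that key material is far more random than the code and data surrounding it, so a simple sliding-window entropy scan locates it. A sketch:

```python
import math
import os

def window_entropy(data: bytes, start: int, size: int) -> float:
    # Shannon entropy (bits per byte) of one window of memory.
    window = data[start:start + size]
    counts = {b: window.count(b) for b in set(window)}
    return -sum(c / size * math.log2(c / size) for c in counts.values())

# Low-entropy program data with a random 32-byte key buried inside it,
# roughly the situation an attacker scanning a disk or core dump faces.
memory = b"A" * 4096 + os.urandom(32) + b"B" * 4096
size = 32
scores = [(window_entropy(memory, i, size), i)
          for i in range(0, len(memory) - size, 16)]
best_score, best_offset = max(scores)
print(best_offset)  # 4096, exactly where the key was hidden
```

The key region stands out sharply, which is why simply storing a key amidst ordinary data, even without any label, offers no protection.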
A tamper-resistant hardware module can be installed on a Web server. This module performs the private key operations required during the SSL/TLS handshake, and the private key never leaves the module. There is usually also a performance gain: the hardware module substantially accelerates the private key operations compared to operations in software, which increases the number of clients that can simultaneously establish a secure connection to the Web server.
User private keys can be stored on smart cards. However, for many e-services the investment in cards and readers might be too high compared to the expected benefits (i.e., decreased security risks; the level of security is substantially higher than when using, for example, fixed passwords). However, end-user PCs are beginning to appear with smart card readers. It will then be very interesting to rely on existing smart card applications (e.g., electronic purses or electronic identity cards), if available, possible, and appropriate. Note, however, that despite standardization efforts, there are still interoperability issues in this field.
Perfect security can never be achieved. Cryptographic hardware only (yet substantially) increases the cost of a successful attack. More advanced attacks thus remain possible, as shown, for example, by Anderson and Kuhn and by Kocher et al. More practically, even if the private key is properly protected from a tamper-resistance point of view, an adversary might simply be able to access the signing functionality of the device through its regular interface. For example, if a smart card is unlocked by typing a PIN code on the ordinary keyboard, the PIN code could be captured, or the user could be asked for the PIN code through a fake interface. Ideally, a smart card reader should therefore have its own PIN pad and a small display, so that users do not have to enter PIN codes via their computer's keyboard; users are then also able to verify crucial information on a trusted display that is not their regular computer screen. Common specifications for such secure readers are being developed. However, this still constitutes an expensive solution. Most current smart card readers therefore only consist of a slot in which the card can be inserted. This still protects the card's private key itself, but may not prevent access to the card's signing function.
Implementation and configuration
The security of a system depends heavily on its implementation and its configuration. The majority of security problems on the Web can be situated in these areas.
The security of a cryptographic protocol relies not only on the protection of private keys, but also on the secure generation of random numbers. During the SSL/TLS handshake, an initial secret is generated from which the cryptographic session keys are derived. Obviously, an adversary should not be able to guess this initial secret. However, securely generating random numbers is not a trivial task. A random number generator should be 'seeded' with as much environmental and user input as possible (not only the time of day). Ideally, random numbers should be generated by dedicated hardware in which randomness is based on physical processes. Bad random number generation was identified by Goldberg and Wagner in an early version of the Netscape browser.
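A toy example of why seeding with the time of day alone is dangerous: an attacker who knows roughly when the secret was generated can simply brute-force the seed (the key-derivation function below is invented for illustration, echoing the spirit of the early Netscape weakness):

```python
import random
import time

def weak_session_key(timestamp: int) -> bytes:
    # Flawed: the PRNG is seeded only with the (guessable) clock value.
    rng = random.Random(timestamp)
    return rng.randbytes(16)

now = int(time.time())
observed_key = weak_session_key(now)

# An attacker who knows the connection time to within a minute simply
# tries every plausible seed until the derived key matches.
recovered = None
for guess in range(now - 60, now + 61):
    if weak_session_key(guess) == observed_key:
        recovered = weak_session_key(guess)
        break
print(recovered == observed_key)  # True
```

A 128-bit key derived this way has at most a few hundred effectively possible values; in practice, operating-system entropy sources (e.g., Python's `os.urandom` or `secrets` module) should be used instead.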
The implementations of Web browsers and servers can contain bugs. An incorrect implementation of a protocol (e.g., not following the SSL/TLS protocol as specified in the standard) is, of course, a bug. Bugs can also consist of buffer overflows in the code or other flaws that lead to unexpected situations, by which security can be bypassed. Security holes are often present due to insecure configuration of Web browsers and/or servers. Particular examples are the configuration of server-side scripting and client-side mobile code. Browsers and servers are often installed with default (in)security settings, and are (mis)configured by untrained users.
The World Wide Web contains many features, some of which should be used carefully from a security point of view. For example, consider a user who accesses confidential information after having successfully authenticated to the Web server. If the information is stored on the user's machine (say, in the browser's cache), the Web-based authentication mechanism is not repeated when the same information is viewed off-line; instead, only the access controls of the operating system apply. Thus, caching of Web pages and Web-based access control are clearly incompatible.
Web browsers and servers are not the real end points in a secure Web application. The end points are the machines and the operating systems on which the browsers and servers are running. If the operating system, and in particular other network services that are enabled, contains bugs or is not securely configured, this can be exploited to circumvent the security measures provided by the browser and server applications.
Current end-user computers tend to offer more functionality at the cost of security. In particular, there is currently a lack of secure operating systems, as indicated by Loscocco et al., and a lack of deployment of those that exist. Software applications mostly have access to all resources and files, and run with the highest privileges (as opposed to the principle of least privilege). Critical trust anchors are thus not always present. Viruses, Trojan horses, worms, and other malicious programs can tamper with the installed root certificates, steal a user's private keys, spoof the user interface or mislead users in other ways, and intercept communication before it is 'securely' sent to a Web server. An industry alliance is currently working on mechanisms that will provide a trusted component and enable security on an end-user's computing platform. This effort will in particular enable a more secure creation of digital signatures. A trusted computing platform is still useless without a secure operating system. In January 2002, Bill Gates wrote an internal memo on the importance of 'trustworthy computing'. Although some people are quite sceptical, improvements in the security of widely deployed operating systems might be anticipated in the long term.
The Web server should also form a secure end point in an e-service. Appropriate measures should be taken to prevent hackers from breaking into a site. A discussion of these measures falls outside the scope of this article; detailed information can be found in Garfinkel and Spafford.
The fact that a client platform is not secure is often simply due to the end-user's lack of security knowledge. Although the typical client platform is inherently insecure, most problems could be prevented by an educated, careful, and security-conscious user. Users should keep their security authenticators private (whether these are just passwords, a list of one-time passwords, hardware tokens, or the PINs to unlock these tokens) and protect them from potential abuse. Users should install virus scanners, and keep them and their system up to date. Users should avoid practices that easily lead to security hazards; in particular, they should not start arbitrary executable attachments received via electronic mail. Users should check the fingerprints of certificates against the fingerprints that can be, and should be, found in newspapers or official paper documents provided by the administrators behind an e-service.
Those behind an e-service should inform their users about these matters; they may even have to do so in order to be able to decline responsibility in case something goes wrong and it turns out to be the user's fault. The system administrators maintaining an e-service should be properly trained in computer security, regularly monitoring security advisories (e.g., CERT) and applying software patches when required.
The previous sections have made clear that there is no such thing as perfect security. No matter the amount or strength of the security measures in place, there will always be potential security weaknesses. As some security breaches cannot be completely prevented (e.g., obtaining and misusing a user's credentials), logging and monitoring allow the administrators of an e-service to at least detect security hazards, or to find out later what exactly happened. These detection mechanisms range from passive logging to active monitoring, such as sending an alert if certain transactions do not match a user's regular profile or behaviour. Note that at the level of computer networks, this is commonly referred to as Intrusion Detection Systems (IDS).
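Active monitoring of the kind described above can be as simple as flagging transactions that deviate strongly from a user's regular profile; a sketch with an invented threshold rule (real systems use far richer models):

```python
from statistics import mean, stdev

def is_anomalous(history: list, amount: float, k: float = 3.0) -> bool:
    # Flag a transaction more than k standard deviations away from the
    # user's regular spending profile.
    mu, sigma = mean(history), stdev(history)
    return abs(amount - mu) > k * sigma

# A user's recent transaction amounts (hypothetical data).
history = [12.0, 15.5, 9.9, 14.2, 11.8, 13.0, 10.4]
print(is_anomalous(history, 13.5))   # False: within the usual range
print(is_anomalous(history, 950.0))  # True: triggers an alert
```

Such a rule cannot prevent the misuse of stolen credentials, but it lets administrators detect it quickly, which is exactly the role of monitoring in the text.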
Legal issues
Last but not least, there are important legal issues concerning Web security. Here we discuss the infamous export restrictions that have had an enormous impact on the past and current state of Web security. The scope of legal issues is, however, much wider than that. Besides export restrictions, the usage of cryptography is legally restricted in some countries. More recently, digital signature laws have been created to determine the legal scope of digital signatures. There are also legal issues concerning privacy and anonymity; revocation of anonymity is an interesting topic. Finally, we already discussed the issues of copyrighted or illegal content in the section on Content.
Historically, U.S. export restrictions prevented common browsers (those originating in the U.S.) from containing strong cryptography, or more precisely, activated strong cryptography: exported browsers did contain strong cryptography, and for Netscape, a small program was made available on the Internet to activate it. For an exported U.S. browser, the length of a symmetric cryptographic key used to be limited to 40 bits; asymmetric (RSA) keys were limited to 512 bits.
During the relaxation of these restrictions, financial institutions were allowed to enable the strong cryptography that was already present in browsers. This was technically achieved with special server certificates containing particular extensions, for which different vendors used different terminology: Microsoft's "Server Gated Cryptography" (SGC), Netscape's International "Step-Up" encryption, and VeriSign's "Global Server ID." In addition, e-services, particularly e-banking, relied on external implementations of SSL/TLS to provide strong cryptography within applets or within applications (e.g., local proxies that sit between a restricted browser and the Web server).
Since the beginning of 2000, cryptographic software can be exported from the U.S. to most other countries without restrictions. However, proprietary implementations of SSL/TLS external to the browser are still frequently used. The export restrictions still have a big impact, because export ciphersuites remain available in standard products and are enabled by default. As discussed in the section on Communications, this can still make SSL/TLS only as secure as export-grade SSL/TLS, whether or not these products also support strong cryptography.
Let us briefly summarize, and repeat some important issues.
With respect to secure communications, SSL/TLS provides a secure end-to-end channel, but nothing more. In particular, SSL/TLS does not provide non-repudiation of origin whatsoever. SSL/TLS is a mature cryptographic protocol; it has been extensively examined, and many security flaws have been fixed. However, subtle security and functionality weaknesses are still being discovered. Clearly, SSL/TLS is still - and will always be - a protocol in evolution.
Secure communications require a trusted Public Key Infrastructure. In current browsers, however, the authenticity of root certificates is difficult to guarantee, and private keys are easily exposed. More fundamentally, there is a lack of trusted components upon which security can be built. Current browsers do not have enough trusted visual indications of security, making Web spoofing a practical threat. In particular, secure logos might give users a feeling of trust, just as store fronts do in the physical world. Moreover, the operating systems on which Web applications run have been shown to be insecure, and open up ways in which security can be circumvented.
Weak mechanisms are often used, e.g., for authentication and payment, as these are easy to implement and integrate in today's browsers. Mobile code constitutes a significant security risk for the user. Java provides good protection concepts; unfortunately, these concepts often depend on users having to make trust decisions, and not all of them are present in other mobile code systems.
The privacy of the user is very difficult to protect. Without extra precautions, current Web applications expose a great deal of identifying information about the user. The user's IP address is conceptually the most challenging piece of information to hide from other entities on the WWW.
Finally, the user is often the weakest link in the system. Most users are not educated in security, and do not know how to update and configure their system; the default configuration often does not provide good security.
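The point about trusted Public Key Infrastructures can be made concrete with a present-day sketch (again using Python's standard ssl module, which postdates the article): a careful client requires certificate verification against a pre-installed root store and checks the server's hostname - and the entire trust decision ultimately rests on the authenticity of those shipped root certificates.

```python
import ssl

# A default client context enables the two checks a careful Web client
# needs: chain verification against the platform's root store, and
# matching the certificate against the hostname being contacted.
ctx = ssl.create_default_context()

print("verify_mode:", ctx.verify_mode)
print("check_hostname:", ctx.check_hostname)
# Every "secure" connection is only as trustworthy as this root list.
print("trusted roots loaded:", len(ctx.get_ca_certs()))
```

If an attacker can slip a rogue certificate into that root list - or trick the user into accepting one - the secure channel verifies successfully against the wrong party, which is exactly the Web-spoofing concern raised above.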
It should thus be clear that Web security is much more than secure communications only. It involves many security issues which - moreover - depend on each other. On the one hand, the necessary technology already exists with which e-services on the WWW can be made secure to some extent. However, this technology is not always put in place properly in today's systems. On the other hand, some important components are still missing. While the World Wide Web has already reached a certain level of maturity, there is definitely still some progress to be made before all real-life services can be securely provided in an electronic way.
About the Authors
Joris Claessens is a researcher at the COSIC (COmputer Security and Industrial Cryptography) group of the department of Electrical Engineering, Katholieke Universiteit Leuven, Belgium. Prof. Bart Preneel and Prof. Joos Vandewalle head the COSIC group. The goal of COSIC's research activities is to create an electronic equivalent for primitives in the physical world, such as confidentiality, signatures, identification, anonymity, notarization, and payments. To achieve this goal, the group of more than 20 researchers concentrates on the design, evaluation, and implementation of cryptographic algorithms and protocols, and on the development of security architectures for computer systems and telecommunications networks. COSIC performs theoretical work on cryptographic algorithms and protocols, and also integrates these solutions into different applications. COSIC provides consultancy in this area, and cooperates with many other academic research groups and companies. For more information, see http://www.esat.kuleuven.ac.be/cosic/.
Joris Claessens is funded by a research grant of the Institute for the Promotion of Innovation by Science and Technology in Flanders (IWT). This work was also supported in part by the Concerted Research Action (GOA) Mefisto-2000/06 of the Flemish Government.
2. American Express, "Private Payments," at http://www.americanexpress.com/privatepayments/, accessed 4 March 2002.
3. Ross Anderson and Markus Kuhn, 1996. "Tamper Resistance - a Cautionary Note," Proceedings of the Second USENIX Workshop on Electronic Commerce: November 18-21, 1996, Oakland, California. Berkeley, Calif.: USENIX Association, pp. 1-11, and at http://www.cl.cam.ac.uk/~mgk25/tamper.html, accessed 4 March 2002.
4. Anonymizer, at http://www.anonymizer.com/, accessed 4 March 2002.
5. Vinod Anupam and Alain Mayer, 1998. "Security of Web Browser Scripting Languages: Vulnerabilities, Attacks, and Remedies," Proceedings of the Seventh USENIX Security Symposium, January 26-29, 1998, San Antonio, Texas. Berkeley, Calif.: USENIX Association, and at http://www.usenix.org/publications/library/proceedings/sec98/full_papers/anupam/anupam.pdf, accessed 4 March 2002.
6. Paul Ashley, Mark Vandenwauver, and Joris Claessens, 1999. "Using SESAME to Secure Web Based Applications on an Intranet," In: Bart Preneel (editor). Secure Information Networks: Proceedings of the IFIP TC6/TC11 Joint Working Conference on Communications and Multimedia Security. Boston: Kluwer, pp. 303-317.
7. Banxafe, at http://www.banxafe.com/, accessed 4 March 2002.
8. Steven M. Bellovin, 1989. "Security Problems in the TCP/IP Protocol Suite," Computer Communication Review, volume 19, number 2 (April), pp. 32-48. http://dx.doi.org/10.1145/378444.378449
9. Steven M. Bellovin, 1998. "Cryptography and the Internet," In: Hugo Krawczyk (editor). Advances in Cryptology - CRYPTO'98. (Lecture Notes in Computer Science, 1462). Berlin: Springer-Verlag, pp. 46-55.
11. Tim Berners-Lee, Roy T. Fielding, and Larry Masinter, 1998. "Uniform Resource Identifiers (URI): Generic Syntax," IETF Request for Comments, RFC 2396, at http://www.ietf.org/rfc/rfc2396.txt, accessed 4 March 2002.
13. Oliver Berthold, Hannes Federrath, and Stefan Köpsell, 2001. "Web MIXes: A system for anonymous and unobservable Internet access," In: Hannes Federrath (editor). Designing Privacy Enhancing Technologies: Proceedings of the Workshop on Design Issues in Anonymity and Unobservability. (Lecture Notes in Computer Science, 2009). Berlin: Springer-Verlag, pp. 115-129.
14. Simon Blake-Wilson, Magnus Nystrom, David Hopwood, Jan Mikkelsen, and Tim Wright. "TLS Extensions," IETF Internet Draft, February 2002, at http://www.ietf.org/internet-drafts/draft-ietf-tls-extensions-03.txt, accessed 4 March 2002.
15. Daniel Bleichenbacher, 1998. "Chosen Ciphertext Attacks Against Protocols Based on the RSA Encryption Standard PKCS#1," In: Hugo Krawczyk (editor). Advances in Cryptology - CRYPTO'98. (Lecture Notes in Computer Science, 1462). Berlin: Springer-Verlag, pp. 1-12.
16. CERT Coordination Center, at http://www.cert.org/, accessed 4 March 2002.
17. David L. Chaum, 1981. "Untraceable Electronic Mail, Return Addresses, and Digital Pseudonyms," Communications of the ACM, volume 24, number 2 (February), pp. 84-88. http://dx.doi.org/10.1145/358549.358563
18. Cristian Coarfa, Peter Druschel, and Dan S. Wallach, 2002. "Performance Analysis of TLS Web Servers," Proceedings of the 2002 Network and Distributed System Security Symposium, 6-8 February 2002, Reston, Va.: Internet Society, at http://www.isoc.org/isoc/conferences/ndss/02/proceedings/papers/coarfa.pdf, accessed 4 March 2002.
20. Wei Dai. PipeNet 1.1, at http://www.eskimo.com/~weidai/pipenet.txt, accessed 4 March 2002.
22. Drew Dean and Adam Stubblefield, 2001. "Using Client Puzzles to Protect TLS," Proceedings of the 10th USENIX Security Symposium, 13-17 August 2001, Washington, D.C., Berkeley, Calif.: USENIX Association, at http://www.cs.rice.edu/~astubble/tls-usenix.pdf, accessed 4 March 2002.
23. Tim Dierks and Christopher Allen, 1999. "The TLS Protocol Version 1.0," IETF Request for Comments, RFC 2246 (January), at http://www.ietf.org/rfc/rfc2246.txt, accessed 4 March 2002.
24. Digipass, at http://www.vasco.com/, accessed 4 March 2002.
25. Donald Eastlake, Joseph Reagle, and David Solo, 2002. "XML-Signature Syntax and Processing," W3C Recommendation (February), at http://www.w3.org/TR/xmldsig-core/, accessed 4 March 2002.
26. Carl Ellison and Eric Rescorla, 2002. "The store.palm.com problem," Short discussion on the Cryptography Mailing List (January), submission address: email@example.com.
28. Edward W. Felten, Dirk Balfanz, Drew Dean, and Dan S. Wallach, 1997. "Web Spoofing: An Internet Con Game," Proceedings of the 20th National Information Systems Security Conference, 7-10 October 1997, Baltimore, Md., pp. 95-103; version of paper at http://www.cs.princeton.edu/sip/pub/spoofing.pdf, accessed 4 March 2002.
29. Edward W. Felten and Michael A. Schneider, 2000. "Timing attacks on Web privacy," In: Sushil Jajodia (editor). Proceedings of the 7th ACM Conference on Computer and Communications Security, New York: ACM Press, pp. 25-32; version at http://www.cs.princeton.edu/sip/pub/webtiming.pdf, accessed 4 March 2002.
30. Roy T. Fielding, Jim Gettys, Jeffrey C. Mogul, Henrik Frystyk, Larry Masinter, Paul Leach, and Tim Berners-Lee, 1999. "Hypertext Transfer Protocol - HTTP/1.1," IETF Request for Comments, RFC 2616 (June), at http://www.ietf.org/rfc/rfc2616.txt, accessed 4 March 2002.
31. FINREAD, "Financial Transactional IC Card Reader," CEN Workshop Agreement, CWA 14174, July 2001; see also http://www.cenorm.be/news/press_notices/smartcards.htm, accessed 4 March 2002.
32. "Advanced Encryption Standard (AES)," FIPS PUB 197, National Institute of Standards and Technology, November 2001; see http://csrc.nist.gov/encryption/aes/, accessed 4 March 2002.
33. Fortify for Netscape, at http://www.fortify.net/, accessed 4 March 2002.
34. John Franks, Phillip Hallam-Baker, Jeffrey Hostetler, Scott Lawrence, Paul Leach, Ari Luotonen, and Lawrence Stewart, 1999. "HTTP Authentication: Basic and Digest Access Authentication," IETF Request for Comments, RFC 2617 (June), at http://www.ietf.org/rfc/rfc2617.txt, accessed 4 March 2002.
35. Alan O. Freier, Philip Karlton, and Paul C. Kocher, 1996. "The SSL Protocol, Version 3.0," Internet Draft (March), at http://www.netscape.com/eng/ssl3/ssl-toc.html, accessed 4 March 2002.
36. Kevin Fu, Emil Sit, Kendra Smith, and Nick Feamster, 2001. "Dos and Don'ts of Client Authentication on the Web," Proceedings of the 10th USENIX Security Symposium, 13-17 August 2001, Washington, D.C., Berkeley, Calif.: USENIX Association, at http://www.cs.cornell.edu/People/egs/syslunch-spring02/syslunchsp02/webauth_tr.pdf, accessed 4 March 2002.
37. Eran Gabber, Phillip B. Gibbons, David M. Kristol, Yossi Matias, and Alain Mayer, 1999. "On Secure and Pseudonymous Client-Relationships with Multiple Servers," ACM Transactions on Information and System Security, volume 2, number 4 (November), pp. 390-415. http://dx.doi.org/10.1145/330382.330386
39. Bill Gates, 2002. "Trustworthy computing," Microsoft internal memo (15 January), see http://www.wired.com/news/business/0,1367,49826,00.html, accessed 4 March 2002.
41. Ian Goldberg and David Wagner, 1996. "Randomness and the Netscape Browser: How secure is the World Wide Web?" Dr. Dobb's Journal (January), at http://www.ddj.com/documents/s=965/ddj9601h/9601h.htm, accessed 4 March 2002.
42. Ian Goldberg and David Wagner, 1998. "TAZ Servers and the Rewebber Network: Enabling Anonymous Publishing on the World Wide Web," First Monday, volume 3, number 4 (April), at http://www.firstmonday.org/issues/issue3_4/goldberg/, accessed 4 March 2002.
43. Neil Haller, Craig Metz, Phil Nesser, and Mike Straw, 1998. "A One-Time Password System," IETF Request for Comments, RFC 2289 (February), at http://www.ietf.org/rfc/rfc2289.txt, accessed 4 March 2002.
44. James Hayes, 1998. "The Problem with Multiple Roots in Web Browsers: Certificate Masquerading," Proceedings of the Seventh IEEE International Workshops on Enabling Technologies: Infrastructure for Collaborative Enterprises, (WET ICE '98), 17-19 June 1998, Stanford University, Stanford, Calif. Los Alamitos, Calif.: IEEE Computer Society Press, pp. 306-311.
45. Adam Hess, Jared Jacobson, Hyrum Mills, Ryan Wamsley, Kent E. Seamons, and Bryan Smith, 2002. "Advanced Client/Server Authentication in TLS," Proceedings of the 2002 Network and Distributed System Security Symposium, 6-8 February 2002, Reston, Va.: Internet Society, at http://www.isoc.org/isoc/conferences/ndss/02/proceedings/papers/hess.pdf, accessed 4 March 2002.
46. Kipp E.B. Hickman, 1995. "The SSL Protocol (SSL 2.0)," Internet Draft (February); see also http://www.netscape.com/eng/security/SSL_2.html, accessed 4 March 2002.
48. InternetCash, at http://www.internetcash.com/, accessed 4 March 2002.
50. Markus Jakobsson and Susanne Wetzel, 2001. "Security Weaknesses in Bluetooth," In: David Naccache (editor). Topics in Cryptology, CT-RSA 2001: The Cryptographers' Track at RSA Conference 2001. San Francisco, Calif., 8-12 April 2001. (Lecture Notes in Computer Science, 2020). Berlin: Springer-Verlag, pp. 176-191.
52. Dirk Janssens, Ronny Bjones, and Joris Claessens, 2000. "KeyGrab TOO - The search for keys continues ...," Utimaco White Paper (December), at http://www.utimaco.com/, accessed 4 March 2002.
53. Per Kaijser, 1998. "A Review of the SESAME Development," In: Ed Dawson and Colin Boyd (editors). Proceedings of the Third Australasian Conference on Information Security and Privacy. ACISP'98, Brisbane, Australia, 13-15 July 1998. (Lecture Notes in Computer Science, 1438). Berlin: Springer-Verlag, pp. 1-8.
55. Paul Kocher, Joshua Jaffe, and Benjamin Jun, 1999. "Differential Power Analysis," In: Michael Wiener (editor). Advances in Cryptology - CRYPTO'99. 9th Annual International Cryptology Conference, Santa Barbara, Calif., 15-19 August 1999. (Lecture Notes in Computer Science, 1666). Berlin: Springer-Verlag, pp. 388-397.
56. John Kohl and Clifford Neuman, 1993. "The Kerberos Network Authentication Service (V5)," IETF Request for Comments, RFC 1510 (September), at http://www.ietf.org/rfc/rfc1510.txt, accessed 4 March 2002.
57. Bert-Jaap Koops. "Crypto Law Survey," at http://cwis.kub.nl/~frw/people/koops/lawsurvy.htm, accessed 4 March 2002.
58. Hugo Krawczyk, 2001. "The Order of Encryption and Authentication for Protecting Communications (or: How Secure is SSL?)," In: Joe Kilian (editor). Advances in Cryptology - CRYPTO 2001. 21st Annual International Cryptology Conference, Santa Barbara, Calif., 19-23 August 2001. (Lecture Notes in Computer Science, 2139). Berlin: Springer-Verlag, pp. 310-331.
59. David M. Kristol, 2001. "HTTP Cookies: Standards, Privacy, and Politics," ACM Transactions on Internet Technology, volume 1, number 2 (November), pp. 151-198. http://dx.doi.org/10.1145/502152.502153
60. Leslie Lamport, 1981. "Password Authentication with Insecure Communication," Communications of the ACM, volume 24, number 11 (November), pp. 770-772. http://dx.doi.org/10.1145/358790.358797
61. Peter A. Loscocco, Stephen D. Smalley, Patrick A. Muckelbauer, and Ruth C. Taylor, 1998. "The Inevitability of Failure: The Flawed Assumption of Security in Modern Computing Environments," Proceedings of the 21st National Information Systems Security Conference, pp. 303-314, and at http://csrc.nist.gov/nissc/1998/proceedings/paperF1.pdf, accessed 4 March 2002.
62. David Martin and Andrew Schulman, 2002. "Deanonymizing Users of the SafeWeb Anonymizing Service," Boston University Computer Science Department Technical Report 2002-003 (11 February), at http://www.cs.bu.edu/techreports/pdf/2002-003-deanonymizing-safeweb.pdf, accessed 4 March 2002.
63. MasterCard, Secure Payment Application (SPA), at http://www.mastercardintl.com/, accessed 4 March 2002.
64. Adrian McCullagh and William Caelli, 2000. "Non-Repudiation in the Digital Environment," First Monday, volume 5, number 8 (August), at http://www.firstmonday.org/issues/issue5_8/mccullagh/, accessed 4 March 2002.
65. Gary McGraw and Edward W. Felten, 1999. Securing Java: Getting Down to Business with Mobile Code. New York: Wiley, and at http://www.securingjava.com/, accessed 4 March 2002.
66. Alfred J. Menezes, Paul C. van Oorschot, and Scott A. Vanstone, 1997. Handbook of Applied Cryptography. Boca Raton, Fla.: CRC Press, and at http://www.cacr.math.uwaterloo.ca/hac/, accessed 4 March 2002.
67. Silvio Micali and Ronald L. Rivest, 2002. "Micropayments Revisited," In: Bart Preneel (editor). Topics in Cryptology: Proceedings of the Cryptographers' Track at the RSA Conference 2002. (Lecture Notes in Computer Science, 2271). Berlin: Springer-Verlag, pp. 149-163.
68. Microsoft, 2001. "Erroneous VeriSign-Issued Digital Certificates Pose Spoofing Hazard," Microsoft Security Bulletin MS01-017 (March), at http://support.microsoft.com/default.aspx?scid=kb;EN-US;q293818, accessed 4 March 2002.
69. Mint, at http://www.mint.nu/, accessed 4 March 2002.
70. John C. Mitchell, Vitaly Shmatikov, and Ulrich Stern, 1998. "Finite-State Analysis of SSL 3.0," Proceedings of the Seventh USENIX Security Symposium, January 26-29, 1998, San Antonio, Texas. Berkeley, Calif.: USENIX Association, and at http://www.usenix.org/publications/library/proceedings/sec98/full_papers/mitchell/mitchell.pdf, accessed 4 March 2002.
71. Robert Morris and Ken Thompson, 1979. "Password Security: A Case History," Communications of the ACM, volume 22, number 11 (November), pp. 594-597. http://dx.doi.org/10.1145/359168.359172
72. Andrew Odlyzko, 2000. "The history of communications and its implications for the Internet," at http://www.dtc.umn.edu/~odlyzko/doc/history.communications0.pdf, accessed 4 March 2002.
73. Andrew Odlyzko, 2001. "Economics and Cryptography," 2001 IACR Distinguished Lecture, presented 8 May 2001 at EUROCRYPT 2001, in Innsbruck, Austria; see http://www.iacr.org/publications/dl/odlyzko01/odlyzko01.html, accessed 4 March 2002.
74. Lawrence C. Paulson, 1999. "Inductive Analysis of the Internet Protocol TLS," ACM Transactions on Information and System Security, volume 2, number 3 (August), pp. 332-351. http://dx.doi.org/10.1145/322510.322530
75. Paybox, at http://www.paybox.de/, accessed 4 March 2002.
76. PayPal, at http://www.paypal.com/, accessed 4 March 2002.
77. Pino Persiano and Ivan Visconti, 2000. "User Privacy Issues Regarding Certificates and the TLS Protocol: The Design and Implementation of the SPSL Protocol," In: Sushil Jajodia (editor). Proceedings of the 7th ACM Conference on Computer and Communications Security, New York: ACM Press, pp. 53-62.
78. Proton, at http://www.protonworld.com/, accessed 4 March 2002.
79. Michael G. Reed, Paul F. Syverson, and David M. Goldschlag, 1998. "Anonymous Connections and Onion Routing," IEEE Journal on Selected Areas in Communications, volume 16, number 4 (May), pp. 482-494. http://dx.doi.org/10.1109/49.668972
80. Michael K. Reiter and Aviel D. Rubin, 1998. "Crowds: Anonymity for Web Transactions," ACM Transactions on Information and System Security, volume 1, number 1 (November), pp. 66-92. http://dx.doi.org/10.1145/290163.290168
82. Eric Rescorla and Allan M. Schiffman, 1999. "The Secure HyperText Transfer Protocol," IETF Request for Comments, RFC 2660 (August), at http://www.ietf.org/rfc/rfc2660.txt, accessed 4 March 2002.
83. RSA Laboratories. "RSA Cryptography Standard. Public-Key Cryptography Standard," PKCS#1, January 2001, and at http://www.rsasecurity.com/rsalabs/pkcs/, accessed 4 March 2002.
84. Aviel D. Rubin and Daniel E. Geer Jr., 1998. "A Survey of Web Security," Computer, volume 31, number 9 (September), pp. 34-41. http://dx.doi.org/10.1109/2.708448
85. Berry Schoenmakers, 1998. "Basic Security of the ecash Payment System," In: Bart Preneel and Vincent Rijmen (editors). Computer Security and Industrial Cryptography: State of the Art and Evolution. (Lecture Notes in Computer Science, 1528). Berlin: Springer-Verlag, pp. 342-356.
86. SecurID, at http://www.rsasecurity.com/, accessed 4 March 2002.
87. SET Secure Electronic Transaction LLC, "SET Secure Electronic Transaction Specification," at http://www.setco.org/, accessed 4 March 2002.
88. Hovav Shacham and Dan Boneh, 2001. "Improving SSL Handshake Performance via Batching," In: David Naccache (editor). Topics in Cryptology, CT-RSA 2001: The Cryptographers' Track at RSA Conference 2001. San Francisco, Calif., 8-12 April 2001. (Lecture Notes in Computer Science, 2020). Berlin: Springer-Verlag, pp. 28-43.
89. Hovav Shacham and Dan Boneh, 2002. "Fast-Track Session Establishment for TLS," Proceedings of the 2002 Network and Distributed System Security Symposium, 6-8 February 2002, Reston, Va.: Internet Society; abstract at http://www.isoc.org/isoc/conferences/ndss/02/final.shtml, accessed 4 March 2002.
90. Adi Shamir and Nicko van Someren, 1999. "Playing 'Hide and Seek' with Stored Keys," In: Matthew Franklin (editor). Financial cryptography: Third International Conference, FC '99, Anguilla, British West Indies, February 22-25, 1999: Proceedings. (Lecture Notes in Computer Science, 1648). Berlin: Springer-Verlag, pp. 118-124.
91. Clay Shields and Brian Neil Levine, 2000. "A Protocol for Anonymous Communication Over the Internet," In: Sushil Jajodia (editor). Proceedings of the 7th ACM Conference on Computer and Communications Security, New York: ACM Press, pp. 33-42.
92. Clay Shirky, 2000. "The Case Against Micropayments," The O'Reilly Network (19 December), at http://www.openp2p.com/pub/a/p2p/2000/12/19/micropayments.html, accessed 4 March 2002.
94. Marc Slemko, 2001. "Microsoft Passport to Trouble," at http://alive.znep.com/~marcs/passport/, accessed 4 March 2002.
95. Adrian Spalka, Armin B. Cremers, and Hanno Langweg, 2001. "Protecting the Creation of Digital Signatures with Trusted Computing Platform Technology Against Attacks by Trojan Horse Programs," In: Michel Dupuy and Pierre Paradinas (editors). Trusted information: The New Decade Challenge: IFIP TC11 16th International Conference on Information Security (IFIP/Sec'01), June 11-13, 2001, Paris, France. Boston: Kluwer, pp. 403-419.
96. Trusted Computing Platform Alliance (TCPA), at http://www.trustedpc.org/, accessed 4 March 2002.
97. Miriam van Dellen. "Anonymity Law Survey," at http://rechten.kub.nl/anonymity/, accessed 4 March 2002.
98. Simone van der Hof. "Digital Signature Law Survey," at http://rechten.kub.nl/simone/ds-lawsu.htm, accessed 4 March 2002.
100. Visa, "3-D Secure Authenticated Payment Program," at http://international.visa.com/, accessed 4 March 2002.
101. World Wide Web Consortium, 1999. "Common markup for micropayment per-fee-links," W3C Working Draft (25 August), at http://www.w3.org/TR/WD-Micropayment-Markup/, accessed 4 March 2002.
102. World Wide Web Consortium. "Extensible Markup Language (XML)," http://www.w3.org/XML/, accessed 4 March 2002.
103. World Wide Web Consortium. "HyperText Markup Language (HTML)," http://www.w3.org/MarkUp/, accessed 4 March 2002.
104. World Wide Web Consortium. "Platform for Internet Content Selection (PICS)," http://www.w3.org/PICS/, accessed 4 March 2002.
105. World Wide Web Consortium. "Platform for Privacy Preferences (P3P)," http://www.w3.org/P3P/, accessed 4 March 2002.
106. David Wagner and Bruce Schneier, 1996. "Analysis of the SSL 3.0 protocol," Proceedings of the Second USENIX Workshop on Electronic Commerce: November 18-21, 1996, Oakland, Calif. Berkeley, Calif.: USENIX Association, pp. 29-40.
107. Marc Waldman, Aviel D. Rubin, and Lorrie Faith Cranor, 2000. "Publius: A robust, tamper-evident, censorship-resistant web publishing system," Proceedings of the Ninth USENIX Security Symposium, and at http://cs1.cs.nyu.edu/~waldman/publius/publius.pdf, accessed 4 March 2002.
108. Yougu Yuan, Eileen Zishuang Ye, and Sean Smith, 2001. "Web Spoofing 2001," Dartmouth College, Department of Computer Science, Technical Report TR2001-409 (July), at http://www.cs.dartmouth.edu/~pkilab/demos/spoofing/tr.pdf, accessed 4 March 2002.
109. Zero-Knowledge Systems. "Freedom Network," at http://www.zeroknowledge.com/, accessed 4 March 2002.
Paper received 17 February 2002; accepted 26 February 2002; revision received 4 March 2002.
Copyright ©2002, First Monday
A Tangled World Wide Web of Security Issues by Joris Claessens, Bart Preneel, and Joos Vandewalle
First Monday, volume 7, number 3 (March 2002).