Copyright and Global Libraries: Going with the Flow of Technology by Roberto Zamparelli
First Monday

The current approach to enforcing copyright restrictions on intellectual property distributed by electronic means (particularly via the Web) aims at blocking unauthorised duplication by means of increasingly sophisticated protection systems (encryption, watermarks, net-active software, etc.). The paper argues that an approach of this kind runs counter to current technological trends, and that it should eventually be replaced by a model in which unauthorised duplication is avoided because it is not convenient for the user, not because it is not possible. Such a model rejects the pay-per-view concept in favour of a relatively expensive membership fee by which the user of an `Open Global Library' acquires the personal right to unlimited downloading. The fee is anchored to a range of intrinsically non-copyable services, to discourage `eavesdropping' by non-members, and coupled with a reward scheme that distributes part of the membership fee to the authors of the intellectual properties. Implications and open problems for advertising, private enterprise and environmental protection are discussed.


A Near-Future Scenario
Authorship and Unauthorised Diffusion
Two Roads to Copy Protection
The Coercive Approach
The Global Library Model
The Online Model
The Reward System
The Physical Library as a Recycling Centre
Open Issues


We often hear that mass-copying is one of the hallmarks of our civilisation. We read this in journals, newspapers, books, and on the Internet - the very number of repetitions supposedly validates the message. Today, in an industrialised country, making near-perfect copies of a text, a piece of music, a picture, or a video is becoming easier and cheaper by the day. Naturally, this situation has very important consequences for the notions of "copyright" and "library".

In what follows, I present some highly speculative remarks on these issues. Without supporting any specific concrete solution under study, I want to examine the features of one possible path, one that in my opinion has the advantage of a relative simplicity, and promises to go with the technological and economical flow, rather than against it.

A Near-Future Scenario

To set the environment for our problem, let's make some assumptions that I don't think are unreasonable, given the current pace of technology. Consider a world in which computers are as widespread as telephones, and high-speed network connections are common and affordable. Hardware prices are down, and computers have memories capable of hosting a medium-sized movie, if properly compressed. Digital back-up media are available which can copy a symphony from a DVD (or its successor) with no quality loss. Photocopiers/scanners are likewise digital, and can be coupled with OCR software to turn a well-printed book into its electronic essence with moderate human intervention. All these capabilities already exist in some form, so my assumptions are a reasonable extrapolation of what is available.

On the other hand, nothing suggests the imminent arrival of electronic screens whose readability, cost and portability match those of common paper. Paper has its own drawbacks, of course. For one thing, it cannot display anything dynamic; second, the (environmental) cost of paper production or recycling is very high, and likely to stay there in the near future. Moreover, we know that an office armed with a laser printer is a prodigious paper-waster (setting aside the waste in energy, ink, and toner cartridges).

Authorship and Unauthorised Diffusion

Let us now imagine, in this not-so-distant future, a young, brilliant man who happens to be good at creating fiction, or essays, or software, or music, or video, or all of the above in one wild mix (that's called multimedia). His production is appreciated, and he would like to have some financial gain from this activity, perhaps even using the network to sell his work. His problem: how to prevent anything he publishes from being copied and broadcast with no return - perhaps not even fame?

The problem has two sides. One is the issue of authorship: how to ensure that your work is not stolen and presented under someone else's name. The other is simply unauthorised duplication, where nobody else pretends to be the author, but the work (or part of it) is copied and given away, for free or in a black-market network, with no gain for its creator.

The two sides are clearly interlocked, with many intermediate states. When good work is circulated without the author's permission it can become anonymous, and then it might be tempting (though not profitable) for someone else to call it his own. Vice versa, stolen works may be sold cheaply, much like any stolen goods, and thus enter the unauthorised duplication stage. Nonetheless, I believe that fraudulent authorship can still be fought with traditional tools - basically, by registering your work with some trustworthy authority, and taking legal action against anyone who claims to be the author. This is not straightforward, particularly for a work that has originally appeared on the Internet, but at least it doesn't present new problems for work that appears in more traditional formats (such as books or compact discs).

The issue of unauthorised duplication, on the contrary, requires in my view a genuine shift of perspective. One reason is that the infringement isn't typically done by a group of computer pirates, but by an army of people who act with the help of perfectly legal technology, doing something which seems the obvious use for that technology (what else are you supposed to do with three-gigabyte tapes? Back up your own notes and configuration files?). Sometimes, these "pirates" are acting in what appears to be perfectly good faith. For example, university students who photocopy large portions of a very expensive textbook for an exam are clearly doing something illegal, but typically do not feel like criminals, and in my opinion, they shouldn't be treated as such. However, they should be discouraged at least on economic grounds - if everybody purchased the book, its cost could be much lower. But there are many ways to achieve this end, and here is where the real debate begins.

Two Roads to Copy Protection

In my view, there are two main routes against unauthorised duplication. The first, an extension of the current situation, is to say that copying should not be done because it is forbidden, and then call upon technology to enforce the prohibition. Let's call this the coercive approach. The second route aims at a model in which unauthorised copying is not done because, above and beyond being forbidden, it is not convenient: obtaining a legal copy is safer, cheaper, and has added value. There may or may not then be technology to back up the prohibition - it is purely a matter of economic convenience. What is crucial, in this case, is that people should not copy illegally because copying legally should be easier. And people will do what is easier (and the network specialises in producing people who do just what is easier!). Therefore, this second route, call it the incentive approach, seems at the very least more natural. The problem is how to set up a system in which the legal way is not only the most convenient but also the most profitable solution.

The Coercive Approach

Let's first discuss what is probably the current trend: reinforcing traditional copyright with technology to reduce unauthorised copying. This has been done in a number of ways, at all levels. In the last few years, we have seen books printed with grey notes on a black background, in a size different from any paper size in use on the planet; software has swelled by design, so that only compact discs can hold it; the music industry has staunchly resisted the introduction of digital tapes. Experiments are continuing with 'digital watermarks' which can be printed on music or images to make their source identifiable (the Imprimatur Project); two-way encryption of material purchased over the Web and of credit card numbers; and one-way encryption, via special decoder boxes for 'pay per view' films. In some cases, a given work is encrypted by the distributor using the purchaser's credit card number, so that giving it away amounts to giving away one's personal code (an example is Cerberus in the UK).

I have several observations about these techniques and the coercive approach in general. Data encryption is expensive, in terms of computational resources and of time on the user's side. Some systems require special software to decrypt a product, frequently different from a user's favourite software, with different commands to learn. Other approaches require multiple passwords (how many passwords is one willing to write down and look up?). A credit-card-number-based system is no doubt a powerful deterrent to giving away software, but some users might get nervous at the idea of having important data permanently encoded in the software that they (or their colleagues) use every day. What if the computer gets stolen? Should software be kept in the safe with credit cards?

Even on the seller's side, encryption is not always an option:

We have commercial credit and debit cards. Now in a digital world that really doesn't work because the transaction costs on each credit card transaction is too high. Typically, the value of individual transactions in digital information are small figures, the cost of processing each transaction exceeds the value of the information being traded - so it is simply not going to be commercially viable.
Anne Leer, Oxford University Press: Keynote address at the Imprimatur Second Consensus Forum, February 1997

The cost of encoding financial transactions might go down if alternative means of payment, like `digital' money, are adopted (such as Digicash). However, there is political resistance both to encryption (on grounds of security) and to the use of digital cash, since it is very hard to tax transactions based on it.

A second type of objection that can be raised against the coercive approach to unauthorised copying of music, video, or written text is that no matter how heavily a piece of work is encrypted, there is always a moment at which its content must be in the clear in order to reach the user. A piece of text must eventually appear on the screen in a readable format, and at that point, laborious but simple cut-and-paste operations can produce a copy. Pictures can be screen-grabbed, and music can be recorded from a speaker jack (it will not be digital, but still of sufficiently high quality if copied to a digital medium). Writing software that intercepts a protected piece at the last instant before it reaches the user is probably not too hard, if there is motivation. The prohibitions set up by complex technology are just the kind of motivation that some people need. Some barriers are there to challenge people to break them: in some communities, unauthorised copying occurs because it is a test of a hacker's skill, not because there is any interest in what gets copied.

Consider that the electronic watermark on a work (or for that matter the identification number of a program) makes it identifiable but of course doesn't prevent people from using it. Identification is a deterrent only if there are inspections - if the object is sold, or at least used in a public place under some control. It has little effect if the copy is used in private settings, or passed on to friends who pass it on to more friends, unless of course one envisions fine-grained police controls on private homes, a scenario with disturbing implications. Surely, the extended use of networks makes checking easier: in a computer permanently connected to the Internet, a piece of software might send a signal to a given vendor, which checks whether the copy has been legally acquired or not. Arguments could be made that this sort of checking is illegal and an invasion of privacy.

Finally, there is an ambiguity at the bottom of the coercive approach: the same firm that makes software against unauthorised duplication often produces software or information that helps copying. This triggers a technological circle: software companies that want to protect their product are forced to buy elaborate protection systems, which are then voided by new copying methods (analogous examples - in order of perversity - encrypter/decrypter; virus/anti-virus; missile/anti-missile; mines/mine-detectors; and so on). The winner in this race is the one with time and resources to invest not in the product itself, but in the most advanced way to protect it. The beneficiary is not the isolated author or programmer, precisely the person who would most benefit from an inexpensive, Internet-based distribution system.

The Global Library Model

An alternative path to a world of legal, authorised duplication goes through the concept of the Open Global Library. An Open Global Library (OGL) is at the same time a source of information, a service provider, and a monitoring centre, which is accessible to users upon payment of a flat rate. Once the fee is paid, a library user should find it more convenient to get anything he or she wants from the library, rather than copying it illegally from some other source.

The Online Model

Let's explore, first of all, the online version of this model. A library user pays a flat rate to access the library. Then, whenever she wants a piece of work, she downloads it from the library to her computer (remember we are assuming a fast connection). The access is recorded by the library, and credited to the author: "Someone is reading your book." The user doesn't need to save the work - she can download it any number of times; in fact, the idea of "Net computers" - computers with little or no static memory but a very large RAM and a permanent Net connection - fits nicely with this idea: a piece of work cannot be saved (and therefore cannot be easily passed on). Presumably, the author is not credited for any subsequent download by the same user (it would be just too easy for our author to write a small program to make him the most intensely read person on the planet - overnight!). Therefore downloads by users would be recorded and logged (as in any current loan library).
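The bookkeeping described above - log every download, but credit the author only for the first download of a work by a given user - can be sketched in a few lines. This is purely an illustration; the class and method names are my own invention, not part of any proposed system:

```python
# Hypothetical sketch of an OGL download log. Each (user, work) pair
# generates an author credit at most once, so repeated downloads by the
# same user (or by the author's own script) do not inflate the count.
from collections import defaultdict

class LoanLog:
    def __init__(self):
        self._seen = set()               # (user_id, work_id) pairs already credited
        self.credits = defaultdict(int)  # work_id -> number of distinct readers

    def record_download(self, user_id: str, work_id: str) -> bool:
        """Log a download; return True only if it earns a new author credit."""
        key = (user_id, work_id)
        if key in self._seen:
            return False                 # re-download: permitted, but not re-credited
        self._seen.add(key)
        self.credits[work_id] += 1
        return True
```

The essential design choice, implied by the text, is that downloads are free and repeatable for the user while the author's tally counts distinct readers, not raw transfers.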

But why should a person become a library user? Why not exploit a friend who (network computer aside) saves the desired piece of work on his hard disk and passes it on? The answer is that the library fee does not merely grant access to products, but also to services which are by their very nature non-copyable. One of the services could be the loan of the network computer itself, another the possibility of searching for (tele)work; placing personal ads; or, participating in a time-bank (a setup to exchange services of various types among a local group). Libraries might offer valuable, category-specific searches on materials related to what a user has just loaned. Companies associated with them might lend different items that are hard to copy, such as paintings, sculptures, and bicycles. In the case of software, one obvious service is the availability of future releases, bug fixes, and online assistance.

It is important to notice that technology might still play an important role in this process. Some of the anti-copying techniques described above might still be useful, if they prove economical and - more importantly - if they do not complicate the process of obtaining a piece of work from the library: once the rate is paid, downloading must be absolutely hassle-free. What really does not fit in this model is the concept of 'pay-per-object'. While there might be different rates to participate in the global library - giving access to a wider or smaller set of services - the loan of a work must always be anchored to an additional, non-copyable service. It must be flat, not dependent on the amount of the service one receives, i.e. number of books or films taken out. Once users have subscribed, they will feel compelled to use the service that they have paid for; there is no point in getting material elsewhere. In fact, if they do, they are letting their money go to waste (a concept that can be occasionally remarked upon by appropriate advertising).

There are various additional ways to make sure users pay a given fee. At one extreme, we have the possibility of introducing a (basic level of) library fee as a tax, much like the tax one pays to have garbage collected, or the percentage of city taxes which currently goes to traditional libraries and schools. One could debate the fairness of such a 'tax on culture', since many people will end up never using it. But to a certain extent, a tax is an educational tool. In principle, it seems fairer to make garbage taxes proportional to the actual amount of garbage produced (as it is, for instance, in Germany) than to make library taxes proportional to how many books one reads. We want to educate people to produce less garbage, not to read fewer books. So perhaps a flat library tax is a valuable idea.


An additional source of financing for this hypothetical library could take the form of advertising embedded within the product one has requested.

Advertising already occurs on the Internet, both on the Web and by e-mail. Advertising by e-mail is generally considered to be an annoying and potentially disruptive behaviour. Some servers have appeared which seem to specialise in it. If these servers are not banned, the proliferation of promotional e-mail will probably cause the parallel, rapid diffusion of e-mail filtering programs. At some point promotional messages will have to become pretty sophisticated to reach their targets, perhaps to the point of being seriously bothersome and illegal.

Web advertising is more discreet - it doesn't come to you. However, in principle it is quite easy to filter this sort of advertising away, certainly much easier than filtering it from movies (where it would leave a blank screen of dubious interest), printed magazines, or billboards. It should be kept in mind that excessive advertising on the Web can be much more counterproductive than excessive advertising on paper. With a properly constructed Web ad filter in place, all ads could be removed. On the other hand, a certain amount of well-targeted and fair advertising might also be considered a service for some users. At the moment, advertising is mostly restricted to sites with high visibility (news, search engines, journals). As advertising moves to publications that reach limited but well-defined interest groups, it tends to become specialised, aimed at a specific target. To be feasible in the Open Global Library model, this poses some problems with privacy, to which I will return.

One general question about advertising: what should one advertise for? Books inside a downloaded book, and music inside a video file? But books and music can be obtained from the library at no extra cost, so where is the profit for the private investor? One possible answer is that the private investor is the library, or more exactly, one fragment of it. Various sections of the library might be leased (by the state or by other companies) to private enterprise, which might compete on the rate offered to users, on the payment to the authors, and on the type of services offered.

Business competition on the Internet is probably going to be particularly fierce (just imagine robots that automatically compare the price of the same product among retailers in the whole world). There are many issues raised by the interface between public and private in the distribution of creative works, which deserve thinking and careful modeling. It seems to me that large rate differences between private libraries could potentially resurrect unauthorised duplication, but the question is wide open. Some answers may be related to the details of a given reward scheme for authors.

The Reward System

In the OGL model, creators of a given work receive a fraction of the rates paid by the users to the library, proportional to the number of people who have downloaded their work, as recorded by the library. This is in fact a natural extension of a system already in use in the United Kingdom and in Germany, the Public Lending Right scheme, by which authors (writers, translators, and some editors or compilers) receive an annual fee (in the UK, currently 2 pence per loan, with a maximum of £6,000 per year) in proportion to the number of times a work has been loaned by a public library, as calculated from a sample.

It is easy to imagine how this factor could be adjusted in a number of creative ways. For instance, a measure of `appreciation', extracted from a questionnaire that the users could be required to fill out periodically, or some kind of "impact factor" (for scientific publications, that would be the number of reference items that mention the publication; for fiction, it might be the number of pointers to that work found on the Internet by search engines).
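The arithmetic of such a reward scheme is simple to sketch: divide a pool of membership fees among authors in proportion to their distinct-download counts, optionally scaled by an "appreciation" or impact weight, with a per-author cap as in the UK Public Lending Right scheme. All function and parameter names here are hypothetical illustrations, not part of any existing system:

```python
# Illustrative reward computation for the OGL model (names hypothetical).
def distribute_pool(pool, downloads, weights=None, cap=None):
    """Split `pool` among authors in proportion to downloads.

    downloads: {author: distinct download count}
    weights:   optional {author: appreciation/impact factor} (default 1.0)
    cap:       optional per-author maximum, as in the UK PLR scheme
    """
    weights = weights or {}
    scores = {a: n * weights.get(a, 1.0) for a, n in downloads.items()}
    total = sum(scores.values())
    if total == 0:
        return {a: 0.0 for a in downloads}
    shares = {a: pool * s / total for a, s in scores.items()}
    if cap is not None:
        shares = {a: min(v, cap) for a, v in shares.items()}
    return shares
```

Note that a cap, like the PLR's annual maximum, slightly breaks strict proportionality at the top end; whether the surplus is redistributed to other authors is one of the adjustable details of the scheme.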

It is clear that the OGL model could favour the small-time writer or musician who has trouble getting published by traditional means. Unlike a paper publisher, the Open Global Library doesn't need to make a big, risky investment to place work online in its barest typographic form. It can do so for a very reasonable amount. And unlike a traditional publisher, the library doesn't need to judge it. Judgment is up to the users, but since the users have an amazing selection to choose from, an author needs to draw attention to his work and make it worth reading. He or his agent might be willing to pay the library for advertising, as well as for additional services that increase the value of the work: careful proofreading, validation and professional typographical layout. At this point, intermediate agencies between the author and the library could become quite useful. Imagine the role of a literary magazine: some authors pay it just for their work to be read and critically evaluated. Periodically, the magazine compiles a list of `best work online' (or even advertises it). The users read the magazine, and try out its suggestions.

The Physical Library as a Recycling Centre

So far, I have disregarded a very important factor: eyes. If I download a program, a video, or a hypertext, I am definitely bound to use a monitor - but who would ever want to read "The Brothers Karamazov" from a screen? Flat-screen technology is improving readability, but even if future screens were to cost one tenth of current ones, few would be eager to bring them to the beach, take them to the bathtub, or just drop them on the floor. A few pages into an online novel, there is an almost desperate need to press the 'Print' button and "access" text on old-fashioned paper. In my opinion, printing should be both allowed and discouraged. Once a work is printed, it can be passed on to others, losing copyright control. Moreover, printing has high environmental costs. One way to discourage printing would be to tax blank tapes above a certain quality, and paper sold in sheets of a size/texture suitable for printers.

At this point, a key role could be played by the more traditional, local library. However, the library acts no longer as a book repository, but as a telecommunications and print centre. Here is an example of one possible procedure. A user needs a book and visits the local library (which, ideally, should be combined with a local telework centre, so as to maximise line usage). If she finds the book, she takes it on loan, triggering the credit to the author. If the book isn't there, but is present in another local library, a request for interlibrary loan is sent. When the nearest available printed copy is too far, or the book is urgently needed, frequently requested in the area, or just too thin to be worth the trip, the local library downloads it from the Internet, prints it, binds it, and gives it to the user. When she is done with it, she can return it to the library, where it will go into the permanent collection. Since printing is centralised, it is possible to achieve a greater efficiency than in home printing, while the fact that the book is returned saves paper in the long run. People who love scribbling on what they read can pay the cost of the print (plus a dissuasive over-charge) and keep the book.

This model has some drawbacks. After a while, libraries would fill up with publications printed out for users with special interests, which are unlikely to be taken out again (think of so many short-lived scientific publications or certain kinds of newspapers and magazines). Paper recycling is not an ideal choice, being itself a highly polluting process. Instead of printing publications of this sort in the traditional manner, printing could perhaps be done on cards of some resistant plastic material (such as polyester or cellulose acetate films), using washable ink. When the "publication" is returned to the library, the sheets are simply washed and reused.

Open Issues

A model of online distribution like the one just sketched raises a vast number of problems and open issues, some of which are actually common to different systems of online publication and trade. Privacy is an issue, since the library must keep a record of what is sent to which user. Users can, in principle, be identified by their tastes in reading and watching. Great care should be taken to keep separate the user's library code, the user's real identity, and (whenever possible) his or her Internet address.
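One possible way to keep the loan record separate from the user's identity is to have the loan log store only an opaque pseudonym, derived from the library code with a secret key held by a separate registration office; reading tastes then cannot be linked back to a real name without that office's cooperation. This is a minimal sketch of one such technique (keyed hashing), offered as an assumption of mine rather than anything proposed in the text:

```python
# Hypothetical pseudonymisation for the loan log: a keyed hash (HMAC)
# of the library code. The secret key would be held by a separate
# registration office, not by the library that keeps the loan records.
import hashlib
import hmac

def pseudonym(library_code: str, secret_key: bytes) -> str:
    """Derive a stable, opaque identifier for use in loan records."""
    return hmac.new(secret_key, library_code.encode(), hashlib.sha256).hexdigest()
```

The pseudonym is stable (so the library can still avoid double-crediting repeated downloads) yet reveals nothing about the user to anyone who lacks the key.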

Interestingly, a virtual library, unlike a material one, can do a similar process with loaned works. After the user has made a choice of, say, a book, the work could be identified in all subsequent transactions by a code that will not reveal its category. Clearly, this possibility raises some questions with respect to the diffusion of illegal material (with little control of what an author places online and of what is sent to a user, anything might go through), and for advertising. If the library 'doesn't know' what it is sending, it cannot add online advertising to match the content of the publication, not to mention the user's tastes (imagine a context in which users who have paid different rates to the library receive different doses of advertising in the same piece of work).

Another important issue is globality. How global should a global library really be? Could it be slowly scaled up, from a small, experimental server to a language-wide system? (languages are after all the only natural boundaries on the Internet). Or would this lead to a massive copyright fraud the moment a limited number of users have access to unlimited downloading, and can pass the material on to those who are not 'in' yet? What would be the reaction of traditional book retailers, and how could they recycle themselves online?

If there are several private open libraries in competition, should the same author place his work on more than one? Should he be credited twice if a reader downloads the same work first from one place, then from another? Is it possible 'to buy the same book twice'? Who is going to check and how could this verification be done without scattering private information all over the network?

All interesting, open questions.

About the Author

Roberto Zamparelli was born in Rome in 1963, and completed a degree 'Laurea in Lettere' at the University of Rome. In 1995 he obtained a Ph.D. in linguistics at the University of Rochester, in upstate New York. He is currently a Marie Curie Fellow at the Human Communication Research Centre, University of Edinburgh, with a project on how to improve publication of scientific material on the World Wide Web. His interests include the retrieval of bibliographic information from publications on the Web and the use of the Web for teaching and research in linguistics (see Layers). He is a founding member of the association La Città Invisibile.

Copyright © 1997, First Monday

Copyright and Global Libraries: Going with the Flow of Technology by Roberto Zamparelli.
First Monday, Volume 2, Number 11 - 3 November 1997

