Signs of epistemic disruption: Transformations in the knowledge system of the academic journal
First Monday

by Bill Cope and Mary Kalantzis

This article is an overview of the current state of scholarly journals, not (just) as an activity to be described in terms of its changing processes, but more fundamentally as a pivot point in a broader knowledge system. After locating journals in what we term the process of knowledge design, the article goes on to discuss some of the deeply disruptive aspects of the contemporary moment. These not only portend potential transformations in the form of the journal, but possibly also in the knowledge systems that the journal in its heritage form has supported. These disruptive forces are represented by changing technological, economic, distributional, geographic, interdisciplinary and social relations to knowledge. The article goes on to examine three specific breaking points. The first breaking point is in business models — the unsustainable costs and inefficiencies of traditional commercial publishing, the rise of open access and the challenge of developing sustainable publishing models. The second potential breaking point is the credibility of the peer review system: its accountability, its textual practices, the validity of its measures and its exclusionary network effects. The third breaking point is post–publication evaluation, centered primarily around citation or impact analysis. We argue that the prevailing system of impact analysis is deeply flawed. Its validity as a measure of knowledge is questionable: citation counts are conflated with the contribution made to knowledge, quantity is valued over quality, popularity is taken as a proxy for intellectual quality, impact is mostly measured on a short timeframe, ‘impact factors’ are aggregated for journals or departments in a way that lessens their validity further, there is a bias for and against certain article types, there are exclusionary network effects and there are accessibility distortions.
Add to this reliability defects — the types of citation counted as well as counting failures and distortions — and clearly the citation analysis system is in urgent need of renewal. The article ends with suggestions intended to contribute to discussion about the transformation of the academic journal and the creation of new knowledge systems: sustainable publishing models, frameworks for guardianship of intellectual property, criterion–referenced peer review, greater reflexivity in the review process, incremental knowledge refinement, more widely distributed sites of knowledge production and inclusive knowledge cultures, new types of scholarly text and more reliable use metrics.


The knowledge business
Forces of epistemic disruption
Breaking point 1: How knowledge is made available
Breaking point 2: Designing knowledge credibly
Breaking point 3: Evaluating knowledge, once designed
Framing knowledge futures



The knowledge business

These are some quantifiable dimensions of the academic and scholarly knowledge business: ‘In 2004 … academic publishing in the Western world was dominated by twelve publishing corporations with combined annual sales of approximately $65 billion and employing in the order of 250,000 employees’ (Peters, 2009). ‘In 2006 the top ten STM [Scientific, Technical and Medical] publishers took in 53% of the revenue in the $16.1 billion [U.S.] periodicals market’ (Shreeves, 2009). Universities spend between 0.5 percent and 1.0 percent of their budgets on journal subscriptions (Phillips, 2009). Morgan Stanley reports that academic journals have been the fastest growing media sub–sector of the past 15 years (Morgan Stanley, 2002). An analysis of Ulrich’s periodicals list shows that the number of scholarly journals increased from 39,565 in 2003 to 61,620 in 2008; of these, the number of refereed journals rose from 17,649 in 2002 to 23,973 in 2008. The number of articles per journal is up from 72 per annum in 1972 to 123 in 1995, and the average length of an article increased by 80 percent between 1975 and 2007 (Tenopir and King, 2009). Approximately 5.7 million people work in research and development worldwide, publishing on average one article per year and reading 97 articles per year. This average publication rate per R&D worker per annum has stayed steady, and the dramatic increase in articles published in recent decades is attributable to increases in the number of R&D workers (Mabe and Amin, 2002).

And here are some of the qualitative dimensions of the business of academic and scientific knowledge making: the process of publication is an integral aspect of the business of knowledge–making. Far from being a neutral conduit for knowledge, the publication system defines the social processes through which knowledge is made, and gives tangible form to knowledge. The message, of course, is by no means reducible to its medium. We take it for granted that there are knowledge referents external to knowledge representations, and that the representations are not ends in themselves. However, the representational media and the social norms of representation are as much the stuff of knowledge as the things those representations purport to represent.

This article takes the academic journal as its reference point because it is symptomatic of underlying knowledge systems. It looks at the academic journal in a moment of enormously unsettling, uncertain and perhaps also exciting times. We look at seismic stresses in the workings of the academic journal, and analyze these for signs of a deeper epistemic disruption — in the very ways we know.

But first, to define ‘know’. What do we mean by specifically scientific, academic or scholarly knowledge? After all, people are ‘knowing’ in everyday life in a whole lot of ways. Academic or scholarly knowledge has some extraordinary features. It has an intensity of focus and a concentration of intellectual energies greater than that of ordinary, everyday, commonsense or lay knowing. It relies on the ritualistic rigor and accumulated wisdoms of disciplinary communities and their practices. It entails, in short, a kind of systematicity that does not exist in casual experience. Husserl draws the distinction between ‘lifeworld’ experience and what is ‘transcendental’ about ‘science’ (Cope and Kalantzis, 2000b; Husserl, 1970). In these terms, the ‘lifeworld’ is everyday lived experience. The ‘transcendental’ of academic and scholarly knowledge stands in contradistinction to the commonsense knowing of the lifeworld, which by comparison is relatively unconscious and unreflexive. Academic and scholarly knowledge sets out to comprehend and create meanings–in–the–world which extend more broadly and deeply than the everyday, amorphous pragmatics of the lifeworld. Such knowledge is systematic, premeditated, reflective, purposeful, disciplined and open to scrutiny by a community of experts. Science is more focused and harder work than the knowing in and of the lifeworld (Kalantzis and Cope, 2008).

The knowledge representation process is integral to the making of this peculiarly academic, scientific and scholarly knowledge. It is central to its business of epistemic design. This design process has three representational moments.

D1: Available designs of knowledge. The first aspect is what we would call ‘available designs’ (Cope and Kalantzis, 2000a; Kress, 2000). The body of scholarly literature — the five million or so scholarly articles published each year, the (probably) hundred thousand books — is the tangible starting point of all knowledge work. These representational designs work at a number of levels — at one level as textual practices of describing, reporting on observations, clarifying concepts and arguing to rhetorical effect. They are also represented intertextually, at the level of bodies of knowledge, where no text sits alone but constantly draws upon and references other texts by way of conceptual distinction, or accretion of facts, or agreement on principle — among the many possibilities that fuse a work into a body of knowledge. These representational designs are the fundamental ground of all academic and scholarly knowledge work. They give tangible form to fields of interest.

D2: Designing knowledge. The second aspect is the process of ‘designing’. Available knowledge designs have a textual and intertextual morphology. These are the raw materials of already–represented knowledge or found knowledge objects. Designing is the stuff of agency, the things you do to know and the rhetorical representation of those things. It is also the stuff of communities of disciplinary practice. These practices involve certain kinds of knowledge representation: modes of argumentation, forms of reporting, descriptions of methods and data, ways of supplementing extant data, linking and distinguishing concepts, and critically reflecting on old and new ideas and facts. There is no knowledge–making of scholarly relevance without the representation of that knowledge. And that representation happens in a community of practice: with collaborators who co–author or comment upon drafts, with journal editors or book publishers who review manuscripts and send them out to referees, with referees who evaluate and comment, and then the intricacies of textual revision, checking, copy–editing and publication. Knowledge contents and the social processes of knowledge representation are inseparable.

D3: The designed: new knowledge becomes integrated into a body of knowledge. And then a third aspect of the process, ‘the (re)designed’, when a knowledge artifact joins the body of knowledge. Private rights to ownership are established through publication. These do not inhere in the knowledge itself, but in the text which represents that knowledge (copyright) or through what the representation describes (patents). Moral rights to attribution are established even when default private intellectual property rights are forgone by attaching a ‘commons’ license. On the other hand, even the most restrictive copyright license allows quoting and paraphrasing in the public domain for the purposes of discussion, review and verification. This guarantees that a privately owned text can be incorporated into a body of public knowledge and credited via citation. This is the point at which the process of designing metamorphoses into the universal library of knowledge, the repository of publicly declared knowledge, deeply interlinked by the practices of citation (Quirós and Martín, 2009). At this point, the knowledge design becomes an ‘available design’, absorbed into the body of knowledge as raw materials for others in their designing processes.

Of course, scholarly knowledge making is by no means the only secular system of social meaning and knowing in modern societies. Media, literature and law all have their own design protocols. In this article, however, we want to focus specifically on the knowledge systems of science and academe as found in the physical sciences, the applied sciences, the professions, the social sciences, the liberal arts and the humanities. We are interested in their means of production of knowledge, where the medium is not the whole message but where the textual and social processes of representation nevertheless give modern knowledge its peculiar shape and form.



Forces of epistemic disruption

Our schematic outline of the knowledge representation processes — available designs/designing/the designed — could be taken to be an unexceptional truism but for the extraordinary social and epistemic instability of this moment. This article takes journals as a touchstone as it explores the dimensions of epistemic change — some well underway, others merely signs of things to come. Here are some of the roots of epistemic shift:

E1: Technology. The most visible force of epistemic disruption is technological. We are going to start with this as the first item on our list of disruptions for its tangible obviousness, although not for its intrinsic epistemic disruptiveness. An information revolution has accompanied the digitization of text, image and sound and the sudden emergence of the Internet as a universal conduit for digital content. But the information revolution does not in itself bring about change of social or epistemic significance. Academic publishing is a case in point. The Internet–accessible PDF file makes journal articles widely and cheaply accessible, but its form simply replicates the production processes and social relations of the print journal: a one–way production process which ends in the creation of a static, stable text restricted to text and still image. This change is not enough to warrant the descriptor ‘disruptive’. The technological shift, large as it is, does not produce a change in the social processes and relations of knowledge production.

There is no deterministic relationship, in other words, between technology and social change. New technologies can be used to do old things. In fact, in their initial phases new technologies more often than not are simply put to work doing old things — albeit, perhaps, more efficiently. At most, the technological opens out social affordances, frequently in ways not anticipated by the designers of those technologies. So what is the range of affordances in digital technologies that open new possibilities for knowledge making? We can see glimpses of the possibility of new and more dynamic knowledge systems, not yet captured in the mainstream academic journal. For instance, in contrast to texts that replicate print and are structured around typographic markup, we can envisage more readily searchable and data–mineable texts structured around semantic markup (Cope and Kalantzis, 2004). In contrast to knowledge production processes which force us to represent knowledge on the page, restricting us to text and still image, we can envision a broader, multimodal body of publishable knowledge which would include material objects of knowledge that could not have been captured in print or its digital analogue: datasets, video, dynamic models, multimedia displays (Jakubowicz, 2009). Things that were formerly represented as the external raw materials of knowledge can now be represented and incorporated within the knowledge itself. And in contrast to linear, lock–step modes of dissemination of knowledge, we can see signs of possibility for scholarly knowledge in the more collaborative, dialogical and recursive forms of knowledge making already to be found in less formal digital media spaces such as wikis, blogs and other readily accessible Web site content self–management systems.
Most journals are still making PDFs, still bound to the world of print look–alike knowledge representation, but a reading of technological affordances tells us that we don’t have to replicate traditional processes of knowledge representation — digital technologies allow us to do more than that.
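The contrast between typographic and semantic markup can be made concrete with a small sketch. The element names below are hypothetical, loosely in the spirit of semantic article markup schemes such as JATS, and not drawn from any actual journal schema; the point is simply that semantically marked–up text can be queried for meaning, where typographic markup records only appearance.

```python
# Illustrative sketch (hypothetical element names): the same sentence
# marked up typographically versus semantically.
import xml.etree.ElementTree as ET

typographic = '<p><i>E. coli</i> doubled every <b>20</b> minutes.</p>'

semantic = (
    '<finding>'
    '<organism>E. coli</organism> doubled every '
    '<measurement unit="minutes">20</measurement>.'
    '</finding>'
)

def mine(xml_text, tag):
    """Pull out the text of every element carrying a given tag."""
    root = ET.fromstring(xml_text)
    return [el.text for el in root.iter(tag)]

# Semantic markup says what the text means, so it can be queried directly.
print(mine(semantic, 'organism'))      # ['E. coli']
print(mine(semantic, 'measurement'))   # ['20']

# Typographic markup only says how the text looks: the same query over
# the bold/italic version finds nothing.
print(mine(typographic, 'organism'))   # []
```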

E2: Economics. The second item on our list of potential disruptions is the economics of production. With the Internet, we’ve got used to getting a wealth of information for free. It’s not that it’s really free, because it takes human effort to create the content and physical infrastructure to manufacture, transmit and render the content — computers and storage devices and transmission networks. Actually, we’ve got used to a system of cross–subsidy, a kind of information socialism within a market economy. Wikipedia content is free because its authors donate their time and so must have other sources of income. A Google search is free because the company has copied other people’s content without permission and without payment, and then makes a business out of a free service by putting advertising beside it. Open access academic journal content is free because universities value published output and pay academics salaries to publish their work — the students and the state pay for this work. Internet–accessible content represents a profound shift in our expectations about knowledge markets, in which we traditionally bought printed content. So, when we reach a journal article on the Internet to which we don’t have subscription access and it costs US$30 or US$50, this breaks the norm of information socialism to which we have become accustomed. The rise of open access journals is only one symptom of resistance. It is estimated that 15 percent of academic journal articles are now open access (Brody, et al., 2007). Another symptom is the increasingly prevalent practice of posting preprints to discipline repositories (Shreeves, 2009).
Informal pre–publication is eroding the significance of the post–publication text as both authors and readers find the immediacy of open discipline–based repositories more powerful and relevant than eventual publication (Morris, 2009). The arXiv repository in high energy physics is a case in point (Ginsparg, 2007). In some areas, conference proceedings are becoming more important than journal articles for their immediacy — computer science is a good example of this. In other areas such as economics, in which the world can change almost overnight, reports are becoming more important than journals. And, in almost every discipline, academic authors and increasingly the institutions for which they work are insisting upon the right to post their published articles to institutional repositories or personal Web sites, either in typeset or original manuscript form. More and more, they are taking it upon themselves to do this, legally or illegally, with or without reference to the publishing agreements they have signed. Bergstrom and Lavaty (2007) report, for instance, that an Internet search turned up freely available versions of 90 percent of articles in the top fifteen economics journals. Ginsparg (2007) reports that over a third of a sample of articles from prominent biomedical journals were to be found at non–journal Web sites in 2003.

E3: More distributed knowledge making. Next in our list of disruptions is a broadening of the sites of knowledge making. Universities and conventional research institutes today face significant challenges to their historical role as producers of socially privileged knowledge. More knowledge is being produced by corporations than was the case in the past. More knowledge is being produced in hospitals, in schools, in lawyers’ offices, in business consultancies, in local government, and in amateur associations whose members are tied together by common interest. More knowledge is being produced in the networked interstices of the social Web, where knowing amateurs mix with academic professionals, in many places without distinction of rank. In these places, the logics and logistics of knowledge production are disruptive of the traditional values of scholarly work — the for–profit, protected knowledge of the corporation; the multimodal knowledge of audiovisual media; and the ‘wisdom of the crowd’ which ranks knowledge and makes it discoverable through the Internet according to its popularity. If one wanted to view these developments normatively, one could perhaps describe them as amounting to a democratization of knowledge. Or we could simply leave this as an empirical observation: knowledge is being made in more widely dispersed institutional sites.

E4: Geographic inequities. Fourth in our list of disruptions is a geography of knowledge making which unconscionably and unsustainably favors rich countries over poor, Anglophone countries over non–English speaking, intellectual centers over peripheries. The situation does not yet show significant signs of changing, but because it must change for reasons of equity, and given the more dispersed nature of this phase of globalization, it might be prudent to assume that it will change. The situation of academic publishing in Africa is bleak, and the representation of articles published by Africa–based authors in the mainstream journals world dropped between 1995 and 2005 (Smart, 2009). Knowledge making in China’s 1,000 universities, even though they are going through a phase of burgeoning growth, has yet to reach the wider world of ideas (Tchou, 2009). Meanwhile, some early signs of the globalization of knowledge making are to be seen. Multinational authorship of journal articles is on the rise (Morris, 2009).

E5: Interdisciplinarity. Fifth is the disruptive force of interdisciplinarity. Journals have traditionally been definers of disciplines or sub–disciplines, delineating the center and edges of an area of inquiry in terms of its methodological modes and subject matter. The epistemic modes that gave shape to the heritage academic journal are being broken apart today as we address the large tasks of our time — sustainability, globalization, diversity, knowledge or learning, to take just a few items on the contemporary intellectual agenda. Interdisciplinary approaches often need to be applied for reasons of principle, to disrupt the habitual narrowness of outlook of within–discipline knowledge work, to challenge the ingrained, discipline–bound ways of thinking that produce occlusion as well as insight. Interdisciplinary approaches also thrive in the interface of disciplinary and lay understandings. They are needed for the practical application of disciplined understandings to the existing world. Robust applied knowledge demands an interdisciplinary holism, the broad epistemological engagement that is required simply to be able to deal with the complex contingencies of a really integrated universe. Conventional discipline–defining journals are, in their essential boundary–drawing logic, not well suited to this challenge.

E6: Knowledge–producing, symbol–making, participatory cultures. And a final disruptive force, potentially impacting the social processes of knowledge–making themselves. If trends can be read into the broader shifts in the new, digital media, they stand to undermine the characteristic epistemic mode of authoritativeness of the heritage scholarly journal. The historical dichotomy of author and reader, creator and consumer, is everywhere being blurred. Authors blog, readers talk back, bloggers respond. Wiki users read, but also intervene to change the text if and when they feel they should. Game players become participants in narratives. iPod users create their own playlists. Digital TV viewers create their own viewing sequences. These are aspects of a general and symptomatic shift in the balance of agency in which a flat world of users replaces a hierarchical world of culture and knowledge in which a few producers create content to transmit to a mass of receivers (Cope and Kalantzis, 2007). What will academic journals be like when they escape their heritage constraints? There will be more knowledge collaborations between knowledge creators and knowledge users, in which user commentary perhaps can become part of the knowledge itself. Knowledge making will escape its linear, lock–step, beginning–to–end process. The end point will not be a singular version of record — it will be something that can be re–versioned as much as needed. Knowledge making will be more recursive, responsive, dynamic and above all, more collaborative and social than it was in an earlier modernity that paid greater obeisance to the voice of the heroically original author.

These, then, are some of the potentially profound shifts that may occur in the knowledge regime, as reflected in the representational processes of today’s academic journal. They could portend nothing less than a revolution in the shape and form of academic knowledge ecologies. But for such change to occur, something may first have to break. Using our knowledge design paradigm, we will look at specific fissures at three potential breaking points in today’s academic knowledge systems: in the availability of designs of knowledge, in the designing process, and in the ways we evaluate the significance of already–designed knowledge. At each of these three knowledge–making moments, we will examine points where fault lines are already visible, signs perhaps of imminent breaking points. We will examine open access versus commercial publishing (design availability), the peer review system (designing) and citation counts as a measure of scholarly value (the designed).



Breaking point 1: How knowledge is made available

Academic knowledge today — manifest in the textual resources that frame scholarly work — is made available in two principal modes. The first mode is at a price, and the second is for free.

A1: Knowledge for sale. Some of the players in the pay–to–use mode are small publishers or associations which operate on an essentially self–sustaining model. However, the large journal publishers make up the bulk of the journals market. Holding a monopoly position on the titles of journals, they charge excessive prices to university libraries for subscriptions, usually enjoying high profit margins in the otherwise highly competitive media sector (Morgan Stanley, 2002). These profits are related in part to an artificial scarcity created by a system in which prestige and authoritativeness attach to prestigious journals (Quirós and Martín, 2009). Exploiting this position is particularly problematic when journal companies rely on the unpaid authoring and refereeing labor of academics — this is what gives a journal its quality, not the mechanics of its production and distribution.

The prices of journals have risen rapidly over two decades. The subscription rates of economics journals, for instance, rose 393 percent between 1984 and 2001, physics by 479 percent and chemistry by 615 percent — during which time the CPI increased only by 70 percent (Edlin and Rubinfeld, 2004; McCabe, et al., 2006). Journal prices increased eight percent in 2006 and over nine percent in 2007. The average subscription price of a chemistry journal in 2007 was US$3,490, physics US$3,103, engineering US$1,919 and geography US$1,086 (Orsdel and Born, 2006; Orsdel and Born, 2008). In January 2006, the editor of the Journal of Economic Studies resigned in protest at his journal’s US$9,859 per annum subscription rate (Orsdel and Born, 2006).
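To make vivid how far these nominal rises outstrip inflation, they can be restated in real terms by deflating each by the 70 percent CPI increase over the same 1984–2001 period. This is a rough back–of–envelope calculation of our own, not a figure from the cited sources:

```python
# Illustrative arithmetic only: convert the nominal subscription-price
# rises cited above into real (CPI-adjusted) multiples, 1984-2001.
def real_multiple(nominal_pct_rise, cpi_pct_rise):
    """Nominal and CPI rises in percent -> real price multiple."""
    return (1 + nominal_pct_rise / 100) / (1 + cpi_pct_rise / 100)

cpi = 70  # percent CPI increase, 1984-2001 (as cited above)
for field, rise in [("economics", 393), ("physics", 479), ("chemistry", 615)]:
    print(f"{field}: {real_multiple(rise, cpi):.1f}x in real terms")
# economics: 2.9x, physics: 3.4x, chemistry: 4.2x
```

In other words, even after inflation, these subscription prices roughly tripled to quadrupled in real terms.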

Large publishing conglomerates have increased their subscription rates more than small academic publishers and non–profits. On average in 2005, commercial publishers charged university libraries several times as much per page as non–profit publishers (Bergstrom and Bergstrom, 2006). In an analysis of approximately 5,000 journals, Bergstrom and McAfee created a value–for–money ranking, coming to the conclusion that the six largest STM publishers mostly fall into the bad value category (74 percent on average), while an extremely low percentage of titles from the non–profits are rated as bad value — 14 percent (Orsdel and Born, 2006). McCabe, et al. (2006) found that the average ratio of 2000 to 1990 prices for non–profits is 2.03, whereas the for–profit ratio is 3.77.

Commercial journal publishing, moreover, is increasingly dominated by a handful of multinational conglomerates — six publishers control 60 percent of the scholarly journals publishing market (Peters, 2009; Willinsky, et al., 2009). Elsevier controls 2,211 journals, Springer 1,574, Blackwell 863, and John Wiley 776 (McCabe, et al., 2006). Blackwell and Wiley have since merged.

The result is what is often referred to as the ‘journals crisis’. Libraries are simply unable to afford these price hikes. The average total library budget only grew at 4.3 percent per annum between 1991 and 2002, or 58 percent in total, while journals prices grew several times faster (Edlin and Rubinfeld, 2004). This has left less money for monograph purchases, journals from smaller publishers and new journal titles. The protests from libraries have been loud. In October 2007, the Max Planck Institute, a leading European research institute, cancelled its subscription to 1,200 Springer journals, not negotiating a new agreement until February 2008 (Orsdel and Born, 2008). As well as price hikes for subscriptions, ‘bundling’ of multiple titles has also had a negative effect, tending to squeeze small and non–commercial publishers out of library purchases. According to the Association of Research Libraries, between 1986 and 2000, libraries cut the number of monographs they purchased by 17 percent, but the number of journal titles by only seven percent (Edlin and Rubinfeld, 2004).

It might have been expected that the move to electronic subscriptions would have allowed for a cheaper access option. However, a case study of ecology journals showed no reduction in prices for online–only journals (Bergstrom and Bergstrom, 2006). Discounts for online–only subscriptions average only five percent, and some of the largest publishers offer no discount at all (Dewatripont, et al., 2006; Orsdel and Born, 2006). Publishers, in other words, are still basing their charges on the economics of traditional print publishing. Not only are their profits high; their cost structures are also high, reflecting perhaps a complacency which comes with their monopoly over prestige titles. The cost of producing an article is estimated to be about US$3,400 for commercial journal publishers (Clarke, 2007). This is inexcusably high when the primary work of quality assessment and content development is done by unpaid academic authors and peer reviewers. And for this high price, the publication process often remains painfully slow (compared, for instance, to the speed of new media spaces), and the final products are not particularly visible to Internet search because they are hidden behind subscription walls.

A2: Knowledge freely available. The open access rejoinder has been strident and eloquent. ‘An old tradition and a new technology have converged to make possible an unprecedented public good’ (Open Society Institute, 2002). ‘The Internet has fundamentally changed the practical and economic realities of distributing scientific knowledge and cultural heritage’ (Berlin Declaration on Open Access to Knowledge in the Sciences and Humanities, 2003). The open access claim that academic knowledge should be made freely available through the Internet has been backed by cogent and at times impassioned argument (Bergman, 2006; Bethesda Statement on Open Access Publishing, 2003; Kapitzke and Peters, 2007; Peters, et al., 2008; Willinsky, 2006a; Willinsky, 2006b). John Willinsky speaks of the ‘access principle’:

“A commitment to the value and quality of research carries with it a responsibility to extend the circulation of such work as far as possible and ideally to all who are interested in and who might profit by it.” [1]

And in the words of Stevan Harnad:

“[S]ome think the most radical feature of post–Gutenberg journals will be the fact that they are digital and online, but that would be a much more modest development if their contents were to continue to be kept behind financial firewalls, with access denied to all who cannot or will not pay the tolls. … [T]he optimal and inevitable outcome — for scientific and scholarly research, researchers, their institutions and funders, the vast research and development industry, and the society whose taxes support science and scholarship and for whose benefits the research is conducted — will be that all published research articles will be openly accessible online, free for all would–be users Web wide.” (Harnad, 2009).

These arguments have been supported by practical initiatives to build open access infrastructure. Prominent amongst these are the Open Journal Systems software created by the U.S.–Canadian Public Knowledge Project and the DSpace open access repository software led by MIT. The online Directory of Open Access Journals indexes many thousands of open access journals, and Open J–Gate lists over a million open access articles. The Open Archives Initiative develops and promotes metadata standards to facilitate the accessibility of open access content.
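To give a concrete sense of how such metadata standards work: the Open Archives Initiative’s Protocol for Metadata Harvesting (OAI–PMH) exposes repository records through simple HTTP requests, so that any harvester can aggregate metadata across repositories. The sketch below builds such a request; the repository address is invented for illustration, while the query parameters (verb, metadataPrefix, from) belong to the protocol itself.

```python
# Minimal sketch of forming an OAI-PMH harvesting request.
# The base URL below is hypothetical; the parameters are protocol-defined.
from urllib.parse import urlencode

def oai_request_url(base_url, verb="ListRecords",
                    metadata_prefix="oai_dc", from_date=None):
    """Build an OAI-PMH request URL for a repository endpoint."""
    params = {"verb": verb, "metadataPrefix": metadata_prefix}
    if from_date:
        params["from"] = from_date  # selective (incremental) harvesting
    return base_url + "?" + urlencode(params)

url = oai_request_url("https://repository.example.edu/oai",
                      from_date="2008-01-01")
print(url)
# https://repository.example.edu/oai?verb=ListRecords&metadataPrefix=oai_dc&from=2008-01-01
```

A harvester would issue this request over HTTP and receive an XML response listing records in unqualified Dublin Core (`oai_dc`), the metadata format every OAI–PMH repository must support.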

Open access also now comes in many hues. In addition to ‘core open access journals’ (Clarke, 2007), there are today many somewhat qualified varieties of access, including delayed open access, in which articles are made freely available after a period of time, and hybrid open access journals, in which some authors or the sponsors of their research may choose to pay an additional fee to have their article made available for free (Bird and Richardson, 2009). Moreover, some 67 percent of journals allow some form of self–archiving in repositories (Shreeves, 2009), and these are color–coded by the Sherpa Romeo initiative as green (can archive pre–print and post–print), blue (can archive final draft post–refereeing), yellow (can archive pre–refereeing draft), and white (archiving not permitted).

Meanwhile, a succession of institutional mandates now support one variety of open access or another. In December 2007, the U.S. National Institutes of Health (NIH), which dispense some US$29 billion in grants resulting in some 80,000 articles annually, required grantees to provide open access to peer reviewed articles within one year of publication. In January 2008, the European Research Council announced that grant recipients must post articles and data within six months of publication. There has also been action at the university level. In 2007, Harvard University’s Faculties of Arts and Sciences voted unanimously to require faculty to retain rights to post copies of published articles on the University’s institutional repository. In the same year, 791 universities in 46 European countries voted unanimously to require open access to the results of publicly funded research (Orsdel and Born, 2008). In October 2008, the Association of American Universities and the Association of Research Libraries issued a ‘call to action’ for American universities to take responsibility for the free, online dissemination of research content. The trends, concludes Peter Suber (2007), point powerfully in favor of a shift to open access.

In this context, repositories of various sorts, both institutional and discipline–based, are growing rapidly (Shreeves, 2009). By 2007, there were one million articles in PubMed Central, developed by the U.S. National Library of Medicine. The ArXiv repository in physics, mathematics, computer science, quantitative biology and statistics contained half a million articles (Ginsparg, 2007). Research Papers in Economics contains over half a million items. To a significant degree, the development of these repositories involves the migration of content, legally and sometimes illegally, which has already been published or which is subsequently published in commercial journals (Bergstrom and Lavaty, 2007; Ginsparg, 2007).

The shift to open access scholarly journals is paralleled in many areas of cultural production and intellectual work in the era of the new, digital media. Yochai Benkler speaks of a burgeoning domain of ‘social production’ or ‘commons–based peer production’, characterized by ‘cooperative and coordinate action carried out through radically distributed, nonmarket mechanisms that do not depend on proprietary strategies’. As computers and network access have become cheap and ubiquitous, they have placed ‘the material means of information and cultural production in the hands of a significant fraction of the world’s population’. Benkler considers this to be no less than ‘a new mode of production emerging in the middle of the most advanced economies in the world’, in which ‘the primary raw materials in the information economy, unlike the industrial economy, are public goods — existing information, knowledge and culture’. The ‘emergence of a substantial component of nonmarket production at the very core of our economic engine — the production and exchange of information — … suggests a genuine limit on the extent of the market … [and] a genuine shift in direction for what appeared to be the ever–increasing global reach of the market economy and society in the past half century’ [2].

Wikipedia is a paradigmatic case of this social production. Print encyclopedias were big business. For many households in the era of print literacy, this paper monster was their largest knowledge investment. Encyclopedia entries were written by invited, professional experts. Wikipedia, by contrast, is free. It is written by anyone, knowledge professional or amateur, without pay and without distinction of rank. Academic knowledge does not fit the Wikipedia paradigm of social production and mass collaboration in a number of respects, including the non–attribution of authorship and the idea that any aspiring knowledge contributor can write, regardless of credentials. For the moment, however, we want to focus on the unpaid, non–market mode of production, a cornerstone of the case for open access journals.

Culture and information are taken out of the market economy in the paradigm of social production by theoretical fiat of their unique status as non–rivalrous goods, goods for which there is no marginal cost of providing them to an additional person. Here is Lawrence Lessig quoting Thomas Jefferson:

“He who receives an idea from me, receives instruction himself without lessening mine; as he who lights his taper at mine, receives light without darkening me. That ideas should freely spread from one to another over the globe, for the moral and mutual instruction of man, and improvement of his condition, seems to have been peculiarly and benevolently designed by nature.” [3]

Here is John Willinsky quoting economist Fritz Machlup:

“If a public or social good is defined as one that can be used by additional persons without causing any additional cost, then knowledge is such a good of the purest type.” [4]

Non–rivalrous goods are like the lighthouse, providing guidance to all ships equally, whether few or many ships happen to pass (Willinsky, 2006b). And here is Michael Peters, quoting Joseph Stiglitz:

“Knowledge is a public good because it is non–rivalrous, that is, knowledge once discovered and made public, operates expansively to defy the normal law of scarcity that governs most commodity markets.” [5]

Concludes Lessig:

“The system of control we erect for rivalrous resources (land, cars, computers) is not necessarily appropriate for nonrivalrous resources (ideas, music, expression). … Thus a legal system, or a society generally, must be careful to tailor the kind of control to the kind of resource. … The digital world is closer to the world of ideas than the world of things.” [6]

The peculiar features thus ascribed to knowledge, culture and ideas become the basis for a new and burgeoning ‘gift economy’ outside of the market (Raymond, 2001). Bauwens (2005) describes the consequent development of a ‘political economy of peer production’, marked by ‘widespread participation by equipotential participants’, as a ‘third mode of production’ different from for–profit production or public production by state–owned enterprises: ‘Its product is not exchange–value for a market, but use–value for a community of users … [who] make use–value freely accessible on a universal basis, through new common property regimes.’ Again, the sites of academic knowledge production are not like this in some important respects, for they are primarily not–for–profit or state–owned spaces and they do not by and large use or need to use the new common property regimes to which Bauwens refers.

However, one thing does carry over from the case for a political economy of peer–to–peer production, and that is the idea that knowledge should be free. With this comes a series of common assumptions about the nature of non–market motivations. In the domain of social production, social motivations displace monetary motivations [7]. Or, in Opderbeck’s words, ‘Traditional proprietary rights are supposed to incentivize innovation through the prospect of monopoly rents. The incentive to innovate in a purely open source community, in contrast, is based on “reputational” or “psychosocial” rewards’ (Opderbeck, 2007). Translated into academe, Willinsky argues that ‘the recognition of one’s peers is the principal measure of one’s contribution to a field of inquiry’. Less charitably, he calls this an ‘ego economy’ driven by ‘the necessary vanity of academic life.’ [8]

There are, however, some serious theoretical difficulties with these ideas of social production and non–rivalrous goods, and we will consider these before returning to the question of the alternative ways in which scholarly journals can be made and made available. On the question of ‘social production’: this new economy is also a kind of anti–economy. With its every inroad, it removes the economic basis for knowledge– and culture–making as a form of employment. Tens of thousands of people used to work for encyclopedia publishers, even if some of the jobs, such as that of the proverbial door–to–door salesperson, were less than ideal. Everybody who writes for Wikipedia has to have another source of income. What would happen to the global scholarly publishing industry, which in 2004 reportedly supported 250,000 employees worldwide with a US$65 billion turnover (Peters, 2007), if academics assumed collective and universal responsibility for self publishing? What would happen to scholarly associations and research institutes that have historically gained revenue from the sale of periodicals and books? An ironical consequence of a move to social production, in the much–trumpeted era of the knowledge or creative economy, would be to value knowledge making and creativity at zero while coal and corn still command a price. How do knowledge workers eat and where do they live? Without doing away with the market entirely, we are consigning a good deal of knowledge work to involuntary volunteerism, unaccountable cross–subsidy, charity or penury. We know from experience the fate of workers in other domains of unpaid labor, such as the unpaid domestic work of women and carers. Making it free means that it is exploited. In the case of the knowledge economy, the exploiters are the likes of Google, who take the unpaid work of social producers and make a fortune from it.

And on the distinction between rivalrous and non–rivalrous goods, the key theoretical problem is to base one’s case on the circumstantial aspects of knowledge consumption rather than the practical logistics of knowledge production. Rivalrous and non–rivalrous goods equally need to be made. They cost their makers labor time, time which otherwise could be spent making buildings or food. Ostensibly non–rivalrous goods also need physical spaces and tools and storage devices and distribution networks, and these all have to be made by people who need buildings and food. In these fundamental respects, knowledge or cultural goods are not so different from any other goods. In fact, knowledge and material domains are not so neatly separable. Buildings and food have design in them (and when we go to architects and restaurants we are in part purchasing intellectual property). Equally, all cultural products have to be made, delivered and rendered materially.

In this perspective, in this era of the new, digital media we might be witnessing no more than one of the old marvels of industrial capitalism — a technology that improves productivity. In the case of knowledge making, the efficiencies are so great — print encyclopedias vs. Wikipedia, celluloid movies vs. digital movies posted to YouTube, PDF journal articles vs. print journals — that we get the impression that the costs have reduced to nothing. But they have not. They have only been lowered. We have become too dazzled by the reduction in costs to notice the costs we are now paying. So low are these costs, in fact, that we can even afford to make these cultural products in our spare time, and not worry too much about giving away the fruits of our labors to companies who have found ways to exploit them in newly emerging information markets. Knowledge is a product of human labor and it needs human labor to make it available. There can never be zero costs of production and distribution of knowledge and culture, theoretical or empirical. At most, there are productivity improvements. Far from ushering in a new mode of production, the driving force is more of the same engine that over the past few centuries has made capitalism what it is.

So how do we move forward? In the most general of terms, there are two options. The first is socialism in all sectors. If knowledge and culture are to be free, so coal and corn or buildings and food must be if we are not to advantage the industries of the old economy over those of the new, to consign knowledge and culture work selectively to the gift economy. The second is to build an economics of self–sustainable, autonomous cultural production, where there is space for small stallholders (publishers, musicians, writers, knowledge workers) or where the cross–subsidies are transparent and explicit — including the economics of academic socialism in an otherwise mixed economy.

Returning now to the particularities of scholarly journals, no doubt the excessive cost of commercial journal content represents both profiteering on the part of the big publishers and lagging inefficiencies where they have not re–tooled their fundamental business processes for the digital era. Clarke (2007) estimates a production cost of US$730 for an open access journal article, compared to US$3,400 for a commercial journal article.

On the other hand, however, open access publishing is bedeviled by problems of resourcing. Where does the US$730 come from to produce the open access article? Without some kind of fee structure, open access publishing has to rely on academic volunteerism or cross–subsidy by the taxpayers or fee–paying students who support the university system. Willinsky speaks lyrically of a return to the days when authors worked beside printers to produce their books [9]. However, academics do not have all the skills or resources to be publishers. Having to be an amateur publisher adds another burden to an already demanding job. Nor is playing amateur publisher necessarily the best use of time that could otherwise be devoted to research, writing and teaching. There’s a lot of work in publishing. Someone has to provide the labor time. That time always comes at a direct or indirect cost. The problem with this ethereal ‘reputational’ economy is not that it does not have costs but that it shifts its costs, often silently and unaccountably, to places not well equipped to bear those costs or to evaluate effective and efficient resource use.

Sometimes open access publishing is forced to develop alternative, quasi–commercial funding mechanisms — in the form of author publication fees, ‘memberships’ and institutional subsidies. The Public Library of Science charges a US$1,500 author fee. Springer will make an article available in open access for an author fee of US$3,000 [10]. One could argue that the economic basis for this kind of open access knowledge system is a kind of socialism for the affluent — if you work as a professor in a big, well–resourced research university, you can afford to fund the publication of your article. You may also be able to donate some of your time to publishing, or have funding for graduate assistants who can do the publishing work. There are, in other words, key questions about the sustainability, equity and, indeed, openness of open access business models.

A3: Towards sustainable scholarly publishing. In order to develop an economics of sustainability for academic knowledge systems, it is important to make some distinctions. Academic knowledge systems are not like other content creation spaces in the new media in some important respects. They are not like Wikipedia or YouTube insofar as universities are systems of public resourcing and elaborate cross–subsidy which already fund the ideas generation process. They are not like peer–to–peer production insofar as these particular knowledge workers are paid to be such by the public or not–for–profit private institutions that pay their salaries. To this extent, involvement in the publication process is justifiable. It is a very small step to build funding for specific publication media and services into the infrastructure of universities. This, in fact, may be a new role for university libraries and rejuvenated university presses.

Light–weight, self–sustaining publication funding models can possibly be created in this space. There is no reason, given today’s digital infrastructure costs, why subscription fees should be so high, or author publication fees, or per–article purchase prices. How many academics would pay (say) US$10 per year for journal access and publication alerts? How many academics would pay (say) US$100 for rapid peer review and publication? How many students would as willingly pay US$1 for an article as they do for a song in the iTunes store? The key to today’s journals impasse may be to develop low–cost digital infrastructures and self–sustaining business models.



Breaking point 2: Designing knowledge credibly

The system of peer review is a pivotal point in the knowledge design process: the moment at which textual representations of knowledge are independently evaluated. To this point, knowledge work is of no formal significance beyond the private activities of a researcher or intellectual. Peer review is required as a critical step towards their knowledge becoming socially validated, confirmed as knowledge–of–note and made more widely available.

Critical to our argument about modern knowledge systems is that it is representations of knowledge that are being evaluated, not something that might itself be called knowledge. Knowledge is not simply made of the stuff that happened in the laboratory, or what was found in the archive, or what transpired in social observation. Rather, it is what a scholar tells us has happened or was found or transpired. And, adding a further layer of abstraction of representation away from referent, the person and context of the scholar at the point of evaluation is removed through double–blind review. The text is examined simply as that — as a representation — and the reviewer interpolates hypothetical connections between the representations and possible referents. The referee does not know the identity of the author, and thus the location of their work, their interests or motivations. All the referee is working with as they evaluate a knowledge representation is what the text itself reveals.

Here are some of the characteristic features of the peer review system: A journal editor receives a manuscript. They examine the text in order to decide on referees whose expertise, as evidenced in what they have already published, may be relevant to the content of the article to be reviewed. Referees are selected because they are qualified to review — in fact, often more qualified than the editor — and this judgment is based on the fact that the referee publishes into a proximal discourse zone. The key question is not whether they have relevant substantive knowledge so much as whether they will be able to understand the text. Refereeing also spreads the work around, creating a more distributed knowledge system than one that is publisher– and editor–centric. The identity of the author is removed, and the text is sent to more than one referee. Referees are asked to declare conflicts of interest of which the journal editor may be unaware — if they happen to be able to identify the author, or if they cannot give a work a sympathetic hearing because their understandings are diametrically opposed, for instance. The key motif of good refereeing, one of its intertextual tropes in fact, is independence and impartiality, an aura of reading a text for its intellectual merit alone, without prejudice to politics or paradigms or personal opinion. The referee promises not to disclose the paper’s contents before publication, or to disclose their identity as a referee. After reading the text, they might recommend publication without qualification, or rewriting based on their suggestions, or rejection of the paper. Whatever their judgment, referees should support their recommendations with a cogent rationale and, if the recommendation is to rewrite, specific advice. Referees of a particular work do not know each other’s identity, and so cannot conspire to agree on the worth of a text.
Multiple referees are sought in order to corroborate recommendations, in case, for instance, one referee’s judgment transpires to be unsound. When there are conflicting opinions amongst the referees, the editor may weigh the assessments of the referee reports or, if uncertain, send the text out to more referees.

Prototypes of these textual practices predate the rise of the modern academic journal. In the domain of Islamic science, Ishaq bin Ali al–Rahwi (854–931) in the ‘Ethics of a Physician’ discussed a procedure whereby a physician’s notes were examined by a council of physicians to judge whether a patient had been treated according to appropriate standards (Meyers, 2004; Spier, 2002). The scientific method of Francis Bacon in his New Organon of 1620 included a process akin to peer review, in which a reader of scientific speculations patiently reconstructs the scientist’s thoughts so that he can come to the same judgment as to the veracity of the scientist’s claim (Bacon, 1620). There are, in other words, conceptual precursors to peer review in older textual practices.

Pre–publication peer review in a form more recognizable today began to evolve as a method of scientific knowledge validation from the seventeenth century, starting with Oldenburg’s editorship of the Philosophical Transactions of the Royal Society (Biagioli, 2002; Guédon, 2001; Peters, 2007; Willinsky, 2006a). However, institutionalization of peer review processes did not become widespread until the twentieth century, whether as a consequence of having to handle increasing numbers of articles or of the need to find appropriately qualified experts as areas of knowledge became more specialized (Burnham, 1990). A more dispersed peer review process, in which referees had a degree of independence from the journal editor, was not widely applied until the photocopier became readily accessible from the late 1950s (Spier, 2002).

There is some evidence, however, that this may be a moment of decline in peer review, in part for the most practical of reasons. In the forms in which it has been practiced in conventional publishing processes, peer review is slow. This is one of the principal reasons why repositories have been growing rapidly — as sites for faster publication of scholarly content. It is estimated that only 13 percent of material in institutional repositories has been peer reviewed (Shreeves, 2009). In the physics community, for instance, the ArXiv repository has become tremendously important. ArXiv does not arrange or require peer review, and preprints published there may or may not be subsequently submitted for peer review. To be able to post content at ArXiv, all you require is the endorsement of a current contributor, a process of some concern insofar as it creates a kind of private club in which the substantive scholarly criteria for membership are not explicitly spelt out. The repository’s founder, Paul Ginsparg, also speaks of ‘heuristic screening mechanisms’ which include the worryingly vague admonition, ‘of refereeable quality’ (Ginsparg, 2007). The processes and criteria by which ‘moderators’ determine the unacceptability of content are not spelt out.

P1: Accountability in pre–publication processes. First, to take the discursive features of the peer review process, these track the linearity and post–publication fixity of text manufacturing processes in the era of print. Peer review is at the pre–publication phase of the social process of text production, drawing a clear distinction of pre– and post–publication at the moment of commitment to print. Pre–publication processes are hidden in confidential spaces, leading to publication of a text in which readers are unable to uncover the intertextuality, and thus dialogue, that went into this aspect of the process of knowledge design. The happenings in this space remain invisible to public scrutiny and thus unaccountable. This is in most part for practical reasons — it would be cumbersome and expensive to make these processes public. In the digital era, however, the incidental recording of communicative interchanges of all sorts is pervasive and cheap, inviting in cases of public interest (of which knowledge making would surely be one) that these be made part of the public record or at least an independently auditable confidential record.

Then, in the post–publication phase, there is very little chance for dialogue which can have an impact upon the statement of record, the printed article, beyond subsequent publication of errata. Reviews, citations and subsequent articles may reinforce, or revise, or repudiate the content of the publication of record, but these are all new publications equally the products of a linear textual workflow. Moving to PDF as a digital analogue of print does very little to change this mode of textual and knowledge production.

Key flaws in this knowledge system are the lack of transparency in pre–publication processes, the lack of metamoderation or audit of referee reports or editor–referee deliberations, and the relative closure of a one–step, one–way publication process. If we posit that greater reflexivity and dialogue will make for more powerful, effective and responsive knowledge processes, then we have to say that we have as yet barely exploited the affordances of digital media. Sosteric (1996) discusses Habermas’ ideal speech situation, in which both interlocutors have equal opportunity to initiate speech, there is mutual understanding, there is space for clarification, interlocutors can use any speech act and there is equal power over the exchange. In each of these respects, the peer review process is less than ideal as a discursive framework. There are power asymmetries, identities are not revealed, dialogue between referee and author is prevented, the arbiter–editor is unaccountable, consensus is not necessarily reached, and none of these processes are open to scrutiny on the public record.

We can see some of what may be possible in the ways in which the new media integrally incorporate continuous review in their ranking and sorting mechanisms — from the simple ranking and viewing metrics of YouTube to the more sophisticated moderation and metamoderation methods of Web publishing sites such as the IT news publication Slashdot. Social evaluations of text that were practically impossible in print are now easy to perform in the digital media. Is it just habits of knowledge making practice that prevent us moving in these directions? What about setting up a more dialogical relation between authors and referees? Let the author speak to referee and editor, with or without identities revealed: How useful did you find this review? If you found it useful, perhaps you might acknowledge a specific debt? Or do you think the reviewer’s judgment might have been clouded by ideological or paradigmatic antipathy? Such dialogues are for the most part closed off by the current peer review system; at best the author takes on board some of the reviewer’s suggestions in the rewriting process, unacknowledged. Tentative experiments in open peer review, not too unlike post–publication review in a traditional publishing workflow, have been designed to grant greater recognition to the role of referees, to create greater transparency, to discourage abusive reviews and to reduce the chances of ideas being stolen by anonymous reviewers before they can be published (Rowland, 2002). Why should referees be less honest in their assessments when their identities are revealed? They may be just as honest. In fact, the cloak of anonymity has its own discursive dangers, including non–disclosure of interests, unfairly motivated criticisms and theft of ideas. In the new media, too, reviewers can be ranked by people whose work has been reviewed, and their reviews in turn ranked and weighted for their credibility in subsequent reviews.
This is roughly how trusted super–author/reviewers emerge in Wikipedia. There could also be multiple points of review, blurring the pre– and post–publication distinction. Initial texts can be published earlier, and re–versioning can occur indefinitely. In this way, published texts need not ossify, and the lines of their development could be traced because changes are disclosed in a publicly accessible record of versions. These are some of the discursive possibilities that the digital allows, all of which may make for more open, dynamic and responsive knowledge dialogue, where the speed of the dialogue is not slowed down by the media in which it is carried.

P2: Textual practices. The second major flaw in the traditional peer review process, and a flaw that need not exist in the world of digital media, is in the textual form of the article itself. Here is a central contradiction in its mode of textuality: the canonical scholarly article speaks in a voice of empirical transparency, paradigmatic definitiveness and rhetorical neutrality — this last oxymoron capturing precisely a core contradiction, epistemic hypocrisy even. For the textual form of the article abstracts knowledge away from its reference points. The article does not contain the data; rather, it refers to the data or suggests how the author’s results could be replicated. The article is not the knowledge, or even a direct representation of knowledge — it is a rhetorical re–presentation of knowledge. This practically has to be the case for print and print look–alikes. But in the digital world, there is very little cost in presenting full datasets along with their interpretation, a complete record of the observations in point alongside replicable steps–in–observation, the archive itself alongside its exegesis. Referees, in other words, in the era of digital recording could not only review the knowledge representation, but come a good deal closer to the world to which those representations point, in the form of immediate recordings of that world. This can occur multimodally through the amalgamation of datasets, still image, moving image and sound with text: captions, tags, narrative glosses. There are no page constraints (shape and textual form) or page limits (size and extent) in the digital record. This brings the reviewer into a different relation to the knowledge itself, more able to judge the relations between the purported knowledge and its textual forms, and for this reason also more able to make a contribution to its shaping as represented knowledge.
This would also allow a greater deal of transparency in the dialectics of the empirical record and its interpretation. It may also lead to a more honest separation of represented data from the interpretative voice of the author, thus creating a more open and plausible environment for empirical work. In a provocative and widely cited article, John Ioannidis (2005) argues that ‘most published research findings are false’. Exposing data would invite critical reinterpretation of represented results and reduce the rates and margins of error in the published knowledge record.

P3: Peer review measures. A third major flaw in the heritage peer review process is its validity. What does the peer review system purport to measure? Ostensibly it evaluates the quality of a contribution to knowledge (Jefferson, et al., 2002; Wager and Jefferson, 2001). But precisely what are the rubrics of knowledge? In today’s review system these are buried in the under–articulated depths of initiation into a peer community. Mostly, review is just a three–point scale — accept, accept with revisions, reject — accompanied by an open–ended narrative rationale. In the review text, the tropes of objectivity can hide, although none too effectively at times, a multitude of ideological, paradigmatic and even personal agendas. These are exacerbated by the fact that referees operate under a cloak of anonymity. There are times, moreover, when the last person you want to review your work, the last person who is likely to be ‘objective’, is someone in a proximal discourse zone (Judson, 1994). For these reasons, the texts of peer review, and the judgments that are made, are often by no means valid. One possible solution to this problem is to develop explicit, general knowledge rubrics at a number of subdisciplinary, disciplinary and metadisciplinary levels, and to require that referees defend the validity of their judgments against the criteria spelt out in the rubrics. This would also have the incidental benefit of making the rules of the epistemic game explicit, and in so doing more accessible to network outsiders … which brings us to the fourth major flaw in the peer review system, its network effects.

P4: Network effects. Peer review pools generally work like this. A paper is sent to a journal editor. The editor is the initial gatekeeper, making a peremptory judgment of relevance to the area of knowledge and the quality of the work. Having passed this hurdle, the editor chooses suitable reviewers. This choice can reflect content or methodological expertise. But it can also be a choice of friends and enemies of ideas, positions and paradigms — another point of potential closure in the knowledge process. Given that referees are not paid, the bias amongst those who accept the task will broadly be established in contexts where they owe something to the patronage of the editor, or where they are friends of the editor and stand in some kind of relation of reciprocal obligation. If authors receive reviews that they consider to be unfair or plain wrong, they have no one to whom to appeal other than the editor of the journal who selected the referees in the first place — there are no independent adjudication processes and, more broadly, no processes for auditing the reliability of the journal as a knowledge validation system (Lee and Bero, 2006). The overall logic of such a system is to create self–replicating network effects in which a distributed system in fact becomes a system of informal, unstated, unaccountable power (Galloway and Thacker, 2007). Journals come to act like insider networks more than places where knowledge subsists on its merits, or at least that’s the way it often feels to outsiders. Their tendency, then, is to maintain consensus, control the field, suppress dissent, reinforce the disciplinary ramparts and support institutional and intellectual inertia (Horrobin, 1990).
The practical effect is to exclude thinkers who, regardless of their merit, may be from a non–English speaking country, or teach in a liberal arts college, or who do not work in a university, or who are young or an early career researcher, or who speak to an innovative paradigm, or who have unusual source data (Stanley, 2007). The network effect, in other words, is to exclude a lot of potentially very valuable knowledge work conducted in rich knowledge spaces.

A final note on this point: Open access publishing does not necessarily reduce these points of closure in scholarly knowledge making. The question of the cultural and epistemic openness of a knowledge system is a completely different one from the economics of its production. As we have seen, open access may even be accompanied by greater closure, in which even the heritage peer review system, whatever its defects, is eroded, to be replaced by fewer, more powerful and even less accountable gatekeepers. On the other hand, there are no reasons why self–sustaining business models might not be open in an epistemic sense. In fact, reputational economies can be more viciously closed than commercial ones because they are driven by purely ideological interests. Ironically, cultural systems grounded in material sustainability often operate in practice with less ideological prejudice. Moreover, open access journals by and large perpetuate the print analogue workflow of PDF, with all its intrinsic deficiencies as an open knowledge system. It’s important, in other words, not to conflate discussions of business models, the technologies that are used and the epistemic conditions of openness — the latter does not necessarily correlate with the former two. New resourcing models and technologies can be as closed as old ones from an epistemic point of view.



Breaking point 3: Evaluating knowledge, once designed

On a time dimension, knowledge is an iterative thing. Knowledge workers read the texts of others as reference points for their own knowledge work — to find out what has already been discovered and thought, and to determine which questions still need to be addressed. This is the basis of ‘progress’ in science and the evolution of frames of thinking. On a structural dimension, and for all the rhetorical heroism of discovery and analytical voice, knowledge is a social product. ‘Standing on the shoulders of giants’ was Isaac Newton’s famous expression. This is why there is a deep and intrinsic intertextuality to formal knowledge representations.

Citation analysis or bibliometrics has emerged over the past half century as a principal measure for ranking the value of a published piece. The more people who have cited an author and their text, it is assumed, the greater the contribution to knowledge of that author and that text. This thinking was refined in the work of Eugene Garfield in the 1950s and in the company he founded in 1961, the Institute for Scientific Information, now owned by the multinational media company Thomson Reuters (Craig and Ferguson, 2009). To undertake citation counts, you need to count all the citations made in every article. Garfield’s Science Citation Index, renamed Web of Science, or Web of Knowledge after the social sciences and humanities had been included, has since found competitors, principally Elsevier’s Scopus, CiteSeerX, and Google Scholar (Harzing and Wal, 2008; Kousha and Thelwall, 2007; Norris and Oppenheim, 2007; Schroeder, 2007).

Here is the Thomson Web of Knowledge way of calculating the value of a scholar’s work: count the number of citations during this year to articles you have published in the two preceding years and divide this by the total number of articles you have published in those two years (Meho, 2007). This, in other words, is a measure of the average number of citations your publications get in the two years after you have published. If it takes more than two years to get citations for your article, they are not counted. More recently, physicist Jorge Hirsch has invented the h–index, where h is five when you have published five articles in your career which have each received at least five citations, or 20 if you have 20 articles each cited at least 20 times. This measure is designed to evaluate whole careers and to value scholars who have produced consistently highly cited articles (Craig and Ferguson, 2009). With the rise of online journals, another increasingly used ‘impact’ metric is download counts, or the number of times an article is accessed by users (Davies, 2009). Standards for the measurement of downloads have been established by the not–for–profit COUNTER organization.
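Both metrics described above are simple arithmetic. The following sketch implements the two–year impact factor and the h–index as just defined; the citation numbers in the usage examples are invented purely for illustration.

```python
# Sketch of the two citation metrics described above.
# All per-article citation counts below are illustrative assumptions.

def impact_factor(citations_this_year, articles_last_two_years):
    """Two-year impact factor: citations received this year to items
    published in the two preceding years, divided by the number of
    items published in those two years."""
    if articles_last_two_years == 0:
        return 0.0
    return citations_this_year / articles_last_two_years

def h_index(citation_counts):
    """h is the largest n such that n papers have at least n citations each."""
    counts = sorted(citation_counts, reverse=True)
    h = 0
    for rank, cites in enumerate(counts, start=1):
        if cites >= rank:
            h = rank
        else:
            break
    return h

# A journal that published 50 articles over two years and drew
# 120 citations to them this year has an impact factor of 2.4.
print(impact_factor(120, 50))        # 2.4

# A career with per-paper citation counts [25, 8, 5, 3, 3, 1] has h = 3:
# three papers have been cited at least three times each.
print(h_index([25, 8, 5, 3, 3, 1]))  # 3
```

Note that the impact factor, as the text observes, simply discards any citations arriving more than two years after publication.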

These are the principal systems of counting used to evaluate the worth of a knowledge worker’s output, and aggregated to determine the quality of a journal or an academic department. They are poor measures indeed. We will use the two canons of assessment theory to interrogate the bases of citation measures: their validity and their reliability (Pellegrino, et al., 2001). A valid assessment is one where the evidence collected can support the interpretative burden placed upon it. The assessment, in other words, measures what it purports to measure. A reliable assessment will consistently produce the same results when repeated in the same or similar populations. The assessment, in other words, is not fraught by inaccuracy in its implementation. Citation counts and impact factors fail on both criteria.

To evaluate the validity of citation counts, we need to start with the question of what we want to measure: the value of a scholar’s work and their contribution to knowledge for the purposes of career evaluation, or assessing the intellectual quality of a journal or department. Here are some fundamental problems:

V1: The purposes of knowledge–making. All citation counts do is measure the number of mentions of a text. The ultimate utility of knowledge — actual impact — is on the broader social world, not the self–enclosed world of reciprocal naming which is a peculiar characteristic of academic networks. Citation counts measure academic network positions but not necessarily the ultimate social utility of knowledge, its originality in contributing to new knowledge, or its implications and its consequences in terms of anticipated or unanticipated applications (Browman and Stergiou, 2008).

V2: Quantity valued over quality. Although citation counts and impact factors factor the total number of publications into their denominators, numerical evaluation of academic work is still powerfully connected to the total number of articles. This produces a culture of ‘no thought unpublished’, ‘salami publishing’ of one idea at a time, and ‘honorary authorship’ where additional authors are added for at times relatively marginal association with a work. Increasing your total number of publications increases your visibility, which increases the chance that you’ll get cited. However, numbers of any sort — publication counts, citation counts or impact factors — may turn out to be a lazy shortcut in promotion, hiring and departmental review: a metric by means of which you think you can evaluate a body of publications without having to read them (Simons, 2008).

V3: Devaluing lightly cited articles. If 90 percent of cited articles are lightly cited, does it mean they have no value (Browman and Stergiou, 2008)? An article may demonstrate the strength of analysis or synthesis or data collection capacities of an active researcher. It may clarify their thinking and demonstrate their research competence. It may flow into their teaching, or be read by students and others, who it may influence. Articles may be read and used without citation, contributing to one’s background knowledge of a field. Download metrics at least come closer to actual use in the sense of readership, but they do not tell you whether the paper was actually read, or whether the downloaded item was the one the reader was looking for, or whether people come back to the same article multiple times rather than download and store (Craig and Ferguson, 2009).

V4: Is popularity a valid measure of knowledge? Perhaps most seriously, citation is not about the actual intellectual quality or social impact of a text; it is about the extent to which an author and a text have been noticed or positioned themselves to be noticed. To take some cultural analogies, according to the same logic you would have to choose as the best the largest circulation magazine at a newsstand, or the best–selling novel in a bookstore, or the hit song or record album, or the movie with the biggest box office takings. Just because any of these things is popular does not mean that it is the best quality measured in terms of the cultural canons of the domain, nor that it has the most profound social impact. There are small magazines which deal with specialist areas of interest, great novels which have not reached a mass readership, brilliant music that fits into genres without mass appeal and innovative movies which never leave the art house but deeply influence their genre. In fact, without being elitist, we could argue that the most innovative and influential works are not wildly popular, especially in the first instance. They often operate in small, specialized discourse spaces. Powerful knowledge making is more likely to be ‘unpopular’ in this sense. Popularity, in fact, is as often as not a sign that something is derivative, stooping to a lowest common denominator to reach a wide market, or tainted by promotional and positional effects. Here are some of the effects of a popularity measure of knowledge: It values work which has hooks designed to reach a broader academic audience. It values work which is fashionable and reflects conventional wisdoms over work which is innovative and unconventional. It values large fields over small (in larger fields, such as medicine, there are more things to cite, and more people who can cite you, than in smaller fields, such as zoology).
Cambridge biologist Peter Lawrence’s advice to the cynical, citation–needing biologist would be: ‘Choose the most popular species; it may be easier to publish unsound but trendy work on humans than an incisive study on a zebrafish’ (Lawrence, 2007).

V5: Limiting impact to short timeframe. The Thomson–Reuters impact factor counts citations only in the two years after an article is published. As Lawrence points out, ‘truly original work usually takes longer than two years to be appreciated — the most important paper in biology of the 20th century was cited rarely for the first ten years’ (Lawrence, 2007).

V6: Aggregated impact factors. Each year, Thomson–Reuters ranks impact factors at the level of whole journals, or at least the 9,000 journals they have selected as worthy of such a measure (Simons, 2008). Citation counts are often aggregated as a proxy for journal, departmental or university ‘impact’. This adds another layer of invalidity to the citation counts at the article or author level. The quality of authors and their papers, in other words, is evaluated by the impact of the journals in which they publish and the departments or universities to which they belong. This is even lazier than citation counts themselves. ‘Without exception’, says Stevan Harnad (2008), ‘none of these metrics can be said to have face validity’. Averaged values for journals or departments can be highly influenced by a few blockbuster articles in a particular two–year stretch. Phillip Campbell, editor of Nature, says that ‘our own internal research demonstrates how a high journal impact factor can be the skewed result of many citations of a few papers rather than the average level of the majority, reducing its value as an objective measure of an individual paper’. In 2004, Campbell says, 89 percent of Nature’s impact factor was generated by just 25 percent of its papers. As for the 75 percent whose impact was relatively low, and who thus did Nature a disservice if the journal is to be judged by its impact factor, ‘they were in disciplines with characteristically low citation rates per paper like physics, or with citation rates that are typically slow to grow, like the Earth sciences’, or simply because they were not ‘hot’ (Campbell, 2008).
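Campbell’s point about skew can be made concrete with a toy calculation (the citation numbers here are invented for illustration, not Nature’s actual figures): when a couple of blockbusters dominate a journal’s citations, the impact–factor–style mean says almost nothing about the typical paper.

```python
# Illustrative sketch (invented numbers): a journal's mean citation rate
# can be dominated by a handful of blockbuster papers, so the mean tells
# you little about the typical paper in that journal.
from statistics import mean, median

# 20 papers in a two-year window: two heavily cited, eighteen lightly cited.
citations = [300, 150] + [2] * 18

blockbuster_share = (300 + 150) / sum(citations)

print(round(mean(citations), 1))    # 24.3 -- the impact-factor-style average
print(median(citations))            # 2    -- the typical paper
print(round(blockbuster_share, 2))  # 0.93 -- two papers carry ~93% of citations
```

The mean here is more than ten times the median, which is exactly the kind of distortion Campbell describes.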

The logic of popularity as reflected in citation counts can influence editors’ decisions — they will be more likely to choose your paper if it has features which make it more likely to enhance their journal’s impact factor. Journal impact factors can also be skewed by editors who suggest during the review process the inclusion of additional citations from the journal to which the author is submitting (Craig and Ferguson, 2009). After all, it is in the interest of the author publishing in that journal that its impact factor be raised, and citing other articles in that journal will do that. Furthermore, a high impact factor as measured by citation metrics may be more the product of promotional power and positioning in the marketplace than of the quality of knowledge (Bornmann, et al., 2008). This market–popularity logic creates a closed circle in which market visibility breeds market visibility.

As an aside, another frequently used quantitative measure of journal quality is its rejection rate. The higher the rejection rate, it is assumed, the better the quality of the published article. However, not only does a high rejection rate add a level of arbitrariness to the review process — one reviewer’s mild reservations in a high rejection rate journal might lead to rejection of an excellent work — it also reduces journal quality to the contingencies of supply and demand. In the digital era, anything that meets a certain standard can readily be published. There are no fixed limits on the supply of publishing space as there were in the era of print journals — the denominator in this equation. On the other hand, the size of the numerator is no more than a function of the size of a field. Of course, journals with names as expansive as Science and Nature and with infrastructures that assure wide public exposure will have high rejection rates. But small fields may produce consistently excellent work, a high proportion of which should be published. Why should a low rejection rate cast aspersions on a journal in a specialist field?

V7: Bias favoring some genres of article. Review articles, which overview a body of research, are much more likely to be cited than new research and new thinking. Review articles are also more likely to be cited than the articles upon which they rely for their syntheses (Bornmann, et al., 2008; Meho, 2007; Simons, 2008). Citing a review article dispenses with the need for long reference lists, a particularly important consideration when word or page limits have been set (Pauly and Stergiou, 2008). Journal editors have noticed the relative impact of review articles, which explains their sixfold increase between 1991 and 2006 (Craig and Ferguson, 2009). Longer articles get cited more when they cover a broader range of issues, and a larger number of authors correlates with wider citation (Bornmann, et al., 2008).

V8: Network effects. The citation system rewards people who can forcefully work networks and find their way into journals with wider circulation, thus rewarding academic entrepreneurship ahead of intellectual content. It creates a citation barter system in which authors feel they need to mention friends, patrons, and people to whom they owe a positional debt (Lawrence, 2007). You dutifully quote leaders in the field; you don’t confront contrasting views and results openly in case the people you mention might be your reviewer or a friend of your reviewer, and also so as not to get people off side who might cite you. It is a good idea to quote people who are heavily cited in the hope that they might notice you and cite you, thus enhancing your visibility. In other words, citation metrics measure social power dynamics which are not necessarily related to criteria of intellectual quality and social impact (Bornmann, et al., 2008; Lawrence, 2007). ‘Creative discovery is not helped by measures that select for tough fighters and against more reflective modest people’, concludes Peter Lawrence (2008). This is a system that works against women, people from non–Anglophone countries, and people with ideas and data that don’t mesh well with the conventional wisdoms of those who dominate a field. Besides, the academic star system which the citation system creates, based as it is on the mass media logic of popularity rankings, is peculiarly poorly suited to a new media environment in which knowledge and cultural creation is more broadly distributed. In this sense, citation–popularity rankings track the logic of the old media world which valued economies of scale.

V9: Accessibility distortions. Studies show that open access publishing doubles research impact (Harnad, 2009). Repeated studies lauding the positive effects of open access come to similar conclusions (Brody, et al., 2007; Willinsky, 2006a). From a knowledge system point of view, this only means that increased citations are the product of easier accessibility. It does not mean that the knowledge they contain is intrinsically more impactful. In hybrid open access journals, research shows that open access articles can generate between 25 percent and 250 percent more citations than articles which are not freely available (Orsdel and Born, 2006). This means that people who can afford to pay open access author fees get more citations for their investment. In fact, electronic access generally may only serve to accentuate a herd mentality. Examining a database of 34 million articles published since 1945, Evans (2008) shows that as more articles became accessible online, either through open access or commercial subscription, the articles and journals cited tended to become fewer and more recent. His explanation? Scholars are becoming more influenced by others’ choices of citation than by a close reading of the texts on their merits. As a consequence, fields hasten to consensus and conformity.

Now to the question of the reliability of citation counts:

R1: Self and negative citation. Is citation a measure of the value of a paper? The simple assumption is that citation denotes positive impact, and that more citations are better. But not when it is self–citation. Not when it is negative citation. Not when popularity is in fact notoriety. Perspectives which may in the general view of the field be wrong–headed and extreme may be few, but they are regularly used as straw people or paradigmatic reference points when attempting to position one’s argument in an ostensibly balanced way against the range of interpretative alternatives. For instance, a small number of anti–immigration academics who work in the field of immigration studies are more regularly cited because they represent the only mentionable counterpoints in a field which is by and large populated by people who are sympathetic to immigration. The impact factors of the anti–immigration folks will be greater simply because theirs is a minority position within the field, not necessarily because their position is intellectually more defensible. Notwithstanding their impact factor, the anti–immigration views are not in fact more influential, because the rest of the field only cites them in order to disagree with them. They serve as a rhetorical foil, but for this they are rewarded with citation counts. And what happens when you don’t cite a source but cite a source which cites a source — such as a review article or the literature section in a regular article — thus failing to give credit to the origins of an idea or the original source of an idea or data? What happens when a much–cited piece proves to be wrong? ‘Your paper may … have diverted and wasted the efforts of hundreds of scientists, but [the impact factor] will still look good on your CV and may land you a job’ (Lawrence, 2007).
And what about work that you don’t cite but which has influenced you greatly, or that you deliberately don’t cite because you don’t want to support the person or position or be seen to be supporting them, or don’t cite because the work, although of general influence, is not directly relevant to your subject at hand?

R2: Have the articles even been read? A study of ecology papers showed that only 76 percent of cited articles supported the claim being made for them by the author making the citation. Another study of misprinted citations shows that perhaps only 20 percent of cited papers are read, indicating that people are citing citations rather than sources they have read. The increasing reliance on meta–analyses and review articles exacerbates this problem, particularly when the secondary article cites source articles uncritically, as if their findings were correct (Todd and Ladle, 2008).

R3: Failures to count correctly. The citation counting system is riddled with elementary inaccuracies. The proportion of incorrectly referenced items may be as high as a third, lowering the chance of a citation being counted (Todd and Ladle, 2008). ‘Homographs’ occur frequently when initials are used instead of whole first names — in reference lists as well as citation databases — which leads to a failure to distinguish scholars who have the same last name and initial (Meho, 2007). Citations are also more likely to be counted when they are in English or when an author has a conventional English name (Harzing and Wal, 2008). Here is the conclusion of the editor of Nature when his journal went to analyze the impact factor attributed to it by Thomson–Reuters’ Web of Knowledge: ‘Try as we might, my colleagues and I cannot reconcile our own counts of citable items in Nature’ (Campbell, 2008).

R4: What’s counted is what counts. The Thomson–Reuters databases include a limited number of journals, mostly English–language titles from North America and Europe (Meho, 2007). They only count citations in these journals. How are these selected? In part on the basis of general criteria of no particular relevance to impact and intellectual quality, such as timeliness of publication, and in part on highly subjective criteria, such as the stature of the members of the editorial board. A librarian colleague of ours e–mailed Thomson to ask them about their selection processes, and their answer was ‘All journal evaluations are done solely by Thomson staff. We do receive recommendations for journals from researchers but they have no part in the evaluation process.’ Given that impact factor ratings generate impact (the apparent prestige of a journal for authors and respectable citability for readers), this is an indefensibly opaque process. Given Thomson–Reuters’ position in the world of academic publishing, it could also be regarded without too much of a stretch as a case of the fox guarding the chicken coop. For all its touted openness, Google Scholar may be little better. In response to a query by a scholarly publisher asking why its twenty or so journals had not yet been indexed despite years of requests, the Google e–mail respondent simply replied: ‘we are currently unable to provide a time frame for when your content will be made available on Google Scholar’.

R5: To the extent that they work at all, citation counts work better for some fields than others. For instance, in molecular biology and biochemistry, 96 percent of citations are to journal articles and the Web of Knowledge database covers 97 percent of cited articles, resulting in a 92 percent coverage of the field. However, in the humanities and arts, only 34 percent of citations are to journal articles, of which only 50 percent are counted in the Web of Knowledge, producing a mere 17 percent coverage (Craig and Ferguson, 2009). And bibliometrics, despite its name, completely ignores books, and thus favors disciplines in which more journal articles are published over those where books are also a significant publication venue. Butler (2008) concludes that for most disciplines in the social sciences and humanities, standard bibliometric measures cannot be supported. Moreover, citation practices vary. Bornmann, et al. (2008) report on research by Podlubny which estimates that one citation in mathematics is equivalent to 15 in chemistry and 78 in clinical medicine, practically precluding analyses across fields.
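To see why raw counts cannot be compared across fields, consider a sketch that rescales citation counts using the equivalence ratios reported above. The rescaling scheme itself is a simplifying assumption for illustration, not Podlubny’s actual method, and the paper counts in the usage example are invented.

```python
# Sketch of cross-field normalization, using the equivalence ratios
# reported in the text: 1 citation in mathematics is roughly equivalent
# to 15 in chemistry and 78 in clinical medicine.

# Citations equivalent to one mathematics citation, per field.
FIELD_WEIGHT = {
    "mathematics": 1,
    "chemistry": 15,
    "clinical medicine": 78,
}

def normalized_citations(raw_count, field):
    """Express a raw citation count in mathematics-equivalent units."""
    return raw_count / FIELD_WEIGHT[field]

# On raw counts, the clinical medicine paper (390 citations) looks far
# more 'impactful' than the mathematics paper (10 citations); after
# normalization, the mathematics paper comes out ahead.
print(normalized_citations(10, "mathematics"))         # 10.0
print(normalized_citations(390, "clinical medicine"))  # 5.0
```

Any real comparison would of course need defensible field weights; the point of the sketch is only that unweighted counts systematically flatter high-citation fields.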

R6: Citation counts are a function of the size of the field. If you work in a small field (a rare medical condition, or a finely specified disability, or localized knowledge, or a small culture, or a minority interest, or a technical specialty), you will have fewer things to cite and fewer people who can cite you. But the knowledge may be just as important and have a great impact within that area of knowledge. A low citation count, then, will be a function of the size of the field, not the impact of your work (Lawrence, 2008).



Framing knowledge futures

If today’s knowledge systems are broken in places and on the verge of breaking in others, what, then, is to be done? Following is an agenda for the making of future knowledge systems which may optimize the affordances of the new, digital media.

F1: Sustainable scholarly publishing. Beyond the open access/commercial publishing dichotomy, there is a question of resourcing models and sustainability. Academics’ time is not best spent as amateur publishers. The key question here is how does one build sustainable resourcing models which neither require cross–subsidy of academics’ time, nor the unjustifiable and unsustainable costing and pricing structures of the big publishers? The challenge is to develop new business models, either in the form of academic socialism (institutional support for publishing by libraries or university presses paid for by government or institutions) or lightweight commercial models which do not charge unconscionable author fees, subscription rates or per–article purchase prices.

F2: Guardianship of intellectual property. How does one balance academics’ and universities’ interest in intellectual property with the public knowledge interest? The ‘gift economy’ also supports a ‘theft economy’ in which private companies profit from the supply of content provided at no charge. Google copies content, mostly without permission and always without payment, and makes money from advertising alongside this content. The October 2008 settlement between Google and the Authors Guild, which distributes revenues from books Google has scanned in a number of U.S. libraries, may create as many new problems as it solves older ones (Albanese, 2008). The key question here is, how does one establish an intellectual property regime which sustains intellectual autonomy, rather than a ‘give away’ economy which undervalues the work of the academy? Moreover, journal articles and scholarly monographs do not need to have one or other of the ‘free’ copyright licenses upon which many of the new domains of social production depend — the Creative Commons license (Lessig, 2001) that underwrites Wikipedia or the General Public License (Stallman, 2002a; Stallman, 2002b; Williams, 2002) that locks free or open source software and its derivatives into communal ownership (Fitzgerald and Pappalardo, 2007). This is because authors are strongly named in academic knowledge regimes — the credibility of a work is closely connected to the credentials of an author, and copyright strengthens this claim to credibility. Furthermore, the imperatives of attribution and ‘moral rights’ are rigorously maintained through academic citation systems. A (re)user of copyrighted knowledge, conversely, has extraordinary latitude in ‘fair use’, quoting and paraphrasing for the purposes of review and criticism (Saunders and Smith, 2009). A version of ‘remix culture’, to use Lessig’s portrayal of the new world of digital creativity (Lessig, 2008), has always been integral to academic knowledge systems.
However, to the extent that it is essential to build on the work of others, this is already built into conventional copyright regimes (Cope, 2001). Moreover, private author–ownership is integral to academic freedom, where authors in universities are allowed to retain individual ownership of copyright of published works, though not rights to patents or course materials (Foray, 2004). This is also why many open access journals retain traditional copyright licenses. Moreover, academics are not necessarily good stewards of these copyrights, when for instance they hand over these rights for no return to commercial publishers who subsequently sell this self–same content back to the institution for which they work, and at monopoly prices. As universities take a greater interest in content production in the regime of academic socialism, they should in all probability take a greater interest in copyright, whether that be libraries managing repositories or university presses publishing content, which they can then make available for free or sell at a reasonable price.

F3: Criterion–referenced review. What does it mean to do high quality intellectual work? Rather than unstructured commentary, we should require referees to speak to multiple criteria, and score for each: the significance of questions addressed, setting an intellectual agenda, rigor of investigation, originality of ideas, contribution to understanding, practical utility — these are some criteria that emerged in research as part of the British Research Assessment Exercise (Wooding and Grant, 2003). Or, with a more practical text focus, we might ask referees systematically to address clarity of thematic focus, relationships to the literature, research design, data quality, development or application of theory, clarity of conclusions and quality of communication. Or, with an eye to more general knowledge processes, we might ask referees to evaluate a report of intellectual work for its specifically experiential, empirical, categorical, theoretical, analytical, critical, applicable and innovative qualities. Clear disciplinary and metadisciplinary criteria will increase referees’ accountability and give outsiders an equitable opportunity to break into insider networks.

F4: Greater reflexivity and recursiveness in the peer review process. Digital technologies and new media cultures suggest a number of possibilities for renovation of the knowledge system of the scholarly journal. Open peer review where authors and referees know each other’s identities, or blind reviews that are made public, may well produce greater accountability on the part of editors and referees, and provide evidence of and credit for the contribution a referee has made to the reconstruction of a text (Quirós and Martín, 2009). Reviews could be dialogical, with or without the reviewer’s identity declared, instead of the unidirectional finality of an accept/reject/rewrite judgment. The referee could be reviewed — by authors, or moderators, or other third–party referees — and their reviews weighted for their accumulated, community–ascribed value as a referee. And whether review texts and decision dialogues are on the public record or not, they should be open to independent audit for abuses of positional power.

F5: A fluid process of incremental knowledge refinement. Instead of a lock–step march to a single point of publication, then a near irrevocable fixity to the published record, a more incremental process of knowledge recording and refinement is straightforwardly possible in the digital era. This could even end the distinction between pre–publication refereeing and post–publication review. Re–versioning would allow initial, pre–refereeing formulations to be made visible, as well as the dialogue that contributed to rewriting for publication. Then, as further commentary and reviews come in, the author could correct and reformulate, thus opening the published text to continuous improvement.

F6: More integrative, collaborative and inclusive knowledge cultures. Instead of the heroic author shepherding a text to a singular moment of publication, the ‘social Web’ and interactive potentials intrinsic to the new media point to more broadly distributed, more collaborative knowledge futures. What has been called Web 2.0 (Hannay, 2007), or the more interactive and extensively sociable application of the Internet, points to wider networks of participation, greater responsiveness to commentary, more deeply integrated bodies of knowledge and more dynamic, reflexive, responsive and faster moving knowledge cultures.

F7: More widely distributed sites of knowledge production. The effect of a more open system would be to open entry to the republic of scholarly knowledge for people currently outside the self–enclosing circles of prestigious research institutions and highly ranked journals. Make scholarly knowledge affordable to people without access through libraries to expensive institutional journal subscriptions, make the knowledge criteria explicit, add more accountability to the review process, allow all comers to get started in the process of the incremental refinement of rigorously validated knowledge, and you’ll find new knowledge — some adjudged to be manifestly sound and some not — emerging from industrial plants, schools, hospitals, government agencies, lawyers’ offices, hobbyist organizations, business consultants and voluntary groups. Digital media infrastructures make this a viable possibility.

F8: Globalizing knowledge production. Approximately one quarter of the world’s universities are in the Anglophone world. However, the vast majority of the world’s academic journal articles are from academics working in Anglophone countries. A more comprehensive and equitable global knowledge system would reduce this systemic bias. Openings in the new media include developments in machine translation and the role of knowledge schemas, semantic markup and tagging to assist discovery and access across different languages. They also speak to a greater tolerance for ‘accented’ writing in English as a non–native language.

F9: New types of scholarly text. In 1965, J.C.R. Licklider wrote of the deficiencies of the book as a source of knowledge, and imagined a future of ‘procognitive systems’ (Licklider, 1965). He was anticipating a completely new knowledge system. That system is not with us yet. In the words of Jean–Claude Guédon (2001), we are still in the era of digital incunabula. Escaping the confines of print look–alike formats, however, expansive possibilities present themselves. With semantic markup, large corpora of text might be opened up to data mining and cyber–mashups (Cope and Kalantzis, 2004; Sompel and Lagoze, 2007; Wilbanks, 2007). Knowledge representations can present more of the world in less mediated form in datasets, images, videos and sound recordings (Fink and Bourne, 2007; Lynch, 2007). Whole disciplines traditionally represented only by textual exegesis, such as the arts, media and design, might be formally brought into academic knowledge systems in the actual modalities of their practice (Jakubowicz, 2009). New units of knowledge may be created, at levels of granularity other than the singular article of today’s journals system — fragments of evidence and ideas contributed by an author within an article (Campbell, 2008), and curated collections and mashups above the level of an article, with sources duly credited by virtue of electronically tagged tracings of textual and data provenance.

F10: Reliable use metrics. More and better counting is needed if we are to evaluate reliably the impact of published scholarly work. We need to review, not Thomson–selected citations or unreliably collected Google citations, but every citation. We could ask authors to tag for the kind of citation (agreement, distinction, disagreement, etc.). We could collect download statistics more extensively and consistently. We could ask readers to rate articles, and weight their ratings by their rater–ratings. We could ask for a quick review of every article read, and record and rate the breadth and depth of a scholar’s reading or a reader’s rating credentials. We could harvest qualitative commentary found alongside citations.
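The idea of weighting readers’ ratings by their own rater–ratings can be sketched as a simple weighted mean. The function name, the rating scale and the weighting scheme below are hypothetical illustrations of the principle, not a description of any existing metrics system.

```python
# Illustrative sketch: weight each reader's article rating by that
# reader's community-ascribed credibility ("rater-rating").
# All names and the weighting scheme are hypothetical assumptions.

def weighted_article_score(ratings):
    """ratings: list of (score, rater_rating) pairs.
    Returns the credibility-weighted mean score."""
    total_weight = sum(w for _, w in ratings)
    if total_weight == 0:
        return 0.0
    return sum(score * w for score, w in ratings) / total_weight

# Three readers rate an article on a 1-5 scale; the third rater has
# little established credibility, so their outlier rating counts for less.
ratings = [(4, 0.9), (5, 0.8), (1, 0.1)]
print(round(weighted_article_score(ratings), 2))  # 4.28
```

The design choice here is recursive: a reader’s influence on an article’s score itself depends on how the community has rated that reader, which is the kind of reflexive accountability argued for in F4 above.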

F11: Reliable use measures. Instead of shortcuts to reading, we could ask scholarly evaluators to read whole texts alongside author exegeses and independent assessment of the impact of their ideas (Lawrence, 2008; Wooding and Grant, 2003). What did this research or these ideas actually do in a field? Instead of the dubious numerical proxies, we would ask the question directly, what was the actual impact of this intellectual work on the world?

If it is the role of the scholarly knowledge system to produce deeper, broader and more reliable knowledge than is possible in everyday, casual experience, what do we need to do to deepen this tradition rather than allow it to break, a victim of the disruptive forces of the new media? The answers will not only demand the development of new publishing processes. They will also require the construction of new knowledge systems.

This inevitably leads us to an even larger question: how might renewed scholarly knowledge systems support a broader social agenda of intellectual risk taking, creativity and innovation? How is renovation of our academic knowledge systems a way to address the heightened expectations of a ‘knowledge society’? And what are the affordances of the digital media which may support reform?

Whatever the models that emerge, the knowledge systems of the near future could and should be very different from those of our recent past. The sites of formal knowledge validation and documentation will be more dispersed across varied social sites. They will be more global. The knowledge processes they use will be more reflexive and so more thorough and reliable. Knowledge will be made available faster. Through semantic publishing, knowledge will be more discoverable and open to disaggregation, reaggregation and reinterpretation. There will be much more of it, but it will be much easier to navigate. The Internet provides us with these affordances. It will allow us to define and apply new epistemic virtues. It is our task as knowledge workers to realize the promise of our times and to create more responsive, equitable and powerful knowledge ecologies. End of article


About the authors

Bill Cope is a Research Professor in the Department of Educational Policy Studies at the University of Illinois. He and Mary Kalantzis are the authors or editors of a number of widely cited books, including The powers of literacy (Falmer Press, 1993), Multiliteracies: Literacy learning and the design of social futures (Routledge, 2000) and New learning: Elements of a science of education (Cambridge University Press, 2008). From 2000 to 2003, he conceived and coordinated a major research project on digital authoring environments through RMIT University in Melbourne, Australia, ‘Creator to consumer in a digital age’, funded by the Australian Government’s Department of Industry. Dr. Cope is also Director of Common Ground Publishing, a developer of hybrid open access/commercial academic publishing software, and a publisher of books and academic journals, based in the Research Park at the University of Illinois.

Mary Kalantzis is Dean of the College of Education at the University of Illinois, Urbana–Champaign. Until 2005, she was Dean of the Faculty of Education, Language and Community Services at RMIT University in Melbourne, Australia, and President of the Australian Council of Deans of Education. She has been a Commissioner of the Australian Human Rights and Equal Opportunity Commission, Chair of the Queensland Ethnic Affairs Ministerial Advisory Committee and a member of the Australia Council’s Community Cultural Development Board.



Notes

1. Willinsky, 2006a, p. xii.

2. Benkler, 2006, pp. 60, 3, 5, 105, 18–19.

3. Lessig, 2008, p. 290.

4. Willinsky, 2006a, p. 9.

5. Peters, et al., 2008, p. 15.

6. Lessig, 2001, pp. 95, 116.

7. Benkler, 2006, pp. 93–94.

8. Willinsky, 2006a, pp. 20–22.

9. Willinsky, 2006a, p. 191.

10. Willinsky, 2006a, pp. 1, 5.



References

Andrew Albanese, 2008. “Harvard slams Google settlement; Others react with caution,” Library Journal (30 October), at, accessed 16 March 2009.

Francis Bacon, 1620. The New Organon, digital version at, accessed 16 March 2009.

Michel Bauwens, 2005. “The political economy of peer production,” CTheory (1 December),, accessed 16 March 2009.

Yochai Benkler, 2006. The wealth of networks: How social production transforms markets and freedom. New Haven, Conn.: Yale University Press.

Sherrie S. Bergman, 2006. “The scholarly communication movement: Highlights and recent developments,” Collection Building, volume 25, pp. 108–128.

Carl T. Bergstrom and Theodore C. Bergstrom. 2006. “The economics of ecology journals,” Frontiers in Ecology and the Environment, volume 4, pp. 488–495, and at, accessed 16 March 2009.

Ted C. Bergstrom and Rosemarie Lavaty. 2007. “How often do economists self–archive?” Department of Economics, University of California, Santa Barbara, at, accessed 16 March 2009.

Berlin declaration on open access to knowledge in the sciences and humanities, 2003. “Berlin declaration,” at, accessed 16 March 2009.

Bethesda statement on open access publishing, 2003. “Bethesda statement” (20 June), at, accessed 16 March 2009.

Mario Biagioli, 2002. “From book censorship to academic peer review,” Emergences: Journal for the Study of Media & Composite Cultures, volume 12, pp. 11–45.

Claire Bird and Martin Richardson. 2009. “Publishing journals under a hybrid subscription and open access model,” In: B. Cope and A. Phillips (editors). The future of the academic journal. Oxford: Chandos.

Lutz Bornmann, Rüdiger Mutz, Christoph Neuhaus, and Hans–Dieter Daniel. 2008. “Citation counts for research evaluation: Standards of good practice for analyzing bibliometric data and presenting and interpreting results,” Ethics in Science and Environmental Politics, volume 8, pp. 93–102, and at, accessed 16 March 2009.

Tim Brody, Les Carr, Yves Gingras, Chawki Hajjem, Stevan Harnad, and Alma Swan. 2007. “Incentivizing the open access research Web: Publication–archiving, data–archiving and scientometrics,” CTWatch Quarterly, volume 3, at, accessed 16 March 2009.

Howard I. Browman and Konstantinos I. Stergiou. 2008. “Factors and indices are one thing, deciding who is scholarly, why they are scholarly, and the relative value of their scholarship is something else entirely,” Ethics in Science and Environmental Politics, volume 8, pp. 1–3.

J.C. Burnham, 1990. “The evolution of editorial peer review,” Journal of the American Medical Association, volume 263 (9 March), and at, accessed 16 March 2009.

Linda Butler, 2008. “Using a balanced approach to bibliometrics: Quantitative performance measures in the Australian research quality framework,” Ethics in Science and Environmental Politics, volume 8, pp. 83–92.

Philip Campbell, 2008. “Escape from the impact factor,” Ethics in Science and Environmental Politics, volume 8, pp. 5–7.

Roger Clarke, 2007. “The cost profiles of alternative approaches to journal publishing,” First Monday, volume 12, number 12 (December),, accessed 16 March 2009.

Bill Cope, 2001. “Content development and rights in a digital environment,” In: B. Cope and R. Freeman (editors). Digital rights management and content development: Technology drivers across the book production supply chain, from the creator to the consumer. Melbourne: Common Ground, pp. 3–16.

Bill Cope and Mary Kalantzis, 2007. “New media, new learning,” International Journal of Learning, volume 14, pp. 75–79.

Bill Cope and Mary Kalantzis, 2004. “Text–made text,” E–Learning, volume 1, pp. 198–282.

Bill Cope and Mary Kalantzis, 2000b. Multiliteracies: Literacy learning and the design of social futures. London: Routledge, p. 350.

Bill Cope and Mary Kalantzis, 2000a. “Designs for social futures,” In: B. Cope and M. Kalantzis (editors). Multiliteracies: Literacy learning and the design of social futures. London: Routledge, pp. 203–234.

Iain D. Craig and Liz Ferguson, 2009. “Journals ranking and impact factors: How the performance of journals is measured,” In: B. Cope and A. Phillips (editors). The future of the academic journal. Oxford: Chandos.

J. Eric Davies, 2009. “Libraries and the future of the journal: Dodging the crossfire in the e–revolution; or leading the charge?” In: B. Cope and A. Phillips (editors). The future of the academic journal. Oxford: Chandos.

Mathias Dewatripont, Victor Ginsburgh, Patrick Legros, and Alexis Walckiers. 2006. Study on the economic and technical evolution of the scientific publication markets in Europe. Brussels: European Commission, and at, accessed 16 March 2009.

Aaron S. Edlin and Daniel L. Rubinfeld. 2004. “Exclusion or efficient pricing? The ‘big deal’ bundling of academic journals,” University of California, Berkeley, at, accessed 16 March 2009.

James A. Evans, 2008. “Electronic publication and the narrowing of science and scholarship,” Science, volume 321, number 5887 (18 July), pp. 395–399, and at, accessed 16 March 2009.

J. Lynn Fink and Philip E. Bourne. 2007. “Reinventing scholarly communication for the electronic age,” CTWatch Quarterly, volume 3, at, accessed 16 March 2009.

Brian Fitzgerald and Kylie Pappalardo. 2007. “The law as cyberinfrastructure,” CTWatch Quarterly, volume 3, at, accessed 16 March 2009.

Dominique Foray, 2004. The economics of knowledge. Cambridge, Mass.: MIT Press.

Alexander R. Galloway and Eugene Thacker. 2007. The exploit: A theory of networks. Minneapolis: University of Minnesota Press.

Paul Ginsparg, 2007. “Next–generation implications of open access,” CTWatch Quarterly, volume 3, at, accessed 16 March 2009.

Jean–Claude Guédon, 2001. “In Oldenburg’s long shadow: Librarians, research scientists, publishers, and the control of scientific publishing,” Association of Research Libraries, at, accessed 16 March 2009.

Timo Hannay, 2007. “Web 2.0 in science,” CTWatch Quarterly, volume 3, at, accessed 16 March 2009.

Stevan Harnad, 2009. “The PostGutenberg open access journal,” In: B. Cope and A. Phillips (editors). The future of the academic journal. Oxford: Chandos.

Stevan Harnad, 2008. “Validating research performance metrics against peer rankings,” Ethics in Science and Environmental Politics, volume 8, pp. 103–107, and at, accessed 16 March 2009.

Anne–Wil K. Harzing and Ron van der Wal. 2008. “Google Scholar as a new source for citation analysis,” Ethics in Science and Environmental Politics, volume 8, pp. 61–73, and at, accessed 16 March 2009.

D.F. Horrobin, 1990. “The philosophical basis of peer review and the suppression of innovation,” Journal of the American Medical Association, volume 263, at, accessed 16 March 2009.

Edmund Husserl, 1970. The crisis of European sciences and transcendental phenomenology. Translated and with an introduction by David Carr. Evanston, Ill.: Northwestern University Press.

John P.A. Ioannidis, 2005. “Why most published research findings are false,” PLoS Medicine, volume 2, pp. 696–701.

Andrew Jakubowicz, 2009. “Beyond the static text: Multimedia interactivity in academic publishing,” In: B. Cope and A. Phillips (editors). The future of the academic journal. Oxford: Chandos.

Tom Jefferson, Elizabeth Wager, and Frank Davidoff. 2002. “Measuring the quality of editorial peer review,” Journal of the American Medical Association, volume 287, pp. 2786–2790, and at, accessed 16 March 2009.

Horace Freeland Judson, 1994. “Structural transformations of the sciences and the end of peer review,” Journal of the American Medical Association, volume 272, pp. 92–94, and at, accessed 16 March 2009.

Mary Kalantzis and Bill Cope. 2008. New learning: Elements of a science of education. Cambridge: Cambridge University Press.

Cushla Kapitzke and Michael A. Peters. 2007. Global knowledge cultures. Rotterdam: Sense Publishers.

Kayvan Kousha and Mike Thelwall. 2007. “Google Scholar citations and Google Web/URL citations: A multi–discipline exploratory analysis,” Journal of the American Society for Information Science and Technology, volume 58, number 7, pp. 1055–1065.

Gunther Kress, 2000. “Design and transformation: New theories of meaning,” In: B. Cope and M. Kalantzis (editors). Multiliteracies: Literacy learning and the design of social futures. London: Routledge, pp. 153–161.

Peter A. Lawrence, 2008. “Lost in publication: How measurement harms science,” Ethics in Science and Environmental Politics, volume 8, pp. 9–11, and at, accessed 16 March 2009.

Peter A. Lawrence, 2007. “The mismeasurement of science,” Current Biology, volume 17, pp. 583–585.

Kirby Lee and Lisa Bero, 2006. “What authors, editors and reviewers should do to improve peer review,” Nature,, accessed 16 March 2009.

Lawrence Lessig, 2008. Remix: Making art and commerce thrive in the hybrid economy. New York: Penguin.

Lawrence Lessig, 2001. The future of ideas: The fate of the commons in a connected world. New York: Random House.

J.C.R. Licklider, 1965. Libraries of the future. Cambridge, Mass.: MIT Press.

Clifford Lynch, 2007. “The shape of the scientific article in the developing cyberinfrastructure,” CTWatch Quarterly, volume 3, at, accessed 16 March 2009.

Michael A. Mabe and Mayur Amin. 2002. “Dr. Jekyll and Dr. Hyde: Author–reader asymmetries in scholarly publishing,” Aslib Proceedings, volume 54, pp. 149–157.

Mark J. McCabe, Aviv Nevo, and Daniel L. Rubinfeld. 2006. “The pricing of academic journals,” University of California, Berkeley, at, accessed 16 March 2009.

Lokman I. Meho, 2007. “The rise and rise of citation analysis,” Physics World, volume 20, pp. 32–36.

Barbara Meyers, 2004. “Peer review software: Has it made a mark on the world of scholarly journals?” Aries Systems Corporation, at, accessed 16 March 2009.

Morgan Stanley, 2002. “Scientific publishing: Knowledge is power” (30 September), Morgan Stanley Equity Research Europe (London), and at, accessed 16 March 2009.

Sally Morris, 2009. “‘The tiger in the corner’: Perhaps journals will not be central to the lives of tomorrow’s scholars?” In: B. Cope and A. Phillips (editors). The future of the academic journal. Oxford: Chandos.

Michael Norris and Charles Oppenheim, 2007. “Comparing alternatives to the Web of Science for coverage of the social sciences’ literature,” Journal of Informetrics, volume 1, number 2, pp. 161–169.

David W. Opderbeck, 2007. “The penguin’s paradox: The political economy of international intellectual property and the paradox of open intellectual property models,” Stanford Law & Policy Review, volume 18, number 1, pp. 101–160.

Open Society Institute, 2002. “Budapest Open Access Initiative,” at, accessed 16 March 2009.

Lee C. Van Orsdel and Kathleen Born, 2008. “Periodicals price survey 2008: Embracing openness,” Library Journal (15 April),, accessed 16 March 2009.

Lee C. Van Orsdel and Kathleen Born. 2006. “Periodicals price survey 2006: Journals in the time of Google,” Library Journal (15 April),, accessed 16 March 2009.

D. Pauly and K.I. Stergiou. 2008. “Re–interpretation of ‘influence weight’ as a citation–based Index of New Knowledge (INK),” Ethics in Science and Environmental Politics, volume 8, pp. 75–78.

James W. Pellegrino, Naomi Chudowsky, and Robert Glaser (editors), 2001. Knowing what students know: The science and design of educational assessment. Washington, D.C.: National Academies Press, and at, accessed 16 March 2009.

Michael A. Peters, 2007. Knowledge economy, development and the future of higher education. Rotterdam: Sense Publishers.

Michael A. Peters, 2009. “Academic publishing and the political economy of education journals,” In: B. Cope and A. Phillips (editors). The future of the academic journal. Oxford: Chandos.

Michael A. Peters, Simon Marginson, and Peter Murphy. 2008. Creativity and the global knowledge economy. New York: Peter Lang.

Angus Phillips, 2009. “Business models in journals publishing,” In: B. Cope and A. Phillips (editors). The future of the academic journal. Oxford: Chandos.

José Luis González Quirós and Karim Gherab Martín. 2009. “Arguments for an open model of escience,” In: B. Cope and A. Phillips (editors). The future of the academic journal. Oxford: Chandos.

Eric Raymond, 2001. The cathedral and the bazaar: Musings on Linux and open source by an accidental revolutionary. Sebastopol, Calif.: O’Reilly.

Fytton Rowland, 2002. “The peer–review process,” Learned Publishing, volume 15, pp. 247–258, and at, accessed 16 March 2009.

Joss Saunders and Simon Smith. 2009. “The future of copyright: What are the pressures on the present system?” In: B. Cope and A. Phillips (editors). The future of the academic journal. Oxford: Chandos.

Robert Schroeder, 2007. “Pointing users toward citation searching: Using Google Scholar and Web of Science,” Libraries and the Academy, volume 7, pp. 243–248.

Sarah L. Shreeves, 2009. “Cannot predict now: The role of repositories in the future of the journal,” In: B. Cope and A. Phillips (editors). The future of the academic journal. Oxford: Chandos.

Kai Simons, 2008. “The misused impact factor,” Science, volume 322, number 5899 (10 October), p. 165, and at, accessed 16 March 2009.

Pippa Smart, 2009. “The status and future of the African journal,” In: B. Cope and A. Phillips (editors). The future of the academic journal. Oxford: Chandos.

Herbert Van de Sompel and Carl Lagoze. 2007. “Interoperability for the discovery, use, and re–use of units of scholarly communication,” CTWatch Quarterly, volume 3, at, accessed 16 March 2009.

Mike Sosteric, 1996. “Interactive peer review: A research note,” Electronic Journal of Sociology, volume 2, number 1, at, accessed 16 March 2009.

Ray Spier, 2002. “The history of the peer–review process,” Trends in Biotechnology, volume 20, pp. 357–358.

Richard Stallman, 2002a. Free software, free society: Selected essays of Richard M. Stallman. Boston, Mass.: GNU Press.

Richard Stallman, 2002b. “The GNU project,” at, accessed 16 March 2009.

Christine A. Stanley, 2007. “When counter narratives meet master narratives in the journal editorial–review process,” Educational Researcher, volume 36, pp. 14–24.

Peter Suber, 2007. “Trends favoring open access,” CTWatch Quarterly, volume 3, at, accessed 16 March 2009.

Kang Tchou, 2009. “The future of the academic journal in China,” In: B. Cope and A. Phillips (editors). The future of the academic journal. Oxford: Chandos.

Carol Tenopir and Donald W. King. 2009. “The growth of journals publishing,” In: B. Cope and A. Phillips (editors). The future of the academic journal. Oxford: Chandos.

Peter A. Todd and Richard J. Ladle. 2008. “Hidden dangers of a ‘citation culture’,” Ethics in Science and Environmental Politics, volume 8, pp. 13–16.

Elizabeth Wager and Tom Jefferson. 2001. “Shortcomings of peer review in biomedical journals,” Learned Publishing, volume 14, pp. 257–263.

John Wilbanks, 2007. “Cyberinfrastructure for knowledge sharing,” CTWatch Quarterly, volume 3, at, accessed 16 March 2009.

Sam Williams, 2002. Free as in freedom: Richard Stallman’s crusade for free software. Sebastopol, Calif.: O’Reilly.

John Willinsky, 2006a. The access principle: The case for open research and scholarship. Cambridge Mass.: MIT Press.

John Willinsky, 2006b. “The properties of Locke’s common–wealth of learning,” Policy Futures in Education, volume 4, pp. 348–365.

John Willinsky, Sally Murray, Claire Kendall, and Anita Palepu. 2009. “Doing medical journals differently: Open medicine, open access, and academic freedom,” In: B. Cope and A. Phillips (editors). The future of the academic journal. Oxford: Chandos.

Steven Wooding and Jonathan Grant. 2003. “Assessing research: The researchers’ view,” Joint Funding Bodies’ Review of Research Assessment, U.K., at, accessed 16 March 2009.


Editorial history

Paper received 20 November 2008; revised 12 March 2009; accepted 15 March 2009.

Copyright © 2009, First Monday.

Copyright © 2009, Bill Cope and Mary Kalantzis.

Signs of epistemic disruption: Transformations in the knowledge system of the academic journal
by Bill Cope and Mary Kalantzis
First Monday, Volume 14, Number 4 - 6 April 2009
