
Higher education, impact, and the Internet: Publishing, politics, and performativity by Peter Roberts



Abstract
This paper considers how and why scholarly publishing has changed over the last two decades. It discusses the role of the Internet in overcoming earlier barriers to the rapid circulation of ideas and in opening up new forms of academic communication. While we live in a world increasingly dominated by images, the written word remains vital to academic life, and more published scholarly material is being produced than ever before. The paper argues that the Internet provides only part of the explanation for this growth in the volume of written material; another key contributing factor is the use of performance-based research funding schemes in assessing scholarly work. Such schemes can exert a powerful influence over researchers, changing their views of themselves and the reasons for undertaking their activities. With their tendency to encourage the relentless, machine-like production and measurement of outputs, they can be dehumanizing. Of even greater concern, however, is the possibility of systems based entirely on metrics, ‘impact’, and revenue generation. The paper critiques these trends, makes a case for the continuing value of peer review, and comments briefly on the subversive potential of the Internet in resisting the dehumanization of scholarly work.

Contents

Introduction
Scholarly publishing in the age of the Internet
Peer review, performativity, and impact
Concluding remarks

 


 

Introduction

In today’s world, more scholarly material is being produced and published than ever before. Part of the explanation for this lies in the advantages conferred by the Internet in reducing space constraints and enhancing the speed of publication. Also crucial, however, is the emergence of performance-based research funding schemes and the pressures they exert on academics to publish. Drawing on examples from Britain and New Zealand, this article argues that such regimes can be dehumanizing, fundamentally altering the way scholars think about themselves and their published work. These negative effects, it is suggested, will be exacerbated if the importance of peer review is downplayed or disregarded in favor of systems based largely or entirely on metrics. If such a scenario unfolds, the Internet could play a key role in creating more machine-like academic cultures, and intellectual life will be much the poorer for this. At the same time, the Internet could also be pivotal in allowing scholars to contest dominant practices in the assessment of published work.

 

++++++++++

Scholarly publishing in the age of the Internet

Two decades ago, I wrote a paper for this journal on scholarly publishing, peer review, and the Internet (Roberts, 1999). At that time, the publication of academic work in electronic form via the Internet was a relatively recent phenomenon. Indeed, the development of the Internet itself, as a vast, public digital space for communication across the globe, was still in its infancy. By the late 1980s, most academics were making use of word processors to write books and papers, but the regular use of computers for other tasks was still uncommon. By the mid-1990s, e-mail had been widely adopted and the Internet was gaining increasing prominence in scholarly life. The growth in digital technologies from that point onwards was rapid and dramatic. Where before networked computing had been largely confined to small groups serving in specialist government, military, or educational roles, now the possibility of a much more open World Wide Web emerged. For those undertaking research, the breaking down of previous barriers imposed by time and space — the idea of being able to connect almost instantly with others thousands of kilometers away — was very appealing. Within a few short years, a rich body of work emerged on the potential of the Internet to transform traditional methods of scholarly publishing.

Throughout the 1990s, there was considerable debate over a perceived ‘crisis’ in scholarly publishing (Astle, 1991; Greenwood, 1993; Guédon, 1994; Odlyzko, 1994; Okerson, 1991a, 1991b; Harnad, 1996; Taubes, 1996a, 1996b; Thatcher, 1995; UCSB Library Newsletter for Faculty, 1996). Print journals had become extraordinarily expensive, with annual subscription costs — typically borne by university libraries — running into the hundreds, sometimes thousands, of dollars for well-known scientific periodicals. Publication in traditional journals involved lengthy delays for academics, who would frequently have to wait at least 12 months after the acceptance of their submitted work for it to appear in print. Almost every element of the scholarly communication process was slowed down by ‘print and post’ systems, from submission and reviewing to revision, publication, and distribution. University libraries also had to find ways to store ever-growing bodies of published work, as journals, books, reports, and other printed materials filled the shelves of large multi-story buildings. Finding space for everything was a perpetual problem, and many universities had to resort to off-campus facilities to store excess academic material.

The possibility of publishing academic work electronically via the Internet seemed to offer not only a means for reducing costs, delays, and storage difficulties, but also new ways to respond to the ideas and findings of scholarly peers (Arnold, 1995; Day, 1995; Harnad, 1997, 1995, 1991; Odlyzko, 1997; Okerson, 1996; Valauskas, 1997). Work could, for example, be reviewed in advance of formal submission via electronic pre-print repositories. Multiple reviewers could contribute to a networked conversation about the merits of a scholarly article or book. Ongoing, open dialogue between the author and reviewers could, in a technical sense, be easily enacted. Peer reviewers could be expected to respond more promptly than they had in the past, and authors too could communicate with editors over revisions, resubmissions, and proof-reading with greater rapidity. Web-based publication would also open up the prospect of academic work being continuously updated as new findings, critiques, or ideas emerged. There would be no single fixed and final version of a paper; instead, academic work could remain ‘live’ and ever-changing, appearing more as an ongoing conversation than a scholarly artefact.

By the second half of the 1990s, Web-based repositories for scholarly work had started to appear. Some of these had much in common with traditional print journals, while others sought to foster new ways of presenting, distributing, reviewing, and reading academic material. In the years since the turn of the century, the publication of research papers in digital form has continued to expand. Almost all scholarly journals are now available electronically, either via subscription-based models or on an open access basis. The case for open access in scholarly publishing has epistemological, ethical, and educational foundations (cf., Björk, et al., 2010; Cope and Kalantzis, 2009; Greyson, et al., 2009; Guédon, 2009; Harnad and Brody, 2004; May, 2010; Pyati, 2007) and is part of a broader trend toward greater openness in teaching, learning, and research (Committee for Economic Development, 2009; Iiyoshi and Kumar, 2008; Willinsky, 2006). Enhancing access to knowledge can be seen as an important principle in a democracy, and openness, within certain limits, as a key educational virtue (Peters and Roberts, 2011).

Approaches to open access vary. Some journals charge no fees for either the submission or the reading of papers (see Budapest Open Access Initiative, 2002; Suber, 2015); others, including those managed by major publishing houses such as Taylor & Francis, Wiley, and Springer, impose substantial article processing charges on those who choose the open access option. These fees can range from several hundred to several thousand dollars per article. For many well-established journals, a print version of each issue is still produced, either in tandem with the electronic version or at a later date, but individuals often obtain print copies only through membership of the professional organizations that sponsor the journals. By far the most common way to access journal material is via library digital subscriptions. Many academic books are also now published in digital form, allowing libraries to save shelf space and the costs associated with it.

The move to digital publication for scholarly journals has, in many senses, been more seamless, less disruptive, and more conservative than might have been predicted. Access to scholarly material is now, for most faculty members and students, easier and faster than ever before. While alternative approaches to publishing keep evolving (Clarke, 2007; Morrison, 2013; Smecher, 2008; Solomon, 2002), there have, to date, been fewer changes to academic journals than some expected. Most journals retain the same key features that were evident in the print and post era, with individual articles housed together in single issues published at regular intervals. Links to other publications might be more readily enabled in some periodicals, and systems for submitting, accessing, and distributing papers may have changed, but little else has altered. What has changed is the total amount of written material now published; this has continued to expand at a rapid rate.

Part of the explanation for this increase lies in the technical advantages associated with contemporary computing; ideas can be recorded, modified, formatted, and submitted for publication much more rapidly and readily than the print-based technologies of the past permitted. The Internet has also opened up new avenues and opportunities for the publication and circulation of ideas. The number of journals available in most fields of study keeps growing, with false starts and closures being far outweighed by the increase in new titles. More than this, though, the Internet has allowed for other modes of written expression via blogs and social media platforms such as Facebook and Twitter. Lines between ‘scholarly’ and ‘popular’ writing are becoming more blurred, as digital repositories open up access to material that was hitherto available only to specialists and as academics seek to gain a wider audience via other means of communication. There is also a growing academic presence on visual platforms such as YouTube. This has not been an either/or process: the increase in ‘alternative’ content available via the Internet has not led to a corresponding decrease in scholarly content. We may live in a world increasingly dominated by images but we are still heavily dependent on the written word, and more words are being produced, by a wider range of people, than ever before.

The opening up of opportunities for faster, easier, more accessible publication and the wider dissemination of ideas via the Internet is, however, only part of the story here. Equally important are the institutional, political, and policy pressures that have been brought to bear on academics under systems of research assessment. The next section discusses one example of such a scheme: New Zealand’s Performance-Based Research Fund (PBRF). Reference will also be made to Britain’s Research Excellence Framework (REF). Research assessment regimes of this kind encourage a factory-like scholarly production process, with dehumanizing consequences for those involved and a diminished sense of what research, publication, and intellectual life have to offer. The negative consequences of such systems could be exacerbated if the principle of academic judgement via peer review is replaced with quantifiable measures of ‘impact’, including those generated by some of the world’s most powerful Internet-based multinational companies.

 

++++++++++

Peer review, performativity, and impact

New Zealand’s Performance-Based Research Fund (PBRF) grew out of the work of the Tertiary Education Advisory Commission (TEAC), a body established shortly after the election of a new Labour-led government in 1999. The PBRF was introduced as part of a wider suite of changes designed to advance New Zealand as a knowledge society and economy (New Zealand Ministry of Education, 2006, 2002; Roberts and Peters, 2008; TEAC, 2001a, 2001b, 2001c, 2000). It replaced a system for research funding in the tertiary sector that had, under the previous government, been based largely on student numbers. This earlier approach had created perceived injustices, with funding following the student regardless of differences in research productivity between various tertiary education institutions and organizations. The PBRF would, in theory, usher in a merit-based approach to research funding, rewarding those institutions and individuals who excelled in their scholarly work (PBRF Working Group, 2002).

In making the shift to performance-based research funding, the Commission had considered examples elsewhere in the world. Britain’s Research Assessment Exercise (RAE), as it was then known, was the most familiar and well-established of these. In more recent years, the RAE has morphed into what is now called the Research Excellence Framework (REF), but most of the key planks of the original scheme remain in place. The most notable of these is the principle of peer review. Under the REF, as was the case with the RAE, scholars are evaluated by subject-based expert panels. Funding is distributed to institutions, under different ‘unit of assessment’ areas, on the basis of judgements made by the relevant panel. The PBRF shares much in common with the REF. Both schemes are intended to reward and encourage research achievements and contributions. New Zealand academics complete Evidence Portfolios (EPs) for a quality evaluation exercise conducted once every six years. The bulk of the PBRF funding received by participating institutions comes from the results of this assessment process, with the balance determined by research degree completions and externally generated research income (see further, Curtis, 2008; Roberts, 2006; Tertiary Education Commission, 2013).
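In broad outline, the allocation mechanics resemble a weighted sum of component shares. The following sketch (in Python, purely illustrative) conveys the logic rather than the official formula; the weights shown are placeholders, since the actual proportions are set by the Tertiary Education Commission and have varied across PBRF rounds.

```python
# Illustrative only: the weights are placeholders, not the official
# proportions, which are set by the Tertiary Education Commission
# and have changed over time.
WEIGHTS = {
    "quality_evaluation": 0.60,          # Evidence Portfolio results
    "research_degree_completions": 0.15,
    "external_research_income": 0.25,
}

def pbrf_allocation(pool: float, shares: dict) -> float:
    """An institution's funding: its share of each component of the
    national pool, scaled by that component's weight."""
    return sum(pool * WEIGHTS[k] * shares[k] for k in WEIGHTS)

# A hypothetical institution holding 10% of national quality-evaluation
# scores, 12% of degree completions, and 8% of external research income:
print(pbrf_allocation(300e6, {
    "quality_evaluation": 0.10,
    "research_degree_completions": 0.12,
    "external_research_income": 0.08,
}))  # 29400000.0
```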

An important distinction between the REF and the PBRF, however, is that under the latter the focus is on rating individual researchers, rather than departments or ‘unit of assessment’ areas. This has had a significant bearing on how researchers view themselves and their activities (Ashcroft, 2005; Middleton, 2005; Roberts, 2013; Smith and Jesson, 2005). The REF has a number-based rating system, while the PBRF uses grades. Individual researchers in New Zealand who choose to receive their grades learn whether they are rated ‘A’, ‘B’, ‘C’, or ‘R’. The first of these grades signifies world-class, outstanding research achievements and contributions; the last is awarded to those whose performance is deemed inadequate to receive any PBRF funding. Those holding key positions of responsibility in New Zealand universities — typically, the Vice-Chancellor, Deputy Vice-Chancellor(s), and Pro-Vice-Chancellors or Deans — may have access to the grades for PBRF-eligible academics, either across the university or in a given College or Faculty. Given this situation, individuals can experience implied or overt pressure to perform, even if they choose not to receive their grades.

The PBRF has fostered a culture of relentless production, pushing academics who may for various reasons (including heavy teaching loads or substantial service commitments) have had modest publication records to ‘lift their game’ as writers and researchers. Those with already impressive publishing profiles have felt compelled to keep extracting further improvements from themselves, publishing more, in better journals, and with greater recognition from peers. While many might claim that such pressures are desirable, particularly in publicly-funded institutions, the subtle effects of the PBRF on academic morale and the nature of the research activities undertaken are often overlooked. Now in New Zealand, the tendency to define oneself or others in PBRF terms can become part of the culture of everyday institutional life: ‘I’m an A researcher!’; ‘I’m a B now, but I’m tracking towards an A’; ‘She’s an Associate Professor: shouldn’t she have a higher grade?’; ‘I’m disposable; I’m an R’. The PBRF plays an important role in not just rewarding but also disciplining and punishing individual researchers, with an especially marked effect in subject areas such as Education where many academics have a strong professional and practitioner focus in their work (Middleton, 2005; Seddon, et al., 2012; Smith and Jesson, 2005).

The language of ‘outputs’ dominates research discussions in universities subject to performance-based research funding, and in the end academics can begin to think of themselves in this light: they become ‘outputs’ of a system that manages and measures them and determines their worth as researchers on the basis of a six-yearly grade. This dehumanizes academics, reducing them, symbolically at least, to fodder in a giant revenue-generating machine. The dehumanizing consequences of such regimes extend to those with whom academics work, who collectively become part of the language of outputs. Evidence portfolios in the PBRF have several components: a set of four ‘Nominated Research Outputs’ (the publications selected by the researcher as his or her best in the evaluation period), ‘Other Research Outputs’ (up to 12 additional publications or presentations from within the evaluation period selected by the researcher), and ‘Research Contributions’ (up to 15 items that indicate the esteem in which a researcher is held by his or her peers and the contributions he or she has made to the research environment and to his or her field). Thus, to take the example of thesis supervision, rather than being seen principally as a form of teaching, service, and support, this can end up being regarded as further capital to be traded in the outputs game. Master’s and doctoral degree completions, with the required research component, attract PBRF funding, but the successes of thesis students in gaining awards or in publishing their work can also count as research contributions and form part of an academic’s evidence portfolio.
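Rendered, a little mischievously, as the kind of record the system effectively reduces a scholar to, the portfolio looks something like the following sketch (in Python; the field names and validation are my own invention, not an official schema, though the numeric caps match those described above):

```python
from dataclasses import dataclass, field

# A researcher, as the PBRF 'sees' one. Field names are invented for
# illustration; the caps follow the portfolio structure described above.
MAX_NOMINATED = 4       # 'Nominated Research Outputs'
MAX_OTHER = 12          # 'Other Research Outputs'
MAX_CONTRIBUTIONS = 15  # 'Research Contributions'

@dataclass
class EvidencePortfolio:
    nominated_outputs: list[str] = field(default_factory=list)
    other_outputs: list[str] = field(default_factory=list)
    research_contributions: list[str] = field(default_factory=list)

    def validate(self) -> None:
        """Enforce the caps that bound how a researcher may be represented."""
        if len(self.nominated_outputs) > MAX_NOMINATED:
            raise ValueError("At most four Nominated Research Outputs.")
        if len(self.other_outputs) > MAX_OTHER:
            raise ValueError("At most 12 Other Research Outputs.")
        if len(self.research_contributions) > MAX_CONTRIBUTIONS:
            raise ValueError("At most 15 Research Contributions.")
```

Everything a career amounts to, on this view, is three bounded lists.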

The machine-like character of scholarly publishing under the PBRF and other similar schemes is, in part, a reflection of the limits imposed by time. With a six-year evaluation cycle, there is little time for sustained reading, dialogue, and reflection. Time itself becomes a commodity in such a system. Time is in perpetually short supply, and the demand to produce never ceases. In the PBRF, academics are not evaluated on what they know, or on how well they can convey their knowledge to others; they are assessed for their performance. The language of performativity, productivity, and accountability forms part of a wider, global, long-term process of progressively commodifying knowledge and reshaping universities to make them operate like corporations (Peters and Roberts, 1999). This in turn reflects the broader emphasis on economic goals in shaping educational policy, a trend that has been in evidence for more than 30 years (Fitzsimons, et al., 1999). The PBRF, in making research a more individualistic, competitive, instrumentalist activity than ever before, is simply conforming to a pattern that has already been well established with other policy reforms in the tertiary education sector.

The PBRF might in theory be intended to reward quality over quantity, but the quality of the assessment process itself can be questioned. With its tightly prescribed word limits and its rigid evidence portfolio structure, the PBRF assessment process provides a rather restricted portrait of an individual researcher and his or her contributions. A far more well-rounded, complex, and nuanced picture might emerge in, say, appointing an academic to a senior position, where a full CV, referees’ statements, a presentation, and an interview would normally be required (Roberts, 2007). Indeed, the very idea of ‘quality’ warrants careful interrogation. ‘Quality’ has become a policy buzzword, but important questions relating to the use of this term are often neglected: Quality as defined by whom? In what ways? In relation to what? Under what circumstances and in what contexts? Research activities, even within one field of study, can vary so widely in their aims, methodologies, and underlying theoretical assumptions that distinctions between them on the basis of ‘quality’ can quickly begin to seem spurious.

There is, however, one important element of the PBRF that is worth retaining in any system of research assessment: peer review. In my 1999 article I suggested that peer review would have ‘a vital role to play as we move into a digital scholarly future’. It was acknowledged that ‘[r]efereeing may, at times, be a nasty, interest-serving exercise’; nonetheless, the benefits of peer review would ‘still outweigh a situation where “anything goes”’. Given the rapid expansion in publicly available information in the age of the Internet, mechanisms for determining the rigor and integrity of online material would be essential. I noted that new systems for commenting on scholarly work were emerging (e.g., post hoc assessments of material that has already been released, ongoing debate via interactive Internet publications, open peer review, and the updating of earlier versions of a paper in the light of feedback from readers). Most of these systems, it was observed, ‘still rely on some form of peer review as a legitimating mechanism: judgements about the quality of work are made, or sought, in the company of others with like interests and expertise’. ‘This process’, I concluded, ‘which gives scholarly publishing its distinctive character, will be vital as the information explosion reaches full force in the electronic era’ (Roberts, 1999). Developments over the last two decades have only served to reinforce the key points made in the earlier article.

Peer review remains imperfect. It relies on a sense of trust in the fairness and competence of other scholars. Today, as in 1999, much depends on the goodwill, understanding, and actions of editors. Editors exercise considerable power in determining who is selected to make judgements about the work of others and in interpreting feedback when it is received. Peer review can be conservative, as was noted in the 1999 article, and, given the veil of anonymity that still prevails in most cases, it can shield reviewers themselves from proper scrutiny about the motivations that underpin their comments. Reviewers can be arrogant, self-absorbed, narrow-minded, and mean-spirited. They can be reactionary and defensive in responding to critiques of the ideas and traditions to which they adhere. Editors may deliberately or unconsciously select reviewers who are likely to be hostile to the content or style of a submitted manuscript. They may be looking for reasons to reject a paper rather than reasons to accept it.

Despite all of these potential dangers and drawbacks, peer review is still the best reassurance we have of robust and collegial judgements being made about the quality of academic work. The negative possibilities noted above exist across different fields, but they are heavily outweighed by the positive features of peer review. Most editors and reviewers approach their duties with a strong sense of responsibility to their scholarly community. Editing and reviewing are time-consuming, difficult, often thankless tasks; they constitute a vital form of academic service. The reports completed by peer reviewers often provide constructive suggestions that demonstrably improve a submitted manuscript. Peer reviewers may notice mistakes, identify silences and weaknesses, and suggest appropriate ways of deepening and extending an argument. Thoughtful, careful reviewing of an academic article, chapter, or book becomes a kind of dialogue between the author(s) and the reviewer. Peer review, at its best, continues an intellectual conversation to which a manuscript author is adding, building on work already undertaken by other thinkers and researchers and allowing new insights and perspectives to emerge.

The Internet offers opportunities for making this scholarly dialogue more immediate and more sustained. This is so not just in relation to the publication of ideas but in the way evaluations are undertaken via schemes such as the PBRF and the REF. The Internet is vital, at every step in the process. A substantial majority of the work evaluated under such schemes is published via the Internet, either in scholarly journals or via reports that are made publicly available. Increasingly, books and book chapters too are available in electronic as well as hard copy form. Communication between those involved in major research projects frequently relies on the Internet. The policy documents that provide the parameters for undertaking research are also invariably available online. The Internet has aided, but not replaced, the principle of academic judgement, applied not by any one individual but in the context of a discussion among peers. Through peer review, the ‘human’ element of the research assessment process is retained. Peer review, particularly where this occurs via face-to-face discussion, can make the process more complicated, slower, and more expensive, but this can be seen as preferable to an approach that relies exclusively on ‘big data’ or automated systems for generating numbers that are taken as measures of performance.

More recently, there has been a shift from scholarly publishing per se to scholarly publishing with ‘impact’ (cf., Bruns, 2013; King’s College London and Digital Science, 2015; Priem and Hemminger, 2010; Snijder, 2013). This is potentially problematic. It is not that academics do not wish their work to have any impact; rather the problems lie in how impact is construed, interpreted, and rewarded. Making a worthwhile difference in students’ lives through one’s teaching and research can be seen as a profoundly important way to have an ‘impact’ with one’s work, but this is too imprecise, too ill-suited to quantitative measures of academic performance, to be seriously considered. It can take many years, sometimes decades, for the value and significance of an academic’s contribution to be truly understood and appreciated by those with whom he or she works, and this does not fit well with the modus operandi for most research assessment regimes. Under the REF, care has been taken not to define ‘impact’ too narrowly, and it is impact beyond academia that is being assessed. But in other contexts, the language of impact could become more and more closely linked with quantitative indicators such as research income and citation-based systems such as the h-index. Such trends are particularly damaging for the humanities and the social sciences, where far fewer opportunities for substantial research funding exist than in medicine, science, and engineering, and where citation counts tend to be much lower than in those domains of study.
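The h-index itself is easily computed: it is the largest h such that an author has h papers cited at least h times each. A minimal sketch (in Python, purely for illustration) shows how crudely such a number compresses a body of work:

```python
def h_index(citations: list[int]) -> int:
    """Largest h such that h papers have at least h citations each."""
    h = 0
    for rank, count in enumerate(sorted(citations, reverse=True), start=1):
        if count >= rank:
            h = rank
        else:
            break
    return h

# Two very different scholarly careers collapse to the same number:
print(h_index([500, 4, 3, 2, 1]))  # 3: one landmark work barely registers
print(h_index([3, 3, 3]))          # 3
```

A single figure of this kind says nothing about what the work argued, whom it reached, or how it changed a field; these are precisely the qualities that resist simple quantification.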

Research is becoming more heavily influenced by the language of ‘metrics’, an even narrower focus than the PBRF’s emphasis on ‘outputs’. The assumption in both cases is that if something is to count it must be measurable in some way. An independent review was recently undertaken in the U.K. on the role played by metrics in assessing and managing research (Wilsdon, et al., 2015). In his Foreword, the Chair of the steering group responsible for the report observed:

Metrics evoke a mixed reaction from the research community. A commitment to using data and evidence to inform decisions makes many of us sympathetic, even enthusiastic, about the prospect of granular, real-time analysis of our own activities. [...] Yet we only have to look around us, at the blunt use of metrics such as journal impact factors, h-indices and grant income targets to be reminded of the pitfalls. Some of the most precious qualities of academic culture resist simple quantification, and individual indicators can struggle to do justice to the richness and plurality of our research. [1]

‘Metrics’, Wilsdon added, ‘hold real power: they are constitutive of values, identities and livelihoods’ [2]. In making this point, Wilsdon refers to the case of Stefan Grimm, a Professor at Imperial College London, who committed suicide in September 2014. Shortly before his death, Grimm had been informed that, despite his relatively strong publication record, his research performance was inadequate: his success in gaining research funding was not at the level expected of Professors in his department at Imperial. The case was widely discussed in the U.K., and there are lessons for other countries in reflecting on this example.

Academics in public universities have important responsibilities to uphold if taxpayer funds are to be well spent. Undertaking rigorous research is one of those responsibilities, but there is no one best way to contribute to this key element of higher educational life. Publishing one’s work in scholarly journals (or in book form) is a reasonable expectation to have of most academics, and this process has been aided in many respects by developments in the age of the Internet. It is not clear, however, why the gaining of research funding should be a universal requirement for academics, given the wide variations in readily available sources of financial support across (and even within) different fields. And even where funding is readily available, it is not self-evident that research undertaken with substantial grants is any ‘better’ — in terms of its rigor, its contribution to knowledge, or its potential value for others — than work completed without funding. Assessing an academic’s worth on the basis of the dollars gained through his or her research activities is another element, perhaps the ultimate step, in the broader process of dehumanization described in this paper. At first glance, it might seem odd that those charged with evaluating academic work, either through exercises such as the PBRF or in appointment, tenure and promotion decisions, could adopt such an obviously flawed, reductionist stance in their assessment activities. Yet, as Lyotard (1984) argued, there is nothing remarkable about such an approach from a performativity point of view. In a world where knowledge is treated as just another commodity, and where the goal is to maximize outputs relative to inputs, ethical objections become irrelevant. What counts is not truth, or knowledge, but what sells. Academic institutions are now very much subject to this logic and unlikely to escape from it without a good deal of struggle, both within their own ranks and further afield.

 

++++++++++

Concluding remarks

It seems unlikely that the relentless production of published work will slow down any time soon. The turn to metrics and the language of ‘impact’ may encourage some academics to be more ‘strategic’ in their scholarly efforts — paying more attention than they may have in the past to citations, journal rankings, and the marketing of their research beyond their institutions — but it will do little to reduce the rate at which material is being published. The opening up of new forms of publication via the Internet may, if anything, contribute to an acceleration in rates of productivity. The creation of machine-like production processes, stimulated by pressures to publish under performance-based research funding schemes, is, this paper has suggested, dehumanizing. Binding measures of performance more tightly to the idea of generating revenue will make the process of dehumanization even more marked.

The Internet could play a part in cementing these dehumanizing trends but it could also be pivotal in contesting them. Internet giants such as Google, with its access to and control over vast amounts of data, can exert a powerful influence over what comes to matter in judging academic performance. The commercialization of everyday activities, aided by aggressive Internet advertising, can also seep into academic life. At the same time, the Internet can provide a platform for organizing and resisting dominant structures and practices. It can enable university teachers, students, and others to make links between local problems and similar concerns at a wider international level. Communication via the Internet, as a form of constructive scholarly conversation, can contribute to a stronger sense of solidarity among academics. It can enable scholars to see that what may seem like an individual matter is often something of greater collective concern.

The Internet may be increasingly dominated by corporate interests, but it still has ‘unruly’ tendencies that refuse to be suppressed. Academics are under pressure to publish, and their control over how they do so may sometimes be limited, but they still, in most countries at least, enjoy considerable freedom in deciding what they have to say. Paywalls put up by multinational publishing firms create some impediments in accessing ‘dangerous’ knowledge, yet, as this paper has noted, there are other ways in the age of the Internet to set ideas free. Traditional scholarly journals continue to have an important place in the academic world, but there are also many other Internet-based ways to make one’s views and research known. Being prepared to ask difficult questions will remain a key task for academics in the future, regardless of the manner in which ideas are produced, conveyed, and debated among peers.

 

About the author

Peter Roberts is Professor of Education and Director of the Educational Theory, Policy and Practice Research Hub at the University of Canterbury in New Zealand.
E-mail: peter [dot] roberts [at] canterbury [dot] ac [dot] nz

 

Notes

1. Wilsdon, et al., 2015, p. iii.

2. Ibid.

 

References

K. Arnold, 1995. “The body in the virtual library: Rethinking scholarly communication,” Journal of Electronic Publishing, volume 1, numbers 1–2.
doi: http://dx.doi.org/10.3998/3336451.0001.104, accessed 16 February 2019.

C. Ashcroft, 2005. “Performance based research funding: A mechanism to allocate funds or a tool for academic promotion?” New Zealand Journal of Educational Studies, volume 40, number 1, pp. 113–129.

D. Astle, 1991. “High prices from Elsevier,” Newsletter on Serials Pricing Issues, NS 15 (12 December), at http://serials.infomotions.com/nspi/nspi-ns015.txt, accessed 16 February 2019.

B.-C. Björk, P. Welling, M. Laakso, P. Majlender, T. Hedlund, and G. Guðnason, 2010. “Open access to the scientific journal literature: Situation 2009,” PLoS ONE, volume 5, number 6, e11273.
doi: https://doi.org/10.1371/journal.pone.0011273, accessed 16 February 2019.

Budapest Open Access Initiative, 2002. “Read the Budapest Open Access Initiative” (14 February), at http://www.budapestopenaccessinitiative.org/read, accessed 11 October 2017.

A. Bruns, 2013. “Faster than the speed of print: Reconciling ‘big data’ social media analysis and academic scholarship,” First Monday, volume 18, number 10, at https://firstmonday.org/article/view/4879/3756, accessed 16 February 2019.
doi: http://dx.doi.org/10.5210/fm.v18i10.4879, accessed 16 February 2019.

R. Clarke, 2007. “The cost profiles of alternative approaches to journal publishing,” First Monday, volume 12, number 12, at https://firstmonday.org/article/view/2048/1906, accessed 16 February 2019.
doi: http://dx.doi.org/10.5210/fm.v12i12.2048, accessed 16 February 2019.

Committee for Economic Development, Digital Connections Council, 2009. Harnessing openness to improve research, teaching and learning in higher education. Washington, D.C.: Committee for Economic Development, at https://www.ced.org/pdf/Harnessing-Openness-to-Improve-Research-Teaching-and-Learning-in-Higher-Education.pdf, accessed 16 February 2019.

B. Cope and M. Kalantzis, 2009. “Signs of epistemic disruption: Transformations in the knowledge system of the academic journal,” First Monday, volume 14, number 4, at https://firstmonday.org/article/view/2309/2163, accessed 16 February 2019.
doi: http://dx.doi.org/10.5210/fm.v14i4.2309, accessed 16 February 2019.

B. Curtis, 2008. “The Performance-Based Research Fund: Research assessment and funding in New Zealand,” Globalisation, Societies and Education, volume 6, number 2, pp. 179–194.
doi: https://doi.org/10.1080/14767720802061488, accessed 16 February 2019.

C. Day, 1995. “Economics of electronic publishing,” Journal of Electronic Publishing, volume 1, numbers 1–2.
doi: http://dx.doi.org/10.3998/3336451.0001.111, accessed 16 February 2019.

P. Fitzsimons, M. Peters, and P. Roberts, 1999. “Economics and the educational policy process in New Zealand,” New Zealand Journal of Educational Studies, volume 34, number 1, pp. 35–44.

C. Greenwood, 1993. “Publish or perish: The ethics of publishing in peer-reviewed journals,” Media Information Australia, volume 68, number 1, pp. 29–35.
doi: https://doi.org/10.1177/1329878X9306800106, accessed 16 February 2019.

D. Greyson, K. Vézina, H. Morrison, D. Taylor, and C. Black, 2009. “University supports for open access: A Canadian national survey,” Canadian Journal of Higher Education, volume 39, number 3, pp. 1–32, and at http://journals.sfu.ca/cjhe/index.php/cjhe/article/view/472, accessed 16 February 2019.

J.-C. Guédon, 2009. “Open access: An old tradition and a new technology,” Canadian Journal of Higher Education, volume 39, number 3, pp. i–v, and at http://journals.sfu.ca/cjhe/index.php/cjhe/article/view/523, accessed 16 February 2019.

S. Harnad, 1997. “The paper house of cards (and why it’s taking so long to collapse),” Ariadne, number 8, at http://www.ariadne.ac.uk/issue8/harnad/, accessed 11 October 2017.

S. Harnad, 1996. “Implementing peer review on the Net: Scientific quality control in scholarly electronic journals,” In: R. Peek and G. Newby (editors). Scholarly publishing: The electronic frontier. Cambridge, Mass.: MIT Press, pp. 103–118.

S. Harnad, 1995. “Electronic scholarly publication: Quo vadis?” Serials Review, volume 21, number 1, pp. 70–72.

S. Harnad, 1991. “Post-Gutenberg galaxy: The fourth revolution in the means of production of knowledge,” Public-Access Computer Systems Review, volume 2, number 1, pp. 39–53, and at https://journals.tdl.org/pacsr/index.php/pacsr/article/view/6030/5662, accessed 16 February 2019.

S. Harnad and T. Brody, 2004. “Comparing the impact of open access (OA) vs. non-OA articles in the same journals,” D-Lib Magazine, volume 10, number 6.
doi: https://doi.org/10.1045/june2004-harnad, accessed 16 February 2019.

T. Iiyoshi and M. Vijay Kumar (editors), 2008. Opening up education: The collective advancement of education through open technology, open content, and open knowledge. Cambridge, Mass.: MIT Press.

King’s College London and Digital Science, 2015. The nature, scale and beneficiaries of research impact: An initial analysis of Research Excellence Framework (REF) 2014 impact case studies. Bristol: Higher Education Funding Council of England, and at https://www.kcl.ac.uk/sspp/policy-institute/publications/analysis-of-ref-impact.pdf, accessed 16 February 2019.

J.-F. Lyotard, 1984. The postmodern condition: A report on knowledge. Translated by G. Bennington and B. Massumi. Minneapolis: University of Minnesota Press.

C. May, 2010. “Openness in academic publication: The question of trust, authority and reliability,” Prometheus, volume 28, number 1, pp. 91–94.
doi: https://doi.org/10.1080/08109021003676417, accessed 16 February 2019.

S. Middleton, 2005. “Disciplining the subject: The impact of PBRF on education academics,” New Zealand Journal of Educational Studies, volume 40, numbers 1–2, pp. 131–155.

H. Morrison, 2013. “Economics of scholarly communication in transition,” First Monday, volume 18, number 6, at https://firstmonday.org/article/view/4370/3685, accessed 16 February 2019.
doi: http://dx.doi.org/10.5210/fm.v18i6.4370, accessed 16 February 2019.

New Zealand Ministry of Education, 2006. Tertiary education strategy 2007–12: A framework for monitoring. Wellington: Ministry of Education, at https://www.educationcounts.govt.nz/publications/tertiary_education_all/tes/20337, accessed 16 February 2019.

New Zealand Ministry of Education, 2002. Tertiary education strategy 2002–07: Monitoring report 2005. Wellington: Ministry of Education, at https://www.educationcounts.govt.nz/publications/80898/tes/tertiary_education_strategy_2002_-_2007_monitoring_report_2005, accessed 16 February 2019.

New Zealand Performance-Based Research Fund Working Group, 2002. Investing in excellence: The report of the Performance-Based Research Fund Working Group. Wellington: New Zealand Ministry of Education and Transition Tertiary Education Commission.

A. Odlyzko, 1997. “The economics of electronic journals,” First Monday, volume 2, number 8, at https://firstmonday.org/article/view/542/463, accessed 16 February 2019.
doi: http://dx.doi.org/10.5210/fm.v2i8.542, accessed 16 February 2019.

A. Odlyzko, 1994. “Tragic loss or good riddance? The impending demise of traditional scholarly journals,” Surfaces, volume 4, at https://pum.umontreal.ca/revues/surfaces/vol4/odlyzko.html, accessed 16 February 2019.

A. Okerson, 1996. “University libraries and scholarly communication,” In: R. Peek and G. Newby (editors). Scholarly publishing: The electronic frontier. Cambridge, Mass.: MIT Press, pp. 181–199.

A. Okerson, 1991a. “The electronic journal: What, whence, and when?” Public-Access Computer Systems Review, volume 2, number 1, pp. 5–24, and at https://journals.tdl.org/pacsr/index.php/pacsr/article/view/6037, accessed 16 February 2019.

A. Okerson, 1991b. “Back to academia? The case for American universities to publish their own research,” Logos, volume 2, number 2, pp. 106–112.

M. Peters and P. Roberts, 2011. The virtues of openness: Education, science, and scholarship in the digital age. Boulder, Colo.: Paradigm Publishers.

M. Peters and P. Roberts, 1999. “Globalisation and the crisis in the concept of the modern university,” Australian Universities’ Review, volume 42, number 1, pp. 47–55.

J. Priem and B. Hemminger, 2010. “Scientometrics 2.0: New metrics of scholarly impact on the social Web,” First Monday, volume 15, number 7, at https://firstmonday.org/article/view/2874/2570, accessed 16 February 2019.
doi: http://dx.doi.org/10.5210/fm.v15i7.2874, accessed 16 February 2019.

A. Pyati, 2007. “A critical theory of open access: Libraries and electronic publishing,” First Monday, volume 12, number 10, at https://firstmonday.org/article/view/1970/1845, accessed 16 February 2019.
doi: http://dx.doi.org/10.5210/fm.v12i10.1970, accessed 16 February 2019.

P. Roberts, 2013. “Academic dystopia: Knowledge, performativity and tertiary education,” Review of Education, Pedagogy, and Cultural Studies, volume 35, number 1, pp. 27–43.
doi: https://doi.org/10.1080/10714413.2013.753757, accessed 16 February 2019.

P. Roberts, 2007. “Neoliberalism, performativity and research,” International Review of Education, volume 53, number 4, pp. 349–365.
doi: https://doi.org/10.1007/s11159-007-9049-9, accessed 16 February 2019.

P. Roberts, 2006. “Performativity, measurement and research: A critique of performance-based research funding in New Zealand,” In: J. Ozga, T. Seddon, and T. Popkewitz (editors). World yearbook of education 2006: Education research and policy. London: Routledge, pp. 185–199.

P. Roberts, 1999. “Scholarly publishing, peer review and the Internet,” First Monday, volume 4, number 4, at https://firstmonday.org/article/view/661/576, accessed 16 February 2019.
doi: http://dx.doi.org/10.5210/fm.v4i4.661, accessed 16 February 2019.

P. Roberts and M. Peters, 2008. Neoliberalism, higher education and research. Rotterdam: Sense Publishers, and at https://www.sensepublishers.com/media/685-neoliberalism-higher-education-and-research.pdf, accessed 16 February 2019.

T. Seddon, D. Bennett, J. Bobis, S. Bennett, N. Harrison, S. Shore, E. Smith, and P. Chan, 2012. “Living in a 2.2 world: ERA, capacity building and the topography of Australian educational research,” at https://www.aare.edu.au/assets/documents/AARE_ACDEreport2012.pdf, accessed 16 February 2019.

A. Smecher, 2008. “The future of the electronic journal,” NeuroQuantology, volume 6, number 1, pp. 1–6, at https://pkp.sfu.ca/files/201-579-1-PB.pdf, accessed 16 February 2019.

R. Smith and J. Jesson (editors), 2005. Punishing the discipline — the PBRF regime: Evaluating the position of education — where to from here? Auckland: AUT and the University of Auckland.

R. Snijder, 2013. “Measuring monographs: A quantitative method to assess scientific impact and societal relevance,” First Monday, volume 18, number 5, at https://firstmonday.org/article/view/4250/3675, accessed 16 February 2019.
doi: http://dx.doi.org/10.5210/fm.v18i5.4250, accessed 16 February 2019.

D. Solomon, 2002. “Talking past each other: Making sense of the debate over electronic publication,” First Monday, volume 7, number 8, at https://firstmonday.org/article/view/978/899, accessed 16 February 2019.
doi: http://dx.doi.org/10.5210/fm.v7i8.978, accessed 16 February 2019.

P. Suber, 2015. “Open access overview,” at http://bit.ly/oa-overview, accessed 11 October 2017.

G. Taubes, 1996a. “Science journals go wired,” Science, volume 271, number 5250 (9 February), p. 764.
doi: http://dx.doi.org/10.1126/science.271.5250.764, accessed 16 February 2019.

G. Taubes, 1996b. “Speed of publication — stuck in first gear,” Science, volume 271, number 5250 (9 February), p. 765.
doi: http://dx.doi.org/10.1126/science.271.5250.765, accessed 16 February 2019.

Tertiary Education Advisory Commission (TEAC), 2001a. Shaping the system: Second report of the Tertiary Education Advisory Commission. Wellington: TEAC.

Tertiary Education Advisory Commission (TEAC), 2001b. Shaping the strategy: Third report of the Tertiary Education Advisory Commission. Wellington: TEAC.

Tertiary Education Advisory Commission (TEAC), 2001c. Shaping the funding framework: Fourth and final report of the Tertiary Education Advisory Commission. Wellington: TEAC.

Tertiary Education Advisory Commission (TEAC), 2000. Shaping a shared vision: Initial report. Wellington: TEAC.

Tertiary Education Commission, 2013. Performance-based research fund: Evaluating research excellence — The 2012 assessment. Wellington: Tertiary Education Commission, and at http://www.tec.govt.nz/assets/Reports/4508da9deb/PBRF-QE-2012-Final-Report.pdf, accessed 16 February 2019.

S. Thatcher, 1995. “The crisis in scholarly communication,” Chronicle of Higher Education (3 March), pp. B1–B2, and at https://www.chronicle.com/article/The-Crisis-in-Scholarly/85578, accessed 16 February 2019.

UCSB Library Newsletter for Faculty, 1996. “Why we buy fewer books and journals: The continuing crisis in scholarly communication, part II,” University of California at Santa Barbara (Spring).

E. Valauskas, 1997. “Waiting for Thomas Kuhn: First Monday and the evolution of electronic journals,” First Monday, volume 2, number 12, at https://firstmonday.org/article/view/567/488, accessed 16 February 2019.
doi: http://dx.doi.org/10.5210/fm.v2i12.567, accessed 16 February 2019.

J. Willinsky, 2006. The access principle: The case for open access to research and scholarship. Cambridge, Mass.: MIT Press.

J. Wilsdon, L. Allen, E. Belfiore, P. Campbell, S. Curry, S. Hill, R. Jones, R. Kain, S. Kerridge, M. Thelwall, J. Tinkler, I. Viney, P. Wouters, J. Hill, and B. Johnson, 2015. The metric tide: Report of the independent review of the role of metrics in research assessment and management. Bristol: Higher Education Funding Council of England, and at https://blogs.lse.ac.uk/impactofsocialsciences/files/2015/07/2015_metrictide.pdf, accessed 16 February 2019.
doi: http://dx.doi.org/10.13140/RG.2.1.4929.1363, accessed 16 February 2019.

 


Editorial history

Received 20 October 2018; accepted 14 February 2019.


Copyright © 2019, Peter Roberts. All Rights Reserved.

Higher education, impact, and the Internet: Publishing, politics, and performativity
by Peter Roberts.
First Monday, Volume 24, Number 5 - 6 May 2019
https://www.firstmonday.org/ojs/index.php/fm/article/download/9474/7735
doi: http://dx.doi.org/10.5210/fm.v24i5.9474