As 2011 begins, academic staff at New Zealand universities will emerge from all the ‘formative exercises’, ‘mentoring’ and ‘coaching’ sessions of recent years, to get straight into the real thing: the 2012 round of Performance-Based Research Funding. This will be the third (and, dare one hope, the last) in a sequence of formal evaluations of the value of academic research, which began in New Zealand in 2003 and was repeated in 2006.
The year 2009 saw a practice (‘formative’) exercise and now the serious effort will commence, as committees are formed and databases updated, ready for the official beginning of the ‘census period’ on 15 June. Academic staff will then devote themselves to putting together their ‘Evidence Portfolio’, reporting publications of the greatest quality (publications that are ‘quality assured’) and providing evidence of the high esteem in which they are held by their peers across the world. This will take the best part of a year, as evidential items are added and validated and claims of worth are (in the modern parlance) ‘sexed up’. There will also be a last chance to organise a few local conferences, which guarantee peer-reviewed status to every contribution. And then the numbers will be added up and the final rankings decided and announced. By that time it will be April 2013 and the process will have been ongoing, to a greater or lesser extent, for seven years.
And it will all be a monumental waste of time (and money) and a source of great angst when the dreaded judgments are announced. What we do know, from the copious data collected on this kind of assessment wherever it is undertaken (the UK, Australia, here), is that it will tell us next to nothing about the quality and value of research being done in our universities. The most it will tell us is which groups and individuals have the greatest capacity to manipulate the system and which institutions have the boldest and most imaginative public relations departments.
The bald fact is that the implied comparisons between one ‘output’ (‘peer-reviewed’ publication, conference presentation, composition,…) and another cannot validly be made across the academic spectrum. This is because research practices and paradigms of judgment vary enormously and, even within particular disciplines, the notion of what may be of ‘assured quality’ can differ markedly, depending on the prejudices of whoever is doing the judging. There is also a world of difference between publication in a distinguished journal that may accept only a small proportion of what is offered to it, and ‘publication’ by an ad hoc collective set up specifically to assure all parties a ‘peer-reviewed’ credit. PBRF fails to distinguish between these. There is likewise a difference between ten publications, each of which has ten authors, and ten publications by a single author. Under PBRF, each individual in the first case would receive the same publication credit as the single author in the second.
The overwhelming emphasis is on quantity. Studies increasingly point to a growing mountain of publications that nobody is reading. Academics don’t have time to read this stuff because they are too busy writing more of it. The incoming director of the British Historical Institute made this point a few years ago in relation to the British Research Assessment Exercise (the model for our PBRF), when he spoke of a ‘prodigious’ (‘preposterous’, even) output of two thousand books and nearly five thousand articles from British academic authors alone, and that for a single year and a single subject (history).
The unacknowledged explanation for this inordinate emphasis on numbers is simply that quality is much more difficult to measure than quantity. Of course, the various subject panels do purport to assess the former, but the dominant process entails simple arithmetic. Fundamentally, it cannot be otherwise. The history of scholarship is full of cases where the value of particular investigations is not seen until years later and, conversely, of discoveries which seem promising but lead nowhere, or are plain wrong. There are also works that, of their essence, take time to mature and which do not require (indeed, cannot supply) a steady stream of ‘outputs’ along the way but are nonetheless of lasting value. The PBRF system of evaluation simply discourages long-term projects (such as biographies) and the sort of speculative, conceptual enterprise that used to be the glory of the university academic. Specific examples of all these things may be found in the 2011 update of my original 2008 research paper, published on the NZCPR site (‘Performance-Based Research Funding: why it must end’).
Of course, it is important that the academic staff of our universities engage in research. At the very least they need to keep up with developments in their field as an essential background to their teaching. This continuing familiarity with what is known may also provide a basis on which academics discharge their obligation to make some contribution to social discourse on pressing matters of public policy. Teaching staff at our universities will also engage in fundamental research of their own, partly because of the need to train new researchers and provide a model for them, and partly because familiarity with an evolving field of knowledge can hardly fail to generate questions that cry out to be answered. Some will be more intrinsically motivated by this than others, and some may be deemed more ‘successful’ (by whatever processes and criteria are adopted at a particular place and time), but there is an enormous downside in attempting to institutionalise the process of judgment and impose a formal ‘bean-counting’ model. Not only is this process fatally vulnerable to large-scale manipulation and the operation of personal and academic prejudice; it is also, as has been argued, a disservice to scholarship. It is, moreover, a disservice to what ought to be the main concern of universities: teaching.
The effect of PBRF (and the British RAE) has been to progressively relegate teaching to more junior faculty members and senior students. Indeed, there has been a pattern of appointing research ‘high-flyers’ to senior positions on the basis of their PBRF/RAE score, without any requirement that they teach at all, or even appear on campus. For the generality of teaching staff, it would hardly be surprising if there were a tendency to think that time spent on lecture preparation is wasted time that could be better spent on getting another paper out.
Beginning in 1985 as Margaret Thatcher’s revenge on Oxford University for denying her an honorary doctorate, the dead hand of bureaucratic accountability has done its destructive work on universities around the world for more than a quarter of a century. It is time to stop. Surely, honour is satisfied now.
Ron Smith’s research paper, ‘Performance-Based Research Funding: why it must end’, is available on the NZCPR website.