What's wrong with the PBRF?
The PBRF has had the beneficial effect of shifting public funding towards research-active institutions, in acknowledgement of their contributions to scientific and scholarly inquiry. Many individual academics have also found it beneficial, because it has meant that their department and faculty heads now take a more active interest in developing their research careers. Nonetheless, this post outlines some specific detrimental effects of this method of assessment and funding.
First, the PBRF is used to make claims about research productivity that do not meet the normative standards of research methodology. Even before the results of the 2006 assessment had been released, the Ministry of Education confidently stated that the PBRF had “helped focus the effort of tertiary research on achieving excellence” (Office of the Minister for Tertiary Education 2006, p. 25). This is in spite of the fact that, on further inquiry by the author, the Ministry was unable either to produce any evidence of this at the time or to define “excellence” operationally.
Naturally, it is tempting to compare the results of the 2003 and 2006 assessments to look for improvements in the quality ratings of institutions and disciplines. But, because of the confounding effects of “window-dressing,” improved form-filling skills devoted to evidence portfolios, and more careful selection of “eligible” academic staff into the census, it is not valid to make before-and-after claims about “research quality” - that is, not if such claims are to withstand normative scientific scrutiny. Nevertheless, it is apparent from the above quote that the Ministry of Education was already anticipating improvements, even before the 2006 results were released. Upon the actual release of the results, however, the Tertiary Education Commission and the Minister for Tertiary Education were quick to make confident claims in their media statements. The former stated, to accompany the release of the results, that the Quality Evaluation “shows early signs of having a positive impact on tertiary education-based research” (Tertiary Education Commission 2007a). On the same day, the Minister for Tertiary Education glowingly claimed: “The results . . . demonstrate that New Zealand is continuing to improve the quality of research” (Cullen 2007).
These claims (which have the tone of Maoist propaganda about a “bumper harvest”) were based on comparisons between the 2003 and 2006 surveys which found, for example, that the number of staff who received “A” and “B” ratings had increased, and that all universities’ aggregate quality scores had risen. A closer reading of the summary of the actual results, however, showed that the Tertiary Education Commission was aware of the confounding effects that prevent us from making any credible claims about “improved research quality” - even though they were confident that there had been a quantitative increase.
"The measured improvement in research quality cannot be solely attributed to improvements in actual research quality as there are [sic] likely to be a number of factors influencing the results of the 2006 Quality Evaluation. Nevertheless, the increase in average quality scores, and the marked increase in the number of staff whose EPs were assigned a funded Quality Category between 2003 and 2006 suggests that there has been some increase in the actual level of research quality" (Tertiary Education Commission 2007b p. 10, italics added).
They noted that recruitment activities by institutions had contributed to the measured “improvements”:
". . . the major increase in “A”s in some subject areas could be traced to senior appointments from overseas - of the 218 staff whose EPs [evidence portfolios] were assigned an “A” in the 2006 Quality Evaluation, it was estimated that at least 48 were appointments from overseas" (ibid. p. 72).
The TEC also noted that the assessment panels had generally commented upon an improvement in the presentation of evidence portfolios - although it claimed that this meant that the 2006 round more accurately reflected actual research efforts and quality. Nonetheless, much of the improvement in scores can clearly be attributed to improved skills among academics and their administrative assistants in filling out the on-line forms with suitably impressive details.
Furthermore, the universities themselves openly acknowledged that they had made more careful efforts in 2006 to exclude from eligibility for the survey those staff who were not research-active and who could be classed as “teaching-only, under strict supervision.” The more research-inactive teachers whom one could thus exclude, the greater the aggregate quality score for the university. Universities had renegotiated the employment contracts of some staff in order to use the PBRF eligibility criteria to their advantage, resulting in accusations of manipulation of the system. A TEC audit found that about ten per cent of the sample of those who were eligible and provided evidence portfolios in 2003 were still employed in the sector in 2006, but had become “ineligible” for the 2006 assessment as they no longer met the criteria.
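The arithmetic behind this incentive is easy to illustrate. The sketch below (in Python, purely for illustration) uses a hypothetical department and assumed category weights - A=10, B=6, C=2, R=0, with a simple unweighted average rather than the TEC's actual FTE-weighted funding formula - to show how reclassifying research-inactive staff as ineligible lifts an aggregate quality score even though nobody's research has changed.

```python
# Illustrative sketch only: how excluding research-inactive staff from the
# PBRF census can raise a university's aggregate quality score without any
# change in anyone's research. The category weights (A=10, B=6, C=2, R=0)
# and the unweighted averaging are assumptions for illustration, not the
# TEC's published funding formula.

WEIGHTS = {"A": 10, "B": 6, "C": 2, "R": 0}

def aggregate_quality_score(ratings):
    """Average category weight across all staff counted in the census."""
    return sum(WEIGHTS[r] for r in ratings) / len(ratings)

# Hypothetical department: a few active researchers plus five inactive staff.
census_2003 = ["A", "B", "B", "C", "C", "R", "R", "R", "R", "R"]

# Same people, same research: the five "R"-rated staff are reclassified as
# "teaching-only" and dropped from eligibility before the next census.
census_2006 = [r for r in census_2003 if r != "R"]

print(f"2003 score: {aggregate_quality_score(census_2003):.2f}")  # 2.60
print(f"2006 score: {aggregate_quality_score(census_2006):.2f}")  # 5.20
```

The numbers are invented, but the mechanism is the one described above: the denominator shrinks, the numerator does not, and an “improvement” appears on paper.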
Hence, if one were to apply the normative standards of scientific inquiry to these results, one would have to say that there are numerous confounding factors that make it impossible to conclude to what extent “research quality” in New Zealand’s universities had actually improved between 2003 and 2006, if at all. Indeed, the sample being surveyed had changed considerably between the two assessments, and not in a random way, owing to the funding incentives created by the assessment itself, making comparisons invalid without careful statistical controls - which have not been undertaken. The measurement system, and the manipulation thereof, created their own effects, giving the appearance of improvements in research quality in 2006. Even the TEC was prepared only to say that the results “suggest” some level of improvement in research quality, and it would not speculate about how much improvement had occurred. In short, the PBRF’s results produced no conclusive evidence about its effectiveness in encouraging “excellent research.” Because the PBRF is both a measurement tool and an intervention that attempts to alter that which it is measuring, its validity as a measurement tool is strictly limited.
No such evaluation could ever reach the standards of precision of the natural sciences, and the validity of any assessment of research quality is, of course, dependent on the a priori definitions of “research” and “quality,” and on the specific criteria chosen for measurement. But, in order to gain the confidence of the very researchers who contribute to the assessment, it is important that politicians, officials and university managers refrain from using the results in ways that cannot be justified by the standards required for research reporting in reputable journals.
The PBRF does at least successfully distinguish the research productivity of universities from that of polytechnics. The final quality scores of the eight universities ranged from 1.86 to 4.22, while the highest-scoring polytechnic scored 0.96. This confirmed the institutional distinction, based on research, between these two types of tertiary-education organisations. Given, though, that there is no objective criterion against which to measure research quality (what one is measuring is a construct invented by those doing the measuring), it is of little statistical validity to draw fine distinctions between scores that are very close to one another. The TEC’s results are not reported with a confidence interval of “plus or minus x points” - partly because of the lack of any underlying objective criterion. The PBRF results do, of course, create a “league table” of universities that is of dubious validity but is nevertheless seized upon by reporters for public consumption as news. Such league tables are a common feature of the global university environment today, and, while they are known to be of limited validity, they become an end in themselves, with real effects that begin to reshape “in their own image” the institutions that they purport to describe (Marginson, forthcoming).
So, the relative rankings of the universities became an object of intense competitiveness between Vice-Chancellors, because of their reputational effects. In 2003, the University of Auckland was ranked first; but, in 2006, first place was taken by the University of Otago, leaving Auckland second. The differences in scores between the three top-scoring universities in 2006 were, however, very narrow: 4.22, 4.19 and 4.10. Nonetheless, Otago’s Vice-Chancellor was quick to capitalise on his university’s score by claiming that it is “New Zealand’s top university.” The University of Auckland had previously been running an advertising campaign calling itself “New Zealand’s number one university,” and it did not desist from doing so, using its ranking in the Times Higher Education Supplement’s survey as alternative “evidence.” An unseemly war of words ensued between the two.
The fact that the PBRF, which is really only a governmental audit designed for funding purposes, gets misused for political and public-relations purposes does little to raise its reputation among the very researchers upon whom it depends and whom it is supposed to “encourage.” This is especially so if some of the claims made about the assessment’s results have no underlying validity, as understood by the kinds of research practices required of university researchers. But, such systems of performance management contain within themselves the potential to become instruments of power, exercised in ways that go well beyond their original stated objectives.
This leads me to an examination of the PBRF’s effects on academic freedom. Although academics in New Zealand are free to criticise government policies and to question received ideas, the PBRF nonetheless breaches the Education Act’s requirement that government and university councils respect academic freedom. Section 161 of the Education Act 1989 defines academic freedom, which includes “the freedom of academic staff and students, within the law, to question and test received wisdom, to put forward new ideas and to state controversial or unpopular opinions . . . [and] to engage in research.” This is mediated by the requirements to abide by high ethical standards and permit public scrutiny, and by “the need for accountability by institutions and the proper use by institutions of resources allocated to them.” The following section of the Act includes the requirement that universities “accept a role as critic and conscience of society.” Section 161 also states that the universities, the Minister and all agencies of government “shall act in all respects so as to give effect to the intention of Parliament as expressed in this section.”
Now, it would be an unfair exaggeration to claim that the PBRF represents a gross or blatant violation of Section 161, as New Zealand’s academics are still free to criticise policies and to challenge orthodox ideas. Moreover, academic freedom does not exist in an ideal form, but is always shaped by, and contested within, local, historical contexts. Hence, peer-group norms, academic-disciplinary standards, competitive career objectives, etc. do shape intellectual expression and, from time to time, limit scientific progress. Nor is academic freedom a unique or distinctive liberty, as it sits alongside other democratic principles, such as freedom of speech, freedom of the press and parliamentary privilege. So, while there may never be an ideal institutional space, protected by the Academy’s walls, that preserves an unconditional freedom of thought, the principle of academic freedom does at least provide a check against deliberate interference or manipulation.
Given, then, that the PBRF is, by political design, an attempt to shape the priorities of university researchers, it breaches the spirit of the Act concerning academic freedom. In the TES, the government explicitly states its intent to shape the teaching and research activities of universities in line with its own policy objectives, as a condition of securing public funding. The PBRF in particular seeks to shape research priorities and productivity - and hence the choices of individual scholars and scientists - in line with those national goals. Although I am not at all sure how this present “research output” may be contributing to the government’s goals, the Minister states that it should contribute to his government’s priorities. Governmental and institutional documents are completely transparent about that. So, while not grossly interfering with my freedom as a scholar, this nevertheless represents a direct policy (indeed, political) intervention into my work. The level of monitoring and reporting of individuals’ research productivity has consequently increased, and it should not be forgotten that surveillance in itself does alter behaviour. Activities that were once considered “free,” in the sense of unconstrained by any fear of political disfavour, become required in order to avoid a new form of disfavour. If one is not seen to produce research, one’s position creates a financial risk to the university, and managerial disfavour will quickly follow.
Now, there is an obligation on a person who accepts the privileges of an academic post to exercise one’s academic freedom actively by way of scholarly inquiry and scientific investigation. Many people in academic positions do not actively engage in research - a fact which was always known, but which has been highlighted and quantified by the PBRF itself. This is, in the author’s opinion, an unjustifiable misuse of the privileges of academic freedom. But the academic freedom that these individuals may have undermined is, in turn, undermined in so far as a system of controls is put in place to “encourage and reward” research - which may be read as, in effect, making the exercise of one’s “freedom” compulsory and regulated, goading the inactive into activity, and “rewarding” others who are active researchers for doing what they were formerly called to do because of its intrinsic rewards and intellectual value. This paradox expresses itself daily among academics for whom the PBRF becomes the reason for doing research, rather than remaining merely a funding mechanism that supports research that was supposedly already worth doing for its own sake or for its social and economic benefits. Individuals now make choices about their research priorities based on the effect those choices could have on their quality scores. So, for example, writing text-books, which do not rate highly in the PBRF definition of “research,” is now likely to be neglected in favour of articles for international journals. In effect, the autonomy of the academic community to determine for itself the balance between different forms of scholarship has been deliberately re-shaped by political means.
The very purpose and spirit of academic freedom is subtly undermined when the academic community begins to perform research for the sake of a governmental funding mechanism and their university’s share thereof. Academic freedom becomes an academic treadmill. What was once a source of intellectual curiosity, or a matter of professorial judgement, comes to be driven by performance anxiety and fiscal incentives.
Politicians and managers have claimed that the PBRF does not interfere with academic freedom - a claim partially justified by the fact that the PBRF assessment makes no critical judgement about the content of one’s publications. Hence, one may still act as “critic and conscience” and yet receive a good quality score. This is a fair point, but a superficial one, as it neglects the more pervasive effects that the PBRF is having on academic customs and on the culture of scholarship. When each paper becomes a coin in the university’s slot machine, pressure comes from above to shape the scholar’s production of “the currency of knowledge,” and academic freedom is quietly forgotten.
These politically and managerially organised efforts to control (“encourage and reward”) the supposedly “free” pursuit of scholarly inquiry and scientific investigation by means of a system of extrinsic incentives (in the form of extra public funding) directly interfere with the very foundations of academic freedom. This is especially so in New Zealand’s system wherein the individual scholar or scientist is the unit of assessment, and his or her score is known to managers. Hence, the New Zealand Government, its agencies and the universities themselves are failing to perform their duty to give effect to the academic freedom requirements of the Education Act when implementing the PBRF.
Furthermore, the paradox of “compulsory academic freedom” becomes more starkly evident when we observe PBRF-related performance criteria being linked to employment and disciplinary procedures. Although the PBRF and its individual quality scores were officially intended only for the purposes of a governmental funding mechanism, they are now being misused by university managers for performance-management and disciplinary purposes. In short, the PBRF framework supplies a tool for bullying academic staff and for exerting greater managerial control over their jobs. At Massey University, for example, there is the usual ineffectual “privacy” statement, which purports to ensure that information in evidence portfolios will be used only for the PBRF assessment and for no other purpose. In fact, there is also a “research capability” policy, based on PBRF grades, which threatens academics with relegation to teaching-only posts if they fail to meet the PBRF’s criteria for being “research-active,” and which advises that employment selection procedures should be based on candidates’ abilities to meet those criteria. Given that managers know the quality scores of individuals anyway, this threat to individual academic employment conditions turns research away from being an expression of academic freedom and creates a “perform or else” imperative. Privacy of personal information is completely compromised.
Hence, one can also observe the commodifying effect of the PBRF. The PBRF inadvertently promotes the perverse perception that the purpose of research is to make money (commodifying research and researchers), rather than institutional income being deployed to produce research for its own value. Each research “output” now acts like a promissory note in a marketplace, creating the confident expectation of augmented institutional income. The active researcher - especially if rated “A” - becomes “hot property” in a competitive employment market; and university research policies are framed in terms of the competitive pursuit of money and the maintenance of financial viability, rather than the pursuit of knowledge and the maintenance of academic freedom. Academic freedom is no longer treated as the premise of the university’s research activities, but instead becomes an obstacle to be navigated in the course of managing “financial risk.” Furthermore, many researchers themselves buy into this commodification by stating that research activities are needed for, or will “look good” within, the PBRF assessment. Many who achieve favourable scores have actively used them to advance their own ambitions. One should not assume that individual academics are merely the “victims” of the new system, as there is a range of individual responses to it, depending, one could argue, on the advantage to be gained from it.
Due to complaints after the 2003 assessment about the costs of complying with the PBRF, the TEC decided that the 2006 assessment was “voluntary” for academics who had previously completed it and been rated in 2003, and for whom no changes were expected. In practice, some universities decided to make it compulsory for all eligible staff to complete an evidence portfolio, for reasons that were not made very clear.
This illustrates two further interesting features of the system: the possibility that the cost of assessment exceeds the value of any improvement in research quality, and the arrogation of the government’s funding audit for internal managerial agendas. On the former point, there is evidence that, once one factors in the costs of producing each PBRF point, the extra funds that the PBRF has so far supplied to the universities may be offset by the cost of performing and complying with the assessment itself. Hazeldine and Kurniawan calculated that the funding reallocation effected by the PBRF over the period 2003 to 2006 “would increase research output by no more than the transaction costs of implementing the new system” (2006, p. 278). This casts further doubt on the political claims that the PBRF led to improved research quality. In so far as the measured improvements might have represented any real underlying improvement in research quality, one needs to account for the costs of producing such an improvement. Satisfactory results might have been achievable by simply giving the universities extra funding for research, without forcing them through a costly assessment at all. Anecdotally, at an individual level, staff were aware that the time they spent complying with PBRF requirements could have been spent producing more research. Universities that unnecessarily made the 2006 round “compulsory” were raising their internal compliance costs to a level not even required by the government. This does seem like a senseless waste of time, unless one allows for the hypothesis that the universities’ top management have come to see the PBRF as their own instrument of internal control, and no longer as simply the government’s audit for research-funding purposes. When the TEC questioned the Vice-Chancellors about making its “voluntary” assessment compulsory, it was advised that this was an internal “employment relations” matter in which the TEC had no right to interfere. It must therefore be asked who “owns” the PBRF: the government or the universities? The enthusiasm of the latter comes about because the PBRF represents a bigger slice of the public funding pie, as well as an opportunity to extend the reach and the effectiveness of managerial control.
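To make the Hazeldine and Kurniawan point concrete, here is a back-of-the-envelope sketch in Python. Every figure in it is hypothetical - the number of staff, the hours per evidence portfolio, the hourly cost and the funding gained are all invented for illustration - but it shows the shape of the comparison: once staff time and administrative overheads are priced in, the net benefit of the exercise can easily shrink to nothing or turn negative.

```python
# Hypothetical back-of-the-envelope comparison of PBRF compliance costs with
# the extra funding gained. All numbers are invented for illustration; only
# the structure of the comparison matters.

eligible_staff = 1_000          # staff required to submit evidence portfolios
hours_per_portfolio = 30        # academic time to prepare one portfolio
cost_per_hour = 60.0            # fully loaded cost of an academic hour (NZ$)
central_overhead = 1_500_000.0  # PBRF office, audits, IT systems, panel costs

compliance_cost = eligible_staff * hours_per_portfolio * cost_per_hour + central_overhead
extra_pbrf_funding = 3_000_000.0  # hypothetical net funding gained over the cycle

print(f"Compliance cost: ${compliance_cost:,.0f}")   # $3,300,000
print(f"Funding gained:  ${extra_pbrf_funding:,.0f}")  # $3,000,000
print(f"Net benefit:     ${extra_pbrf_funding - compliance_cost:,.0f}")  # $-300,000
```

On these invented figures the exercise costs more than it returns; Hazeldine and Kurniawan's published estimate makes essentially the same point for the sector as a whole.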
New Zealand’s PBRF system may be viewed as an attempt to “count the currency of knowledge”: to increase the production of “leading-edge” or “world-class” knowledge, and to convert knowledge production into an auditable, money-like form. By making each publication a token convertible into a portion of the sovereign’s budget - and indeed by making the research-active academic a source of a measurable sum of university income - this system partakes of and advances the commodification of knowledge that is typical of the politics of the so-called “knowledge economy.” In doing so, academic freedom is forgotten and undermined, and new managerial capabilities for the control of academic staff are discovered and put into effect. The main objective now is that something reportable as research should appear, and that it should appear to be “excellent.” The interest in research itself is superficial, if one takes the PBRF too seriously, as the importance and intrinsic value of knowledge is reduced to its mere appearance and its ability to generate cash. But there is no firm evidence that the PBRF is achieving its avowed goals. To use the PBRF’s results as evidence of its own success, as politicians and officials have done, is invalid, as the incentives it creates and the consequent behaviours confound those results. Furthermore, the costs of compliance may actually cancel out any benefits produced.
University staff have been slow to assimilate and react to the effects of this new system, but this author’s impression is that sentiment among the academic community is turning against the PBRF, viewing it as a costly, time-consuming scheme with limited benefit for real research, and yet with many disadvantages, such as the rise in invidious competition and managerial control. The PBRF has succeeded in undermining much of what was left of the traditional “vocation” of scholarly and scientific endeavour, as embodied within the university community.
References

Cullen, M. (2007), ‘Promoting research excellence in New Zealand’, New Zealand Government, 4 May 2007, viewed on 22 May 2007.

Hazeldine, T. and Kurniawan, J. (2006), ‘Impact and Implications of the Performance-based Research Fund Research Quality Assessment Exercise’, in Evaluating the Performance-based Research Fund: Framing the Debate, L. Bakker, J. Boston, L. Campbell and R. Smyth (eds), Institute of Policy Studies, Wellington, pp. 249–284.

Marginson, S. (forthcoming), ‘Global University Rankings’, in Prospects of Higher Education: Globalization, Market Competition, Public Goods and the Future of the University, S. Marginson (ed.), Sense Publishers, Rotterdam.

Office of the Minister for Tertiary Education (2006), Tertiary Education Strategy 2007–2012, Ministry of Education, Wellington.

Tertiary Education Commission (2007a), PBRF Quality Evaluation 2006 Release Summary, Tertiary Education Commission, Wellington.

Tertiary Education Commission (2007b), ‘Performance-based Research Fund Results’, Tertiary Education Commission, 4 May 2007, viewed on 22 May 2007.