Slouching toward the Market: the new Green Paper for Higher Education, Part I
The Government’s Green Paper, ‘Fulfilling our Potential: Teaching Excellence, Social Mobility and Student Choice’, represents the further implementation of proposals for the marketization of higher education set out in the 2011 White Paper, Students at the Heart of the System. Higher education is directed toward economic value (for students, employers, and taxpayers) and toward economic impact for increased productivity and economic growth. These goals are to be facilitated by market competition. In these respects, the Green Paper represents the familiar neo-liberal package of deregulation via markets together with strong central direction from the Department for Business, Innovation and Skills (BIS).
Improving students as consumers
At the centre of the Green Paper are the new plans for the evaluation of teaching in higher education via a Teaching Excellence Framework (TEF) together with plans to increase the participation of students from disadvantaged backgrounds by making access agreements threshold requirements in the TEF. The latter is a welcome emphasis (on which, more below), but overall the proposals are made up of mutually inconsistent objectives. The first is to provide information to students about the nature and quality of teaching at an institution (preferably, course-specific information); the second is to provide information about expected graduate earnings; and the third is to provide employers with information about the skills that can be expected from particular courses.
What is proposed is similar to the evaluation of research in the Research Excellence Framework (REF). Institutions and their subject areas will be evaluated according to Teaching Quality, Learning Environment, and Outcomes (meaning retention, degree outcomes and graduate incomes). Transitional arrangements are proposed, but, in the medium term, what is proposed is a TEF with scores at ‘4 levels’ (this is similar to research assessment exercises where submissions were judged in terms of thresholds and 5 bands). However, as with later iterations of research assessment, it is likely that these bands will be recomposed as rank orders and grade points (not least because the Green Paper also indicates that it wishes to see a more differentiated system of degree results for students, associated with Grade Point Averages alongside the present degree classification into 4 bands). For a summary and diagrammatic representation of the TEF, see Mark Leach’s blog at WonkHE.
Assessment will be via metrics and a peer review process involving teaching and learning experts, student representatives, and members of professional associations, and will be conducted on a five-year basis, with possibilities for interim assessments. Once again, this is similar to the REF process and hardly seems to be ‘light touch’. It is likely to produce internal ‘mirroring’ and ‘dry runs’ within institutions which, according to a later chapter in the Green Paper, have contributed to the rising administrative burden of the REF.
The incentive for institutions is that success in the TEF will allow them to raise their fees in line with inflation. As Andrew McGettigan has pointed out, this isn’t much of an incentive at current and expected inflation rates (and in the light of the administrative costs of compliance). However, once the architecture is in place, fees may be allowed to increase beyond the flagged £9k-plus-inflation level. Indeed, lobbyists from the Russell Group are at pains to point out that allowing institutions to set their own fees is more consistent with a competitive market and was what the Browne Report initially proposed. After all, provision of data from HMRC on graduate earnings facilitates the linking of fees to future earnings, which only makes sense, from a neo-liberal perspective, if fees are allowed to rise beyond the cap for some courses (for example, in line with fees charged for some Masters courses). For further discussion, see Andrew McGettigan’s account of ‘The Treasury View of Higher Education’.
The incentive for students is even less clear. In the name of improving the means by which students can judge value for money, the Government has proposed a mechanism that will increase the cost of degrees at institutions which are successful in the TEF and will devalue degrees at those which are less successful. After all, although the Green Paper claims that the proposals are ‘student-centred’, the purpose is also to provide employers with information about the relative value of degrees. The Green Paper states, “the absence of information about the quality of courses, subjects covered and skills gained makes it difficult for employers to identify and recruit graduates with the right level of skills and harder for providers to know how to develop and improve their courses” (page 19). This assumes a market process of ‘convergence’ between courses and employer requirements, but students face the problem of ‘investing’ in a course that may receive lower scores in a TEF that takes place after they have made their choice.
The Green Paper is unclear about the role of the NSS, but suggests that it may be used in the short run, with the Office for National Statistics given the responsibility for developing more satisfactory metrics. Given that students are seeking to make comparisons across courses and institutions, any data must be able to bear the weight of comparison. Yet, evaluations of the NSS are unequivocal. According to the Report for HEFCE on Enhancing and Developing the Student Survey, “The design of the NSS means that there are limitations on its use for comparative purposes … In particular, its validity in comparing results from different subject areas is very restricted, as is its use in drawing conclusions about different aspects of the student experience. One issue to be borne in mind is that, in most cases, the differences between whole institutions are so small as to be statistically and practically insignificant” (Executive Summary, point 7).
Cheng and Marsh (‘National Student Survey: are differences between universities and courses reliable and meaningful?‘) reach a similar conclusion, “at the university level, there are relatively few universities that differ significantly from the mean across all universities and, at the course level, there is even a smaller portion of differences that are statistically significant. This suggests the inappropriateness of these ratings for the construction of league tables” (2010: 708). In other words, the differences between courses’ mean ratings are in most cases smaller than the differences that would arise from random variation in individual students’ assessments of the same course, given the number of students rating each course.
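The scale of this chance variation can be illustrated with a short simulation (a hypothetical sketch: the rating distribution, the cohort size of 40 students, and the number of repetitions are illustrative assumptions, not actual NSS parameters):

```python
import random
import statistics

# Two courses with IDENTICAL underlying teaching quality, each rated
# by 40 students on a 1-5 satisfaction scale (mostly 4s and 5s, as is
# typical for satisfaction surveys).
random.seed(1)

def cohort_mean(n=40):
    # One cohort's mean rating, with each student's score drawn from
    # the same underlying distribution.
    ratings = random.choices([1, 2, 3, 4, 5],
                             weights=[2, 3, 10, 45, 40], k=n)
    return statistics.mean(ratings)

# Repeat the "survey" many times and record the gap between the two
# identical courses' mean scores that arises purely by chance.
gaps = [abs(cohort_mean() - cohort_mean()) for _ in range(5000)]

print(f"median chance gap between identical courses: "
      f"{statistics.median(gaps):.2f}")
print(f"95th percentile of chance gaps: "
      f"{sorted(gaps)[int(0.95 * len(gaps))]:.2f}")
```

On these assumptions, gaps of a tenth of a point or more between course means routinely arise with no difference at all in teaching quality, which is the sense in which small observed differences cannot bear the weight of a league table.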
None of this, of course, prevents universities from misusing NSS data, though perhaps the most egregious example is the Russell Group’s statement, made at every available opportunity, including in its comment on the Green Paper, that, “89% of students at Russell Group universities are satisfied with the teaching on their course, compared to a sector-wide average of 87%.”
The problem is also that the population of students on different courses and at different universities is not the same – different gender mix, different mix of social backgrounds, different proportions of ethnic minorities and so on. Differences in scores may simply reflect differences in the background characteristics of their students. To some extent this is recognised in the Green Paper in both its concern to increase participation by students from disadvantaged backgrounds and its concern with the comparatively poor performance of some BME students at universities. The Green Paper is anxious that universities should be rewarded for improvement in these areas, but improvement may also depress scores on other metrics; just as improvements may depend upon factors outside an institution’s control and may be affected by disincentives within a fee-based system of higher education itself.
Nor are the objectives of widening participation and improving service served by the increase in the number of alternative providers (primarily, but not exclusively, for-profit providers) and the stratification of the higher education system that is the Green Paper’s aim. Students from disadvantaged backgrounds are less able to be mobile (not least because they reduce living costs by remaining at home, in a context where maintenance grants have been removed and maintenance loans are insufficient). Indeed, the Green Paper suggests that alternative providers are particularly well adapted to providing for students from disadvantaged backgrounds. Yet the Harkin Report for the US Senate identified the ‘sub-prime’ education offered for sale, primarily to students from low-income and disadvantaged backgrounds.