The Decline and Fall of the British University

I found this at: Here. The email given has bounced, so I am publishing it without permission.
The goal of widening access to education is a noble one, and very much in line with the motivations of the post-war British governments. One way of implementing it would have been to investigate why so few students went to university and, having constructed a careful social analysis, to have increased the percentage of entrants by improving the educational quality of the average school leaver. Of course, that’s the hard and genuine route, and it takes a generation. An easier way is to water down the educational system to a lower standard, and then peg university income to the number of students accepted while reducing the funding per head. In that way universities are given the happy choice of losing money and enforcing redundancies, or watering down their requirements. No prizes for guessing which route the government took and how the universities responded.
It was in 1993 that I experienced these changes as a newly-tenured lecturer. We were summoned to be told that the School of Computer Studies at Leeds was henceforth to adopt a buffet-style form of degree whereby students picked and mixed their degree studies rather than the table d’hôte system we had used till then. This new system was called ‘modularisation’ and it represented the drive towards student choice desired by government.
Immediate casualties were some hard-core traditional CS modules like complexity theory and compiler design. Why, argued students, elect to study some damned hard subject like compiler design when you could study something cool like web design and get better marks? So these old hard-core subjects began to drop off. Even worse, the School (following the logic of the market), having seen that these subjects were not attracting a following, simply dropped them from the curriculum. So future students who were bright enough to study these areas would never get the chance to do so.
After a few years of this system, the results percolated through to my office. I could see them in the lecture hall, but the procession of students who walked into my office and said “Dr Tarver, I need to do a final year project but I can’t do any programming”… well, there were more of them than I can remember, or even want to remember. And the thing was that the School was not in a position to fail these students because, crudely, we needed the money, and if we didn’t take it there were others who would. Hence failing students was frowned upon. By pre-1990 standards, about 20% of the students should have been failed.
However, there are lots of ways round this little problem. One of them is doctoring the marks. Except it’s not called ‘doctoring’, it’s called ‘scaling’, and it’s done by computer. You scale the marks until you get a nice bell-curve distribution of fails and firsts. You can turn a fail into a II(ii) with scaling. Probably you want to be generous, because otherwise students might not elect to study your course next year, and then your course will be shut down and you’ll be teaching Word for Windows. Scaling was universal, and nobody except the external auditors (who were lecturers who did the same thing themselves) got to see anything but the scaled marks.
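To make the mechanism concrete, here is a minimal sketch of what such computerised scaling might look like. It assumes a simple linear transform; the function name and the target figures are purely illustrative, since the essay does not specify the method actually used.

```python
# A minimal, hypothetical sketch of mark 'scaling': linearly stretch the
# raw marks to hit a chosen mean and spread. The essay only says marks
# were scaled by computer; the transform and targets here are assumptions.

def scale_marks(raw_marks, target_mean=58.0, target_spread=12.0):
    """Linearly rescale marks to a chosen mean and standard deviation."""
    n = len(raw_marks)
    mean = sum(raw_marks) / n
    spread = (sum((m - mean) ** 2 for m in raw_marks) / n) ** 0.5 or 1.0
    scaled = (target_mean + (m - mean) * target_spread / spread for m in raw_marks)
    # Clamp to the 0-100 scale and round to whole marks.
    return [max(0, min(100, round(s))) for s in scaled]

raw = [28, 34, 38, 45, 52, 60, 71]  # a 38 is a fail on the raw marks...
print(scale_marks(raw))             # ...but emerges at about 50: a II(ii)
```

The clamp and rounding keep everything on the familiar 0-100 scale, so the doctored marks are indistinguishable in form from the raw ones.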
Graduating computer-illiterate students who had to do a project in computer science was more of a headache. The solution was to give them some anodyne title that they could waffle on or crib off other sources. It was best not to look too closely at these Frankensteinian efforts, because otherwise you would see the stitches where they had lifted it off some text which you were never likely to find short of wiring them to the mains to get the truth. It was, of course, a lie, but the cost of exposing that lie was likely to have ramifications beyond the individual case. Very few lecturers would want to stir such a hornets’ nest, or have the necessary adamantine quality to inflict shame upon a student whose principal failure was to have been allowed to study for a degree for which he had little ability.
After seven years of the new regime, I had the opportunity to compare the class of 1999 with the class of 1992. In 1992 I set a course in Artificial Intelligence requiring students to solve six exercises, including building a Prolog interpreter. In 1999, the six exercises had shrunk to one: a 12-line Prolog program, for which students were allotted eight weeks. A special class was laid on for students to learn this, and many attended, including students who had taken a course incorporating logic programming the previous term. It was a battle to get the students to do this, not least because two senior lecturers criticised the exercise as presenting too much of a challenge to the students. My Brazilian Ph.D. student, who supervised some of these students, told me that the level of attainment of some of our British final-year students was lower than that of first-year Brazilian students.
Parallel with all this ran an enormous paper trail of teaching audits called the Teaching Quality Assessment. These audits were designed to hold lecturers accountable by providing visible proof that they were doing their job in the areas of teaching and (in another review) research. In view of the scenario described, you might well wonder how it was possible for such a calamitous decline in standards to go unremarked. The short answer is that the external auditors, being lecturers themselves, knew full well the pressures we were facing, because they faced the same pressures. They rarely looked beyond the paperwork, and the trick was to give them plenty of it. The important thing was that the paperwork had to be filled out properly and the ostensible measures had to be met. Students of the old Stalinist Russian system will know the techniques: figures record yet another triumphant over-fulfilment of the five-year plan while the peasants drop dead of starvation in the fields.
Teaching was not the only criterion of assessment. Research was another and, from the point of view of getting promotion, more important. Teaching being increasingly dreadful, research was both an escape ladder away from the coal face and a means of securing a raise. The mandarins in charge of education decreed that research was to be assessed, and that meant counting things. Quite what things and how wasn’t too clear, but the general answer was that the more you wrote, the better you were. So lecturers began scribbling with the frenetic intensity of battery hens on overtime, producing paper after paper, challenging increasingly harassed librarians to find the space for them. New journals and conferences blossomed and conference hopping became a means to self-promotion. Little matter if your effort was read only by you and your mates. It was there and it counted.
Today this ideology is totally dominant all over the world, including North America. You can routinely find lecturers with more than a hundred published papers, and you marvel at these paragons of human creativity. These are people, you think, fit to challenge Mozart, who wrote a hundred pieces or more of music. And then you get puzzled that, in this modern world, there should be so many Mozarts – almost one for every department.
The more prosaic truth emerges when you scan the titles of these epics. First, the author rarely appears alone, sharing space with two or three others. Often the collaborators are Ph.D. students who are doing most of the spade work on some low grant in the hope of climbing the greasy pole. Dividing the number of titles by the author’s actual contribution probably reduces those hundred papers to twenty-five. Then, looking at the titles themselves, you’ll see that many bear a striking resemblance to each other. “Adaptive Mesh Analysis” reads one, and “An Adaptive Algorithm for Mesh Analysis” reads another. Dividing the remaining total by the average number of repetitions halves the list again. Mozart disappears before your very eyes.
But the last criterion is often the hardest. Is the paper important? Is it something people will look back on and say, ‘That was a landmark’? Applying this last test requires historical hindsight – not an easy thing. But when it is applied, very often the list of one hundred papers disappears altogether. Placed under the heat of forensic investigation, the list finally evaporates, and what you are left with is the empty set.
And this, really, is no great surprise, because landmark papers in any discipline are few and far between. Mozarts are rare and to be valued, but counterfeit academic Mozarts are common, and a contributory cause of global warming and deforestation. The whole enterprise of counting publications as a means of evaluating research excellence is pernicious and completely absurd. If a 12-year-old were to write ‘I fink that Enid Blyton iz bettern than that Emily Bronte bint cos she has written loads more books’, one could reasonably take the spelling as reflective of the stupidity of the mind that produced the content. What we now have in academia is a situation where intelligent men and women prostitute themselves to an ideal which no intelligent person could believe. In short, they are living a lie.
It was living a lie that finally put an end to my being a professor. One day in 1999 I got up, faced the mirror, and acknowledged that I could not do the job any more. I quit.
Mark Tarver
