What is the point of academic journals? The main one, surely, is to disseminate new findings and ideas, but this doesn’t go far in explaining the current publications set-up. Journal articles loom large in government monitoring exercises like the Research Excellence Framework, a Standard & Poor’s-style credit rating for academic research. REF figures shape departments’ public research funding and individual researchers’ career prospects.

But the ghost of G.E. Moore haunts the exercise: quality can’t be boiled down to component merits that are then tick-boxed into a ‘metric’. Most academics have macabre tales to tell of their treatment at the hands of journals, whose scrutiny methods vary wildly. Some well-considered organs are fiefdoms run to the fiat of the founding editor. Others’ byzantine vetting methods make Jarndyce v. Jarndyce seem a beacon of procedural clarity. Referees, charged with deciding whether, as they say, a submission fills a much-needed gap, play a key role – subject to the vagaries of prejudice, available time, mood etc. Often they know little about the field or, precisely because they’ve published in it, bomb hapless authors with their données. The result – abetted by the blind refereeing system, a zone of unaccountable power – alloys conservatism with arbitrariness.

The problem, basically, is deciding whether a paper’s any good. Why not just read it? Because verdicts vary with the reader’s prejudices, available time, mood etc. So in practice the REF relies on proxy indicators of quality: a paper is more likely to be any good if it appears in a flagship periodical. It’s tempting to say that being in a stable doesn’t make a donkey a racehorse. But the analogy has its limits: a randomly picked occupant of a stable is at least more likely to be a horse than one picked elsewhere. How far along the quality axis the probability distribution peaks might be taken as a rough index of journal quality. But that just shifts the onus of opinion from the paper to its host journal, and yields only the equivalent of an expected value for the journal’s contents rather than a concrete measure of the paper itself.

Another method involves gauging a paper’s ‘impact’ – a factor the REF explicitly distinguishes from quality. Here the quantity measured is how influential a paper is – cashed out, say, by mentions of it in other publications. This too has obvious problems. I’ve thought of launching a Journal of Comparative Balls, whose articles’ sole rationale would be to put up Aunt Sallies that invite rebuttal, and so harvest mentions elsewhere; ideally, the rebuttals would in turn be sufficiently asinine to reap further mentions via counter-rebuttals, and so on. The same goes for online methods that gauge quality by such metrics as the number of downloads a paper gets. It’s easy to envisage download consortia springing up to massage the numbers.

Compare a genuinely quantifiable property like celebrity, where frequency of mention does look like a credible measure. But that’s because the measure isn’t a proxy – frequency of mention is just what celebrity is. The quest to quantify quality epitomises academia’s current palsy. Under pressure to devise robust quality indices, it has mired itself in mensuration ju-ju, from endless league tables to software like Turnitin, which combs students’ essays and outputs a similarity score meant to flag plagiarism. One could poll academics to rank journals; but, apart from its reliance on democratic principles largely unknown in academe, voters would be apt to vote for journals that had published their own papers.

Instead of trying to bottle quality like spa water, why not just let everything slosh around in the paddling-pool of the net? All internauts use heuristics – blog tips, for example – to dodge cyberland’s crashpads for the vacationing mind. Journal publication is already bypassed through personal-site uploads and repository sites. Hiring and research-assessment committees, instead of relying on heuristics such as which journals host candidates’ papers, would have to read the work and reach a verdict on it. Public research monies could be advanced on a tendering basis, as they already partly are, rather than as a block grant. Similarly, consortia applying for capital grants could still attach publications as proof of excellence.

This won’t happen. There’s no getting away from it: some people – many of them in positions of power – are rankers. Government, big publishers and campus bosses like ranking, as it regiments the ranked and keeps a lock on power. For the masters of destiny, any number is better than none.