Join me, comrades, in the distributed nanoclassification of human intellectual failure!
For too long have readers laboured under the weight of a thousand species of false inference. In too many places, words written or uttered knowingly by persons in public life have been the unsafe scaffolding of arguments which should never have been constructed. Scarcely is it possible to read a single paragraph of a newspaper without encountering an error of fact or inference fatal to whatever point was being advanced.
What is said and written in public life informs and affects the attitudes of voters. Such influence must be used responsibly. However remote the causal chain between utterance and enactment, however small the responsibility borne by those misquoted in the press or constructing op-ed pieces, it exists. Everyone in our society would benefit if public debate were conducted more clearly and accurately: the effect of regulations would be better predicted. This would happen if there were greater incentives for those engaged in public debate to communicate responsibly, if individuals could vindicate their interest in accurate public communication, if it were cheaper to rebut all the rubbish hack journalism out there, if it didn't cost the budget of Malawi to disprove all the lies told on a single episode of the Today programme. "If" ...
The regulatory cost of bad journalism is externalised from participants in public debate to the victims of inefficient regulation. Public debate is subject to insufficient discipline: little beyond the traditional "whatever the advertisers will bear" and letters to the editor. The publication of a reader's rebuttal is entirely at the discretion of the newspaper editor, and, though the norms and costs differ, the same applies for blogs.
When a newspaper publishes information which is false, the fact of this publication, and the sense in which the original information is false, are themselves a piece of information: a fact about an assertion, or at least, an assertion about an assertion. Let us call this "accuracy metadata". This metadata obviously may influence reasoning about the original facts, so its availability will affect the ultimate policy decisions taken in response to the perceived preferences of voters. Currently the accuracy metadata for a newspaper consists in errata and letters to the editor published in the newspaper itself, and critical commentary elsewhere, e.g., in other newspapers and blogs. The scarcity of accuracy metadata about their articles reduces the incentives for accuracy, and permits some columnists to make a living pandering to the hatreds and insecurities of readers with little regard for the truth.
Editorial control over accuracy metadata promotes inaccuracy.
On the web, it is broadly feasible to use search engines to find articles referencing a given article automatically; this includes accuracy metadata hosted on independent websites outside the publisher's editorial control. The cost of assessing the accuracy of a piece of information comprises the cost of locating its accuracy metadata and evaluating it, and that of publishing this metadata and getting it noticed. Technological change has made some of these processes affordable for a much broader section of society.
People might have the ability to pay for accuracy metadata, but have they the willingness to do so? Can anyone be bothered, except where he's seriously affected? Polities permitting free association of private individuals are increasingly common, and have even included France since 1901. Free societies abound in organisations dedicated to scrutinising public authority and contributing to public debate. These are the agents of the vast multitude who can't be bothered to follow public debate in detail: political parties, trades unions, professional associations, watchdogs, NGOs, single-issue campaigns, for-profit lobbying firms, etc. They are either freeloaded upon, or paid for voluntarily (or compulsorily, in the case of some quangos).
Whoever bears the cost of improving public debate, technology has lowered it and will continue to. It is already possible for editorially independent accuracy metadata to be published and discovered. Currently this is deficient in three ways: accuracy metadata tends to cover a whole article or document rather than a given paragraph or sentence; it may not identify what precisely is wrong with a statement (as opposed to merely indicating its truth value); and it is expensive to process, on account of textual inconsistencies. Technology can improve this granularity, specificity and consistency.
Granularity is the hardest to fix: web documents can be referenced as a whole by means of URLs, but the existing mechanism for identifying material within a document relies on its publisher incorporating anchors within his own text, and this is unconventional for electronic copies of newspaper articles. For the time being, a heuristic way of identifying sentences will have to do: the URL, plus a rough guide to the desired paragraph and some of the text of the sentence, sufficient for a computer to be able to identify the sentence referred to and robust against small revisions of the original document.
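As a sketch, such a heuristic reference could be resolved with ordinary fuzzy matching. Everything here is illustrative rather than a proposed standard: the function name, the sample paragraphs, and the choice of Python's difflib are my own assumptions, and fetching the document from its URL is omitted.

```python
from difflib import SequenceMatcher

def locate_sentence(paragraphs, para_hint, fragment):
    """Find the sentence best matching a quoted fragment.

    Tries paragraphs nearest the hinted index first, so the reference
    survives small revisions that shift text up or down the page.
    """
    best_score, best_index, best_sentence = 0.0, None, None
    order = sorted(range(len(paragraphs)), key=lambda i: abs(i - para_hint))
    for i in order:
        for sentence in paragraphs[i].split(". "):
            score = SequenceMatcher(None, fragment, sentence).ratio()
            if score > best_score:
                best_score, best_index, best_sentence = score, i, sentence
    return best_index, best_sentence

# Invented example document, already split into paragraphs.
paragraphs = [
    "The minister spoke at length. Nothing was decided.",
    "Crime fell by ten percent. The minister took the credit.",
]
idx, sent = locate_sentence(paragraphs, 1, "minister took the credit")
# idx == 1; sent == "The minister took the credit."
```

Because matching is by similarity rather than exact position, the reference degrades gracefully: a small edit to the sentence lowers the score but usually still wins against every other sentence in the document.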
Specificity and consistency are two aspects of the same problem, and have a single solution: some means of encouraging people to denote like things alike. There must be a list of ways that a sentence can be wrong, with each having an identifier which will be standard across all the sentences discussed. In practice, what this would look like is the ability to highlight a sentence in the browser, and then select what is wrong with it from a list, with the software publishing this somewhere.
This already exists in a limited fashion on Wikipedia: there is a set of tags which may be added to any article which assert that the preceding text is not from a neutral point of view, or requires a citation, or whatever. Each of these is consistent across Wikipedia: this means that it's theoretically possible to see cheaply which editors are the worst neutrality offenders, incorporators of uncited material, and so on. I learnt from David Stove's article, "What is Wrong With Our Thoughts", that a list of diseases is called a "nosology", and he proposed to establish one for human thought, but averred that it might need billions of entries. Stove was not concerned with User Interface design, but a drop-down list of this length will never do, particularly in the mobile and netbook segment. Crucially, moreover, some ways of being wrong are much more frequent or important than others.
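The cheapness of that aggregation is easy to illustrate: with a shared tag vocabulary, ranking offenders is a one-line count. The article names, editors and tags below are invented:

```python
from collections import Counter

# Hypothetical (article, editor, tag) records using a consistent vocabulary.
taggings = [
    ("ArticleA", "EditorX", "citation-needed"),
    ("ArticleA", "EditorY", "npov"),
    ("ArticleB", "EditorX", "citation-needed"),
    ("ArticleC", "EditorX", "npov"),
]

# Because the tags are standard across articles, one Counter suffices
# to see who incorporates the most uncited material, and so on.
offences = Counter((editor, tag) for _, editor, tag in taggings)
worst = offences.most_common(1)[0]
# worst == (("EditorX", "citation-needed"), 2)
```

With free-text complaints instead of standard tags, the same question would require expensive text analysis; consistency is what makes the statistics nearly free.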
Now our friends on the radical Left, of whom we have none, will immediately protest that this system will entrench (or is it "perpetuate"?) the hegemony of a single arbiter of what is true. In part, this criticism will be motivated by relativism. In its extreme form, relativism is the notion that it is morally permissible to mutilate a girl's clitoris if you're black.
Less polemically, it is the claim that truth is not universal, but depends on local conditions. I propose to surrender entirely to this claim. There are similar claims, such as that what is true is only what it is useful to regard as true. I ask those holding this view whether it is only true that Hitler caused the deaths of millions of Jews insofar as that fact is useful, and who might benefit from such an idea. Whatever the philosophical claims which might motivate one to oppose a single nosology of human thought, one need agree with none of them to acknowledge that the establishment of a single list of ways a sentence can go wrong is a bad idea. Just as there should be a plurality of institutions participating in public debate, let a hundred nosologies bloom, and let a hundred thousand asserters use them.
What I propose is that web-browsing software gain the ability to select text from articles and publish a designation of this text as being defective according to some nosology. So, a Nosology Service Provider (NSP) could describe, collate and publish the types of inferential errors which it is concerned to see rebutted. A citizen, acting on his own or as part of an organisation dedicated to advancing public debate, which we shall call Nosological Assertion Service Providers (NASPs), could then independently publish accuracy metadata asserting that such and such a sentence was false in a particular way, according to his (the NASP's) understanding of the NSP's nosology.
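A minimal sketch of the records involved might look like the following. The field names, and the example NSP and NASP domains, are assumptions of mine for illustration, not part of any published scheme:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class NosologyEntry:
    nsp: str           # who maintains this nosology
    code: str          # stable identifier, standard across all assertions
    description: str

@dataclass(frozen=True)
class AccuracyAssertion:
    nasp: str          # who makes the claim
    url: str           # document containing the offending sentence
    para_hint: int     # rough paragraph position
    fragment: str      # enough text to relocate the sentence heuristically
    entry: NosologyEntry  # the way in which the sentence is said to be wrong

# Invented example: one NSP entry, one NASP assertion referencing it.
entry = NosologyEntry("example-nsp.example", "corr-not-cause",
                      "Correlation does not imply causation")
claim = AccuracyAssertion("some-watchdog.example",
                          "https://example.com/article", 3,
                          "crime fell because the minister", entry)
```

The essential point is the indirection: the assertion carries only the NSP's stable identifier, so any number of NASPs can be tallied against any number of nosologies without coordination.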
This more than accommodates the relativists. It is perfectly possible that reasonable people may disagree in good faith about what is true, so one needs a plurality of NSPs. In mathematics this is completely formalised: for example, one simply acknowledges whether one is assuming the Axiom of Choice to be true or false. Reasonable people are generally well capable of honestly assessing what someone with different beliefs about what is true may think. They are also capable of starting with the same set of beliefs but assessing ambiguous or complex evidence differently, so NASPs may differ in their use of the same NSP's nosology.
The cost of NASPing will probably be far lower than the cost of writing the original sentence. This is an important precondition for what Yochai Benkler in "Coase's Penguin" terms "commons-based peer production". It may soon be possible for anyone viewing the website of some news source to see elsewhere in the window a set of counterassertions about parts of the text, indicating that everyone from the Greens to the Tory Right wants to point out, in relation to some statement by the minister, that "Correlation Does Not Imply Causation". Users could opt not to be presented with text condemned by too broad a selection of NASPs, helping reduce the advertising revenue brought in by atrocious journalism. Statistics on the types of error favoured by particular participants in public debate could be compiled. The more one tried to mislead the public, the less one would be listened to.
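The reader-side filter and the error statistics could be sketched as follows, under the crude assumption that "too broad a selection" simply means a threshold number of distinct NASPs; the sentence identifiers, NASP domains and error codes are all invented:

```python
from collections import Counter

# Hypothetical (sentence_id, nasp, error_code) triples gathered from
# independently published accuracy metadata.
assertions = [
    ("s1", "greens.example", "corr-not-cause"),
    ("s1", "tory-right.example", "corr-not-cause"),
    ("s1", "watchdog.example", "cherry-picking"),
    ("s2", "greens.example", "npov"),
]

def condemned(sentence_id, threshold=2):
    """True if at least `threshold` distinct NASPs object to the sentence."""
    naspcount = len({nasp for sid, nasp, _ in assertions if sid == sentence_id})
    return naspcount >= threshold

# The statistics suggested above: which errors are most often asserted.
by_error = Counter(code for _, _, code in assertions)

# condemned("s1") is True: three distinct NASPs object.
# condemned("s2") is False: only one does.
```

A real filter would want to weight NASPs by the breadth of the political spectrum they span, not merely count them; the threshold here stands in for that judgment.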
I hope that the above Ad Hominem Tu Quoque about Hitler is one day noted as such by some NASP, and we'll finally get to see just how much a NASP knows about English literature.