Productivity Shearing in Voluntary Organisations


There's a recurring problem I've observed in voluntary organisations: potential volunteers have different levels of ability and experience, across different areas. There is a key area that affects the cohesiveness and effectiveness of a voluntary organisation: if volunteers have to interact with each other, how well do they do so?

The sorts of interactions I'm concerned with are really things like these:

  • organising when people are available for meetings
  • selling tickets for events
  • circulating news
  • tracking ongoing activities

For some things, it doesn't matter whether everyone uses the same system, but for others it does:

I do some litter-picking around my neighbourhood occasionally. There is a local group I'm part of which organises litter-picking, but which is largely focused on other issues; anyone can just turn up, group or no group, with a glove and a shopping bag and pick up some litter; you largely do not need to worry about how good anyone else is at litter picking: your gloves and bags don't need to be compatible with their gloves and bags.

That group also collects and shares information about problems around the neighbourhood. This is where the trouble starts: we could store the information in our heads, or on paper, or on a specific computer, or on a networked computer. Individuals will have their own preferences, and some of these preferences can be very strong, for two reasons:

  1. some systems are more familiar than others, and it costs time to become familiar;
  2. some systems are much more efficient than others, and it costs less time to use the more efficient systems, so long as you're familiar with them.

There is therefore a minimum and maximum level of efficiency that each individual is prepared to work at. Some people, largely down to personality traits, are willing to put in a lot of time to acquire familiarity with new systems which might prove more efficient, or which at least seem more effective for co-operating with their colleagues. Others are less willing.

In an organisation where people are getting paid for what they do, the organisation can simply use its resources to train people up on the systems that it wants, and mandate their use. In a voluntary organisation, much more leadership, persuasion and strategy is required.

Now this only matters where the systems used by one volunteer have any impact on the systems used by another volunteer, but that is a very common occurrence.

Therefore, there will be potential tensions about the range of levels of efficiency that individuals are prepared to work with in a situation where:

  • people want to co-operate,
  • but are not being paid to do so,
  • and where there is a need to use compatible systems.

Some individuals' minimum or maximum levels of efficiency won't even overlap. That is to say, there will be no system which everyone is happy using for booking events, because some only want to use Eventbrite and some only want to use cheques and postage stamps.

This is the "shearing" effect: the cohesion of the group is undermined because the requirement for efficient use of technology affects its members differently depending on how comfortable they are with particular systems. If you push people too far outside their comfort zone, they'll lose interest and volunteer for a different group instead.

The choice, then, is to handicap technologically proficient volunteers by making them use systems that may be orders of magnitude less efficient than what they use in other areas of their lives, or encourage a possibly painful learning process to get other volunteers "up to speed", or some combination of the two. One unexpected barrier may be that people like inefficiency because it gives them something to do, and if that means more productive individuals stop volunteering, so be it. None of this is happening in a vacuum: there are plenty of other things people could be doing with their time, and the rest of the world will on balance be getting more productive as time goes by.

There is, then, a particular danger for organisations that rely on volunteer labour. Persuade your volunteers of the strategic importance of investing time in learning the most productive collaboration systems, or perish!

I loved email. It's dead.


We should start thinking of email addresses only as attack vectors.

An email address is a piece of information which, once disclosed, allows someone or something to communicate with you forever. The consequence of this communication is that you may get interrupted by a notification, and bear the cost of storing, reading, and/or deleting the message; the message also increases the cost of searching through all your other messages.

These costs are small. The number of emails you receive, however, is very large. I have received well over a hundred thousand emails so far. Over time it adds up.

There are many-to-many communication systems which are indexed on other kinds of addresses (such as your phone number, postal address, your Facebook identity, your cryptographic public key, and so on). Email is like a phone number or a postal address: it has the property that "knowledge-is-permission", i.e., if you know the address, you can send data to it. Unlike other knowledge-is-permission addressing systems, or "capabilities" to abuse the computer science lingo, sending an email is almost costless, much less than the smallest unit of any normal currency.

The problem is that sharing your email address is a transitive operation: you are granting the recipient the capability to share the address with whomever he/she/it chooses. It is of course much worse than that: the address might also be obtained, accidentally or maliciously, by a third party with whom you have no relationship, through error, the recipient's bankruptcy, a data breach, or an outright sale. There are some laws against sharing "personal data" without permission, but they're not remotely sufficient and probably not the right tool for the job anyway.

There is a commercial incentive to obtain email addresses from customers. They improve price discrimination, which means that customers collectively have to pay more (though some may pay less). Therefore companies try to force customers to hand over email addresses. You are required to divulge an email address to obtain the product; this is useful because it helps keep you informed as the product is delivered. But then a few weeks or months later, you start getting adverts from the company.

In the time it took me to write the previous paragraph, an advert arrived by email from a company from which I bought some blinds for my flat in December.

But in the time it took me to write that paragraph, I blocked all future emails from them.

What I have done is establish a system of individual addresses, one for each company and organisation I deal with. When I signed up with Blinds2Go, they were given their own dedicated address. When the adverts started arriving, all I had to type was:

address-tool --retire mk270-blinds

and all future email from them is prevented with a curt "bounce" message, and I never receive a notification or store the message.

Effectively, this amounts to having one email address per interlocutor, with revocation indexed on sender email address.
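The scheme can be sketched in C; everything here (the function names, the fixed-size table) is hypothetical, a minimal illustration of the idea rather than the real address-tool:

```c
/* Sketch: one address per interlocutor, revocation indexed on the
 * local part of the address. Hypothetical names throughout. */
#include <stdio.h>
#include <string.h>

#define MAX_ADDRESSES 256

struct address {
    char local_part[64];   /* e.g. "mk270-blinds" */
    int  retired;          /* nonzero once revoked */
};

static struct address book[MAX_ADDRESSES];
static int n_addresses;

/* Issue a fresh address for a new interlocutor. */
int address_issue(const char *local_part)
{
    if (n_addresses >= MAX_ADDRESSES)
        return -1;
    snprintf(book[n_addresses].local_part,
             sizeof book[n_addresses].local_part, "%s", local_part);
    book[n_addresses].retired = 0;
    n_addresses++;
    return 0;
}

/* Retire an address: all future mail to it bounces. */
int address_retire(const char *local_part)
{
    for (int i = 0; i < n_addresses; i++) {
        if (strcmp(book[i].local_part, local_part) == 0) {
            book[i].retired = 1;
            return 0;
        }
    }
    return -1;  /* unknown address */
}

/* The mail server consults this for each incoming message. */
int address_should_bounce(const char *local_part)
{
    for (int i = 0; i < n_addresses; i++)
        if (strcmp(book[i].local_part, local_part) == 0)
            return book[i].retired;
    return 1;   /* never issued: bounce */
}
```

The key design point is that revocation is per-relationship: retiring one address cuts off one interlocutor (and anyone it leaked the address to) without affecting any other correspondence.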

What we actually need is a distributed store-and-forward messaging system where addresses are not transitive: instead, one would receive an invitation to communicate which could only be used by the recipient and not by third parties. This is vaguely similar to the PGP web of trust, Facebook messages between friends, and so on, but is probably most closely represented by the Scuttlebutt system.

To be continued ...

Experimenting with CompCert


A few weeks ago I experimented with CompCert, a C compiler from INRIA, written largely in Coq, with chunks in OCaml; this allows the Coq parts of CompCert to be formally verified (see below for more on this).

Now I have no need of a guarantee that my compiler is bug-free, but to the extent that translating my code into the subset of C supported by CompCert reduces the bug count rather than increasing it, it's a win. I'm basically using CompCert as a lint tool, but it's fun and instructive anyway. The real-world scenario which makes any of this interesting is therefore if you have a C codebase, suspect a bug in your compiler, and want to know how hard it would be to maintain that codebase such that it compiled with a compiler believed to be bug-free.

For many years I have maintained a codebase of 40K lines of fairly odd C, that implements a computer game I used to run in the 1990s, and which predates modern conveniences that might have been used, such as sqlite, pcre, libevent, reliable IP stacks on NeXTSTEP, ANSI C, free C++ compilers, free Erlang, etc, etc. The code is also unusual in shunning the use of struct, malloc and pointer arithmetic. For almost the last twenty years I've kept it up-to-date with the C toolchains on a number of OSes, as a way of keeping an eye on what the cool kids are breaking.


Firstly, the codebase needs to be able to cope with multiple compilers; gcc and LLVM's clang are close to drop-in replacements for each other from the perspective of the Makefile. Not so CompCert: -Wall and -Werror are not accepted as options, as they're effectively on by default. CompCert isn't going to want to know about any code that doesn't pass gcc -Wall -Werror, but there are a few things clang thinks it's OK to warn you about that CompCert is cool with, which feels like clang wasting my time. Getting the build system and revision control happy about parameterisable compiler options has to happen first.

I was forced to change all the remaining instances of conflation of integer widths. Anyone who's done arithmetic in OCaml will recognise this as one of the house microfascisms of INRIA, but it's a deep issue: a lot of corner cases depend on your installation of the header files and libraries and so on. In my case, function prototypes are culled into a .h file automatically with cproto, which by default changes the width of integers in K&R-style C functions:

void my_function(i)
short i;

is output as

void my_function(int i);

which gcc and LLVM tolerate, but CompCert doesn't. There were a couple of other legitimate "Well, Don't Do That Then" moments that I won't tax you with. Effectively one is forced to get all the prototypes and headers and includes exactly right. This showed up a bug: a variable which was supposed to be declared extern wasn't, and so was allocated separately from the global it was supposed to represent.

The more formal treatment of integer widths also meant fixing a lot of sprintf format strings.
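For illustration, the kind of fix involved is matching each format specifier to the actual width of its argument, e.g. using the PRId64 macro from <inttypes.h> for 64-bit values. This is a generic sketch, not code from the game:

```c
/* Width-correct format specifiers: %ld for long, PRId64 for int64_t.
 * gcc's defaults let mismatches slide further than CompCert does. */
#include <stdio.h>
#include <string.h>
#include <inttypes.h>

/* Render a long and an int64_t into a caller-supplied buffer. */
int format_example(char *buf, size_t len, long l, int64_t q)
{
    return snprintf(buf, len, "%ld %" PRId64, l, q);
}
```

Using the <inttypes.h> macros rather than guessing that int64_t is long (or long long) is what makes the format strings portable across platforms as well as acceptable to a stricter compiler.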

The next thing I had to fix was the idiom

char *messages[] = { "...", "...", "...", NULL };
int x = sizeof(messages) / ...;

CompCert insists on the length of messages[] being explicitly specified, which means this technique isn't allowed.
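A sketch of the workaround, assuming nothing about the real codebase: give the array an explicit length via an enum constant, or count up to the NULL sentinel at runtime instead of relying on sizeof:

```c
/* Explicit length plus a NULL sentinel; the count can then be taken
 * either from the enum constant or by walking to the sentinel. */
#include <stddef.h>

enum { N_MESSAGES = 3 };

static const char *messages[N_MESSAGES + 1] = {
    "first", "second", "third", NULL
};

/* Count entries up to the NULL sentinel instead of using sizeof. */
size_t count_messages(const char *const *msgs)
{
    size_t n = 0;
    while (msgs[n] != NULL)
        n++;
    return n;
}
```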

The harder stuff was signal() and stdarg; basically, CompCert supports an anaemic subset of C, and doesn't allow stdarg, though it provides the sprintf() clique of functions. Since wrapping sprintf() is about the only thing varargs is used for in C, this turns out not to be a problem, though I had originally bet that parts of the codebase would fall outside the CompCert C dialect and need to be shunted into libraries.
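The usual shape of such a wrapper, sketched generically (the function name is hypothetical; in a CompCert build, a definition like this would presumably have to live in a library compiled by another compiler, since it uses stdarg):

```c
/* A thin variadic wrapper that forwards its arguments to vsnprintf —
 * the one common legitimate use of stdarg in application C code. */
#include <stdarg.h>
#include <stdio.h>
#include <string.h>

/* Hypothetical helper: format into a caller-supplied buffer. */
int game_format(char *buf, size_t len, const char *fmt, ...)
{
    va_list ap;
    int n;

    va_start(ap, fmt);
    n = vsnprintf(buf, len, fmt, ap);
    va_end(ap);
    return n;
}
```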

My own adventure in CompCert land basically amounted to learning new stylistic restrictions in C. Reading around what people have been doing with CompCert, I came across a few interesting articles, and from this chap I learnt about concolic testing, which is another technique I have no use for but am glad to have spent time learning about.

Aliens don't crash land


Today is the anniversary of the Roswell Incident, which is the subject of various conspiracy theories. I don't like conspiracy theories; that way of thinking always tends to involve being selective about whether particular things are plausible.

There's no reason to suppose that life necessarily exists outside our solar system, or that it is impossible for life to exist elsewhere, but that is not the point: we are invited to believe that intelligent creatures from outside our solar system crash landed in Roswell this day sixty-six years ago.

Is it plausible that aliens could master interstellar travel, but not the ability to land without crashing?

Science is not Maths


Scientific and mathematical reasoning are much more different from each other than people intuitively realise: they differ in what sort of ideas they start with, what consequence erroneous reasoning has, and the structure of the network of ideas they create.

Science starts with a very small set of assumptions, basically that the whole of the observable universe behaves in a regular, mechanical fashion. New scientific knowledge is created when we make a set of observations and infer a general rule to which these observations conform. The inputs are mostly facts about the observable world around us, and existing scientific theories. The direction of inference is inductive, from the specific to the general. This means that if enough of the specific observations are for some reason wrong, then the generalisation inferred from them could be wrong too.

That is not how maths works: you start with a set of assumptions and without any observations. Indeed: there is no observation you can make of the physical universe which can help you prove or disprove a mathematical theory. Maths starts and ends with objects and concepts which are not physical things we can observe. There is, however, a much more important distinction: mathematical reasoning generally proceeds from the general to the specific, and is like a chain, rather than a rope: if a single assumption or inference is wrong, everything which depends on it is probably wrong too.

People who let their political views get in the way of their ability to think often exult when a theory they dislike is disproven. We don't know what level of academic fraud and incompetence exists, and occasionally the mainstream media covers mistaken results in, e.g., climatology or economics. These disciplines, whatever their status within the sciences, are sciences, not branches of maths; yet highly intelligent people seem to believe that some disproven major claim in one of these disciplines invalidates large swathes of results elsewhere in them. That's not how non-mathematical knowledge is structured: it's a rope with many plies, not a chain as weak as its weakest link.

(I'd include theological reasoning alongside maths, as starting from a set of assumptions (e.g., the contents of the Bible), and legal reasoning, at least in the common law and Sharia, as being more like science)

Software continues to overwhelm judiciary and legislators


Anyone watching the Microsoft anti-trust or SCO litigation would have concluded that competition policy and the judicial system are so slow as to be ineffectual. Now, they are slow because they have to be; to be faster would involve undermining the legitimate interests of citizens, communities and companies. In effect, the universal Turing machine and the Internet have combined such that the existing mechanisms for vindicating people's legitimate interests no longer operate.

Consider this case. That it has even got so far in the system shows that the laity (that is, people outside the new priestly elite who understand how to program a computer) are just not equipped to do their own jobs any more, as the specialist subject matter on which they have to make and interpret rules is too far outside their understanding.

The legal system itself is Turing complete, and is harnessed to the commercial interests of the sort of miscreants mentioned in the article. It's not remotely clear that it can operate in its current form in the current technology environment.

Let's re-criminalise bicycle theft in London


Once again, my bike has been stolen. That is the fifth bike in ten years; I've spent more than £2000 on bikes over that time. It'd be cheaper to rent a bike from the sodding criminals, the way I top up my Oyster card (except Transport for London purports to be running a legitimate business).

The bicycle is not a viable mode of transport in London currently: you need to live somewhere where you can keep it indoors (effectively impossible if you live in a tiny flat up four flights of narrow stairs), and you can only cycle to places where you can also lock it indoors. If you don't live somewhere you can lock it up, and use the bike for any sort of casual transport rather than commuting to a single location, it'll get nicked. Effectively, you have to have a car, or use public transport. I simply will not accept this.

There is an inexhaustible supply of bike thieves and bikes: small-scale organised crime and junkies will always be with us, but the price per kilogramme of a bike is so low that large-scale organised criminals are not interested. The problem is that the police, the courts and the Crown Prosecution Service have decriminalised bike theft: unbelievably, thieves are let off with a caution, and the courts apparently are not handing down custodial sentences even when someone is caught with twenty stolen bikes, so understandably the CPS doesn't bother. The real problem with the CPS is that they don't economically model the effects of their enforcement decisions: they ask "can I win this case?", not "what level of prosecutions is necessary to prevent the country slowly descending into anarchy?".

What can be done?

We need to crowdfund private criminal prosecutions of bike thieves. If the thieves know that there's a large group of hostile cyclists literally choosing cases to prosecute at random, and automatically demanding custodial sentences, the ones who are not drug addicts will diversify into other activities.

We need much better datasets about bicycle theft. People do not report thefts to the police because it is a waste of time; that means the data is not captured at all: private initiative can at least capture some of this data, even without a view to enforcement. The datasets will allow serious apps (like the tongue-in-cheek iSteal) to help cyclists minimise the incidence of crime.

Cycling campaigns need to adopt a much more critical attitude towards the police. Currently, bicycles in London have been de facto removed from the system of private property. This is arguably a violation of the ECHR, but don't expect Shami Chakrabarti to mount the barricades over it any time soon. Basically they've nationalised our bikes and given them away to the bad guys; whatever will they think of next? Oh yeah, our medical records.

The police's discretion to caution bike thieves needs to be revoked; I think this can be done by making theft of bikes chargeable only by indictment. I used to object to the instrumentalisation of the criminal law, and I advance this proposal as an example of what principled people start to think once we head down that slippery slope. If "first-time" offenders can get off with a caution, that doubles the number of bikes at risk.

It may be viable to establish commercial bicycle-theft rapid response and investigation services.

Whatever the case, it is not acceptable in a democracy for a small group of criminals to steal a hundred thousand bikes a year and effectively deny a large group of citizens a healthy and environmentally sound lifestyle choice.

Do legislators still know enough to reform legislatures?


In yesterday's Irish Independent on Sunday there's an article by Senator John Crown, member in the university interest of Seanad Éireann, the upper house of the parliament of the Republic of Ireland, which currently faces abolition. The Seanad comprises several groups of individuals, most of these groups selected by various appointment panels representing agriculture, industry and so on; Sen Crown's proposal is to extend the provisions relating to the only elected one of these groups to embrace most of the other groups, while remaining within the various harder-to-change constitutional rules establishing the general setup (like the existence of these groups in the first place).

His idea is to replace the appointment panels with normal elections, and each elector gets to choose which of the panels' senators to vote for (under multi-member STV):

Every adult citizen will be able to vote in any "panel" constituency, but only in one of them.

This is a good example of why legislators should leave reform of the legislature to the experts, by which I mean me and my friends. It turns out that the idea of letting people choose their own constituency for upper-house elections has been considered and rejected before: if there's no statistical difference between the constituencies, you get the same result in each of them. Assign upper-house constituencies at random and the same party (under FPTP/AV), or the same proportions of parties (under STV), wins in every constituency; letting people choose their constituency, by contrast, opens the system up to being gamed by the political parties.
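The statistical point can be illustrated with a toy simulation; the assumptions here are all mine (three parties with fixed nationwide support, voters assigned to seats at random, plain first-past-the-post), and the numbers are illustrative only:

```c
/* Toy model: voters with fixed party preferences are assigned to
 * constituencies at random; under FPTP every constituency then
 * returns the same winner with overwhelming probability. */
#include <stdlib.h>

#define N_PARTIES 3
#define N_SEATS 5
#define VOTERS_PER_SEAT 10000

/* Hypothetical party support: 45% / 35% / 20% of the electorate. */
static const int support[N_PARTIES] = { 45, 35, 20 };

/* Draw one voter's party, following the support distribution. */
static int pick_party(void)
{
    int r = rand() % 100, acc = 0;
    for (int p = 0; p < N_PARTIES; p++) {
        acc += support[p];
        if (r < acc)
            return p;
    }
    return N_PARTIES - 1;
}

/* Returns 1 if every randomly drawn constituency elects the same party. */
int same_winner_everywhere(unsigned seed)
{
    srand(seed);
    int first_winner = -1;
    for (int s = 0; s < N_SEATS; s++) {
        int votes[N_PARTIES] = { 0 };
        for (int v = 0; v < VOTERS_PER_SEAT; v++)
            votes[pick_party()]++;
        int winner = 0;
        for (int p = 1; p < N_PARTIES; p++)
            if (votes[p] > votes[winner])
                winner = p;
        if (first_winner < 0)
            first_winner = winner;
        else if (winner != first_winner)
            return 0;
    }
    return 1;
}
```

With ten thousand voters per seat, the sampling noise in each constituency is around fifty votes against a thousand-vote gap between the leading parties, so random assignment reliably reproduces the national result everywhere.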

Now I think the Irish Senate's composition is a dreadful thing, because it faithfully implements a formally corporatist approach to political and social organisation. This idea is commended in the papal encyclical Quadragesimo Anno (which is not a reason in itself to oppose something - they might have had a stopped clock moment and recommended a good thing for bad reasons). This kind of reasoning, the idea that we should support or oppose a constitutional feature because of who promoted it and why, decades after the fact, was much in evidence at Robin Archer's talk at the Menzies Centre last week (which was very interesting but somewhat wrong, and which I shall blog about tomorrow).

I confine myself to noting that if these upper houses are supposed to have some sort of expertise unavailable to democratic bodies, how many more software developers should be in them?

Intellectual debt


Software projects can accumulate technical debt: the work you need to do to fix the work you've already done.

I think it's possible to accumulate "intellectual debt". Thoughts and ideas that you've had, worked on, developed, talked about, but have not written up and published. You can have an idea, but until you've tried to write it up properly such that someone else could read and criticise it, you can't be sure that it actually makes sense. Of course, there can be a mistake in your write-up, but the process of writing up will force you to confront a lot of the potential problems with any idea.

Having huge amounts of intellectual debt means that you're sitting on a bunch of ideas which may not be correct, and that no-one has proper access to thoughts you probably wanted to share. Ideas are composable: one can depend on another, and if your unpublished ideas depend on a vast chain of your other unpublished ideas, you could be compounding your mistakes. Additionally, you could be rendering your thought too far from the mainstream: if you're right but radically different because people haven't assimilated your earlier ideas, considered and criticised them, then your bigger ideas, composed from the earlier ones, will be harder to promote.

The Internet has fallen apart


The internet used to be a social network.

You used to be able to email people you didn't know: you'd see an article about a topic in which you have some expertise, and you might want to email its author.

Basically, you can no longer do this. You might be able to leave a comment on the article, but that is no good if you want to write in private, and in any case it nowadays involves signing up with some awful third-party identity integration service like Disqus or Wordpress or Gravatar or whatever; your message can then get lost in the fray of utter bilge in the comments section, whereas it might have been read and led to a discussion had it gone by email.

Your other option is sending a message on Twitter, which has all the same problems as the above, but limited to 140 characters.

We've certainly lost some of the original benefit of the Internet as a facilitator of discussion, at least between strangers.