Caroline Fiennes is Director of Giving Evidence, which encourages and enables giving based on sound evidence. She advises people and companies on giving well to charities and is one of the few people whose work has featured in the scientific journal Nature and OK! magazine.
Judging by my inbox, there seems to be huge demand for advice about which charities to support. As donors have followed the turmoil around Oxfam, Save the Children, Kids Company and others before them, they want decent independent analysis along the lines of the data that rating agencies provide for bonds or Which? does for fridges.
There isn’t any.
It doesn’t exist, because of costs, incentives and the genuine challenges involved in nailing it down.
Establishing whether a charity is effective means establishing whether its work results in some useful change in the world which would not have happened otherwise. That relies on two factors. First, the quality of its intervention or programme; and second, how well that intervention is implemented. Both need to be good.
For example, we know that insecticide-treated bed nets are great at preventing malaria, but if a charity buys nets really expensively, or doesn’t prevent them from being stolen from its trucks, delivers nets to places that don’t have malaria — or abuses children along the way — then it will not be terribly effective. There’s a further question about whether a particular charity is better than its peers.
Assessing an intervention is hard. It requires distinguishing the effect of the programme itself from that of other factors, such as the passage of time, random chance, general changes in society, and atypical characteristics in the people who seek the charity’s help.
Sophisticated research methods are necessary. The interventions run by many charities — and indeed by other entities such as governments — simply haven’t been evaluated well enough to pinpoint their effects.
That isn’t normally the charities’ fault. Most charities are specialists in delivering services or changing public behaviour, not in doing research. The coverage of social science research on the types of programmes that charities run has been limited to date: extending it is largely a function of funding.
Gathering data on implementation is easier. Nonetheless, such data are often unreliable or non-existent. For instance, few charities have a robust system for hearing from the communities they serve, partly because of cost.
In the absence of independent analysis, the common fallback is to trust the self-evaluations which charities routinely produce. Having been a charity chief executive myself, I’m amazed that so many donors do this. In this system, the charity produces the research material on which it will be judged and on which existential decisions about its lifeblood funding will be made. What could possibly go wrong?
Thus for many charities — perhaps most — nobody knows how effective they are.
One implication is that, in my opinion, most philanthropy advisers are selling snake oil. When people offer to pay me to tell them which charities are best, in general I don’t do it, because I don’t know. Neither does anyone else: the data just don’t exist.
There’s a whole industry offering this advice, much of which reminds me of medieval medicine men before the advent of evidence-based medicine. The client is willing to pay for the guru’s advice, the guru performs some ritual alleging a magical insight, but in truth they have no clue. Medicine only moved beyond that stage when people started producing and using rigorous research that reliably shows what works.
In fact, it’s not quite true that there is zero independent analysis of charities. There is a little, mainly in international development. GiveWell was set up by US hedge fund managers, and analyses charities in tremendous detail, using independent research about each charity’s intervention. It has assessed hundreds of charities and currently recommends nine (several of which do deworming, which is highly controversial, and which even GiveWell says may achieve nothing at all).
ImpactMatters estimates the effectiveness of non-profits by conducting “impact audits”. It aims to create an industry standard, such that if donors require a rigorous audit of the impact of a charity they’re considering supporting (by ImpactMatters or another competent analyst), charities’ performance will increase. It has published analysis of 13 programmes.
ImpactMatters finds it costs about $10,000 to analyse a single charity properly. GiveWell’s research costs about $1.5m a year and it claims to have influenced about $90m of donations annually.
Funding this analysis is difficult. New Philanthropy Capital in the UK had to stop publishing charity analyses when I was there a decade ago because it couldn’t raise money to keep them updated. Published charity analysis is essentially a public good — and we all know how public goods are typically under-resourced. Some commentators think that most donors care about the amount of good which they personally achieve, rather than the amount of good which is achieved in aggregate.
All of GiveWell’s recommendations are either a single programme run by a larger charity, or a charity which only runs one programme: they distribute bed nets, or run the fantastic annual anti-hunger programme that I described in this column last year, for example.
Though many charities, including the big aid agencies, run scores of programmes, nobody to my knowledge has ever rigorously analysed the whole of such a charity. This is obviously more complicated than assessing single-intervention organisations, but is probably not impossible. One might analyse all their programmes, or perhaps a random subset of them, plus look at how they decide which programmes to run.
It would take a bit of time and brainpower to devise a decent method and test it a few times. Given the sums of money given to charities, particularly the large ones, investing in credible analysis of them seems like a good move.
In short, if you want decent analysis showing which charities are any good, you may need to pay for it.