Last week, I participated in an extremely interesting webinar by Owen Barder, a Senior Fellow at the Center for Global Development. His key message was that development problems are complex and cannot be solved with linear thinking. In his presentation, Owen shared his perspective and research findings on “the role that knowledge can play in tackling ‘wicked’ problems such as poverty reduction, illustrating how concepts derived from evolution theory like variation and selection are a useful framework to think about the way ideas and knowledge are applied in development projects”.
In general, Owen was pushing at an open door with me with the concept of diversity & selection, as I am already trying to advocate for more open, crowd-sourced innovation approaches within development. However, the webinar also made me think, and I had a good follow-up discussion with colleagues afterwards, as well as an email exchange with Owen himself. A question that was bothering me was the human dimension of the failed experiments that Owen was referring to as a necessary output of the diversity & selection paradigm. In a purely technical industry it might be OK to say that we try 100 approaches for potential products and 95 won’t work (see the famous nozzle example). But in development we are talking about people’s lives.

If you initiate a project and it fails, there is a danger that the actual people involved end up in an even worse place (often economically, but certainly emotionally) than before. Promises are made, communities are mobilized, social capital and energy are invested, trust is built, and when the project fails, you don’t get that back anytime soon. It is a rather bureaucratic approach to say that this will be the case in 95 of our projects, in order to gain the great value we get out of the 5 successful ones. Can we afford people’s lives and the social capital of communities being the collateral damage of our innovation efforts in development? From a global perspective you can of course argue that the overall cost of not innovating through experimentation will in the end be even higher, but that is hard to explain to those individuals and communities who happen to be the guinea pigs for our failed development experiments. (I’m actually surprised how much of an ethical question this becomes when you think about it. I’m sure Michael Sandel at Harvard could make a great lecture out of this example.)
A colleague of mine had concerns along the same lines: we know there is the ‘do no harm’ approach, but we also know that even when it is considered, there is no guarantee of avoiding substantial risks for the people involved. The organizer of the webinar, Giulio Quaggiotto, suggested that the answer might come, again, from the feedback loops that Owen highlighted in his presentation. If you have the appropriate ‘loops’, then you figure out that you are going in the wrong direction much faster than if you stick to the original plan just because you have to declare a ‘success’ to the donors.
In a follow-up email exchange, Owen himself elaborated a bit more on this. First, he pointed to Tim Harford’s new book, “Adapt – Why Success Always Starts with Failure”, which argues that a key characteristic of successful adaptation is the ability to ‘fail safely’. If you bet everything on one approach, you can’t afford for it to go wrong. You have to find a space in which it is safe to experiment.
So the question is: how can we do that in development? If we apply this diversity & selection approach, we must (and we can) do it in a way that respects people and communities. Owen mentioned the example of medicine, “in which we already have quite tightly controlled ethical standards which must be applied in a clinical trial. In universities, researchers are required to get approval from their Ethics committee before conducting any experiments” with human subjects (e.g. randomized controlled trials).
While there is probably no general answer to this question, Owen indicated that we should always look at specific cases and assess whether they meet agreed ethical principles. There are definitely cases where variation & selection can be applied without doing actual collateral damage, by being more innovative and experimental in ways that respect the human dimension of our ‘development experiments’. And then there will be other cases where we can’t (though Owen thinks these might be few and far between), in which case we probably shouldn’t be using the diversity & selection approach.
Owen’s explanations made a lot of sense to me. My own conclusion from this discussion is that what would be needed are ethical as well as practical guidelines that help assess for which scenarios the approach should be applied and for which it shouldn’t, including an outline of recommended steps to mitigate any potential collateral damage. I think Owen is indeed right that the key is the ability to “fail safely”, and what is needed is a methodological approach for achieving this.
I’d be glad to hear your comments. Maybe you have examples and ideas about the conditions under which we should go for this “development by evolution” approach – and when we shouldn’t!