Monday, 26 February 2018

How to program for uncertain results? The innovation journey of a 'slightly unusual' programme in UNDP

Innovations are driven by risk-takers. Part of UNDP’s role in innovation is to provide the space for risk-takers to develop and test their ideas. And it turns out that sometimes these are not individuals, but entire programmes! The Pacific Risk Resilience Programme (PRRP) covers the Pacific countries of Fiji, Solomon Islands, Tonga and Vanuatu, and takes an unusual approach within UNDP’s programme portfolio. Now in its fourth year, it didn’t follow the standard programming approach, in which a challenge and a development model are identified at the beginning and a set of interventions is designed to be rolled out over the following years, with clear activities and results prescribed for each year. PRRP didn’t actually describe the model or the interventions themselves at all. Instead, it let the model emerge over time by running sprints of interventions and evaluating them frequently, something that is known in the information technology world as ‘agile development’. I've talked to the programme manager, Moortaza Jiwanji, about their approach, what they learned from doing things differently, and the implications of their experience for UNDP programming.

Q: Why did you feel the need to do things differently than in ‘traditional’ programming?

The main reason was that we had to develop something for which there was no precedent. We were venturing into unknown territory, and it made little sense for us to prescribe what results would look like four years in advance, with a results framework that pretends to know exactly what activity would be best to deliver by year four. We simply couldn’t see how that would work.

Climate change and disasters have a real impact on people in the Pacific. Despite unprecedented levels of funding and programming in the region, it is disheartening to see communities still experiencing the same types of impacts from climate change and disasters; in some cases, these are becoming worse! This is particularly concerning given that the symptoms of climate change, such as cyclones, flooding and droughts, are likely to increase in intensity and frequency in the future.

Much of this programming in the Pacific has led to concrete results on the ground, but we felt that not enough thought was being put into addressing the root causes of these vulnerabilities. It is far less obvious how development itself is being adjusted to address these risks. For instance, why is it that schools and houses are still built in flood-prone areas without the appropriate materials and design codes? It is also becoming increasingly clear that development itself is a primary cause of this vulnerability to climate change, which could perhaps explain the cyclical nature of these impacts.

We realized that something needed to change within development itself and not just in climate change programming. Most programming is focused on technical solutions such as building sea-walls rather than dealing with the root causes. It seemed at the time that there was not much programming experience in dealing with climate change from a ‘development’ perspective. We knew what needed to change but there was limited experience in the region and globally to show us how.

That’s why we decided to develop a model for risk-informed development without any preconceived ideas about what it would look like or how it would work. Our starting point was to address deep-seated governance issues, not for climate change, but for development. We also felt this was an opportunity for UNDP to build a niche for itself, particularly given that we are a ‘development’ agency that also deals with governance reform. We were able to do this through the Pacific Risk Resilience Programme (PRRP), which started in 2013. PRRP was funded by the Australian government, which was also willing to try something different, given that it too was not seeing aggregate results in the region.

Q: What exactly was ‘different’ about your approach?

We knew we had to do two things differently: First, in order to tackle the root causes of climate change and disaster risk, we had to work deep within development itself. And second, because at the time back in 2013 there was not much experience of dealing with climate and disaster risk from a truly ‘development’ perspective, we had to follow an approach that was largely experimental at the time and depart from more traditional approaches to programme design and implementation.

So what was different about our approach? First, unlike most development partners in this area, we did not work as an outside partner with climate change and disaster management functions in government. Instead we programmed ‘from within’ governance systems, where our government partners owned the development interventions from community to national level. We also used a human-centered design approach that focused on developing individual mechanisms together with the same people who were going to apply them in their government ministries and agencies. Both these aspects allowed our country partners to help design and fully lead the initiatives themselves rather than UNDP leading the way. This admittedly raised some eyebrows at the time, as there was an expectation for climate change programming to work with (and provide funding for) the ‘usual suspects’.
Figure 1: The Innovation Feedback Loop

Second, unlike standard programme design approaches, we did not predetermine our activities and outputs well in advance for the next four years. Instead we built smaller, targeted experimental interventions where we saw some prospects of success (see adjacent diagram), e.g. where we found receptive partners and conducive political environments. This allowed us to understand which activities yielded the best results as we implemented them. We then invested a great deal of time and energy in measuring any apparent successes and, even more importantly, failures. Learning from these experiences was the most important ingredient, and we did this collectively with our partners. So whether a pilot leads to measurable successes or failures, the real success of this approach comes in how well you learn and subsequently redesign and modify approaches based on these learnings. Through this iterative process we then developed the overarching model as it emerged from those experiences. The interesting thing about this experience is that we developed this modus operandi ourselves, completely unaware that UNDP was promoting exactly such innovation and design approaches through its global Innovation Facility. At the time of designing this programme back in 2013, we decided to call this approach ‘emergent design’, but it aligns very much with the innovation principles of agile development and problem-driven adaptive iteration.

Based on the learning from these experiments, we have now developed an overarching model around the concept of ‘risk governance’, designed and tested to risk-inform development ‘from within’ and at all levels of governance. For more information you can read our recently launched policy brief, and you can also see practical examples of how this is benefiting countries in the Pacific on our website.

Q: What were the challenges you encountered?

Risky business. Developing a programme based on emergent design or agile development principles is extremely exciting. However, it can also be quite stressful because in essence you are taking a significant risk in programming something that has not been tested successfully yet. This is particularly challenging when it comes to convincing your programme stakeholders, such as your donor, country partners and even internal management.

Raising eyebrows. In the early days, we seemed to develop somewhat of a reputation as being the slightly unusual programme within UNDP. This was not always cast in a positive light and this is partly because we did not have a fixed and clearly defined results and resources framework over a four-year period.

Buy-in from stakeholders. There are three types of stakeholders that we dealt with through this experience: the country partners (or beneficiaries); our donor partner; and UNDP itself. The approach of working from within and building governance systems to risk-inform development was most positively received by our government and donor partners, and then eventually by our managers within UNDP. This took a little while, perhaps largely because we were venturing into the unknown and did not have a clear narrative to describe and justify our approach, particularly in the early days.

It can take some time to show predictable and regular results. Agile development or emergent design approaches can take some time to achieve tangible results. It is almost by definition impossible to predict when and how results are going to be achieved. This was particularly challenging when working in an environment where programmes are expected to report on results against clearly defined outputs and targets at least every quarter.

Q: What were the benefits of taking this approach compared to more traditional approaches?

In essence, agile development allowed us to get results that would never have emerged had we prescribed our specific outputs the traditional way, several years in advance. And on top of that, the solutions that we did get through the agile development approach address the actual problem we’re trying to tackle much better.

Over time we saw that taking this approach was extremely beneficial, particularly to our government partners, in offering more sustainable and realistic solutions to the complexities of climate change and disasters in the Pacific. You can see this by the fact that our country partners are now advocating for this approach within their own countries.

Unexpected solutions. What’s really interesting is that taking this approach has led to solutions that we would have never designed up front. For instance, we now have Ministries of Women leading on climate-informing community development initiatives. Private sector networks are being formed not only to work better together in times of disasters but also to provide a more effective link with government and partners. Local governments are leading the way in risk-informing infrastructure projects. You can see these and other examples on our website under ‘Results’.

Ability to adapt. Secondly, most of our country partners have really appreciated the ability of UNDP to adapt to a constantly changing environment. They often feel that projects that have fixed activities and outputs for a four-year time horizon are unrealistic and can compromise their own ability to initiate real change on the ground. What we see now is that our partners are leading the way and collectively we continue to discover new innovations.

Finally, taking this approach to development programming is immensely rewarding both on a professional and personal level. It almost feels as if there is no other way to deal with the complexities of development in the Pacific and even beyond.

Q: What would you recommend to others who want to take this approach?

I would recommend four key things. First, don’t be afraid to fail, and be completely open about this to your partners. This is critical in finding innovative solutions to complex development challenges. Second, invest in smaller, manageable initiatives through prototypes. This will help minimize your risks and allow for real creativity. Third, tailor your results framework so that your described activities and outputs are framed as, for example, the number of experiments run and evaluated, or the number of experiments identified for scaling up, rather than describing up front what these experimental interventions will specifically look like. This will give you the leeway to explore uncommon and innovative solutions, while at the same time holding yourself accountable to measurable milestones within this agile development journey. Finally, taking a leap into the unknown can be risky and can create negative perceptions of your work. Develop a small group of like-minded colleagues from within and outside the organization who are genuinely willing to try this out and support you. At the same time, it is imperative to engage management early on, in an open but confident way, about what you are doing and why.

Q: What could all this mean for the future of UNDP’s programming?

We had very interesting conversations with counterparts within institutional donor organizations who frankly told us that refining this agile development approach further could be very rewarding for UNDP. It would allow the organization to position itself as a unique implementing partner that can offer a different way of programming than most other implementation contractors, especially in programmes that try to tackle government reform issues. I feel that the future for UNDP and similar organisations working in this space lies in innovating their programming itself through such agile development, or ‘emergent design’, principles. Not exclusively, but at least as part of their portfolios. Not only is there a lack of this approach in the development space, but more country partners will want it, because it is particularly suited to addressing complex development challenges for which no clear solutions exist yet. This needs to go beyond mimicry though, and requires fundamental behavioral shifts in terms of how we design, execute and evaluate our work. But the outcomes are worth it. As I said, this has been the most rewarding professional and personal experience for me so far.

Tuesday, 16 January 2018

Artificial Intelligence will change Knowledge Management as we know it

I recently came across this blog post by a start-up that is developing an Artificial Intelligence trained to read and write at the level of a specialized human analyst, producing briefings in human language from a set of different information resources. It’s just one example of the many companies currently working on this challenge. The obvious clients are intelligence agencies, governments, or news agencies, but this will enter all of our everyday work soon.

I thoroughly believe that this is what knowledge management in large organizations will look like 10-15 years from now. In my organization, we’re challenged daily to consolidate the key lessons and insights from all our country-level programmes and experiences, let alone meaningfully combine them with information, trends and insights from the larger development sector. We complain that we’re overwhelmed by the information overload that social media, Yammer and knowledge networks impose on us, and retreat to focusing on a narrow set of information that confirms our biases, pretending we know what we need to know, when in fact we only ever have a small piece of the puzzle. Artificial Intelligence promises to overcome this dilemma, as it will have immediate access to all available information and can do the necessary analysis for us.

We might not be quite there yet in making this practical for organizations like UNDP, but we’re getting closer and closer. Last year, we as the UNDP KM team at HQ engaged with a well-known AI systems provider, and while both organizations were not quite ready to commit to partnering on an AI system that can make sense of unstructured texts, trends, insights and lessons in the development sector, we will have to get real about this soon if we as an organization want to be ready for what is to come. To quote the same article above: “With technology that can read and write, you have the flexibility to generate custom insights in any format or level of detail. If you’re a subject matter expert, Primer can tell you a detailed story that takes your knowledge into account. If you’re new to a subject, it can generate an introduction to get you up to speed quickly. If you have an interest in a particular angle on the story, or a geographic lens that you want to zoom in on, the insight can be customized for you. Imagine the possibilities if you had one thousand analysts working for you, all day, every single day. What questions would you ask, what kinds of briefings would you have them prepare?”
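For readers curious about the mechanics, the core idea behind machine-generated briefings can be illustrated with a toy extractive summarizer: score each sentence by how frequent its words are across the whole text, then keep the top-scoring sentences in their original order. This is a deliberately minimal sketch in plain Python, nothing like the neural systems such start-ups actually build:

```python
import re
from collections import Counter

def summarize(text: str, max_sentences: int = 2) -> str:
    """Toy extractive summarizer: rank sentences by average word
    frequency and keep the top ones in their original order."""
    sentences = re.split(r'(?<=[.!?])\s+', text.strip())
    freq = Counter(re.findall(r'[a-z]+', text.lower()))

    def score(sentence: str) -> float:
        tokens = re.findall(r'[a-z]+', sentence.lower())
        return sum(freq[t] for t in tokens) / (len(tokens) or 1)

    ranked = sorted(range(len(sentences)),
                    key=lambda i: score(sentences[i]), reverse=True)
    return ' '.join(sentences[i] for i in sorted(ranked[:max_sentences]))

text = ("Climate change affects the Pacific. "
        "The Pacific faces climate risks daily. "
        "Unrelated trivia goes here.")
summary = summarize(text, max_sentences=1)
```

Real systems replace the frequency heuristic with learned language models, but the pipeline shape is the same: ingest many sources, rank what matters, condense it into a briefing.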

Now, technology can only ever be one part of the solution. It is important to keep in mind Dave Snowden’s adage that if you have $1 to invest in KM, invest 99 cents in connecting your employees over shared opportunities and 1 cent on content. Connecting people has always been (and will always be) at the center of knowledge management: we try to connect staff with those who have the skills, help them identify the right people, enable them to collaborate and research in real time, and turn the results into actionable insights. It’s why other UN organizations have often looked to UNDP for KM advice: it regularly chose to make strategic investments in connecting and fostering networking among its staff, being the first UN agency to pioneer email-based knowledge networks in 1999, the first to introduce organization-wide corporate social networking with its award-winning platform Teamworks in 2009, and continuing that trajectory with Yammer (among other things) today. Connected people are the ‘operating system’ of any meaningful KM effort, allowing real-time collaboration within a human context that can be turned into actionable insights.

But what forefront thinkers in the AI space tell us about AI’s implications for governments is true for international organizations like UNDP as well: AI will relieve knowledge workers from drudgery, split our work into automated tasks (e.g. research, collation) and human tasks (value-based decision making, social interactions), and augment the capacities of knowledge workers by adding layers of real-time and predictive analysis that humans couldn’t do by themselves. Together with many of my KM colleagues who are much more skeptical about AI than I am, I also believe the focus will be on augmentation, not replacement. Nonetheless, all indications suggest that we are at the beginning of a revolution in what knowledge work looks like, and organizations like UNDP will be affected internally by both the benefits and the risks. The only way to get ourselves ready for it is by doing what the innovation community always does: striving to get our feet wet early, and learn, learn, learn.

Tuesday, 4 October 2016

Who is Reading UNDP’s Publications, And Why?

[This post was originally published on Oct 3, 2016]
It has been two years since the World Bank published a report stating that over 30 percent of its policy reports had never been downloaded even once, and only 13 percent had been downloaded at least 250 times. The debate among development practitioners that followed made it clear that the World Bank is far from alone in this, and that most international organizations, including UNDP, face the exact same challenge.
As UNDP provides support services for implementation of the Sustainable Development Goals (SDGs), we in UNDP’s Knowledge Management Team see the importance of getting insights into the perceived value of our knowledge products, and therefore into UNDP’s thought leadership on various SDG topics.
In fact, UNDP’s Knowledge Management Strategy 2014-2017 pointed out that UNDP needs to invest in its process of planning, developing and disseminating knowledge products in ways that make them “more relevant to clients’ needs, more flexible and timely in their development and format, and more measurable in their quality and impact.”
During the debate that followed the World Bank’s report, we in the Knowledge Management Team at UNDP thought long and hard about how to get meaningful data on who is actually reading UNDP’s publications, to what extent those readers find individual publications useful, and most importantly, to what end those products are actually being used. To do this right, one would almost need to talk to each individual reader and ask them one by one, which is all but impossible on an ongoing basis. Or is it?
Well, after several prototypes and tests during the last year, we’ve finally come up with a model to do just that. In March 2016, we tweaked UNDP’s Public Library of Publications so it would present users with a post-download pop-up asking them whether they would be willing to leave their email address so we could contact them later.

In the six months since we introduced this question, over 42,000 users have left us their email addresses, and we have since followed up with 27,000 of them (through a weekly survey issued a few weeks after the download of a specific publication), asking how useful they found the specific document they downloaded, what organization they are with, and whether and how the publication made a difference in their work. As of September 2016, we have received 1,186 survey responses, and the insights we get from our audience go far beyond any intel we had in the past.
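For illustration, the follow-up logic described above (wait a few weeks after a download, then send the survey in a weekly batch) can be sketched in a few lines. The record layout, names and dates below are my own invention, not UNDP's actual system:

```python
from datetime import date, timedelta

# Hypothetical download records: (email, publication title, download date)
downloads = [
    ("reader1@example.org", "Human Development Report", date(2016, 3, 7)),
    ("reader2@example.org", "SDG Policy Brief", date(2016, 8, 29)),
]

def due_for_survey(records, today, delay_weeks=3):
    """Select records whose follow-up survey is due, i.e. the download
    happened at least `delay_weeks` ago."""
    cutoff = today - timedelta(weeks=delay_weeks)
    return [(email, pub) for email, pub, downloaded in records
            if downloaded <= cutoff]

# Each weekly run picks up only downloads old enough for follow-up.
batch = due_for_survey(downloads, today=date(2016, 9, 5))
```

Running this weekly with the current date naturally produces the rolling survey batches described above, without ever contacting a reader too soon after their download.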
We can now see how useful our publications are to our users, and to what extent specific publications reflect on UNDP’s thought leadership in that topic:

Even with the possibility of a voluntary response bias, the numbers serve as a valuable baseline to track changes in perceived usefulness over time. In addition, we now get, for the first time, a clear picture of who is getting value out of our publications:
And most importantly, we learn from our audience how and for what purpose they use the downloaded publications in their work:
We are also getting great qualitative feedback on how we can improve specific publications in the future, and the individual comments provide great anecdotal evidence at project or community level that demonstrate the impact of UNDP’s work on the ground. Here are some of the impact stories we’ve received:
  • “The publication was used in the development of our food security and livelihood strategy for the Uganda refugee operation.”
  • “The publication has been useful as a starting point to persuade managers of Nature Reserves and Forest Reserve to consider ecotourism planning besides conventional forest management planning.”
  • “Some of the inputs were used in our legislative agenda setting, especially those that are applicable to the Philippines situation.”
  • “I am working in Rwanda’s Environment Management Authority and the publication is useful for public sensitization.”
  • “I introduce the paper to PhD students in my development administration class and asked them to prepare a paper on SDG targets.”
  • “The publication was of fundamental importance for the Pedagogical Political Plan formulation for professional training courses developed within my organization, the Military Police of Mato Grosso, Brazil.”
Going forward, we are making this qualitative feedback available to all our staff, so they will be able to look up their publication and go through all the individual comments it received. It is this kind of evidence that shows us where investment in the quality of our publications pays off, and where we need to switch gears, improve our efforts, or shift our focus entirely with regard to specific thematic areas. Most of all, it is these stories that inspire us as staff on a daily basis, as they remind us why we are doing what we are doing in our pursuit of sustainable human development.
Of course, this measurement approach is only reaching those who download publications online, and will miss out on all those who receive them through hard copies or through presentations at workshops and conferences.
What did your organization do to get feedback from your offline audience, and do you have any suggestions for how UNDP could fine-tune the above measurement approach? Leave comments below, I’d be glad to hear your suggestions!

Thursday, 18 June 2015

The “Duh-test”, or what is not a lesson learned

I was recently reviewing a number of texts which my organization collected from past projects and initiatives (some through an internal mandatory monitoring tool, others gathered as part of After Action Reviews or Lessons Learned Papers), all of which were meant to capture ‘lessons learned’ from specific experiences.

And while these texts were not wrong per se, I realized that there seems to be a fundamental misconception about what constitutes a good lesson, and what doesn’t. Here are a few typical examples of what we often collect as part of such lessons learned exercises:
  • “Ensure that the [Team] Manager has excellent leadership, project and team management skills, understanding of programming and experience working in [the subject matter].”
  • “Project outputs must be compatible towards project goals. Throughout the project there is a need for careful identification of project goals and outputs to ensure that they are compatible with each other. This can be only ensured through a consultative and participatory approach in project design with target institutions, implementing partner and experts.”
  • “Managing relationships between key national and international players during [the project activity] is very important. Recognizing and respecting national ownership and leadership of the process is vital and key to winning the trust of the national authorities.”
  • “The better local authorities are involved in the process, the better the expected results are easily achieved and durable.”

The above examples are representative of a common type of lessons learned write-up, which fails to pass what I would call the “Duh-test”:

If a ‘lesson learned’ statement is so obvious that it is self-evident to every reader, and at the same time so generally applicable to almost any type of project or initiative, it basically becomes meaningless.

It is good when a team realizes that it failed to put in place a team leader who has leadership and team management skills (and yes, it should remind itself to do better next time), but there is literally no value in sharing that learning point with others outside the team, simply because everyone already knows that this should always be a criterion for selecting team leaders. There is nothing new to learn here that would change anyone’s views or actions.

Also, if a lesson is so generic that it could apply to any scenario, we deprive ourselves of the learning effect that comes from understanding the particular conditions that made a project work or not work, so that others can try to replicate or avoid those conditions.

Lessons that are either too obvious or too generally applicable produce ‘lessons learned noise’, because these same lessons are reported from countless projects over and over again without anyone actually learning from them. At the same time, this noise diverts everyone’s attention from the meaty lessons learned pieces that really provide value to a wider audience.

So what is it that makes lessons learned write-ups actually add value? Maybe asking ourselves the following three questions could help make lessons learned statements worth capturing and sharing:
  1. Will anyone else actually learn something new from this lesson, as opposed to self-evident truths that everyone already knows? This is the “Duh-test” and should always be the first criterion.
  2. Is this lesson particularly relevant to your specific situation, as opposed to a lesson that is so general that it would apply to any scenario? The more general a lesson is, the less useful it is.
  3. Does the lesson include or lend itself to a concrete action that you or someone else can take in order to effect a change in future practice? Capturing a lesson is only meaningful if there is an actual change triggered by it.
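For organizations that screen incoming lessons in bulk, the three questions above can double as a simple review checklist: a human reviewer records a yes/no answer to each question, and only lessons that pass all three are kept. A minimal sketch (the field names and example lessons are purely illustrative, and the answers must come from a human reviewer, not from code):

```python
def filter_lessons(lessons):
    """Keep only lessons whose reviewer answered 'yes' to all three
    Duh-test questions; the answers are supplied by a human reviewer."""
    return [lesson for lesson in lessons
            if lesson["teaches_something_new"]
            and lesson["context_specific"]
            and lesson["actionable"]]

lessons = [
    # A generic truism: fails the Duh-test on the first two questions.
    {"text": "Ensure the team manager has excellent leadership skills.",
     "teaches_something_new": False, "context_specific": False,
     "actionable": True},
    # A specific, actionable insight: passes all three questions.
    {"text": "Procurement in cyclone season needs stock pre-positioned by May.",
     "teaches_something_new": True, "context_specific": True,
     "actionable": True},
]
kept = filter_lessons(lessons)
```

The code is trivially simple on purpose: the hard work is in the honest yes/no judgments, and the checklist merely makes sure no lesson skips them.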

But aren’t the ‘bad’ examples mentioned earlier still true and important to highlight, even if they are not particularly new or context-specific? Doesn’t the fact that everyone agrees with them intuitively, and that they apply to all our projects and initiatives, make them all the more valuable?

Absolutely! But I would never call them ‘lessons learned’. Rather, these are important principles that anyone should abide by, no matter what subject matter expertise or functional role someone has. We should treat them as guiding lights for our work, teach them in our training curricula, communicate them in our onboarding and induction sessions, and embed them in our policy guidance. Some lessons from projects, if they are collected often enough, might eventually be added over time to such a common canon of principles. But we should stop collecting what is already part of that canon over and over again from individual projects, which is no good use of anyone’s time.

Monday, 13 October 2014

What remains after the bonfire: How do we define success of an event?

During the last few weeks I was heavily involved with the SHIFT Week of Innovation Action, a series of parallel events taking place in 21 different country offices. Over 50 practitioners were invited to ‘shift’ from one country office to another to share their experience on innovation methodologies and what they learned from their ongoing innovation projects (many of them funded by UNDP’s Innovation Facility), learn from others, and ‘shift mindsets’ in the process.
As part of the team that coordinated the event week I was in awe of the incredible energy coming from country office colleagues and the enthusiasm, creativity and time commitment on the side of organizers, participants, and the coordination team here in New York. And from the feedback that has been rolling in so far (the evaluation survey shows about 95% of participants were satisfied or very satisfied with the event) it seems the SHIFT initiative was a success all around.
Yet, we all remember other instances of well-organized events which achieved great visibility, but when people asked three months later what the impact of the event had been, we didn’t have much to show for it.

So you had a nice event that brought people together and left everyone happy and excited, but so what? What came out of it?

I believe we have to be very honest about how we define success of events. Yes, it is good when participants convey in a survey how much they enjoyed the gathering. And it is also great when the event achieves visibility and external recognition with good communication during and immediately after the event, such as national media coverage of the SHIFT hackathon in Belarus, great videos produced about SHIFT events in Haiti, Montenegro or Georgia, or outreach products such as the SHIFT Exposure compilation, that give audiences a glance of what happened.
But it is not enough. Because if, 12 months from now, none of the new ideas generated have inspired actual initiatives, projects or products, if none of the innovative prototypes developed have been applied in real life, none of the solutions shared have been successfully replicated or brought to scale, and no one who couldn’t participate in person has a chance to learn from what was discussed at the event – then I don’t think we can call the exercise a success.

Then it will just have been a bright bonfire that burned for a single night. We have a nice picture of it, but it will not warm anyone going forward.

So here is what I think is needed to make events worth the investment we put into them in terms of time and money. And please feel free to add your own bullet points to this list:

1. Set up an after-event communication plan, and follow up diligently

Rather than letting organizers and participants disperse after a good event, let’s use the current momentum and excitement when people return to their offices. Make a plan on how we want to communicate the results, increase visibility and leverage the event’s discussions and activities to initiate new collaborations, products and projects. Maybe this is the opportunity to promote an existing Community of Practice (COP), or establish a network of mentors around your topic! Make sure to use all available channels, from internal COPs, to external online networks (LinkedIn, Devex, DGroups, World Bank networks, etc.) to public social media channels (Twitter, Facebook, Slideshare) and try to engage new audiences.

2. Relentlessly focus on knowledge and learning products

Communication products and activities are crucial for getting recognition and visibility, and for reporting back to donors. But the important substance, the ‘meat’ of knowledge and learning points, is what others really need in order to apply the results of the event to their work. Where can new colleagues who join the organization six months from now access the video recordings and slides of the presentations given, so they can follow the event’s learning points? Where can they find blog posts and short interviews with personal insights and reflections of participants on what they learned at the event and how they intend to apply that to their own work? And where are the hands-on knowledge products that help them review the examples shared and apply the solutions that were discussed? If there are only glossy brochures and good-looking PR videos, but no substantive project examples, how-to articles, lessons learned summaries, guidance notes or toolkits coming out of the event, then we might look good externally, but the event was still a failure for the organization, as nobody other than the handful of on-site participants will learn anything from it.

3. Track the status of initiatives and projects coming out of the event

One of the reasons we as organizations facilitate working-level events is to fulfil our role as a broker of exchanges that inspire and improve our projects and programming. We must come to an understanding that we cannot afford to organize events that look great from the outside but do not result in concrete, improved approaches, projects and initiatives that are replicated and scaled up in other countries and regions. We need to wrap up events with concrete commitments on what will happen next, and be diligent in checking in with organizers and participants at different intervals after the event on how their commitments, prototypes and follow-up activities are evolving (and no, just planning for the next event to discuss the issue further doesn’t count! ;). That means that as an organization we have to expect more from participants than showing up and consuming presentations – all should become part of an active knowledge production and application process that extends far beyond the event’s closing session.

This is all much easier said than done. For SHIFT week, our team is trying to practice these points, by setting up an editorial calendar through which we will keep communicating about SHIFT results in the upcoming weeks and months, by supporting the formation of mentor groups for follow-up questions, and by following up with teams on potential knowledge products that could emerge from different events. I know there will be a lot of imperfections along the way, but if at the end of the day there will be more products that others can really learn from such as the Guidance for Project Managers on Crowdfunding, the live-stream recordings from Jamaica and Egypt on design thinking with governments or the top tips and questions from the SHIFT Rwanda coffee learning session, and if brilliant initiatives such as the 112 emergency service for people with hearing and speech impairments in Georgia, the bilateral knowledge exchange on public service centers between Bangladesh and China and others can be turned into re-usable guidance for other countries to build on, then we can truly say that the SHIFT Week of Innovation Action was a huge success.

In your opinion, what other elements are important for defining the success of events?

Thursday, 29 May 2014

Rethinking knowledge products after the 'PDF shock': Make them leaner, faster, and never without the community!

Since the World Bank published its report earlier this month, which states that over 30% of its policy reports have never been downloaded even once (!) and only 13 percent were downloaded at least 250 times, a fascinating debate on the purpose and value of knowledge products has been flourishing on the web, and posts from KM practitioners all over keep pouring in.

It’s not just the World Bank, but most international organizations

Interestingly, I have been thinking about exactly the same questions for the last 9 months as I was drafting UNDP’s new Knowledge Management Strategy for the upcoming years. Here’s a passage which captures UNDP’s own dilemma regarding knowledge products:

“The current process of knowledge product definition, development, dissemination and measurement does not yield the quality, reach and impact that is needed for UNDP to be a thought leader in development.” The Strategy goes on to stress that UNDP intends to revise its process of planning, developing, and disseminating knowledge products in a way that makes them “more easily accessible, more relevant to clients’ needs, more accountable towards the community they seek to engage, more flexible and timely in their development and format, and more measurable in their quality and impact.”

Format matters

A lot of contributors to the debate, such as the commenters on the respective Washington Post article, the DevPolicy Blog, Crisscrossed or my KM colleagues from the KM4dev network, highlight that we have to get much smarter in developing formats that actually appeal to an audience that is increasingly passing on lengthy, unappealing reports and papers. And there is a lot of truth to this. Colleagues at UNDP are increasingly learning that short and snappy products, such as blog posts, 2-pagers or infographics, allow them to communicate important key points from their work to a larger audience, and in a more just-in-time fashion. Compared with heavy research reports which take months or years to finalize, the advantage of light-weight formats is that they allow for adjusting content quickly as new data and evidence emerges, which makes the product more relevant and timely the moment it is distributed.

The launch of a paper cannot be the end of the project

Ian Thorpe (who arguably came up with the most crisp blog title in the debate so far ;) also makes an excellent point in clarifying that we have to invest much more in dissemination and outreach. All too often the launch of a product is declared the successful end of a research project, when in fact, this should be just the starting point of a whole new phase where we reach out to potential audiences through all possible traditional and social media channels, organize webinars and on-site events to raise awareness of the knowledge product and its key points, and inject ourselves into ongoing debates where our product can add real value. Budgets for development of knowledge products leave this part of the process chronically underfunded, and we as KM practitioners need to make a point that a dissemination and public engagement strategy has to be an integral part of any knowledge production process.

The real issue is the lack of community feedback loops

But while clear abstracts, interesting illustrations, good formatting and focused outreach will go a long way in mitigating the “too long; didn’t read” (TL;DR) problem, my personal belief is that we must pay much more attention to where the problem of unread knowledge products starts: at inception. The Complexia blog nails it when it points out that there is a “lack of demand-driven research” in which “research projects tend to be more driven by the interest of individual researchers”.

How can it be that organizations give authors the green light for the development of papers and reports for which they haven’t done any preliminary analysis of what the targeted community needs and whether the product to be developed is likely to find an audience? How is it possible that we can go through an entire production cycle of a product without probing with the relevant communities of practitioners outside our organizations whether the questions we ask and the conclusions we draw resonate with the audience that is supposed to benefit from them? And not just once, in a peer review when the product is almost finished, but at every step, from inception to formulation of research questions, outline and early drafts?

It is clear to me that we need to get rid of our internal navel-gazing posture and get much better at involving the relevant communities much earlier in the process, and at much more frequent intervals than we do today. This is not rocket science, as such ongoing feedback loops can be achieved through regular blog posts about work in progress, a targeted e-discussion at an early stage, and frequent participation in external online fora to vet ideas. But it requires that authors start seeing themselves not as isolated writers, but as facilitators of a larger debate who are tasked to feed the essence of that debate into their product. Authors who make a living off the actual impact of their publications understand this, as you can see from countless books by business advisors and speakers. Authors who are just hired to deliver a product for an organization by a certain deadline (often without even being credited for it) don’t have that incentive.

Are we at international organizations ready to change this? What can we do to turn this pattern around and start thinking about the relevance of knowledge products from the users’ perspective?

Tuesday, 22 April 2014

Kick-starting Innovation in Response to the Syria Crisis: A Peer Assist Conversation with Arndt Husar and George Hodge

In November 2013 I was deployed for 3 months to Amman on a consulting assignment to support the setup of UNDP’s Sub-Regional Response Facility for the crisis in Syria. A key role of the Facility is to operationalize the Strategic Plan’s key area of ‘Resilience’ in an environment of crisis by marrying the humanitarian response for Syria with a development response. So far there has been a primarily humanitarian angle to the Syria crisis, with OCHA, UNHCR, WFP and FAO leading the response efforts in the region. UNDP’s interest in this situation is to widen the perspective and highlight that there is a dramatic development cost for Syria’s neighboring countries Lebanon, Jordan, Turkey and Iraq, which are dealing with the largest refugee movement since World War II. Given that most refugees are not staying in camps but are embedded in host communities with families and friends, the host communities face a heavy strain on local services such as access to housing, water, sanitation, health care, education and the labor market, as well as on social cohesion more generally. UNDP finds itself in a situation where it needs to explore new solutions to something that in this scope and in this way hasn’t been done before. This calls for innovation, and my job was to help the Facility establish a KM and Innovation framework and action plan to define what the Facility can do from a knowledge and innovation perspective in the next two years, to help UNDP implement a resilience-based response to the Syria crisis.

I have been following UNDP’s work on innovation, and I closely followed the Global Innovation Meeting that took place in November 2013 in Montenegro, including its outputs such as the excellent Budva Declaration. Still, much of this was rather theoretical to me, and I didn’t have a lot of practical experience in how to approach and manage an innovation initiative. So I contacted a number of UNDP colleagues who work on innovation and asked them to participate in Peer Assists – a knowledge management methodology that brings together a group of peers to elicit feedback on a problem, project, or activity, and draw insights from the participants' knowledge and experience (to learn more about Peer Assists, watch this excellent 6 min video here).
I was lucky to win Arndt Husar from the UNDP Global Centre for Public Service Excellence, and George Hodge from the UNDP Country Office in Armenia for a Peer Assist session I conducted in Dec 2013 to tap their brains on implementing innovation initiatives, and I hope the following shortened transcript of the conversation can be of as much help to others as it was to me!

Johannes:  I invited you to this conversation, because you both have been involved in practical innovation events and initiatives in UNDP in the past. My hope is to get to a better understanding of the conditions under which certain innovation initiatives make sense and how we would plan for something like this. Where would a team like ours here in Amman start? And what would be the conditions under which it would make sense to have e.g. an innovation camp in one of Syria’s neighbor countries to identify new solutions together with municipalities and local actors?

George: I like that you are trying to incorporate different approaches in addressing these challenges. The first thing I usually do is to get out of the office and talk to municipal officials and the other stakeholders in order to get a better sense of the problems. When running social innovation camps and innovation challenges, the better you can define the problem at the very beginning, the happier you will be about what happens later in the cycle.
If you run an open innovation challenge where you ask “tell us about your problems, and suggest solutions”, you get a better sense of where the problems lie. Whereas if you want to address more concrete problems of your stakeholders, then you should run an innovation camp or challenge around a specific question.
I recommend visiting your stakeholders to get a sense of which public services are under strain. Then you can run a series of concurrent challenges asking “Can you come up with ways in which we can overcome this particular issue”. But if you are at an early stage and you are trying to make sense of the environment, an open challenge may be best. You should expect over 100 responses, and from that you will get a sense of where people see the most pressing problems.

Arndt:  That was great feedback from George. I think I need more clarity on the scale: Is this something you want to do at an inter-governmental level, or do you want to look for solutions in each specific country? This will define your immediate counterparts and your outreach. Municipalities are good, because they are the ones delivering your front-line services. You could break your issues in the sub-region down into national challenges, and then go out and do the sensing with local partners. For me the big question is the connection between that massive scale of four countries, and the local services that you are looking at.

Johannes: To give you feedback on the scale, we are talking more or less about three countries: Jordan, Lebanon and Turkey, because those are the three countries that host the biggest populations of refugees. They are also at a more advanced development stage with a larger middle-income population, so we may have a higher chance of getting some of those technology solutions off the ground with the private sector.

Arndt: Ok, I think that’s very good. This sounds similar to what we had in Singapore with our social innovation camps. We had activities in multiple countries and at the end convened a regional summit where we brought the various countries together to combine different country perspectives, which worked quite well.
Regarding the conditions that have to be in place to organize something like this, my experience in our regional initiative in Asia-Pacific was that the outputs depended hugely on the local organizing partner and also on the network that the local partner brought in. If you partner with some agency that has a very specific urban network, then you get that kind of result. If you partner with an agency that has a lot of techies in it, then you are going to get a lot of IT solutions. So it’s really important to pick the right partners (ideally a consortium of partners) so you can have a wider range of solutions.
Another lesson that we learned is that during the short duration of a three-day camp people really just scratch the surface. I think if you want to get real solutions from municipalities to be innovative, you need to go along the lines George suggested: First the sensing, then the definition of the problem, then the call, then probably some research (almost like the production of a case study), and then you go into prototyping and so on. That’s a really thorough preparation process, which we ourselves were not able to do through just a social innovation camp. The above requires a bit more work.

George: What Arndt has just identified is why we – after running a few Social Innovation Camps – decided to set up an Innovation Lab, because we realized we needed a bigger support structure around our events. You need to do the sense making first and then come up with a series of really specific challenges - this will give you results that are much better aligned with your mission.
After the first social innovation camp, we gave the teams grants and just said “good luck, come to us if you have any problems”, but this approach is too passive. By now we have a much bigger support structure where we get our hands dirty with the teams and actively invest in the initiatives that pass through the lab – much more like an incubator.

Johannes: So how many of these three-day social innovation camps in Armenia did you run? And have they all been successful?

George: We have run four of them, and they were definitely worth it. I would say if you run a three-day event, it is useful to look at what is already out there of which UNDP is not aware, and identify people who have a deep understanding of the social problems in question. Maybe there are teams that are already established, but don’t know how to scale their activities, in which case you could throw UNDP’s institutional weight behind an initiative and scale it quickly.

Johannes: And then you can go a step further and turn that to a ‘lab’ structure. What is a lab exactly?
George: It’s an incubator for social projects where we mimic business incubators: Identify ideas, conduct rapid prototyping, get a very basic product into the world, test it with users, and then look at the feedback. Is this working? Is this gaining traction? If yes, let’s invest more! And all the while you are giving the team access to mentors and design workshops, and develop their capacity. With a lab you can test multiple ideas and hypotheses at the same time, look for results, and then scale. This is very different to traditional programming approaches.  The lab in Armenia attempts to solve big social challenges by harnessing citizens’ experiences, insights and ideas alongside public services.  It applies approaches like horizon scanning, design challenges, user-innovation and service design.

Johannes: And what does it take? How much does it cost and how many people are involved?

George: Social Innovation Camps all-in-all cost about $25,000 here in Armenia. The event team should include an Event Coordinator working part-time, plus a full-time assistant, plus a full-time intern over the course of four months. And all that a Lab does is extend this team on an ongoing basis.
In terms of time you are looking at four months from the launch of the call for ideas until the event itself. During the preparation phase the team goes out running workshops, talking to people and developing ideas about the problems or challenges. Once the applications come in, they are also looking to find the experts that will complement the idea owners. For example, we had a doctor approach us who said she wanted to digitize Armenia’s blood registry. Of course she had no idea of how to build a database and how to build a web interface. She was just a doctor who understood the problem and had an idea how it could be fixed. So the organizing team then goes out and finds the extra skill sets and people needed to build a crude prototype of the initiative (in this case an online blood registry database) at the event.

Johannes: But even that is a lot of things to do in just four months.

Arndt: Well, in your case we may need to rethink the whole approach. For emergency response situations you need to get to solutions much more rapidly. It is almost like ignoring what Design Thinking says: you have to give it time through various iterations. But you are in a situation where you need quick solutions which you can rapidly scale up. Maybe we need to design a process that can be more integrated into government and existing institutions, so we skip the part of building a team. Instead we do stakeholder engagement, bring in NGOs, relief organizations and governments, host community and refugee representatives, and come up with something that can be immediately picked up by the municipalities. Thereby you could crunch the time that you would usually need for the incubation.
George: But then, it is still an untested idea, and the whole point is to establish whether an idea works.

Arndt: Yes, you would still have to test that idea and develop it further.

George: Another idea that builds on Arndt’s could be to do an open call for existing initiatives that are already working on a pre-defined problem. If they have developed a prototype which has generated results we can help them scale up. You are basically looking to adopt alpha or beta products in the late stages of development. You can apply the same skill sets that you would need for a Social Innovation Camp, but you get to scale faster. The challenge is finding these existing initiatives.

Johannes: How do you identify who you can actually talk to? It’s not like you have in a Country Office a list ready with all the actors that you could potentially approach on specific issues.

George: Well, I would start with whatever list you have, and talk to them. First you talk about the problems they face and then you ask them who else they know in their network who is good at addressing this kind of issue, and contact them as well. Repeat this process until you find existing initiatives or citizens who are addressing the problems in creative ways.

Arndt: In your case in Amman, the approach is really a lot about user experiences, either from refugees that are coming from other countries, or the host communities. Think of who has the best information and can identify the crunch points for which you want to develop solutions. Then interview people and collect stories, so you do the sensing exercise in that particular environment that surrounds the issue you are looking at.

Johannes: In both the Armenia Social Innovation Camps and the regional camps in Singapore, who did that sensing? Was it both of you, or did you hire consultants to do that?

Arndt: In our case it only happened in one camp, and I must say that we went with the standard Social Innovation Camp approach, which didn’t include much sensing. It basically relied on the participants, the idea givers, to do that. Some of them had done a decent job, but a lot of them had not. The one camp that did this really quite well was the one in Singapore, where we had a local counterpart institution that had the mandate to engage on this in the long term, and they did a fantastic job.

George: In the case of Armenia, we did it in a team of three: a consultant, an intern and me. The three of us travelled across the country delivering the sensing workshops. Because as Arndt rightly highlighted, you need to get a lot of different perspectives on the problem. I recommend keeping the team in-house instead of giving it to contractors, so you can develop and keep the knowledge, relationships and networks. Also, if you outsource there can be issues of quality assurance. It takes a little bit more work, but at the same time, it really supercharges your learning. We now have an entire list of organizations and people with whom we would never have developed relationships had we not run the event and the lab in-house.

Arndt: It sounds to me that if you want to do a three-country project, it will be quite an elaborate process. If you want to create real value, it makes sense to invest some funds into it and have people do it full-time.

Johannes: Do you think it is better to start off with only one country, rather than doing it in three countries at once?

George: It depends on how you are going to do it. If you do it in-house, start with one country and make sure you build the skills within the team. But if you are going to outsource it, you might as well go big from the start.

Arndt: I would tend to say go big, because for the Sub-Regional Facility, that will prove your value.

Johannes: Well, it’s not like I am going to do this outreach myself; I am here on a detail assignment for another few weeks. But what I need to do is put the train on the rails and a plan in place so the Facility can hire someone full-time to take that plan and run with it.

Arndt: The other thing is: So far we only looked at the option of Social Innovation Camps at the grassroots level. The other possibility is to think of this rather as a public service initiative in support of an innovation-type exercise. You can look at examples of Policy Labs that were presented at the Global Innovation Meeting in Montenegro, and now again at our event in Singapore, such as the Mind Lab (Denmark) or designGov (Australia). The latter is more a governmental initiative which also applies a design thinking approach where it is government officials who do all the sensing. They have almost a purely government-driven lab, but they hire service providers who accompany this process. I wouldn’t be surprised if some of Syria’s neighboring countries had a few companies providing design thinking consultancy services that you can hire to facilitate governments finding solutions. This is different from just coming up with a solution from a UNDP office and then developing a pilot, which is the traditional way UNDP does business. Instead, it means that people really go down to the ground level, get a sense of problems, go back and develop prototypes, test them out, and then scale up what works.

Johannes: In the case of Singapore, did the government already have something like that in place, or did they ask UNDP to help them establish the lab?

Arndt: Here in Singapore the Human Experience Lab (THE Lab) is a relatively recent initiative, while Denmark, Australia and the UK have had policy labs for a while. The Singapore lab is only half a year old, but they have been practicing design thinking for a bit longer. Supporting such government labs is possibly an alternative option for UNDP, although these modalities don’t necessarily exclude each other. You could have a government-driven process going on, with social innovation camps as satellite events. That would be really nice because the innovation camps could feed into an official government-driven process and you could have a dialogue between the processes. That might actually be even better than each modality on its own. But of course you need more money for that.

Johannes: From your experience, how many of these solutions that come out of such modalities are IT-driven, like Web 2.0 or e-government solutions, and how many of them are non-tech, like processes or policies?

Arndt: The social innovation camps I’ve seen were almost all tech, but that is because we had specifically asked for tech. The output from the government lab was mostly non-tech, such as project re-engineering, cutting silos, cutting bureaucracy, finding new ways of communication with citizens, etc.

George: From its origin, the specific objective of social innovation camps is to apply Web 2.0 to social problems. So if you are looking beyond tech then you are looking at innovation challenges or design teams. Service design approaches do not involve crowd sourcing but can be a good way of generating lots of ideas for alternative approaches to service delivery. You certainly get non-tech solutions emerging from this. However, if you embrace an open process and just collect ideas, it actually doesn’t matter whether the solutions are IT-based or not. The point is: if you open up to many different inputs, you get good ideas.
Arndt: I second that. It is also easier to scale if the solution is non-tech.

Johannes: From all that you said, I understand that at one point we will have to make that national, regional or global call, when we put the question out there of who has a solution to a particular problem. How do you actually plan for that?

George: From our experience here in Armenia, once you have a sense of a problem, you can either do an open challenge where you say “tell us about problems and suggest solutions”, a slightly more specific one – “tell us about problems in this sector and suggest solutions” – or you ask a really focused question like “tell us about solutions for this specific problem”. Once you have the question you build a small website and integrate social media into it. Whatever the social media platforms of choice are in the country you are working in, focus on the top two.

Then you have to go into different communities, both online and offline, and start talking to people. You can’t just build a beautiful garden and expect people to come; you have to go out and engage with people where they are. Our intern worked very hard going into every single web forum in Armenia she could find, talking to people and looking for activists with whom we could work. Just by talking to them at a personal level about the issues, and through that engagement process, a lot of hits were generated on our website, which then turned into applications for the event itself.

You then complement that with a series of sensing workshops, where you are targeting specific groups of stakeholders. In your example you would talk to host communities, refugees, state officials, municipal officials – especially in front-line service delivery – and journalists. You invite them to a workshop where you ask them to define the problem more clearly. Or if you have already identified the question or challenge, you let them brainstorm on solutions. You also ask them “who do you know who is already working on this”, so you can meet them and see if you can support their work. From that engagement process you will get a lot of connections and ideas. Then you take the best 5 or 6 ideas into a prototyping event, much like a social innovation camp.

Johannes: Well, thank you so much guys, this all really helps our thinking here a lot. We will have to develop an action plan on this going forward. I will write this up, both in a blog post and in the form of a concept note for the Facility. And I’ll keep you updated on how I can inject those thoughts into what we are doing at the Facility, and we’ll be happy to share how far we have come with it in a few months’ time!