Thursday, 27 September 2018

We are both underestimating and overestimating the dynamics of Artificial Intelligence for KM: My 2 cents featured in the updated Agenda Knowledge for Development (K4D)

Today the Knowledge for Development Partnership (K4D) initiative released an updated version of the 'Agenda Knowledge for Development', which formulates 14 "knowledge goals" meant to supplement the UN's Sustainable Development Goals and strengthen the Agenda 2030 from a Knowledge Management perspective.

The updated version includes a new Goal 14, 'The arts and culture are central to knowledge societies', as well as 57 new statements (in addition to the 73 statements in the previous edition) from knowledge management practitioners in development across the world, including my own.



I've already shared on this blog my reflections on the role that I believe Artificial Intelligence will play in Knowledge Management in the coming years, and I've reiterated my view in my statement for K4D, which is now featured under Part II of the document (page 84), "Statements on Knowledge for Development", next to the statements from 130 of my fellow KM colleagues:

"It is easy to hail knowledge as ultimate driver for change and key resource to achieve the SDGs. The sobering reality, however, is that more knowledge doesn’t per se make the world a better place. In fact, one can argue that humankind has reached a point in history where there is more knowledge than it can productively handle. Despite the known benefits of democracy, support for democratic principles is shrinking worldwide. Despite the advances to human progress through science, increasing portions of populations wilfully choose ignorance and ideology over scientific evidence. And despite unprecedented access to news and information sources, consumers chose to rely on fake news instead of fact checking. These are symptoms of a world in which there is just too much information for the human brain to meaningfully process. And the instinctive response is to retreat to what we already know and are comfortable with, rather than expose ourselves continuously to a complex world in which discerning the best route of action among many truths is very hard work and just plain exhausting.

One way in which humans will try to resolve this in the next decade is that we will turn to Artificial Intelligence (AI) to sift through the massive amounts of knowledge and information available, and make sense of it for us. As with past tech trends, we are currently both underestimating and overestimating the dynamics of this technology in the way we manage knowledge. We are underestimating the profound transformational impact AI will have on the way we learn about, curate and analyze examples and insights from worldwide activities in our everyday work. And we are at the same time overestimating the extent to which technology can solve our underlying problem of using knowledge to better the human condition. Programmed biases in AI systems, questions of legitimacy and over-reliance on ‘black box’ AIs, and issues around ethics and local context are just some of the problems that we will have to resolve as we increasingly rely on machine learning. Knowledge for development needs to be mindful of the issues that knowledge complexity is triggering in societies, and brace itself for the full force of the AI revolution that will transform the way we manage this knowledge in the upcoming 10-15 years, so that we, as development practitioners, are well positioned to both reap its benefits and mitigate its pitfalls as we work towards achieving the SDGs."

What do you think? Is this view too preoccupied with the current global political context, which may be more of a momentary snapshot than a long-term trend, or do you agree that the dichotomy between information availability and our capacity to discern and process it will be the key KM challenge of the years to come?

Monday, 26 February 2018

How to program for uncertain results? The innovation journey of a 'slightly unusual' programme in UNDP

Innovations are driven by risk-takers. Part of UNDP’s role in innovation is to provide the space for risk-takers to develop and test their ideas. And it turns out that sometimes these are not individuals, but entire programmes! The Pacific Risk Resilience Programme (PRRP) covers the Pacific countries of Fiji, Solomon Islands, Tonga and Vanuatu, and takes an unusual approach within UNDP’s programme portfolio. Now in its fourth year, it didn’t follow the standard programming approach, in which a challenge and a development model are identified at the beginning and a set of interventions is designed and then rolled out over the following years, with clear activities and results prescribed for each year. PRRP didn’t actually describe the model or the interventions themselves at all. Instead, it let the model emerge over time by running sprints of interventions and evaluating them frequently, an approach known in the information technology world as ‘agile development’. I've talked to the programme manager, Moortaza Jiwanji, about their approach, what they learned from doing things differently, and the implications of their experience for UNDP programming.

Q: Why did you feel the need to do things differently than in ‘traditional’ programming?

The main reason was that we had to develop something for which there was no precedent. We were venturing into unknown territory, and it made little sense for us to prescribe what results would look like four years in advance, with a results framework that pretends to know exactly what activity would be best to deliver by year 4. We simply couldn’t see how that would work.

Climate change and disasters have a real impact on people in the Pacific. Despite unprecedented levels of funding and programming in the region, it is disheartening to see communities still experiencing the same types of climate change and disaster impacts; in some cases, these are becoming worse! This is particularly concerning given that the symptoms of climate change such as cyclones, flooding and droughts are likely to increase in intensity and frequency in the future.

Much of this programming in the Pacific has led to concrete results on the ground, but we felt that not enough thought was being put into addressing the root causes of these vulnerabilities. It is much less obvious how development itself is being adjusted to address these risks. For instance, why is it that schools and houses are still built in flood-prone areas without the appropriate materials and design codes? It is also becoming increasingly clear that development itself is a primary cause of this vulnerability to climate change, which could perhaps explain the cyclical nature of these impacts.

We realized that something needed to change within development itself and not just in climate change programming. Most programming is focused on technical solutions such as building sea-walls rather than dealing with the root causes. It seemed at the time that there was not much programming experience in dealing with climate change from a ‘development’ perspective. We knew what needed to change but there was limited experience in the region and globally to show us how.

That’s why we decided to develop a model for risk-informed development without any preconceived ideas about what it would look like and how it would work. Our starting point was to address deep-seated governance issues, not for climate change, but for development. We also felt that this would be an opportunity for UNDP to build a niche for itself, particularly given that we are a ‘development’ agency that also deals with governance reform. We were able to do this through the Pacific Risk Resilience Programme (PRRP), which started in 2013. PRRP was funded by the Australian government, which was willing to try something different given that it, too, was not seeing aggregate results in the region.

Q: What exactly was ‘different’ about your approach?

We knew we had to do two things differently: First, in order to tackle the root causes of climate change and disaster risk, we had to work deep within development itself. And second, because at the time back in 2013 there was not much experience of dealing with climate and disaster risk from a truly ‘development’ perspective, we had to follow an approach that was largely experimental at the time and depart from more traditional approaches to programme design and implementation.

So what was different about our approach? First, unlike most development partners in this area, we did not work as an outside partner with climate change and disaster management functions in government. Instead we programmed ‘from within’ governance systems, where our government partners owned the development interventions from community to national level. We also used a human-centered design approach that focused on developing individual mechanisms with the same people who were going to apply them in their government ministries and agencies. Both these aspects allowed our country partners to help design and fully lead the initiatives themselves, rather than UNDP leading the way. This admittedly raised some eyebrows at the time, as there was an expectation that climate change programmes would work with (and provide funding for) the ‘usual suspects’.
Figure 1: The Innovation Feedback Loop

Second, unlike standard programme design approaches, we did not predetermine our activities and outputs well in advance for the next four years. Instead we built smaller, targeted experimental interventions where we saw some prospects of success (see adjacent diagram), e.g. where we found receptive partners and conducive political environments. This allowed us to understand which activities yielded the best results as we implemented them. We then spent a great deal of time and energy measuring any apparent successes and, even more importantly, failures. Learning from these experiences was the most important ingredient, and we did this collectively with our partners. So whether a pilot leads to measurable successes or failures, the real success of this approach comes from how well you learn and subsequently redesign and modify approaches based on these learnings. Through this iterative process we then developed the overarching model as it emerged from those experiences. The interesting thing about this experience is that we developed this modus operandi ourselves, completely unaware that UNDP’s global Innovation Facility was promoting exactly such innovation and design approaches that encourage this type of experimentation. At the time of designing this programme back in 2013 we decided to call this approach ‘emergent design’, but it aligns very much with the innovation principles of agile development and problem-driven adaptive iteration.
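For readers who know 'agile development' from software, the loop Moortaza describes maps quite naturally onto an iterative algorithm. The Python sketch below is purely illustrative: every name and the pass/fail stand-in are invented, and the real loop is of course run by people in workshops, not by code.

```python
# A toy sketch of the 'emergent design' loop described above (Figure 1).
# Purely illustrative: names and the pass/fail stand-in are invented,
# and the real loop is run collectively with partners, not by code.
import random
from dataclasses import dataclass

@dataclass
class Experiment:
    name: str
    promising: bool  # receptive partners and a conducive political environment?

def run_and_evaluate(exp: Experiment) -> bool:
    """Stand-in for a small sprint: implement, then measure successes AND failures."""
    return random.random() < 0.5  # in reality: collective learning, not chance

def emergent_design(candidates: list[Experiment], rounds: int = 4) -> list[str]:
    model: list[str] = []  # the overarching model emerges from what survives
    for _ in range(rounds):
        for exp in [c for c in candidates if c.promising]:
            if run_and_evaluate(exp):
                if exp.name not in model:
                    model.append(exp.name)   # keep what worked
            else:
                exp.promising = False        # redesign or retire what didn't
    return model

print(emergent_design([Experiment("risk governance in budget processes", True),
                       Experiment("stand-alone technical fixes", False)]))
```

The point of the analogy is simply that the model is an output of the loop, not an input to it.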

Based on the learning from these experiments, we have now developed an overarching model around the concept of ‘risk governance’, designed and tested to risk-inform development ‘from within’ and at all levels of governance. For more information you can read our recently launched policy brief and you can also see practical examples of how this is benefiting countries in the Pacific on our website www.pacific-prrp.org.

Q: What were the challenges you encountered?

Risky business. Developing a programme based on emergent design or agile development principles is extremely exciting. However, it can also be quite stressful because in essence you are taking a significant risk in programming something that has not been tested successfully yet. This is particularly challenging when it comes to convincing your programme stakeholders, such as your donor, country partners and even internal management.

Raising eyebrows. In the early days, we seemed to develop something of a reputation as the slightly unusual programme within UNDP. This was not always cast in a positive light, partly because we did not have a fixed and clearly defined results and resources framework over a four-year period.

Buy-in from stakeholders. There are three types of stakeholders that we dealt with through this experience: the country partners (or beneficiaries); our donor partner; and UNDP itself. The approach of working from within and building governance systems to risk-inform development was most positively received by our government and donor partners, and then eventually by our managers within UNDP. This took a little while, perhaps largely because we were venturing into the unknown and did not have a clear narrative to describe and justify our approach, particularly in the early days.

It can take some time to show predictable and regular results. Agile development or emergent design approaches can take some time to achieve tangible results, and it is almost by definition impossible to predict when and how results will be achieved. This was particularly challenging when working in an environment where programmes are expected to report on results against clearly defined outputs and targets at least every quarter.

Q: What were the benefits of taking this approach compared to more traditional approaches?

In essence, agile development allowed us to get results that otherwise would have never emerged had we prescribed our specific outputs the traditional way several years in advance. And on top of it, the solutions that we did get through the agile development approach now address the actual problem we’re trying to tackle much better.

Over time we saw that taking this approach was extremely beneficial, particularly to our government partners, in offering more sustainable and realistic solutions to the complexities of climate change and disasters in the Pacific. You can see this by the fact that our country partners are now advocating for this approach within their own countries.

Unexpected solutions. What’s really interesting is that taking this approach has led to solutions that we would have never designed up front. For instance, we now have Ministries of Women leading on climate-informing community development initiatives. Private sector networks are now being formed to not only work better together in times of disasters but also to provide a more effective link with government and partners. Local governments are leading the way in risk-informing infrastructure projects. You can see these examples and others on our website under ‘Results’ on www.pacific-prrp.org.

Ability to adapt. Most of our country partners have really appreciated UNDP’s ability to adapt to a constantly changing environment. They often feel that projects with fixed activities and outputs over a four-year horizon are unrealistic and can compromise their own ability to initiate real change on the ground. What we see now is that our partners are leading the way, and collectively we continue to discover new innovations.

Finally, taking this approach to development programming is immensely rewarding both on a professional and personal level. It almost feels as if there is no other way to deal with the complexities of development in the Pacific and even beyond.

Q: What would you recommend to others who want to take this approach?

I would recommend four key things. First, don’t be afraid to fail, and be completely open about this with your partners. This is critical in finding innovative solutions to complex development challenges. Secondly, invest in smaller, manageable initiatives through prototypes. This will help minimize your risks and allow for real creativity. Third, you will have to tailor your results framework so that you frame your activities and outputs as, for example, the number of experiments run and evaluated, or the number of experiments identified for scaling up, rather than describing up front what these experimental interventions will specifically look like. This will give you the leeway to explore uncommon and innovative solutions, while still holding yourself accountable to measurable milestones within this agile development journey. Finally, taking a leap into the unknown can be risky and can create negative perceptions of your work around you. Develop a small group of like-minded colleagues from within and outside the organization who are genuinely willing to try this out and support you. At the same time, it is imperative to engage management early on, in an open but confident way, about what you are doing and why.
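As a purely hypothetical illustration of the third recommendation, a results framework reframed around process milestones might look something like this in structured form (all indicator names and targets below are invented, not PRRP's actual framework):

```python
# Hypothetical illustration only: a results framework that frames outputs
# as measurable milestones of the experimentation process itself, rather
# than prescribing the interventions years in advance.
agile_results_framework = {
    "output": "An emergent model for risk-informed development",
    "indicators": [
        {"name": "experimental interventions designed and run", "year_1_target": 6},
        {"name": "experiments evaluated with partners",          "year_1_target": 6},
        {"name": "lessons fed back into redesign",               "year_1_target": 6},
        {"name": "experiments identified for scaling up",        "year_1_target": 2},
    ],
}
```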

Q: What could all this mean for the future of UNDP’s programming?

We had very interesting conversations with counterparts within institutional donor organizations who frankly told us that refining this agile development approach further could be very rewarding for UNDP. It would allow the organization to position itself as a unique implementing partner that can offer a different way of programming than most other implementation contractors, especially in programmes that try to tackle government reform issues. I feel that the future for UNDP and similar organisations working in this space lies in innovating programming itself through such agile development, or ‘emergent design’, principles. Not exclusively, but at least as part of its portfolio. Not only is this approach scarce in the development space, but more country partners will want it, because it is particularly suited to addressing complex development challenges for which no clear solutions exist yet. This needs to go beyond mimicry, though, and requires fundamental behavioural shifts in how we design, execute and evaluate our work. But the outcomes are worth it. As I said, this has been the most rewarding professional and personal experience for me so far.


Tuesday, 16 January 2018

Artificial Intelligence will change Knowledge Management as we know it

I recently came across this blog post by a start-up that is training an Artificial Intelligence to read and write at the level of a specialized human analyst and to produce briefings in human language from a set of different information resources. It’s just one example of the many companies currently working on this challenge. The obvious clients are intelligence agencies, governments, or news agencies, but this will enter all of our everyday work soon enough.

I firmly believe that this is what knowledge management in large organizations will look like 10-15 years from now. In my organization, we’re challenged daily to consolidate the key lessons and insights from all our country-level programmes and experiences, let alone meaningfully combine them with information, trends and insights from the larger development sector. We complain that we’re overwhelmed by the information overload that social media, Yammer and knowledge networks impose on us, and retreat to focusing on a narrow set of information that confirms our biases, pretending we know what we need to know, when in fact we only ever have a small piece of the puzzle. Artificial Intelligence promises to overcome this dilemma, as it will have immediate access to all available information and can do the necessary analysis for us.
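To make the idea less abstract: even the crudest 'machine that reads for us' can be sketched in a few lines. The toy Python example below ranks a document's sentences by TF-IDF weight and returns the top ones as a mini-briefing. It is a naive stand-in of my own for illustration, not the start-up's technology, which goes far beyond such extractive tricks.

```python
# A toy sketch of the simplest possible 'reading machine': extractive
# summarization that scores sentences by TF-IDF weight and returns the
# top-ranked ones as a mini-briefing. Illustrative only; real systems
# use far richer models that can also *write*, not just select.
from sklearn.feature_extraction.text import TfidfVectorizer

def mini_briefing(document: str, num_sentences: int = 3) -> str:
    # Naive sentence split; real systems use proper tokenizers.
    sentences = [s.strip() for s in document.split(".") if s.strip()]
    if len(sentences) <= num_sentences:
        return document
    # Weight terms by how distinctive they are across sentences.
    tfidf = TfidfVectorizer(stop_words="english").fit_transform(sentences)
    scores = tfidf.sum(axis=1).A1  # total TF-IDF mass per sentence
    # Keep the highest-scoring sentences, restored to original order.
    ranked = sorted(range(len(sentences)), key=lambda i: -scores[i])
    top = sorted(ranked[:num_sentences])
    return ". ".join(sentences[i] for i in top) + "."
```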

We might not be quite there yet in making this practical for organizations like UNDP, but we’re getting closer and closer. Last year, we as the UNDP KM team at HQ engaged with a well-known AI systems provider, and while both organizations were not quite ready to commit to partnering on an AI system that can make sense of unstructured texts, trends, insights and lessons in the development sector, we will have to get real about this soon if we as an organization want to be ready for what is to come. To quote the same article above: “With technology that can read and write, you have the flexibility to generate custom insights in any format or level of detail. If you’re a subject matter expert, Primer can tell you a detailed story that takes your knowledge into account. If you’re new to a subject, it can generate an introduction to get you up to speed quickly. If you have an interest in a particular angle on the story, or a geographic lens that you want to zoom in on, the insight can be customized for you. Imagine the possibilities if you had one thousand analysts working for you, all day, every single day. What questions would you ask, what kinds of briefings would you have them prepare?”

Now, technology can only ever be one part of the solution. It is important to keep in mind Dave Snowden’s adage that if you have $1 to invest in KM, you should invest 99 cents in connecting your employees over shared opportunities and 1 cent on content. Connecting people has always been (and will always be) at the center of knowledge management: connecting staff to those who have the skills, enabling them to identify the right people, to collaborate and research in real time, and to turn the results into actionable insights. It’s why other UN organizations have often looked to UNDP for KM advice: it regularly chose to make strategic investments in connecting and fostering networking among its staff, being the first UN agency to pioneer email-based knowledge networks in 1999, the first to introduce organization-wide corporate social networking with its award-winning platform Teamworks in 2009, and continuing that trajectory with Yammer (among other things) today. Connected people are the ‘operating system’ of any meaningful KM effort.

But what forefront thinkers in the AI space tell us about AI’s implications for governments is true for international organizations like UNDP as well: AI will relieve knowledge workers from drudgery, split our work into automated tasks (e.g. research, collation) and human tasks (value-based decision making, social interactions), and augment the capacities of knowledge workers by adding layers of real-time and predictive analysis that humans couldn’t produce by themselves. Together with many of my KM colleagues who are much more skeptical about AI than I am, I believe the focus will be on augmentation, not replacement. Nonetheless, all indications suggest that we are at the beginning of a revolution in what knowledge work looks like, and organizations like UNDP will be affected internally by both the benefits and the risks. The only way to get ourselves ready is to do what the innovation community always does: strive to get our feet wet early, and learn, learn, learn.

Tuesday, 4 October 2016

Who is Reading UNDP’s Publications, And Why?

[This post was originally published at UNDP.org on Oct 3, 2016]
It has been two years since the World Bank published a report stating that over 30 percent of its policy reports had never been downloaded even once, and only 13 percent had been downloaded at least 250 times. The debate among development practitioners that followed made it clear that the World Bank is far from alone in this, and that most international organizations, including UNDP, face the exact same challenge.
As UNDP provides support services for implementation of the Sustainable Development Goals (SDGs), we in UNDP’s Knowledge Management Team see the importance of getting insights into the perceived value of our knowledge products and, by extension, into UNDP’s thought leadership on various SDG topics.
In fact, UNDP’s Knowledge Management Strategy 2014-2017 pointed out that UNDP needs to invest in its process of planning, developing and disseminating knowledge products in ways that make them “more relevant to clients’ needs, more flexible and timely in their development and format, and more measurable in their quality and impact.”
During the debate that followed the World Bank’s report, we in the Knowledge Management Team at UNDP thought long and hard about how to get meaningful data on who is actually reading our publications, to what extent those readers find individual publications useful, and, most importantly, what those products are actually being used for. To do this right, one would almost need to talk to each individual reader and ask them one by one, which is all but impossible on an ongoing basis. Or is it?
Well, after several prototypes and tests during the last year, we’ve finally come up with a model to do just that. In March 2016, we tweaked UNDP’s Public Library of Publications so it would present users with a post-download pop-up asking them whether they would be willing to leave their email address so we could contact them later.

In the six months since we introduced this question, over 42,000 users left us their email addresses, and we have since followed up with 27,000 of them (through a weekly survey issued a few weeks after the download of a specific publication), asking them how useful they found the specific document they downloaded, what organization they are with, and whether and how the publication made a difference in their work. As of September 2016, we have received 1,186 survey responses, and the insights we get from our audience go far beyond any of the intel we had in the past.
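For those curious about the mechanics, the follow-up logic is conceptually as simple as the sketch below. This is a simplified illustration with hypothetical field names, not our actual implementation: each week, pick the downloads that are a few weeks old and not yet surveyed, and send those readers a short survey about that specific publication.

```python
# Simplified sketch of the weekly survey follow-up (field names hypothetical).
from datetime import date, timedelta

def due_for_survey(downloads: list[dict], today: date,
                   delay_weeks: int = 3) -> list[dict]:
    """Return download records old enough for follow-up and not yet surveyed."""
    cutoff = today - timedelta(weeks=delay_weeks)
    return [d for d in downloads
            if d["downloaded_on"] <= cutoff and not d["surveyed"]]

downloads = [
    {"email": "reader@example.org", "publication": "Human Development Report",
     "downloaded_on": date(2016, 8, 20), "surveyed": False},
]
for d in due_for_survey(downloads, today=date(2016, 9, 15)):
    # send_survey(d["email"], d["publication"])  # e-mail step omitted here
    d["surveyed"] = True
```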
We can now see how useful our publications are to our users, and to what extent specific publications reflect UNDP’s thought leadership on a given topic:


Even with the possibility of a voluntary response bias, the numbers serve as a valuable baseline to track changes in perceived usefulness over time. In addition, we now get, for the first time, a clear picture of who is getting value out of our publications:
And most importantly, we learn from our audience how and for what purpose they use the downloaded publications in their work:
We are also getting great qualitative feedback on how we can improve specific publications in the future, and the individual comments provide great anecdotal evidence at the project or community level that demonstrates the impact of UNDP’s work on the ground. Here are some of the impact stories we’ve received:
  • “The publication was used in the development of our food security and livelihood strategy for the Uganda refugee operation.”
  • “The publication has been useful as a starting point to persuade managers of Nature Reserves and Forest Reserve to consider ecotourism planning besides conventional forest management planning.”
  • “Some of the inputs were used in our legislative agenda setting, especially those that are applicable to the Philippines situation.”
  • “I am working in Rwanda’s Environment Management Authority and the publication is useful for public sensitization.”
  • “I introduce the paper to PhD students in my development administration class and asked them to prepare a paper on SDG targets.”
  • “The publication was of fundamental importance for the Pedagogical Political Plan formulation for professional training courses developed within my organization, the Military Police of Mato Grosso, Brazil.”
Going forward, we are making this qualitative feedback available to all our staff, so they will be able to look up their publication and go through all the individual comments it received. It is this kind of evidence that shows us where investment in the quality of our publications pays off, and where we need to switch gears, improve our efforts, or shift our focus entirely with regard to specific thematic areas. Most of all, it is these stories that inspire us as staff on a daily basis, as they remind us why we are doing what we are doing in our pursuit of sustainable human development.
Of course, this measurement approach only reaches those who download publications online, and misses all those who receive them as hard copies or through presentations at workshops and conferences.
What did your organization do to get feedback from your offline audience, and do you have any suggestions for how UNDP could fine-tune the above measurement approach? Leave comments below, I’d be glad to hear your suggestions!

Thursday, 18 June 2015

The “Duh-test”, or what is not a lesson learned

I was recently reviewing a number of texts that my organization collected from past projects and initiatives (some through an internal mandatory monitoring tool, others gathered as part of After Action Reviews or Lessons Learned Papers), all of which were meant to capture ‘lessons learned’ from specific experiences.

And while these texts were not wrong per se, I realized that there seems to be a fundamental misconception about what constitutes a good lesson and what doesn’t. Here are a few typical examples of what we often collect as part of such lessons learned exercises:
  • “Ensure that the [Team] Manager has excellent leadership, project and team management skills, understanding of programming and experience working in [the subject matter].”
  • “Project outputs must be compatible towards project goals. Throughout the project there is a need for careful identification of project goals and outputs to ensure that they are compatible with each other. This can be only ensured through a consultative and participatory approach in project design with target institutions, implementing partner and experts.”
  • “Managing relationships between key national and international players during [the project activity] is very important. Recognizing and respecting national ownership and leadership of the process is vital and key to winning the trust of the national authorities.”
  • “The better local authorities are involved in the process, the better the expected results are easily achieved and durable.”


The above examples are representative of a common type of lessons learned write-up, one which fails to pass what I would call the “Duh-test”:

If a ‘lesson learned’ statement is so obvious that it is self-evident to every reader, and at the same time so generally applicable to almost any type of project or initiative, it basically becomes meaningless.

It is good when a team realizes that it failed to put in place a team leader who has leadership and team management skills (and yes, it should remind itself to do better next time), but there is literally no value in sharing that learning point with others outside the team, simply because everyone already knows that this should always be a criterion for selecting team leaders. There is nothing new to learn here that would change anyone’s views or actions.

Also, if a lesson is so generic that it could apply to any scenario, we deprive ourselves of the learning effect that comes from understanding the particular conditions that made a project work or not work, so that others can try to replicate or avoid those conditions.

Such lessons, either too obvious or too generally applicable, produce ‘lessons learned noise’: the same lessons are reported from countless projects over and over again without anyone actually learning from them. At the same time, this noise distracts everyone’s attention from the meaty lessons learned pieces that really provide value to a wider audience.

So what is it that makes lessons learned write-ups actually add value? Maybe asking ourselves the following three questions could help make lessons learned statements worth capturing and sharing:
  1. Will anyone else actually learn something new from this lesson, as opposed to self-evident truths that everyone already knows? This is the “Duh-test” and should always be the first criterion.
  2. Is this lesson particularly relevant to your specific situation, as opposed to a lesson so general that it would apply to any scenario? The more general a lesson is, the less useful it is.
  3. Does the lesson include, or lend itself to, a concrete action that you or someone else can take in order to effect a change in future practice? Capturing a lesson is only meaningful if there is an actual change triggered by it.


But aren’t the ‘bad’ examples mentioned earlier still true and important to highlight, even if they are not particularly new or context-specific? Doesn’t the fact that everyone agrees with them intuitively, and that they apply to all our projects and initiatives, make them all the more valuable?

Absolutely! But I would never call them ‘lessons learned’. Rather, these are important principles that anyone should abide by, no matter what subject matter expertise or functional role someone has. We should treat them as guiding lights for our work, teach them in our training curricula, communicate them in our onboarding and induction sessions and embed them in our policy guidance. Some lessons from projects, if they are collected often enough, might eventually be added over time to such a common canon of principles. But we should stop collecting what is already part of that canon over and over again from individual projects, which is no good use of anyone’s time.

Monday, 13 October 2014

What remains after the bonfire: How do we define the success of an event?


During the last few weeks I was heavily involved with the SHIFT Week of Innovation Action, a series of parallel events taking place in 21 different country offices. Over 50 practitioners were invited to ‘shift’ from one country office to another to share their experience on innovation methodologies and what they learned from their ongoing innovation projects (many of them funded by UNDP’s Innovation Facility), learn from others, and ‘shift mindsets’ in the process.
As part of the team that coordinated the event week, I was in awe of the incredible energy coming from country office colleagues, and of the enthusiasm, creativity and time commitment on the side of organizers, participants, and the coordination team here in New York. And from the feedback that has been rolling in so far (the evaluation survey shows about 95% of participants were satisfied or very satisfied with the event), it seems the SHIFT initiative was a success all around.
Yet, we all remember other instances of well-organized events that achieved great visibility, but when people were asked months later what the impact of the event had been, we didn’t have much to show for it.

So you had a nice event that brought people together and left everyone happy and excited, but so what? What came out of it?

I believe we have to be very honest about how we define the success of events. Yes, it is good when participants convey in a survey how much they enjoyed the gathering. And it is also great when the event achieves visibility and external recognition through good communication during and immediately after the event, such as national media coverage of the SHIFT hackathon in Belarus, great videos produced about SHIFT events in Haiti, Montenegro or Georgia, or outreach products such as the SHIFT Exposure compilation, which give audiences a glance of what happened.
But it is not enough. Because if, 12 months from now, none of the new ideas generated have inspired actual initiatives, projects or products, if none of the innovative prototypes developed have been applied in real life, none of the solutions shared have been successfully replicated or brought to scale, and no one who couldn’t participate in person has a chance to learn from what was discussed at the event – then I don’t think we can call the exercise a success.

Then it will just have been a bright bonfire that burned for a single night. We have a nice picture of it, but it will not warm anyone going forward.

So here is what I think is needed to make events worth the investment we put into them in terms of time and money. And please feel free to add your own bullet points to this list:

1. Set up an after-event communication plan, and follow up diligently

Rather than letting organizers and participants disperse after a good event, let’s use the current momentum and excitement when people return to their offices. Make a plan on how we want to communicate the results, increase visibility and leverage the event’s discussions and activities to initiate new collaborations, products and projects. Maybe this is the opportunity to promote an existing Community of Practice (COP), or establish a network of mentors around your topic! Make sure to use all available channels, from internal COPs, to external online networks (LinkedIn, Devex, DGroups, World Bank networks, etc.) to public social media channels (Twitter, Facebook, Slideshare) and try to engage new audiences.

2. Relentlessly focus on knowledge and learning products

Communication products and activities are crucial for getting recognition and visibility, and for reporting back to donors. But the important substance, the ‘meat’ of knowledge and learning points, is what others really need in order to apply the results of the event to their work. Where can new colleagues who join the organization six months from now access the video recordings and slides of the presentations given, so they can follow the event’s learning points? Where can they find blog posts and short interviews with personal insights and reflections of participants on what they learned at the event and how they intend to apply it to their own work? And where are the hands-on knowledge products that help them review the examples shared and apply the solutions that were discussed? If there are only glossy brochures and good-looking PR videos, but no substantive project examples, how-to articles, lessons learned summaries, guidance notes or toolkits coming out of the event, then we might look good externally, but the event was still a failure for the organization, as nobody other than the handful of on-site participants will learn anything from it.

3. Track the status of initiatives and projects coming out of the event

One of the reasons we as organizations facilitate working-level events is to fulfil our role as a broker of exchanges that inspire and improve our projects and programming. We must come to an understanding that we cannot afford to organize events that look great from the outside but do not result in concrete, improved approaches, projects and initiatives that are replicated and scaled up in other countries and regions. We need to wrap up events with concrete commitments on what will happen next, and be diligent in checking in with organizers and participants at different intervals after the event on how their commitments, prototypes and follow-up activities are evolving (and no, just planning the next event to discuss the issue further doesn’t count! ;). That means that as an organization we have to expect more from participants than showing up and consuming presentations; rather, everyone should become part of an active knowledge production and application process that extends far beyond the event’s closing session.
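As a hypothetical illustration of what 'checking in at different intervals' could look like in practice, a minimal commitment tracker might be as simple as the sketch below (the intervals and the example commitment are invented for illustration):

```python
# Hypothetical sketch: record commitments at the closing session, then
# generate check-in reminders at fixed intervals after the event.
from datetime import date, timedelta

CHECKIN_MONTHS = (1, 3, 6, 12)  # example intervals, not a prescription

def checkin_schedule(event_end: date, commitments: list[str]) -> list[tuple]:
    """Return (due date, commitment) pairs for every check-in interval."""
    return [(event_end + timedelta(days=30 * m), c)
            for m in CHECKIN_MONTHS for c in commitments]

for when, what in checkin_schedule(date(2014, 10, 10),
                                   ["follow up on hackathon prototype"]):
    print(when, "-> check in on:", what)
```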


This is all much easier said than done. For SHIFT week, our team is trying to practice these points: by setting up an editorial calendar through which we will keep communicating about SHIFT results in the upcoming weeks and months, by supporting the formation of mentor groups for follow-up questions, and by following up with teams on potential knowledge products that could emerge from the different events. I know there will be a lot of imperfections along the way. But if, at the end of the day, there are more products that others can really learn from, such as the Guidance for Project Managers on Crowdfunding, the live-stream recordings from Jamaica and Egypt on design thinking with governments, or the top tips and questions from the SHIFT Rwanda coffee learning session, and if brilliant initiatives such as the 112 emergency service for people with hearing and speech impairments in Georgia or the bilateral knowledge exchange on public service centers between Bangladesh and China can be turned into re-usable guidance for other countries to build on, then we can truly say that the SHIFT Week of Innovation Action was a huge success.



In your opinion, what other elements are important for defining the success of events?

Thursday, 29 May 2014

Rethinking knowledge products after the 'PDF shock': Make them leaner, faster, and never without the community!

Since the World Bank published its report earlier this month, stating that over 30% of its policy reports have never been downloaded even once (!) and only 13 percent were downloaded at least 250 times, a fascinating debate on the purpose and value of knowledge products has been flourishing across the web, and the posts from KM practitioners everywhere keep pouring in.

It’s not just the World Bank, but most international organizations

Interestingly, I have been thinking about exactly these questions for the last nine months, as I was drafting UNDP’s new Knowledge Management Strategy for the upcoming years. Here’s a passage which captures UNDP’s own dilemma regarding knowledge products:

“The current process of knowledge product definition, development, dissemination and measurement does not yield the quality, reach and impact that is needed for UNDP to be a thought leader in development.” The Strategy goes on to stress that UNDP intends to revise its process of planning, developing, and disseminating knowledge products in a way that makes them “more easily accessible, more relevant to clients’ needs, more accountable towards the community they seek to engage, more flexible and timely in their development and format, and more measurable in their quality and impact.”

Format matters

Many contributors to the debate, such as the commenters on the respective Washington Post article, the DevPolicy Blog, Crisscrossed, or my KM colleagues from the KM4dev network, highlight that we have to get much smarter in developing formats that actually appeal to an audience that is increasingly passing on lengthy, unappealing reports and papers. And there is a lot of truth to this. Colleagues at UNDP are increasingly learning that short and snappy products, such as blog posts, 2-pagers or infographics, allow them to communicate important key points from their work to a larger audience, and in a more just-in-time fashion. Compared with heavy research reports, which take months or years to finalize, light-weight formats allow for adjusting content quickly as new data and evidence emerge, which makes the product more relevant and timely the moment it is distributed.

The launch of a paper cannot be the end of the project

Ian Thorpe (who arguably came up with the crispest blog title in the debate so far ;) also makes an excellent point in clarifying that we have to invest much more in dissemination and outreach. All too often, the launch of a product is declared the successful end of a research project when, in fact, it should be just the starting point of a whole new phase in which we reach out to potential audiences through all possible traditional and social media channels, organize webinars and on-site events to raise awareness of the knowledge product and its key points, and inject ourselves into ongoing debates where our product can add real value. Budgets for the development of knowledge products leave this part of the process chronically underfunded, and we as KM practitioners need to make the point that a dissemination and public engagement strategy has to be an integral part of any knowledge production process.

The real issue is the lack of community feedback loops

But while clear abstracts, interesting illustrations, good formatting and focused outreach will go a long way in mitigating the “too long; didn’t read” (TL;DR) problem, my personal belief is that we must pay much more attention to where the problem of unread knowledge products starts: at inception. The Complexia blog nails it when it points out that there is a “lack of demand-driven research” in which “research projects tend to be more driven by the interest of individual researchers”.

How can it be that organizations give authors the green light to develop papers and reports without any preliminary analysis of what the targeted community needs and whether the product is likely to find an audience? How is it possible that we can go through an entire production cycle without probing with the relevant communities of practitioners outside our organizations whether the questions we ask and the conclusions we draw resonate with the audience that is supposed to benefit from them? And not just once, in a peer review when the product is almost finished, but at every step, from inception to the formulation of research questions, outline and early drafts?

It is clear to me that we need to get rid of our internal navel-gazing posture and get much better at involving the relevant communities much earlier in the process, and at much more frequent intervals, than we do today. This is not rocket science: such ongoing feedback loops can be achieved through regular blog posts about work in progress, a targeted e-discussion at an early stage, and frequent participation in external online fora to vet ideas. But it requires that authors start seeing themselves not as isolated writers, but as facilitators of a larger debate who are tasked with feeding the essence of that debate into their product. Authors who make a living off the actual impact of their publications understand this, as you can see from the countless books of business advisors and speakers. Authors who are just hired to deliver a product for an organization by a certain deadline (often without even being credited for it) don’t have that incentive.

Are we at international organizations ready to change this? What can we do to turn this pattern around and start thinking about the relevance of knowledge products from the users’ perspective?