The CEDARS Center / SHOUT Group Blog


Sunday, February 12, 2017

Fiction or non-fiction in international development

I will not blame the author of this iconoclastic drawing (Steve Hodgins) for what follows; I take full responsibility for the text it inspired.

"Fiction" or "non-fiction"? Would we consider our work in global health and global development as one or the other? Easy question. Right?

Well, I've long had this word stuck in my brain: "pretense". Maybe that's why Steve's drawing triggered me to post it and write a comment.

We often think of corruption as "them" -- those people who take personal benefit from public efforts. The district manager who collected 90 days of per diem in just a month (truth or urban myth? not impossible at all). The West African country where most workshops take place in a hotel just outside the administrative boundary of the capital city, so that everyone gets travel per diems. The many places where we allow "sitting fees" to discuss policy and programs with officials, because presumably they hold their populations hostage, and either we care so much about the populations, or we care so much about the success of our projects -- who knows? Maybe it's a mixture of the two. But anyway, these cases of corruption are -- in a post-colonial mindset -- "their" problem.

We talk far less of the corruption of pretense, which this picture illustrates so well.

"How does it relate to our work, really?" you may ask.
Of course, all of us sometimes embellish the promises of our tools, methods, and proposals a bit. But that's par for the course. We're only human. Development is a vocation; it's also a job, and -- yes -- a little bit of an industry. What are you going to do? we ask.

But you see, words matter. And when our rhetorical inflation keeps "scaling up", then our words get further and further away from reality. At some point... that disconnection from reality matters. Maybe there's only so much that alternative facts can deliver...
That's why some people care deeply about evaluation, by the way -- evaluation is nothing more than a methodological quest to move away from marketing and rhetoric and rediscover a little bit of truth when we can. (I often argue about the need for a diversity of evaluation methods, and about the appropriate nature of evidence given the question and context, but I never argue about its fundamental importance.)

So how bad can this wandering from "non-fiction" to "fiction" be?

I'll give just one example, because on this example I feel pretty confident that I have looked at the good, the bad, and the ugly of our work and evidence.

Here's one clear sign that you've entered the "fiction" section of international development: reading somewhere about "ensuring sustainability".
- "Ensuring sustainability" is like promising a free lunch, free money, or predicting the future. Mystics can promise it, but there's no space in scientific language for such a hyperbolic phrase. So when a donor uses it, it simply means "make us dream", or "we haven't thought much about this, but please tell us something that looks good". And when the proposal writer writes a paragraph about "ensuring sustainability", given that sustainability is about 1,001 necessary but not sufficient conditions, it only means "here's a selection of factors of sustainability that we think you will like as a story". Basically, it's pleasing fiction. To be sure, someone will have to come up with a list of indicators, because in that fiction section of the library someone has come up with a list of indicators of sustainability which are -- you know it -- "SMART". Of course, there's not one study in the world to validate this list of indicators, but remember: we are in the fiction section. As long as there's a fit between the empty promises of the proposal and the empty request of the RFP, it's still a good score on the proposal. All is good.

Now, projects happen in the real world. And there's a lot of pressure on them. Project managers are making 34 decisions a day (I made up that number just now; I like it), and they have a very demanding job. Usually -- and I don't blame them -- slowing down the project to address all the demands of negotiation, consultation, conflict management, and forward thinking required by sustainability is not really something that they can do. Their decision space is more about reducing unintended effects on sustainability than about maximizing it.
Six months from the end of the project, there will be a lot of attention paid to "handing over" and "transitioning". Too late, I'm afraid.
And the final evaluation will state, as it has stated 4,685,904 times before (I love this new world where we can make up our own numbers!) that -- wait for it:
be ..
from ...
<you know it's coming, right?>


But you see - until that moment - we were looking at sustainability in the fiction side of the library.

That's a form of corruption of our mission and efforts -- not the only one -- but think about this cartoon next time you see the words "ensure" and "sustainability" in the same sentence. (And maybe you'll find other examples where we corrupt our best judgement and professional thinking.)


Thursday, January 26, 2017

The no-blueprint blueprint to development

Eric Sarriot 

Note: I moved last August to Save the Children in Washington DC, where I now work as Sr Health Systems Strengthening Advisor. I look forward to continued engagement with my friends and former colleagues at ICF/CEDARS.

It’s a little-known secret that I own a network of low-altitude satellites which monitor every workshop and conference on sustainable development in the world. Like, every day.

I took a random representative sample of all 784 workshops and meetings on development that were held last Tuesday and did a textual content analysis. The following exchange was recorded verbatim 1,458 times, which corresponds to 1.86 times per meeting with a 95% C.I. from 1.26 to 2.18:
  • “Thank you to the panel for presenting an interesting approach, but I don’t think this can be applied with a cookie cutter in every [country | province | district | commune | village],” said one participant from ‘the field’. To which a first panelist replied:
  • “Obviously, we are not proposing this as a blueprint. There is no blueprint to this complex issue.” A second panelist interjected:
  • “I totally agree, there is no blueprint. Our approach needs to be adapted to the context.”


My satellite monitoring is equipped with internal logic contradiction sensors, and these sensors were systematically triggered by this last statement.

Funny story actually about these sensors--when I had first installed the logic analysis program, I struggled a bit. I was getting error messages like “logic routines not applicable to development work,” and “analytics must be supported by either evidence or shallow catchphrases supporting comfortable intellectual habits.” I had to upgrade the software to accept logic again. I’ll spare you the details of programming, but it involved encoding into bits Aristotle, Descartes, and Einstein’s thought experiments. Hard work, but I’ve digressed.

The root of the internal logic contradiction is the simple fact that it’s only blueprints that you need to adapt to context. So, if you’re going to adapt to context, don’t tell me that there’s no blueprint. Say: “we intend to adapt the blueprint to the context.” And that leaves unsolved the question of what to do when there’s actually no blueprint. But let’s take it one step at a time.

The beauty of blueprints
‘Blueprint’ is actually a metaphor in development—not a real thing. Martin Reynolds of the Open University in the UK regularly points out that we should not “mistake the map for the territory.” So, let’s start with what a real blueprint is actually for.

A blueprint is a document which details the way to build something, and shows how to arrange the different sub-systems (drywall, electrical, water pipes, ventilation, etc.) of a structure. It’s great to have a blueprint, because someone has thought through and tested configurations of these sub-systems and made sure that they all work together to provide integrity and functionality to the structure. It spares you from [new metaphor coming] reinventing the wheel each time, and lets you take advantage of evidence-based best practices. Consider my neighborhood: a lot of houses were built on the same pattern in 1940. There were only small variations due to topography when the houses were built, but people have been building additions, knocking down walls, and modifying them ever since, so every house is now a little bit different from the next one.

I want to finish my basement and need to figure out how to do it. Lucky for me, my neighbor did her basement and let me look at how she did it. I’m happy with what she did and I’m going to use it as a blueprint for my own basement. Since our two houses are not exactly identical (our basement stairs were put in different places for one), I will have to adapt her blueprint to the specific context of my house.

So, blueprint: great. Adapting to context: of course. It’s not either/or. It’s the latter because of the former. If there were no blueprint, I would not be adapting my neighbor’s approach; I would have to imagine something different for my basement.

And the same applies to global health. Consider just a couple of examples:

IMCI (the integrated management of childhood illness) was a blueprint. One could argue that IMCI came to be seen as a failure because people consistently ignored its health systems strengthening element -- basically, because the blueprint was not respected. I know this is a long debate. More examples? iCCM (integrated community case management), infection prevention and control, prevention of post-partum hemorrhage in health facilities and with misoprostol in communities, the childhood immunization schedule -- all countries adapt these strategies or intervention packages, but there is an unmistakable blueprint. Even some more complex, non-clinical interventions have blueprints, sometimes tacit or enshrined in legal documents and policies. You want to run an NGO (non-governmental organization) to deliver a public good? Well, there’s a dominant blueprint: you need executive leadership, held accountable to some sort of a board, plus a financial accountability and oversight structure. After that it gets messier, but those parts seem to be based on a blueprint that is accepted for the robustness and risk mitigation it provides to organizations. And -- again -- it always has to be adapted to context.

So, before I tear some of this down, let’s recap the major points so far:
  • Blueprints are useful and they can help us be efficient and avoid re-inventing things that have been tested and validated through empiricism and accumulated human experience and wisdom. 
  • All blueprints need to be adapted to context. Not to adapt to context would be utterly ignorant, the worst kind of hubris. Even the science of management has long accepted contextual management as a requirement. I don’t think I need to get into an inventory of the ‘white elephants’ of international development at this point. (Do we love our metaphors or what?) 
  • So, please don’t ever brag again about adapting to context. And do me a favor; next workshop you attend, when the panelist says, “there is no blueprint. Our approach will be adapted to the context”, please rough him up a bit and just make him stop. It really messes with my satellite monitoring analytics, and I can’t have any of that.

When there’s actually no blueprint
Let’s go back to our panelist and participant from ‘the field.’ [Spoiler alert: I may caricature the differences in perspective to stress my point.] The panelist actually has a blueprint, a plan, an idea, an intervention, which he believes is now tested and proven to deliver a public good. Variation in contexts is a challenge--an adaptation and implementation challenge--to delivering what he knows can work and taking it to scale. The statement “there is no blueprint”, we now know, only serves to control the unpleasant complexity and skepticism of the participant; what the panelist really wants to apply is definitely a blueprint. Adapting to context is a bone he throws to these pesky field people who don’t know any better and would have us do boutique projects all the time.

The field participant, on the other hand, is immersed in a context. She is richly informed about the geography, history, politics, and micro-social and societal reality of that context. The level of complexity increases with the level of attention paid to details, and our participant has seen over and over again how approaches cooked up outside her context have foundered on the cliffs of that complexity. (This metaphor at no extra charge.) What she is really hinting at is not that a solution needs to be adapted to context, but that a solution needs to be developed, created, imagined, and invented in the specific context where the problem is identified.

Those two views are not reconciled by the “adapting to context” platitude; they represent very different approaches to problem-solving. So, who’s right?

Well, no surprise here; the answer is… it depends.

As we have just seen, there’s beauty and value in blueprints. But there’s also a world where contextual design and innovation dominate, and it is underappreciated in the central/global spheres of decision-making in global health. As with many things, there is a continuum to navigate, but the dominant model of our work is blueprint thinking. The necessary and productive intellectual discussion about where blueprints fail us and when we need a different type of thinking is too systematically squelched. This could be due to power differentials between the center and the periphery of all our systems, and to blind spots emerging from our different points of observation.[1]

There are a couple of models out there describing where the complex takes over from the complicated in the problem-definition and solution space. One of the most famous is David Snowden’s Cynefin framework, which sorts problems from simple, to complicated, to complex, on to chaotic. When problems are complicated, best practices can be identified and promoted through protocols and, yes, adaptable blueprints. But when we enter the space of complexity, emergence takes precedence over best practices. I once tried to map out how complexity increases in the definition of global health problems, based on work by Geyer and Rihani. It turned into the table below, which might provide a concrete illustration.

The more your problem sits on the right side of the table, the more useful a blueprint will be, if used with smart adaptation to context. But as you move to the left, the value of the blueprint decreases. At some point, adaptation is no longer the solution; creation, invention, and context-based design become the requirements. This means that you start from the perspective of the context actors, as opposed to that of the global experts.

Figure: increasing level of complexity in problem definition, from right to left.

A clarification: I am not claiming that this is the way to determine whether a blueprint is appropriate or not. But I suggest that more often than not, problems on the left side of the table will not be amenable to blueprints, even if they may incorporate sub-issues where a valid best practice or blueprint is available.

In conclusion, let’s acknowledge that “there’s no blueprint; we need to adapt to context” is an illogical statement, used sometimes with the best intentions, but also too often as the expression of a central-planner bias preventing an intellectual debate that we badly need. In the absence of a blueprint for figuring out whether a blueprint can be used, maybe we can start by listening to the question of the field participant with a little less condescension and a little more intellectual curiosity.

So, make sure to bring that up at the next workshop. And remember: my satellites are watching!

[1] I’ve probably been led down this trail of thinking by a presentation I made in June 2016, on blind spots in global health, specifically blind spots to self-organization. Definitely some overlaps. The summary of the presentation is available here.

Wednesday, January 25, 2017

Designing programs that work better in complex adaptive systems

Ann Larson
Social Dimensions

I became interested in complex adaptive systems (CAS) in 2013, when I led a team identifying lessons from 18 programs scaling up women’s and children’s health innovations. It became clear that a critical success factor was how effective the implementation team was at recognising and responding to challenges. With colleagues, I learned about the properties of CAS, examined whether they were present in several case studies of national scale-ups, and uncovered the effective and ineffective responses to unexpected turns of events (Larson, McPherson, Posner, LaFond, & Ricca, 2015). As a result of this work, I see properties of CAS operating everywhere.

A consensus on how to create change within a CAS is emerging, based on experience and backed by a growing body of research. This presentation briefly describes some of the most commonly stated principles, and then asks why these practices are not informing the design of programs, especially in international development. It appears that this is not due to a lack of knowledge or interest. Instead, it arises from the nature of donor organizations and the power relations between those who commission, conduct, and assess the designs, on the one hand, and the government officials, local NGO staff, front-line staff, and community members on the other.

Next, the presentation gives an overview of complexity-sensitive design approaches that could improve projects’ implementation and impact. There are many promising methods being trialled. However, they should be accompanied by a large yellow sticker: USE WITH CAUTION. Recent reviews suggest that they are difficult for stakeholders to understand and conduct (Carey et al., 2015) and are not congruent with donor requirements (Ramalingam, Laric, & Primrose, 2014). Importantly, pilots of the use of systems thinking to design programs have not been validated; we do not know whether they actually create accurate descriptions of how systems work or contribute to improved outcomes (Carey et al., 2015).

This presentation, originally given at an evaluation conference, argues that evaluators and the process of evaluation should be central to complexity-sensitive design. First, the information used to inform designs needs to unite rigorous, generalizable evidence and nuanced experience of working within the specific context. Evaluators regularly draw on both sources of knowledge. Second, these design approaches need evaluating. What value do they offer over conventional methods and are they really as appropriate, effective and efficient as their proponents promise?

Presentation is available here


Wednesday, April 27, 2016

A review of "Systems science and systems thinking for public health: a systematic review of the field"

By Eric Sarriot

A recent publication in BMJ Open, Systems science and systems thinking for public health: a systematic review of the field by Gemma Carey et al., describes findings from a systematic review of the current literature on systems science research in public health, with a focus on specific “hard” and “soft” systems thinking tools currently in use. A review of the literature sub-selected for analysis in this paper revealed the absence of some pertinent articles that might have enriched the discussion, but as the authors acknowledge, quoting Williams and Hummelbrunner, “holism is ‘somewhat of an ideal’. In reality all situations, all inquiries are bounded in some way.”

An interesting application of systems thinking can be found in David Peters and Ligia Paina’s paper on the Develop-Distort model. That paper does not reference the great thinkers of Soft Systems or System Dynamics, which could be why it did not qualify for this systematic review, yet it is also of great interest. With this model, and other emerging ones, the question becomes whether new tools and methods that abide by key principles should and could fit into the constantly evolving field of systems thinking. Of course, this question, in and of itself, poses some bias.

The review by Carey et al. continues by sorting the sub-selected literature into four categories of systems thinking:
  • Position pieces: the literature in this category mostly advocates for greater uses for systems thinking in public health;
  • Papers with an analytic lens: most articles here maintain the caveat that once analysis using a systems thinking approach is complete, many researchers revert back to previously used analytic tools, likely due to a lack of practice and training in systems methodologies;
  • Benchmarking of best practices: where systems thinking is used to evaluate public health practice – with some articles judging a best practice by whether it abides by systems thinking principles, rather than by whether the application of systems thinking advanced thinking and performance; and
  • Systems modelling: modelling of real-life or dynamic processes using systems thinking.

While the discussion is fairly long, it makes several good points: that systems thinking is not a panacea and should not be approached as such, that there is a need for greater verifiability of models, and, last but not least, that public health researchers need stronger skills in systems methods and thinking. The authors then move to a discussion of the value of soft systems methodologies, emphasizing how metaphors can serve as a useful heuristic. The authors describe this evolution in thinking as a challenge to how health policy makers define “evidence,” and conclude with a note that systems thinking in health will improve if and as we learn to ask the right questions of systems science, and play down some of the accompanying rhetoric.

Thursday, December 3, 2015

Scenario Planning for Development

CEDARS' Sustainability Framework has long stressed the importance of planning for various scenarios by taking the long view in project planning, management, and evaluation activities -- a point reiterated in a recent paper by Eric Sarriot et al., A causal loop analysis of the sustainability of integrated community case management in Rwanda, which also studied scenarios.

As the Wilson Center's New Security Beat writes in its article Scenario Planning for Development: It's About Time, "scenario planning systematically looks at existing and emerging trends and their plausible - though sometimes unlikely - combinations in order to reduce risk. It's an exercise that does not produce single point predictions, but examines a range of possible situations to help prepare for the unexpected."

It's interesting, and heartening, to see USAID change its approach to development over the last few years: incorporating longer-term goals, "adaptive programming" to better respond to external influences such as natural disasters, disease outbreaks, or shifts in governance structures, and an increasing focus on "exit pathways". New Security Beat writes that these adaptive approaches are increasingly embraced, in line with and in response to rising uncertainty, shortfalls in preparedness and response, and growing complexity. This seems a direction that will shape development work for years to come.

Read more about scenario planning and its increasing use by USAID here.

Tuesday, December 1, 2015

Social accountability - review of existing literature and learning

Social accountability is an essential element in improving health outcomes and facilitating health sector reform. The following links provide two important summaries of the literature on the topic.
CORE Group

This review discusses three social accountability models used in various sectors at community, district, and national levels to increase accountability and improve health outcomes. The approaches reviewed, analyzed, and described are: (1) Citizen Voice and Action, implemented by World Vision; (2) Partnership Defined Quality, implemented by Save the Children; and (3) the Community Score Card, implemented by CARE.

Voice and Accountability in the Health Sector
Health & Education Advice & Resource Team (HEART)

This resource by HEART is a nice, concise review of key peer-reviewed publications on voice and accountability in the health sector. It assesses specific initiatives, uses Bangladesh as a country example, and provides available models for increasing social accountability.

Wednesday, November 25, 2015

Training Opportunity

CEDARS tends to focus on health-in-development and related topics, and we have devoted a lot of attention to design, management, and evaluation processes that enhance sustainability, making use of quantitative data as much as possible while remaining firmly anchored in implementation.

Here, however, is a major training opportunity for people who are interested in the hard science -- the quantitative underpinnings -- of some of the major issues in sustainability. No better place to do it than the Santa Fe Institute. The focus is on urban sustainability, but it's probably a good opportunity to learn about methodologies that we need to pay more attention to.

Here's to the young innovators ready to learn new things. And if you're not that young, you just qualified by virtue of thirsting to learn.

Check out the course information here.