The CEDARS Center / SHOUT Group Blog


Tuesday, November 4, 2014

Resources for adaptive management practices and cost-effectiveness in development

We recently added two new documents to our CEDARS Center resource repository to help development professionals think about adaptive management practices and program cost-effectiveness during implementation, planning, or evaluation.

Have a read through below and click through the links. As always, should you have comments or questions (or additional resources we can share with the sustainable health and human development community), do not hesitate to reach out to us.

Navigating Complexity: Adaptive Management at the Northern Karamoja Growth, Health, and Governance Program [document available here]

This paper, by Engineers Without Borders Canada under contract with Mercy Corps (MC), is a case study of adaptive management practices within Mercy Corps' USAID-funded Growth, Health & Governance Program (GHG). The paper covers building the culture necessary for learning and adaptation, discusses tools and processes that support adaptation, and notes implications for funders and practitioners. Throughout, the paper emphasizes culture as the most important factor in successful adaptive management and offers strategies and attitudes deemed necessary to achieve it. The tools and processes are presented as ways of reinforcing that culture. One of the tools, the Results Chain, is an interesting way of conceptualizing the path to the project's goals and is similar to a results framework.

A blog post with a summary is available here: http://usaidlearninglab.org/lab-notes/navigating-complexity-adaptive-management-northern-karamoja-growth-health-governance

Cost-Effectiveness Measurement in Development: Accounting for Local Costs and Noisy Impacts [document available here]

This policy research working paper from the World Bank Group, Africa Region, can help individuals think about cost-effectiveness within their programs from implementation, planning, or evaluation perspectives. As evidence from rigorous impact evaluations grows in development, there have been more calls to complement impact evaluation analysis with cost analysis, so that policy makers can make investment decisions based on costs as well as impacts. This paper discusses important considerations for implementing cost-effectiveness analysis in the policy making process. The analysis is applied in the context of education interventions, although the findings generalize to other areas. First, the paper demonstrates a systematic method for characterizing the sensitivity of impact estimates. Second, the concept of context-specificity is applied to cost measurement: program costs vary greatly across contexts -- both within and across countries -- and with program complexity. The paper shows how adapting a single cost ingredient across settings dramatically shifts cost-effectiveness measures. Third, the paper provides evidence that interventions with fewer beneficiaries tend to have higher per-beneficiary costs, resulting in potential cost overestimates when extrapolating to large-scale applications. At the same time, recall bias may result in cost underestimates. The paper also discusses other challenges in measuring and extrapolating cost-effectiveness measures. For cost-effectiveness analysis to be useful, policy makers will require detailed, comparable, and timely cost reporting, as well as significant effort to ensure costs are relevant to the local environment.
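As an illustration of the paper's third point -- not drawn from the paper itself -- a toy fixed-plus-variable cost model shows why a small pilot's per-beneficiary cost can overestimate costs at scale. All numbers here are hypothetical:

```python
# Illustrative sketch (hypothetical numbers): fixed costs (training, management)
# are spread over more beneficiaries at scale, so a pilot's per-beneficiary
# cost overstates the cost of a large-scale application.

def cost_per_beneficiary(fixed_cost, variable_cost_per_person, n_beneficiaries):
    """Total program cost divided by number of beneficiaries."""
    return (fixed_cost + variable_cost_per_person * n_beneficiaries) / n_beneficiaries

# Hypothetical pilot: $50,000 fixed + $10 per person, 1,000 beneficiaries.
pilot = cost_per_beneficiary(50_000, 10, 1_000)       # $60.00 per beneficiary
at_scale = cost_per_beneficiary(50_000, 10, 100_000)  # $10.50 per beneficiary

# Naively extrapolating the pilot's $60/beneficiary to 100,000 people
# projects $6,000,000, versus an actual $1,050,000.
naive_projection = pilot * 100_000
actual_total = 50_000 + 10 * 100_000
```

Recall bias works in the other direction: if respondents forget cost ingredients after the fact, the fixed or variable inputs to such a model are understated, and the same arithmetic underestimates costs.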

You can find additional information on this paper here: http://documents.worldbank.org/curated/en/2014/09/20196499/cost-effectiveness-measurement-development-accounting-local-costs-noisy-impacts-cost-effectiveness-measurement-development-accounting-local-costs-noisy-impacts

Tuesday, September 2, 2014

Learning from a post-project evaluation study, applying systems thinking and addressing complexity in community health

By Michelle Kouletio

It doesn't happen enough, but every once in a while the seed of a sustainable health intervention is planted in the ground. In this case, the seed was planted at the doorstep of a mayor's office in northwestern Bangladesh, amid the narrow, busy roads and open sewers common to bustling secondary cities. Among the middle-class families live the extreme poor. Along with several other challenges, the poor are failed by a government health system whose facilities are overwhelmed by patient volume and whose outreach workers do not reach them. While national policy assigned elected municipal leaders responsibility for ensuring coverage of equitable health services, these leaders were given little guidance and few resources.

With support from the USAID Child Survival and Health Grants Program, Concern Worldwide worked in this and other municipalities in Bangladesh to empower municipal leaders to develop a replicable model for social mobilization in complex urban environments. What set this project apart from so many well-intended and ambitious community health projects was the embedding of a systematic sustainability planning and monitoring system that established a shared vision and measurement framework across the mayor's office, elected representatives, service providers, social leaders, and health volunteers, along with the project team.

As the technical advisor for Concern Worldwide, I had the privilege of backstopping this initiative. It took quite a bit of extra effort, particularly in developing practical capacity-measurement tools from scratch and maintaining regular reviews at the neighborhood and municipality levels. Sustainability planning also required tackling structural barriers and inter-ministerial relations that could otherwise have been ignored in a conventional project. However, this deeper analysis and shared-accountability approach resulted in real improvements in equitable health outcomes and in an enduring political mobilization approach that allowed the population to continue reaping benefits years after the project closed.

Two recently published articles on this work further validate the importance of the hard work of the Concern Worldwide staff in Bangladesh and their contribution to the developing body of evidence on adaptive health systems and sustainability planning:


Post-script from Eric Sarriot:

These two papers coincide in their release to form a useful series on the Concern CSHGP Bangladesh experience. The first one is part of a larger and important Supplement of Health Research Policy and Systems on Systems Thinking in Health, coordinated by Taghreed Adam of WHO’s Alliance for Health Policy and Systems Research.



Thursday, August 21, 2014

Putting vulnerability first – the need to revise how aid is targeted and delivered to promote sustainability and resilience

By Debra Prosnitz, MPH


The Humanitarian Policy Group (HPG) recently released a policy brief on resilience: Political flag or conceptual umbrella? Why progress on resilience must be freed from the constraints of technical arguments (1). Reading this brief, I began to reflect on the relationship between resilience and sustainability. In the context of development, resilience should address the ability to cope with and recover from crisis, and sustainability should address the process of strengthening social capital to sustain progress in health, social development, etc. (2). If sustainability is a process, perhaps resilience is one of its outcomes. I decided that the importance of this topic deserved a CEDARS blog update.

In this short (4-page) brief, Simon Levine succinctly captures two approaches to addressing resilience, brings to light the stalemate and consequent inaction in which they are stuck, and suggests a new way forward in theory through action (3).

Levine discusses two "broad arguments" for addressing resilience, both of which, he argues, distract us from the underlying need to identify and understand the vulnerabilities of individuals and communities and to find ways to address them. The political argument convinces us that something needs to be done: "since the shocks and stresses that cause crises cannot be prevented, the task is to ensure that people are better able to cope when things do go wrong." The technical argument calls for a refined definition of resilience and new approaches for addressing it, because what has previously been implemented has not adequately addressed the complexity and challenges of resilience, such as climate change. While we may not yet know the best way to define resilience or the best strategies and approaches to address it, we must not let this stop or delay efforts to do so. We know that vulnerabilities exist and should be addressed, and we should move forward by identifying and understanding the vulnerabilities of individuals and communities and finding ways to address them.

Resilience, then -- even if imperfectly conceptualized -- can already be enhanced by ensuring that "vulnerabilities are the center of development policy and investment" and that marginalization is addressed and minimized through development; resilience should not become another sector of development. Thus, Levine suggests that the definition of and theoretical framework for resilience can be developed and refined through action, rather than delaying action pending conceptual clarity.

While Levine addresses the complexity of development and the “structural, institutional and bureaucratic obstacles” to making change to how development aid is targeted and delivered, I wish he had made a clearer link between resilience and sustainability. I’m envisioning resilience as a pillar of sustainability, because while we can’t predict which shock will occur, the occurrence of shocks and changes following even the best of our interventions is almost a certainty. Further, neither resilience nor sustainability can be achieved by external intervention and leadership alone. Community involvement should be a prerequisite for approaching both, with communities and individuals leading efforts to identify vulnerabilities and define resilience.

Levine's brief is a glimpse into larger bodies of work he has published on this subject (4), which can also be found on the HPG website. Whatever your stance on resilience as a concept, this paper is an important reminder of the old adage not to let the perfect become the enemy of the good: we should not be complacent in our thinking about models of aid delivery, and should be actively thinking about and advocating for changes in the way aid is targeted and delivered. Vulnerability should be the primary criterion, with aid targeted toward the most vulnerable first and delivered in a way that bolsters resilience.

----------
(1) S. Levine, Political flag or conceptual umbrella? Why progress on resilience must be freed from the constraints of technical arguments. Policy Brief 60. Humanitarian Policy Group (London: ODI, 2014).
(2) Sarriot et al., Taking the Long View: A Practical Guide to Sustainability Planning and Measurement in Community-Oriented Health Programming. Macro International, Inc. (Calverton, MD, 2008) defined sustainability as "a process that advances conditions that enable individuals, communities, and local organizations to improve their functionality, develop mutual relationships of support and accountability, and decrease dependence on insecure resources…(and) enables local stakeholders to play their respective roles effectively, thus maintaining gains in health and development…"
(3) As our fearless CEDARS leader Eric Sarriot summed it up “praxis can improve without a perfect epistemology.” 
(4) S. Levine, Assessing resilience: why quantification misses the point. Humanitarian Policy Group Working Paper. (London: ODI, July 2014); and 
S. Levine and I. Mosel, Supporting Resilience in Difficult Places: A Critical Look at Applying the ‘Resilience’ Concept in Countries Where Crises Are the Norm, HPG Commissioned Paper for BMZ (London: ODI, 2014).

Tuesday, July 15, 2014

Concise but Important Text on Development Evaluation by Fred Carden

I came across a very interesting and concise article by Fred Carden of RTI in the December 2013 American Journal of Evaluation, which makes some important points plainly and simply -- whether you're working from the perspective of sustainability, systems strengthening, country ownership, or learning and adaptation.

Carden’s introductory quote captures his key point about evaluation succinctly: “it’s not about your project, it’s about my country.”

While I would prefer to shift the emphasis to "international development aid evaluation" instead of Carden's "development evaluation" (which I think is always relevant -- aid or no aid*), Carden makes three simple but important points:
(1) Evaluation will endure, but "[international] development [aid] evaluation" is not a permanent field of practice;
(2) Evaluation needs to look at systems, not projects; and
(3) Evaluation requires local expertise.

I think our work on the Sustainability Framework has certainly taken this look at systems before projects to heart, even if it has been a challenge. We have used local expertise, but whether we have given sufficient leadership to local expertise is more questionable. We’ve recently had another experience with this type of local system perspective:

Ilona recently assisted a Gates Grantee in laying the foundation for an evaluation approach looking at the local and national systems rather than just the project, in this case by developing a theory of change. A theory of change (TOC) is a type of logic model that articulates an expected outcomes pathway, the causal relationships, and the underlying assumptions that relate to the broader social, cultural, political, economic, or institutional environment behind the process of change. TOCs are under-utilized tools that can be very helpful in mapping out, in Carden’s words, the “constellation of activities [and we would argue also, of actors] that create change and betterment” in a system. If conceived with that perspective in mind, a TOC encourages a broader view of change beyond the immediate project that encompasses (and is grounded in) the realities of the context and therefore allows key elements outside the project boundaries to be explored and included in the evaluation of a change process. By clarifying how particular activities are expected to produce particular outputs and outcomes (and the relevant assumptions), the TOC helps in framing the evaluation questions to provide meaningful insights or evidence of program success and identify possible counterfactual explanations, as relevant.
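As a sketch of the idea, a TOC pathway can be represented as a small graph of outcomes linked by causal relationships, with assumptions made explicit at each step. The node names below are hypothetical, not drawn from the Gates-funded work:

```python
# Minimal sketch of a theory-of-change pathway: outcomes, causal links, and
# the assumptions under which each link is expected to hold. Names are
# invented for illustration.

from dataclasses import dataclass, field

@dataclass
class Outcome:
    name: str
    assumptions: list = field(default_factory=list)  # context conditions that must hold
    leads_to: list = field(default_factory=list)     # downstream Outcome objects

training = Outcome("health workers trained",
                   assumptions=["trained workers remain posted locally"])
quality = Outcome("quality of care improves",
                  assumptions=["supplies available at facilities"])
use = Outcome("families use services more")

training.leads_to.append(quality)
quality.leads_to.append(use)

def pathway(outcome, depth=0):
    """Print the expected outcomes pathway with the assumptions at each step."""
    print("  " * depth + outcome.name,
          "| assumes:", "; ".join(outcome.assumptions) or "-")
    for nxt in outcome.leads_to:
        pathway(nxt, depth + 1)

pathway(training)
```

Writing the assumptions down next to each causal link is what lets an evaluation probe elements outside the project boundary (are workers actually retained? are supplies flowing?) and surface counterfactual explanations.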

Such approaches and the logic behind them are going to continue to grow in relevance. Carden articulates this logic very concisely and clearly—we recommend you take a look at his short (less than 3 pages!) article.   http://intl-aje.sagepub.com/content/early/2013/07/12/1098214013495706.full

Cheers,

Eric
[*] I would suggest just one language adjustment to the first point and state that “international development evaluation” is not a permanent field of practice. Development—as in social and human development-- needs to continue here, there, everywhere and at all times; and all stakeholders from government to civil society need to rely on good evaluation to learn and adapt.


Saturday, June 21, 2014

And finally.....Country Ownership and its Measurement - Part 4 of 4

Preamble: this is the last of a 4 piece series. 
Part 1 dealt with why country ownership matters to us now (the last footnote of this entry notwithstanding).
Part 2 dealt with how large development aid actors understand ownership, and with our role in advancing and measuring it.
Part 3 discussed dimensions and metrics of ownership, then framed the methodological choices we must make as happening on a rugged, even dancing, landscape.
This is the 4th and final entry, where I discuss a fundamental choice we must make in our measurement approach. You should probably read at least Part 3 before proceeding -- the graph below is explained there.

Practically, what does all this mean?

This discussion may have gotten a little ethereal for some, so let me try to bring it back to the practical and for this, we need to get back to the “why” of ownership assessment and consider our options. 

Imagine, then, that your options for measurement are between B5 (limited subject engagement) and B7 (limited objectivity). Which option should you take?




Let’s consider the options, starting as far to the left of the spectrum as we can:

We have far more experience measuring capacity than ownership. Given that capacity is conceptually part of ownership, and that ownership is even more of an abstract concept, I’ll stick to a capacity measurement example.

The more informative measures require a high level of locally relevant detail, which is very hard to obtain from the outside. Consider a simple capacity indicator, for example in human resources for health management: the percentage of a specific type of personnel actually available over a given year to perform a specific function. This seems like a very objective measure, somewhere toward B1. But experience shows that to make the indicator most informative (and useful for decision-making), you need to be guided through the complexities of public administration procedures and rules of the specific country. Coming to the proper definition of the type of personnel, what being “in the plan” means, and where the responsibility for ensuring that key positions are filled actually lies, is challenging from an outsider’s perspective. Without having a “guide” into the local health system under investigation, proper and meaningful measurement will be very difficult. Of course it gets more and more difficult once you start asking about things such as shared accountability, institutional ownership and political will.
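As an illustration of the indicator just described, here is a minimal sketch of computing the share of planned staff-months actually covered for a given cadre over a year. The record structure and numbers are hypothetical; in practice, as argued above, the hard part is knowing which register and which definitions to use:

```python
# Hypothetical sketch of a personnel-availability indicator: the percentage of
# planned staff-months for one cadre actually filled during the year.

def availability(planned_positions, records, months_in_year=12):
    """Percent of planned staff-months actually filled.

    records: list of (position_id, months_filled) tuples for the year.
    """
    planned_months = planned_positions * months_in_year
    filled_months = sum(min(m, months_in_year) for _, m in records)
    return 100 * filled_months / planned_months

# 10 planned nurse positions; some filled all year, some vacant part of it,
# positions with no record counted as unfilled.
records = [("N01", 12), ("N02", 12), ("N03", 6), ("N04", 0), ("N05", 9)]
print(round(availability(10, records), 1))  # 32.5 (% of planned staff-months)
```

The arithmetic is trivial; the validity of the result hinges entirely on the locally guided choices behind it -- who counts as this cadre, what "planned" means, and which register supplies the records.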

So, staying on the left side of the spectrum, you will first struggle tremendously to find valid measures -- it might be possible for a research exercise, but it will certainly be challenging for a time-bound monitoring activity. (I am speaking of very practical constraints, for example getting the right staff in the right office of the MOH administration to help you figure out why this register, rather than the one anticipated, is the right one to get your denominator from.) Even assuming that you do arrive at measures that can be considered valid, what happens if you were so far to the left end of the spectrum (B1 or B3) that local stakeholders are unsure of you, of what you have measured, and of what it even means? What value will be given to your measure? Even for capacity measures, B1 or B3 are going to be uncomfortable spots to be in, unless you have the power to impose an audit and make your own rules.

If we go back to ownership, and to why you wanted to measure it, you certainly won't be sending actionable signals to those constituencies, even if you feel good about your measure. Like the tree falling in the proverbial forest, you might provide a valid indicator, but if no one is there to believe its signal, was it worth it?

You are consequently forced to move toward more participatory engagement of stakeholders, first to determine which measures are meaningful and then to define them operationally. You have to push through and over B6, and this takes you to B7.

You now have measures with stakeholder "buy-in" and cultural translation. Presumably, to have gotten that buy-in, you have developed with stakeholders a purpose, an action-orientation for your measures. Nobody claims that getting there is an easy road, but you are essentially working and measuring from an inside-the-system perspective. Having built rapport, clarity of purpose, and some trust, your external "expert" voice will both carry more weight and be kept in check by actors in the process you are measuring.

Your data might come to inform the stakeholders, but of course now your biggest concern is the external validity of your measure. If you present your findings you will dread questions from researchers with those letters after their name. The question is, having moved from B5 to B7 and gained buy-in and internal validity of your capacity or ownership questions, was it worth it if the external validity of your measure is now challenged?

My best answer at this point is, yes. And here’s why:

(1) The sole reason you wanted to measure ownership in the first place was to engineer change with these same stakeholders. The first option (B5), leaving these stakeholders to wonder what it is that you measured, does not help your goal. If measurement is here to guide change management, what is the value of a better measurement that means little to the change managers?
(2) Your (B7) measurement may have -- certainly has -- flaws, but if it serves to inform and guide an authentic process of planned change, you have a foundation to build upon. Managers deal with uncertainty every day anyway. You will have lost some precision in the details, but probably gained validity on the big picture. In fact, actors of the local system will have an incentive to help you improve your measures over time -- measures inform change, but change also informs (better) measures. Your measurement expertise will now be able to support a management process, rather than chase "data use".

By starting from B7 you can influence a change in the landscape, and the possibility of moving toward better, more reliable measures as you promote more ownership – and wasn’t that the goal to start with?[*] I offer two equations as a summary:

I. {Ideal (SMART) Measure} minus {Internalized Meaning} =  {Sexy Research, but No Signal for Change Management}

II. {Imperfect Internalized Measure} = {Sub-Optimal Signal for Change Management} plus {Potential for Improvement over Time}

So, II might be more conducive to guiding change, even if--and that is a clear risk--"sub-optimal signals" carry the risk of being misleading. Hence the need for solid M&E professionals to help us manage this risk.


In conclusion, the reason you wanted to measure ownership in an imperfect, "rugged" world must lead you to lean toward the right-hand end of the spectrum, to respect the process, and through that process to improve the quality of your measures, rather than to aim for an illusory perfect measurement in search of meaning, later begging for "data use".

I do not dismiss the importance of finding good measures of institutionalization and other elements related to ownership, or the risks inherent in "imperfect measures". And my argument is not about being satisfied with qualitative stakeholder perceptions. It is about the process we need to use to produce metrics, and what must come first.

Let me summarize this complexity in one sentence: You do not measure ownership without the owners.

Guess it is simple enough after all.

We -- and this "we" must be a true "we" -- have our work cut out for us.



Eric

[*] For complexity geeks: the reason it is easier to go from B7 to B5 than from B5 to B7 is that the landscape we have drawn is not only rugged but also "dancing" and changing. Starting from B7 and the engagement of stakeholders, you may see the landscape change so that more reliable signals can be picked up (toward B5) without losing the sense-making of stakeholders' involvement. But if you start at B5, you may see the next peak get higher.

Last note on "dancing landscape", it seems that "country ownership" is falling off the PEPFAR lexicon... so, to be continued I guess.

Acknowledgement: I owe the concepts of rugged and dancing landscapes to Scott E Page's presentations and books.

Monday, June 2, 2014

Country Ownership and its Measurement - Part 3 of now 4

This post is long overdue. In Part 1, we considered a bit of the history behind the current emphasis on country ownership—at least it was current when I started; in Part 2, we spent a little time on the crucial question of why we would want to measure country ownership, and we discussed some implications:

When it comes to ownership, the old saying "if you can't measure it, you can't manage it" begs the question: "should you be managing someone else's ownership?" Since the obvious answer is "no," we need to realize that developing, cultivating, or allowing ownership happens within the tension of desired transitions in roles between different entities. If we drop the pretense that, as donors, technical advisors, or evaluators, we are somehow "objective" and outside the game of this transition, Part 2 concluded with the following question:

If ownership grows or withers from the net result of an interaction, a dialogue, a transfer of resources, an exchange in capabilities, the negotiation of roles in decision-making, then what is the point of ownership measurement if it exclusively focuses on the recipient? If I am the provider of assistance, or the donor, or the policy adviser, does it make sense for me to try and measure the recipient's ownership without questioning my role in this process?

This post is an attempt at the what and the how of measurement. Remember that we are dealing largely with monitoring and evaluation of country ownership, rather than research. In the former case, information (signals) has to be produced in a time frame and at a frequency aligned with management processes, so that it can be used to inform management decisions. This is very different from a research exercise.

I hope to convince you that how we measure country ownership cannot be dissociated from why we measure it. And this has profound consequences.

Let's start at the very beginning (Von Trapp, Maria. 1965). Measuring ownership is going to require solving a great many questions, but first and foremost it's going to require making two choices. These choices are always made, but they are not always explicitly made with respect to alternatives and opportunity costs: 
  1. The first thing we do with something complex is to break it down (reduce it) to elemental components. And we're going to have to decide which 'parts' of ownership we're going to focus on.
We saw in Part 1 how PEPFAR identifies four main components of ownership, namely political will and ownership, institutional ownership, capacity, and mutual accountability. I discussed briefly how there are circuitous relationships between these different things (ownership is part of stewardship, which is part of or the same as proper governance, which is part of capacity, which is part of ownership, for example). There's also a nested Russian-dolls element (political ownership needs to translate into institutionalization of key values, processes, etc., which needs to translate into operational capacity and capability, which can be broken down into a number of capacity areas).

This is not a critique of any single assessment tool or conceptual model, but simply a recognition that, for one thing, we are trying to assess something which cannot be fully assessed, given that it involves layers of both institutional and human intentions, actions and reactions. We are trying to capture observable signals about something, which is at the same time social, psychological, institutional, and political. When trying to assess a whole by breaking it down, we gain in precision (reliability) in measuring specific pieces, but we lose in validity because of the glue and connecting pieces we are forced to discard as we proceed. It may be acceptable, even necessary, to do this, but we need to face that this is what we are doing.

There are other ways to look at ownership which are as valid as PEPFAR's approach. And much like capacity, each component of analysis needs to be broken down into more pieces. Here are some categories and sub-categories of analysis that have been proposed and used:
  • The four components above (political will, institutional ownership, capacity, and mutual accountability) are examined in terms of strategy, resource allocation, operational planning, implementation, and M&E;
  • Other efforts have looked at 'readiness to implement country owned solutions' by considering country leadership, ownership and advocacy; conditions in the policy and planning environment; institutional dynamics: management, coordination and implementation; as well as the culture of learning and knowledge-based practices;
  • Mutual accountability relates to a host of relationships: citizen, donors, internal accountability, with a number of sub-domains, notably finances;
  • Ownership also translates at different system levels. In one of our efforts to assess ownership at district level, we had to consider operationalization of the national HIV plan at the district level (very close to the concept of institutionalization); institutional coordination of HIV care and treatment activities; and congruence of expectations between levels of the health system.
  • We have also tried looking at stages of transition in roles (shared or divided between "external" and "internal/country" planners and implementers). Elements under assessment included health planning; service delivery; specific management functions (finances, supply chain management, laboratory management, human resources); training; health information; and capacity building.
As you can tell, we were trying to save some of that glue and connecting pieces that fall off the side. The point of this list is to show the breadth of elements that can be considered, and the potential depth of each element, not to mention layers (geographic and institutional) of systems and sub-systems. We break down the concept to simplify our life, and then realize that it's "bigger on the inside" (Who, Dr., 2007).

Nonetheless, this is the first choice and the first step of awareness:
  • After we've mapped out the boundaries of our exercise / assessment / research, how are we going to break down the concept of ownership?
  • How much of the glue and connecting pieces are we ready to lose in the process?
  • How far down are we going to drill?
  • The level of effort required by our measurement exercise will depend on our answers to these questions.
  2. The second question is about our measurement process, and how 'emic' (from the inside) or 'etic' (from the outside) we will be in measuring ownership. OK, in simple words: whose perspective are we taking in assessing ownership: that of external agents, or that of the purported owners? Are we trying to be objective or participatory?
You may have noticed that I gleefully dodged the issue of what type of metric we construct--this would require a longer treatment, and for now many approaches are on the table and none have been validated better than any other. Among the options:
  • Scorings and ratings about perceptions of ownership in various components;
  • Multi-item response indices based on discrete numerical scales (1-5; 1-10 usually), or anchored scales (where each numerical response corresponds to a descriptive textual "anchor" or "word-picture");
  • Additive scales based on meeting a number of criteria (yes/no);
  • Full qualitative analyses;
  • Etc.
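To make a couple of these options concrete, here is a minimal scoring sketch. The criteria and item ratings are invented for illustration; they are not taken from PEPFAR or any validated instrument:

```python
# Hypothetical scoring sketch: an additive yes/no scale and a multi-item
# numerical index (1-5 scale) for one ownership component. Criteria and
# items are invented for illustration only.

def additive_scale(criteria_met):
    """Score = number of yes/no criteria met."""
    return sum(1 for met in criteria_met.values() if met)

def multi_item_index(ratings, scale_max=5):
    """Mean of 1..scale_max item ratings, normalized to a 0-100 scale."""
    if not all(1 <= r <= scale_max for r in ratings):
        raise ValueError("ratings must fall on the 1..%d scale" % scale_max)
    return 100 * (sum(ratings) / len(ratings) - 1) / (scale_max - 1)

criteria = {
    "annual plan includes HIV activities": True,
    "line item in district budget": False,
    "coordination meetings held quarterly": True,
}
print(additive_scale(criteria))        # 2 (of 3 criteria met)
print(multi_item_index([3, 4, 2, 5]))  # 62.5 (on the 0-100 scale)
```

With anchored scales, each numerical rating would additionally map to a descriptive "word-picture" agreed with stakeholders, which is precisely where the emic/etic choice discussed next comes into play.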
It is important to pick one approach, then design, validate, and use it appropriately, but the question I want to deal with here is more fundamental. Let's say that you've stuck with the four components of ownership assessment used by PEPFAR at some level of a national social service system (health district, school, social welfare department, etc.); the key question is which of two paths you take:

a- Attempt to measure ownership [of a program, policy, or project, in an institution or system] as objectively as possible, without bias and influence from the agents inhabiting the structures being assessed; or
b- Measure ownership with the actors and stakeholders who are the subjects of the process of taking ownership.

Now the first temptation (I use the word temptation because it usually leads to sins of program design) is to go for, "we'll blend the two approaches and get a middle-of-the-road tool."

I want to use a metaphor to illustrate how the temptation of a ‘middle of the road’ approach can be self-defeating.

Imagine that the methodological space we must travel from one measurement approach to the other (objective versus participatory) is a landscape that we are going to walk across. In Figure A below, this landscape is a flat plain, where we can easily move from one point to the next and stop wherever appropriate. There is an objective end of the spectrum (some sort of reliable external audit) and a subjective one (maybe asking five key informants, "how much ownership do you think there is in the province on X? Thank you."). The path between the two ends is flat, so there is little or no cost to moving and stopping along the way. If a compromise is needed, all we need to do is move the design cursor to the ideal fair-and-balanced position to enjoy the best of both worlds.



This would be very nice, indeed, and it’s a pleasant illusion when your livelihood depends on maintaining the illusion. But I suspect that most choices about measurement are made on a different type of landscape. Imagine that we are not faced with a nice clean plain, but instead have to walk through a mountainous landscape, one that looks more like Figure B. The path between the two ends is rugged, with peaks (B4, B6, B8) and valleys (B3, B5, B7, B9). It’s hard to push over a peak, and once you do, you roll down very fast into the next valley. Consequently, a small inflection in design one way or the other will lead to quickly losing a lot of the features of the previous design [side note for the geeks: the peaks are “tipping points”].

What is important--essential--to understand here is that if we live, as is most likely, in a B-type world rather than an A-type world, we don’t get to make small adjustments to our hearts’ content; rather, we observe (or ignore) that small adjustments to our methods have big consequences. As soon as we move across points B4, B6, or B8, what results is not a small change in the balance of our methods but a jump which can be far more consequential than we think (e.g. from B5 to B7).
(Note that our discussion has focused only on the ownership question, leaving aside that of the change process, called transition. We’ve had interesting efforts working with PEPFAR/CDC trying to move forward on this, but I don’t think it’s quite cooked and ready to serve yet. Only to say that providing information about the two sides of the equation makes a lot of sense and only strengthens the argument for a learning approach.)





Stay tuned for the next and final (!) entry in this blog series on country ownership, which will explore the practical implications of this discussion.

Friday, January 17, 2014

Country Ownership & Civil Society

As Eric continues to hold us in suspense with his 3rd installment on ownership, here’s an interesting blog post from Xavier Alterescu and our colleagues at MSH, for your reading pleasure: visit this link

Xavier shares some key highlights from a multi-disciplinary consultation held in September 2012 in Washington, D.C. on Advancing Country Ownership: Civil Society’s Role in Sustaining Global Health Investments.


The full report on the findings of this important meeting on the role of civil society in advancing country ownership can be accessed on The American Foundation for AIDS Research (amfAR) website here.