
Can’t see the forest for the trees? Handle alternative explanations systematically!

The topic of this blog post was inspired by my fantastic students, who were particularly interested in this issue during our last seminar. They raised many good questions and demanded clear guidelines, so I decided to write about the subject here and get into the details a little.

I am talking about the much-hated alternative explanations that can ruin our beautifully built models and call into question the cause-effect relationship we have already detected between our program and its intended outcomes. But what exactly are alternative explanations, and how can we handle them? Should we hate them at all?

When we perform program evaluation, we are basically interested in three major questions: 1) Are there any changes? 2) Are the changes due to the program/intervention? 3) Are there unexpected changes too? The problem of alternative explanations relates to the second point. Imagine that we have confirmed empirically that the expected changes have happened. How comfortable it would be to lean back and happily present our satisfying findings to the board of the funder/think tank/NGO that hired us to do the evaluation. But as researchers, we always have to aspire to the highest level of validity and credibility. Can we be sure that the changes happened as a consequence of the program? Unfortunately, this unpleasant dilemma cannot be neglected.

There is now a broad consensus among evaluators that capturing the complexity of factors is a must, and that we have to follow a rigorous assessment procedure and apply sharp methodological tools. In other words, we have to systematically unveil other causal mechanisms as well: ones that are unrelated to the intervention but could have produced the change we observed.

The easiest way to handle alternative explanations is to use a Randomized Controlled Trial, or True Experiment. Here we work with two groups, a treatment (experimental) group and a control group, and assign units to them randomly. This way a confounding variable causes not systematic but random differences between the two groups, so even unknown alternative explanations can be ruled out. An alternative is a Quasi-Experimental Design, used when randomization is not applicable (e.g. for ethical reasons).
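To make the logic of randomization concrete, here is a minimal sketch in Python (the data, the "motivation" confounder and the effect size are all invented for illustration). Because assignment is random, the confounder is spread evenly across the groups, and a simple difference in group means recovers the program effect:

```python
import random
import statistics

random.seed(42)

# Hypothetical participants; "motivation" is an unobserved confounder
# that influences the outcome independently of the program.
participants = [{"motivation": random.gauss(0, 1)} for _ in range(200)]

# Random assignment: every unit has the same chance of landing in
# the treatment (program) group or the control group.
random.shuffle(participants)
treatment, control = participants[:100], participants[100:]

TRUE_EFFECT = 2.0  # assumed program effect, for this simulation only

def outcome(person, treated):
    """Outcome = confounder + program effect (if treated) + noise."""
    return person["motivation"] + (TRUE_EFFECT if treated else 0) + random.gauss(0, 1)

treat_outcomes = [outcome(p, True) for p in treatment]
control_outcomes = [outcome(p, False) for p in control]

# The confounder differs between the groups only by chance, so the
# difference in means is an unbiased estimate of the program effect.
estimate = statistics.mean(treat_outcomes) - statistics.mean(control_outcomes)
print(f"Estimated program effect: {estimate:.2f} (true effect: {TRUE_EFFECT})")
```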

But what if we cannot use an experimental design, or we simply want to know what the confounding factors are? How can we identify interfering variables?

One method for investigating possible alternative explanations is process tracing. It was originally used to provide theoretical explanations of historical events; nowadays, however, it is increasingly used within monitoring and evaluation, because it can be fruitful when a particular contribution to change is hard to assess. It is more a loose framework than a strict method and can be applied in different ways in different circumstances. The major steps include proving the change itself, documenting the processes leading to the change, and phrasing several different hypotheses, each with its own causal explanation that can be examined separately. As for the research design, the range of relevant methods is wide; qualitative and more complex methods can also be applied, like interviews, observation, case studies, surveys or desk research.
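Process tracing is qualitative at heart, but a little bookkeeping helps when several rival hypotheses are examined separately. The sketch below is only one possible way to organize this (the hypotheses and evidence items are hypothetical): each candidate explanation collects the documented evidence that supports or contradicts it:

```python
from dataclasses import dataclass, field

@dataclass
class Hypothesis:
    """One candidate causal explanation for the observed change."""
    name: str
    supporting: list = field(default_factory=list)
    contradicting: list = field(default_factory=list)

# Hypothetical rival explanations for, say, improved test scores.
hypotheses = [
    Hypothesis("Program caused the change"),
    Hypothesis("Concurrent policy reform caused the change"),
    Hypothesis("Change reflects normal maturation"),
]

# Evidence gathered through interviews, observation, desk research...
hypotheses[0].supporting.append("Scores rose only in participating schools")
hypotheses[1].contradicting.append("Reform took effect after scores had risen")
hypotheses[2].contradicting.append("Comparable cohorts showed no similar trend")

for h in hypotheses:
    print(f"{h.name}: +{len(h.supporting)} / -{len(h.contradicting)} pieces of evidence")
```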

When it comes to alternative explanations, participative methods can be especially appropriate, like force field analysis, where drivers and resistors are listed by a small group of key informants, or general elimination methodology, which uses key informant interviews or brainstorming to identify as many alternative explanations as possible. Participative methods promise the unique value of giving access to organizational knowledge; they can thus extend the range of hypotheses and also provide additional input for the decisions we make about the concrete testing procedure. Optimizing our work can be critical here, because investigating alternative hypotheses and collecting evidence from multiple sources is time-consuming, and there is a danger that the task becomes too great.
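As a toy illustration of force field analysis, suppose our key informants score each driver and resistor for strength (all factors and scores below are invented). A strong driver that is unrelated to the program is exactly the kind of alternative explanation we would want to test separately:

```python
# Hypothetical force field analysis: key informants score each factor
# for how strongly it pushes toward (driver) or against (resistor)
# the observed change, e.g. on a 1-5 scale.
drivers = {
    "New training curriculum (the program)": 4,
    "Motivated local staff": 3,
    "Supportive school leadership": 2,
}
resistors = {
    "Staff turnover": 3,
    "Budget cuts mid-program": 2,
}

driver_total = sum(drivers.values())
resistor_total = sum(resistors.values())

print(f"Drivers: {driver_total}, Resistors: {resistor_total}")
# Drivers other than the program itself are candidate alternative
# explanations and feed the list of hypotheses to examine.
```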

However, it is worth extending the analysis and applying a holistic approach, because ultimately it enables organizations to demonstrate accountability for their results, and alternative explanations, like chance, a correlated independent variable or selection on the dependent variable, can be ruled out.
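For the simplest of these rivals, chance, a permutation test is a standard check: if randomly reshuffled group labels rarely reproduce a difference as large as the one we observed, chance alone becomes an implausible explanation. A minimal sketch with made-up scores:

```python
import random
import statistics

random.seed(1)

# Hypothetical outcome scores for the program and comparison groups.
program = [14, 16, 15, 18, 17, 19, 16, 15]
comparison = [12, 13, 14, 12, 15, 13, 14, 12]

observed = statistics.mean(program) - statistics.mean(comparison)

pooled = program + comparison
n = len(program)
extreme = 0
TRIALS = 10_000
for _ in range(TRIALS):
    # Reshuffle the group labels and recompute the difference in means.
    random.shuffle(pooled)
    diff = statistics.mean(pooled[:n]) - statistics.mean(pooled[n:])
    if diff >= observed:
        extreme += 1

# The approximate p-value is the share of random relabelings that match
# or beat the observed difference; a small value makes "chance" implausible.
print(f"Observed difference: {observed:.2f}, p ~ {extreme / TRIALS:.4f}")
```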

One of my students also asked what happens if the client does not like the alternative explanation we found. With a lot of effort, we might prove that the program was not really effective and that other causal mechanisms prevailed. It is always the evaluator's responsibility to unveil the truth and deliver the findings. On the other hand, a good consultant also helps the organization overcome troubles and gives practical recommendations that contribute to development and organizational learning.

Recommended readings:
Befani, B. Clearing the fog: new tools for improving the credibility of impact claims. International Institute for Environment and Development.
Collier, D. (2011). Understanding process tracing. PS: Political Science & Politics, 44(4), 823-830.