Development programs are routinely evaluated based on their usefulness and effectiveness. But in a rapidly changing world, how should evaluation methods themselves be judged?
This was the topic of a recent panel discussion at Carleton University hosted by the International Program for Development Evaluation Training, the World Bank and IDRC.
According to evaluation expert Michael Quinn Patton, organizations need to profoundly rethink their self-assessment strategy.
“Much of what evaluation has been doing is simplifying complex reality, rather than engaging it,” he said. Patton cited a high-profile effort to eliminate polio as a textbook example: a laser-like focus on preventing polio diverted resources from other health programs, he said, actually making people more susceptible to infection. Now aid workers are implementing a hybrid approach: maintaining a focus on polio while working to strengthen the overall health system.
Since outcomes are often unpredictable, Patton argues that adhering to strict performance indicators cannot guarantee success. Engagement with, and understanding of, the local context are critical.
Patton also says that project recommendations should not be rooted solely in past experience, which is unlikely to repeat itself; development practitioners will often face unexpected results. “Yet,” he said, “you rarely see any evaluator who has done field work to uncover possible unintended consequences.”
A culture of inquiry
Mary Chinery-Hesse, deputy director of the International Labour Organization, noted how globalization is changing the field of evaluation, for better and for worse. Greater interconnectedness of the evaluation community is professionalizing the field, but it is also leading to a brain drain as the best evaluators in developing countries leave to join large development agencies.
Indran Naidoo, deputy director general of monitoring and evaluation in South Africa’s Public Service Commission, also reflected on the changes triggered by globalization.
“The divide between the North and South is blurring…You have pockets of excess in poor nations and vice versa,” he says. In this environment, broad generalizations about development problems often miss the mark. Like Patton, Naidoo argues that effective evaluation demands solid, nuanced research.
One way to avoid generalizations and flawed presuppositions, notes Patton, is to maintain a “culture of inquiry.” To avoid the trap of groupthink, he recommends that organizations surround themselves with independent advisors and ensure that evaluative thinking and practice permeate all levels of an organization.
“IDRC does this better than any other organization I’ve worked with,” he says. “It is the first organization that made evaluative thinking part of its mission.”