Impact of research on policy and practice

January 31, 2011
John Young, Overseas Development Institute (ODI)
The Tanzania Essential Health Interventions Project (TEHIP) uses surveys to help health services focus on the most common health problems affecting mothers and young children.


It is difficult to feed research-based evidence into policy and practice. This article discusses which capacities need to be strengthened to increase the impact of research on policy.

The world is changing rapidly in ways that often affect poor countries most. Economic, climate and population changes over the coming decades will have enormous implications for the challenge of reducing poverty by threatening access to food and water, worsening migration pressures and possibly increasing the chances of conflict. New research is essential for finding ways to prevent or mitigate the impact of these changes.

Donors are already spending over US $2 billion annually on development-related research. Yet there is widespread recognition that research alone is not enough. For research to have any impact, the results must inform and shape policies and programmes, and be adopted into practice.

AIDS prevention billboard in Abidjan, Ivory Coast. UNAIDS researchers are taking a new approach to data, revealing dramatic evidence that the AIDS epidemic in Africa has started to reverse three decades of hard-won development gains.

Research donors increasingly acknowledge this. The UK Department for International Development (DFID), for example, will double spending on development research from US $200 million to $400 million per year over the next five years, and will invest equally in generating new knowledge and in ensuring that it is used in policy and practice.

The challenge of maximising the impact of research on policy and practice is not unique to multilateral and bilateral donors. Civil society organisations in developed and developing countries are not only engaged in practical programmes delivering services and strengthening systems to combat poverty directly, but are increasingly engaged in work to foster better development policies and programmes. Effective use of research-based knowledge is vital for both tasks.

This article outlines why it is so difficult to get research-based evidence into policy and practice. It provides some examples of what seems to work, describes a practical approach to developing effective strategies and identifies some of the capacity issues that need to be addressed.

Why is it so difficult?

Research results often need to be contested, debated and tested again before a consensus can be reached on recommendations for policy and practice. Even then many obstacles remain. Policy processes are very rarely linear and logical. Simply presenting research results to policymakers and expecting them to put the evidence into practice is very unlikely to work. Although most policy processes do involve a sequence of stages from agenda-setting through decision-making to implementation and evaluation, they rarely take place in an orderly fashion. Many agents are involved in affecting the process directly, and in trying to influence each other. While the whole process of policy has been described as ‘a chaos of purposes and accidents’, I prefer to use the terms complex, multifactorial and non-linear.

Research-based evidence often plays a very minor role in policy processes. A recent ODI study of factors influencing chronic poverty in Uganda found that only 2 of the 25 factors identified were researchable issues. In a talk on evidence-based policymaking at ODI in 2003, Vincent Cable, a senior member of the UK parliament, said that politicians are practically incapable of using research-based evidence because, among other things, few are scientists and they do not understand the concept of testing a hypothesis. In another ODI meeting, Phil Davies, then deputy director of the government social research unit in the UK Cabinet Office, described how policymakers tend to be influenced less by research-based evidence than by their own values, experience, expertise and judgement, by lobbyists and pressure groups, and by pragmatism about the resources available to them. In developing country contexts, national policy processes are often distorted by international factors. Donor policies, for example, can be hugely influential in highly indebted countries.

Researchers wishing to maximise the impact of their work must first attract the interest of policymakers and practitioners, then convince them that a new policy or approach is valuable, and finally foster the behavioural changes needed to put it into practice.

Research-based evidence can contribute to policies and practices that have a dramatic impact on peoples’ lives. One example is the Tanzania Essential Health Interventions Project (TEHIP), in which the results of household disease surveys were used to inform the development of health services focusing on the most common conditions, especially those affecting mothers and young children. This contributed to 43% and 46% reductions in infant mortality in two districts of rural Tanzania between 2000 and 2003. Another example is the Decentralised Livestock Services in the Eastern Regions of Indonesia Project, in which a careful combination of pilot field-level projects, institutional research and proactive communication contributed to a 250% increase in farmer satisfaction with livestock services. Success stories quoted in DFID’s new research strategy include a 22% reduction in neonatal mortality in Ghana by having women begin breastfeeding within the first hour after birth, and a 43% reduction in deaths among HIV-positive children using a commonly available antibiotic.

These and other case studies from around the world illustrate the complexity of engaging with policy processes. There is no simple blueprint for what will work. What works in one context may not work in another. But it does appear that research projects and programmes are more likely to be successful when they:

  • focus on current policy problems and have clear objectives;
  • engage closely with policymakers throughout the process, from identifying the problem, through undertaking the research itself, to drawing out recommendations for policy and practice from the results;
  • understand the political factors which may enhance or impede uptake and develop appropriate strategies to address them;
  • invest heavily in communication and engagement activities as well as the research itself, and build strong relationships with key stakeholders.

Individual champions and opponents frequently play a major role, as does serendipity – or chance.

The implications of this are that engaging with policy requires more than just research skills. According to Simon Maxwell, director of ODI, if researchers want to be good policy entrepreneurs, they also need to synthesise simple, compelling stories from the results of the research. They need to be good networkers to work effectively with all the other stakeholders involved in the process, good engineers to build programmes that can generate convincing evidence at the right time and political ‘fixers’ who know who is making the decision and how to get to them. Or they need to work in multidisciplinary teams with others who have these skills.

A practical approach

Based on more than five years’ experience providing advice to researchers, bilateral and multilateral development organisations and NGOs, ODI’s Research and Policy in Development (RAPID) has come up with an iterative approach to developing a strategy to maximise the influence of research-based evidence on policy and practice (see Figure 1). It draws on concepts of complexity, on outcome-mapping tools developed by the International Development Research Centre (IDRC) and Tools for Policy Engagement assembled and developed by the RAPID programme itself. It has been field tested through more than 30 workshops and training courses worldwide.

To use research-based evidence for promoting a specific policy or practice, the first step is to map the policy context around that issue and identify the key factors that may influence the policy process. RAPID has developed a simple checklist of questions to accomplish this, including questions about the key external agents, the political context itself, the research-based evidence and the other stakeholders who can help.

The second step is to identify the key influential stakeholders. RAPID’s Alignment, Interest and Influence Matrix (AIIM) can be used to map agents along three dimensions (see Figure 2): the degree of alignment with the proposed policy (on the y axis), their level of interest in the issue (on the x axis) and their ability to exert influence on the policy process (on the z axis – or by otherwise indicating their degree of influence on the 2-dimensional matrix). Agents that are highly interested and aligned should be natural allies and collaborators. Those who are highly interested but not aligned need to be brought into alignment or somehow prevented from creating obstacles. Prompting enthusiasm among powerful agents that are highly aligned but not interested can increase the chance of success. Prompting enthusiasm among agents that are not highly aligned risks creating more tensions.
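By way of illustration only, the quadrant logic of the AIIM can be sketched in a few lines of code. The stakeholder names, the 0–1 scales and the 0.5 thresholds below are invented for the example; they are not part of RAPID's own tooling.

```python
# Illustrative sketch of sorting stakeholders into the four quadrants of an
# Alignment, Interest and Influence Matrix. Names, scales and thresholds
# are hypothetical examples, not RAPID's actual method.

def aiim_quadrant(alignment: float, interest: float) -> str:
    """Classify a stakeholder scored 0-1 on alignment and interest."""
    if alignment >= 0.5 and interest >= 0.5:
        return "ally: collaborate"
    if alignment < 0.5 and interest >= 0.5:
        return "opponent: bring into alignment or contain"
    if alignment >= 0.5 and interest < 0.5:
        return "latent supporter: prompt enthusiasm"
    return "bystander: monitor"

# Hypothetical stakeholders scored as (alignment, interest).
stakeholders = {
    "health ministry": (0.8, 0.9),
    "donor agency": (0.9, 0.2),
    "industry lobby": (0.1, 0.8),
}
for name, (alignment, interest) in stakeholders.items():
    print(f"{name}: {aiim_quadrant(alignment, interest)}")
```

In practice each stakeholder would also carry an influence score (the z axis), which determines how much effort each quadrant strategy deserves.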

The third step is to identify the changes needed among the key stakeholders if they are to support the desired policy outcome. IDRC’s Outcome Mapping approach emphasises that long-term impact occurs only through behavioural change that outlasts the lifetime of the project. Focusing on those agencies that it is possible to influence, it is important to describe as precisely as possible their current behaviour. Equally important is to describe the behaviour necessary to contribute to the required policy process (the ‘outcome challenge’) and to monitor the short- and medium-term intermediate behaviours (or ‘progress markers’) to ensure that priority stakeholders are moving in the right direction and responding to the programme’s efforts.
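A simple record of progress markers per stakeholder is enough to make this monitoring concrete. The sketch below is purely illustrative: the stakeholder and the markers are invented, and real Outcome Mapping practice uses richer journals than a boolean checklist.

```python
# Hypothetical sketch of tracking Outcome Mapping 'progress markers':
# for each priority stakeholder, record the intermediate behaviours sought
# and tick them off as they are observed. All entries are invented examples.

progress_markers = {
    "district health team": [
        ("attends briefing on survey results", True),
        ("requests disaggregated mortality data", True),
        ("reallocates budget toward highest-burden diseases", False),
    ],
}

for stakeholder, markers in progress_markers.items():
    achieved = sum(1 for _, done in markers if done)
    print(f"{stakeholder}: {achieved}/{len(markers)} progress markers achieved")
```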

Having identified the necessary behavioural changes, the fourth step is to develop a strategy to achieve the milestone changes in the process. There are many strategic planning tools that can be used for this. Force Field Analysis is a flexible tool that helps identify the forces supporting and opposing the desired change and suggests concrete responses (see Figure 3). The forces can be ranked first according to their degree of influence over the change, and then according to the degree of control the project team can exert over them. Activities can then be identified to reduce the negative forces and strengthen the positive ones. Sometimes it is not possible to influence agents directly and it is necessary to target others who can do so. This might mean rethinking the priority stakeholders. More sophisticated tools also exist for visualising strategies and actions, for example strategy maps.
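The two-stage ranking of forces can be sketched as a simple prioritisation: score each force for its influence over the change and for the team's control over it, then attend first to forces that score highly on both. The forces, the 1–5 scales and the influence-times-control priority formula below are illustrative assumptions, not part of the formal method.

```python
# Minimal sketch of Force Field Analysis ranking. Forces, scores and the
# priority formula (influence * control) are invented for illustration.

forces = [
    # (description, direction, influence 1-5, team's control 1-5)
    ("minister champions reform", "supporting", 5, 2),
    ("budget constraints", "opposing", 4, 1),
    ("sceptical mid-level officials", "opposing", 3, 4),
    ("media interest", "supporting", 2, 3),
]

# Prioritise forces that are both influential and within the team's control.
ranked = sorted(forces, key=lambda f: f[2] * f[3], reverse=True)
for desc, direction, influence, control in ranked:
    print(f"{desc} ({direction}): priority {influence * control}")
```

On these example scores the sceptical officials come out top: they matter and the team can actually reach them, whereas the budget constraint, though influential, is largely outside the team's control.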

The fifth step is to consider whether the project or programme has the necessary capacity to implement the strategy. Strengths, Weaknesses, Opportunities and Threats (SWOT) analysis is a well-known tool that can be used to identify whether a project has the necessary resources to achieve its objectives, and that also recognises the potential impact of external influences. Complexity theory conceptualises competence as an evolving set of systems, processes and skills to enable agents to make the right decisions and act, rather than as a predetermined set of capabilities. Competency frameworks can be used to map the existing and the desired competencies needed to influence policy and to track progress toward achieving them.

The sixth and final step is to develop a monitoring and learning system, not only to track progress, make any necessary adjustments and assess the effectiveness of the approach, but also to learn lessons for the future. Recording the results of these planning steps, noting the attainment of progress markers and of improved competency levels, and keeping simple logs of unexpected events should allow the team to produce and use knowledge about the policy content and context, the strategy and activities, outcomes (behaviour changes), and the skills, competencies and systems required. Collecting knowledge is of little value unless it is shared and used. Intranet systems can be very useful, but sometimes the most basic face-to-face or telephone interactions produce the best results. Understanding how people learn is also important.

Capacity development

Most of RAPID’s work to date has been focused on building capacity at the individual level, partly through workshops and training courses, but also through longer-term partnerships and collaboration on national and global action research projects. RAPID has also been instrumental in creating two worldwide communities of practice of organisations and individuals keen to learn from each other about how to do it:

  • the Evidence-based Policy Development Network, which now has 20 core member organisations and over 400 people working to promote evidence-based policies across Asia, Africa and Latin America, and
  • the Outcome Mapping Learning Community, which provides an online platform for outcome-mapping practitioners to learn new skills, share ideas and showcase good practice.

Substantial improvement in the use of research-based evidence in development policy and practice also requires effort at the institutional level. The aim is to improve organisational structures, processes, resources, management and governance so that local institutions are able to attract, train and retain capable staff. At the system level, effort should be made to improve national and regional innovation environments. A recent review of research donor approaches to capacity development identified a wide range of approaches to achieve this improvement, including:

  • research partnerships between Northern and Southern research institutions/universities;
  • institutional support for universities in developing countries (particularly in sub-Saharan Africa);
  • support for national research councils;
  • funding for developing country institutions to access research and technical services of developing country partners;
  • supporting communities of practice among researchers and policymakers working on a specific development problem or sector;
  • supporting policymakers to become more aware of research-based evidence and more discerning consumers of it; and
  • collaborative regional Masters and PhD programmes.

But donors need to adopt a more joint approach, both with each other and with the different elements of the system. The informal International Forum of Research Donors provides an opportunity for research donors to start doing this, and many donors are developing more integrated approaches. DFID’s Research into Use Programme, for example, uses an innovation system approach that includes work to strengthen the capacity of the poor to articulate demand, work to develop the information markets that serve them and work to explore innovative ways of supplying information.

Conclusion

Improving the uptake of development research into policy and practice is not straightforward. Policy processes themselves are complex, multifactorial and non-linear. What works in one context may not work in another. A blueprint approach is unlikely to work. Successful examples tend to include common ingredients: a clear focus on current policy issues; political awareness and close engagement with policymakers; substantial investment in communication and engagement; cultivation of local champions; and readiness to seize unexpected opportunities. But the recipe – the relative amounts of each ingredient and the order in which to blend them – is often unique to each situation.

Like the most creative cooks, good policy entrepreneurs make it up as they go along through an iterative series of steps, paying great attention to the results of each. Or, as Albert Hofmann, the Swiss chemist who discovered the properties of LSD by unintentionally absorbing it through his skin, wrote, ‘It is true that it was a chance discovery, but it was the outcome of planned experiments, and these experiments took place in the framework of systematic pharmaceutical, chemical research. It could better be described as serendipity’.

Capacity development to promote greater use of research-based evidence in development policy and practice requires effort at the individual, organisational and institutional levels for all stakeholders – research providers, research users and intermediary groups.

This article appears with permission from Capacity.org, a web magazine-cum-portal intended for practitioners and policy makers who work in or on capacity development in international cooperation in the South.

For more information: j.young@odi.org.uk