
INSTITUTIONAL ASSESSMENT

A Framework for Strengthening
Organizational Capacity for
IDRC’s Research Partners

Charles Lusthaus
Gary Anderson
Elaine Murphy


Institutional assessment : a framework for strengthening organizational capacity for IDRC’s research partners. Ottawa, ON, IDRC, 1995. xiii + 67 p.

/Research centres/, /institution building/, /institutional framework/, /capacity building/, /evaluation techniques/ — /IDRC/, /methodology/, /environment/, /motivation/, /organizational development/, /measurement/, /manuals/, references.

Contents

Foreword
Preface
Introduction: IDRC and Capacity Building
    IDRC’s Mission: “Empowerment through Knowledge”
    IDRC and Institutional Capacity Building
    Assessing Performance and Capacity
Developing an Institutional Profile
    A Learning Partnership
    No Blueprint for Evaluation
    Institutional Assessment Methodology
    A Framework for Profiling Organizations
    Constructing the Institutional Profiling Process
Key Forces in the External Environment
    Introduction
    Administrative/Legal Environment
    Technology Environment
    Political Environment
    Economic Environment
    Social and Cultural Environments
    Stakeholder Environment
    Linking Environmental Forces to Key Questions
    Data Gathering
    Administrative/Legal Environment
    Technology Environment
    Political/Economic Environment
    Social and Cultural Environment
    Stakeholder Environment
    Data Gathering Methods and Sources
    Relevance to Capacity and Performance
Organizational Motivation
    Introduction
    History
    Mission: Stated and Perceived
    Mission as Tool
    Culture
    Culture as Motivator
    Incentives
    Information to Gather to Assess Motivation
    Mission
    Culture/Organizational Incentives
    Linking the Mission and Culture to Performance and Capacity
Organizational Capacity
    Introduction
    Strategic Leadership
    Leadership
    Strategic Planning
    Governance
    Structure
    Niche Management
    Human Resources
    Other Core Resources
    Infrastructure
    Technology
    Finance
    Program Management
    Research Program Planning
    Research Program Implementation
    Research Program Monitoring and Evaluation
    Research-Supporting Services
    Process Management
    Planning
    Problem-Solving and Decision-Making
    Communications
    Monitoring and Evaluation
    Inter-Institutional Linkages
    Networks
    Partnerships
    External Communications
Organizational Performance
    Introduction
    Performance in Moving Towards Mission (Effectiveness)
    Performance in Relation to Efficiency
    Performance in Relation to Ongoing Relevance
    Measurement
    Conducting an Organizational Assessment
    Sources of Data
    Performance as It Relates to Capacity
Conclusion
Bibliography
Index

Acknowledgements

We would like to thank the IDRC Evaluation Unit, which initiated and helped bring this project to completion, with special thanks to Terry Smutylo, Tracey Goodman, Fred Carden, and Anne Bernard. They provided advice and thoughtful feedback and were generous with their time throughout the research and writing stages of the study.

We also offer our thanks to many of our colleagues at Universalia: to Marie-Hélène Adrien, Tom Blacklock, Geraldine Cooney, Steve Gruber, Rob Nixon, and Margot Rothman – who were involved in discussing ideas and testing models – and to Peter Bracegirdle and Carroll Salomon – who worked carefully through each draft and added valuable refinements all along the way.

Exhibits

Exhibit 2.1 Framework for assessing research institutions
Exhibit 3.1 Questions typically asked about the environment
Exhibit 3.2 Methods of gathering environmental data
Exhibit 4.1 Suggested data gathering methodologies
Exhibit 4.2 Questions typically asked when assessing organizational motivation
Exhibit 5.1 Components of capacity in research institutions
Exhibit 5.2 Components of strategic leadership
Exhibit 5.3 Questions typically asked in assessing strategy
Exhibit 5.4 Questions typically asked about governance
Exhibit 5.5 Questions typically asked in assessing organizational structure
Exhibit 5.6 Questions typically asked in assessing niche management
Exhibit 5.7 Questions typically asked in assessing human resources
Exhibit 5.8 Other core resources
Exhibit 5.9 Questions typically asked in assessing infrastructure
Exhibit 5.10 Questions typically asked in assessing technological resources
Exhibit 5.11 Questions typically asked in assessing financial resources
Exhibit 5.12 Components of program management
Exhibit 5.13 Organizational processes
Exhibit 5.14 Questions typically asked to assess planning resources
Exhibit 5.15 Questions typically asked to assess problem-solving and decision-making
Exhibit 5.16 Questions typically asked to assess communications
Exhibit 5.17 Questions typically asked to assess monitoring and evaluation capacities
Exhibit 5.18 Methods of linking institutions
Exhibit 5.19 Questions typically asked about inter-institutional linkages
Exhibit 6.1 Typical indicators of performance in research institutions - effectiveness
Exhibit 6.2 Typical indicators of performance in research institutions - efficiency
Exhibit 6.3 Typical indicators of performance in research institutions - ongoing relevance
Exhibit 6.4 Basic steps in organizational assessment

Foreword

Since its creation in 1970, IDRC has funded research aimed at improving the wellbeing of people in the poorest countries and has striven to build the indigenous capacity to do such research. Our dominant modes of funding have been individual research projects and the training related to those projects. During the 1980s, international development agencies began to acknowledge that while project support has many advantages, this mechanism of assistance can, under certain circumstances, have negative effects on the organizations receiving it. The lesson learned was that more attention needs to be paid to the institutional context of the project. Around this time, IDRC began to place more emphasis on broader institutional needs by funding programs and research-supporting services such as management, dissemination, and training. We also began experimenting with more integrated, institution-focused grants.

Now, in the mid-1990s, there are pressures and opportunities to push the evolution in donor thinking further. For IDRC, this impetus is coming both from the South and from Canada. After years of experience with difficult political and financial environments, and often with less than helpful donor policies, Southern organizations are now more directive in determining the kinds and conditions of funding they receive from internal and external donor agencies. In Canada, there are fiscal and accountability pressures on government and public agencies to demonstrate performance more thoroughly and for a wider, more critical audience than ever before. In response, IDRC is looking for ways to be demonstrably more effective in working with its Southern partners, so that maximum benefit is derived from each dollar spent. A serious problem in this regard, which this publication aims to address, has been the lack of tools for monitoring and assessing organizational capacity.

This book is intended to assist both external and internal efforts to strengthen organizations and to provide a framework for documenting the effects of such efforts. Still at the formative stage, it is a working document for assessing institutional capacity: ready to be tested in a variety of situations, and readily adaptable in light of the testing. This framework combines existing knowledge in a new way to yield a comprehensive approach for diagnosing and documenting the strengths and weaknesses of the kinds of institutions IDRC works with. This approach is descriptive rather than prescriptive, and the relative importance given to the various factors in the framework, and the way they are assessed, will depend on the particular contexts in which it is used. Possible applications range from internal self-assessments to external evaluations by a funding agency, and from comprehensive assessments of every aspect of institutional functioning to the assembling of a few key impressions during brief visits.

Possible users could include: a new institution or one at a turning point wanting to take stock and formulate a plan for addressing weak areas or gaps; a consortium of organizations wishing to either select or set up an institution to play a specific role; a donor looking to give support in the areas of greatest need or to assess the effects of ongoing support; and an institution preparing itself for funding requests or negotiations.

The framework presented in this book, once tempered through field testing, will move us towards three goals: helping IDRC be more effective in targeting its investments and in reporting on the results; helping our partners create and maintain institutions well adapted to serving the needs of the world’s poor; and, on a global scale, adding to the tool kit available for making international aid more responsive to its intended beneficiaries. To these ends, and on behalf of the authors, I invite your feedback on the approach presented in the pages that follow.

Terry Smutylo
Director, Evaluation Unit
Corporate Affairs and Initiatives Division
IDRC
P.O. Box 8500
Ottawa, Canada
K1G 3H9

Preface

This book is a joint undertaking of the Evaluation Unit of the International Development Research Centre (IDRC) and Universalia Management Group. Its original purpose was to help IDRC program officers and other personnel strengthen their understanding of the Centre’s partner institutions. Towards that end it provides a framework and a common language with which to approach institutional evaluations. The framework is now being applied with success in a variety of situations around the world, and IDRC hopes that it will be of interest and use to other donor agencies.

Development agencies like IDRC are beginning to think of the monies they disburse as investments, and to view the researchers, projects, and institutions they choose to support as an investment portfolio. These choices are truly investment decisions, and value for dollar is an important measure of both individual and institutional performance in fulfilling mission and objectives.

While strengthening the capacity of organizations has always been the desired end result of IDRC’s involvement, the prevailing investment mode has been, and continues to be, project support. Recently, as part of IDRC’s internal reorganization, a discussion has ensued on how to disburse IDRC funds most effectively. Questions have been raised about the effectiveness of short-term project support in isolation from the broader institutional context, while interest is growing in modes of integrated support that address larger organizational needs. Consensus is building that IDRC must clarify its concepts of institutional capacity and how best to strengthen it.

To redress any “capacity gaps” in funded institutions requires taking a close look at what conditions might be constricting performance or output. The framework set out over the following pages is meant to serve as a guide to profiling IDRC’s partner institutions so as to generate data that will permit research-based funding decisions.

The framework touches on four main dimensions:

• key forces in the external environment,

• organizational motivation,

• components of organizational capacity, and

• aspects of organizational performance.

Important considerations within each dimension are suggested as the focus of organizational assessments. Probing these should contribute to an in-depth understanding of the organization.

It is hoped that a systematic process of organizational analysis will help IDRC target resources to areas of greatest need in selected partner institutions and ultimately result in wiser investments. As time goes on, such analyses could serve to document progress resulting from IDRC’s and other donor institutions’ investments in capacity strengthening.

In the spirit of partnership, a driving force in IDRC’s mission and culture, it is recommended that key personnel in IDRC-funded institutions receive this guide, become familiar with the framework, and use it to inform their own self-studies or to help structure their own formal organizational assessments.

The strengthening of capacity is a complex, problem-solving process, and one for which there is no single formula for success. Many approaches can help, and have helped, research institutions in the developing world gain momentum. Just as there is no one formula for strengthening capacity, the assessment process itself must be robust enough to capture the emerging reality of capacity in development.

To develop this framework for IDRC, the authors surveyed recent literature on performance and capacity building and examined several models currently being used to evaluate research centres worldwide (see Bibliography). The social science literature dealing with the constructs of organizational capacity and performance is quite scanty as it pertains to research institutions. In the absence of definitive academic work, we relied more heavily on practical experience and observations to gain insight into the workings and outputs of research institutions. Moreover, Universalia Management Group has carried out organizational assessments worldwide for well over a decade, primarily for the Canadian International Development Agency. Our framework reflects what we consider the best ideas and techniques from all of these sources. (Note: While we understand the formal distinction between an “institution” and an “organization,” the former being an organization that has become an accepted part of the social fabric, we nonetheless use the two terms interchangeably, in more colloquial fashion, to represent any of the research partners receiving IDRC support.)

Some of the ideas in this framework (for example, “niche management”) are just now being talked about and implemented in North American institutions. They have emerged from the authors’ long experience in both the literature and practice of examining whole organizations, and they are presented here as part of a total package of considerations important to organizational relevance. Depending upon the specific research organization, it is our hope that IDRC and its partners will extract from this framework the concepts that are appropriate to the institution’s stage of development and context, and adapt and adjust these to fit each assessment process.

Key Concepts and Assumptions in this Guide

Performance

The performance of organizations can be conceived as falling within three broad areas: performance in activities that support the mission (effectiveness), performance in relation to the resources available (efficiency), and performance in relation to long-term viability or sustainability (adaptability).

Capacity Strengthening

Capacity strengthening is an ongoing process by which people and systems, operating within dynamic contexts, learn to develop and implement strategies in pursuit of their objectives for increased performance in a sustainable way.


Chapter 1
Introduction: IDRC and Capacity Building

IDRC’s Mission: “Empowerment through Knowledge”

For IDRC and its partners in the South, the goal of the development process is empowerment. IDRC assumes an explicit relationship between the generation of knowledge and development, and the Centre annually invests over $80 million in research institutions worldwide to help build and enhance indigenous research capacity.

Each researcher and/or research setting receiving IDRC support is unique and driven by its own specific set of circumstances. Some researchers are within universities, some are independent but have university links, and others are associated with community-based, non-academic centres. One of the Centre’s distinguishing policies is that, within Centre program priorities, research institutions and researchers in the South set the agendas and make key decisions regarding the areas of research and the specific research questions to be pursued.

IDRC and Institutional Capacity Building

Leading researchers and development theorists agree that creation of effectively performing institutions is central to a country’s development. The phrase “institutional capacity development” is used within the international donor community to capture the intent of a wide assortment of strategies used by donors to help strengthen Southern institutions. It is widely believed that through building institutional capacity, both the partner nations and the international donor community can obtain good value from investment dollars. Furthermore, focusing on institutional capacity permits investors to measure the cost-effectiveness of investment choices through examining a broad range of performance criteria.

In addition to its project support, IDRC has frequently supported the capacity development of its partner institutions by providing equipment, training, and improved management systems. Since the Centre’s 1987 review of institution-strengthening approaches, IDRC has increasingly moved beyond direct support of research to fund such research-complementing activities as:

• technical training programs

• small grants programs

• procurement of journals

• limited capital development

• administrative and management systems

• sabbatical study leaves

• regional networks and workshops

• consultancies

• information-handling systems

• libraries

• non-research staff development programs

• program/project evaluations

• core grants for operating expenses

Enhancing the capacity of institutions to carry out research-supporting functions provides an interesting and potentially important avenue for IDRC investment activities. It holds promise both as a way to fulfil IDRC’s mission and as a methodology that can enhance the efficiency and effectiveness of IDRC disbursements.

Assessing Performance and Capacity

While institutional capacity development is strongly assumed to be beneficial, there has been relatively little systematic analysis of institutional capacity and its growth subsequent to intervention. Organizational capacity is a complex phenomenon involving multiple variables; both the literature of institutional capacity development and the history of evaluation practice are replete with attempts to conceptualize and measure capacity. The methodology for assessing organizations in general and research organizations more specifically remains in early developmental stages, however. Governments of countries including Norway, the Netherlands, Great Britain, and Australia are presently experimenting with approaches to evaluating the research institutions they support.

IDRC and other donor institutions have for decades been conducting program and project evaluations. The fact is, our methodologies and approaches for conducting these evaluations are much further along than are those for conducting institutional assessments.

Clearly, some configuration of the key variables of organizational capacity does make a difference in institutional functioning and performance. Donors need a way of evaluating these to learn the circumstances of where and how to invest.

Institutional performance is of central importance to capacity. Generally, it is the need or desire to change performance that drives people to engage in institutional evaluations. Performance can be conceived as the tip of the iceberg, the fruits of organizational capacity made visible to the outside world. In the case of research organizations, these fruits are the research and training products and services as well as changes within the organization itself, such as its organizational learning and adaptiveness over time so as to maintain relevance. The organization’s underlying capacity either supports or impedes its performance; thus, an examination of the performance of funded institutions can be a tip-off to weaknesses (as well as strengths) in underlying capacity.

As IDRC develops a more strategic approach to institutional strengthening activities, proportionally greater resources may be directed to broader forms of institutional support. To direct these resources effectively, IDRC will need to approach the measurement of performance and the diagnosis of institutional need more systematically than it has in the past. In the next chapter we will propose a model for IDRC to use in assessing institutions in which it has invested. The model is presented as a framework for IDRC program officers and other personnel to use to assess client institutions’ performance and to develop a profile of institutional capacity.

Any diagnostic approach must be sensitive enough to identify areas that are progressing well, and to reveal capacity gaps – those institutional deficits that are restricting outputs or compromising the quality of research and training activities. (Such deficits might include, for example, the lack of ability of investigators to access needed information in relevant journals, inadequate means for researchers to attend international conferences in their fields, an inability to access training in needed research techniques, or inadequate operating funds with which to keep laboratories supplied.)

The aim of our model is to guide IDRC in identifying issues and collecting information that will be helpful in devising strategies to enhance institutional capacity and performance. It is hoped that the data emerging from this process will be used to enlighten funding decisions and to document any growth in institutional capacity that can be ascribed to IDRC’s investments.

Because of the uniqueness of each institution receiving IDRC support, the evaluation framework is not meant to be prescriptive. Using the recommended strategy as a guide, each institution must engage in its own analysis and formulate its own conclusions. The process of institutional evaluation advocated here should further empower those involved by helping them learn about their organizations and about strategies for supporting them.

We recognize that IDRC’s resources are finite and that the Centre is a relatively minor investor in global development and indeed, within some of its client institutions. However, by initiating a comprehensive assessment of partner organizations (which could be undertaken by multiple partners) and by directing support to areas that could dramatically improve institutional capacity, IDRC can continue to assume a leadership role in promoting sustainable development. Moreover, by encouraging the process of self-reflection which assessments inevitably entail, IDRC will help its partners develop into organizations with the capacity for ongoing learning.


Chapter 2
Developing an Institutional Profile

A Learning Partnership

Institutional evaluations have been described as “processes which use concepts and methods from the social and behavioral sciences to assess organizations’ current practices and find ways to increase their effectiveness and efficiency” (Universalia, 1993).

The social science constructs used by IDRC to conceptualize the complex processes of institutional growth and development are “institutional capacity development,” “institutional strengthening,” and “institutional performance.” As discussed in Chapter 1, it is essential for IDRC to learn what areas of an institution to invest in (institutional strengthening/capacity development) and what returns can be expected from these investments (institutional performance).

For IDRC’s purposes, institutional assessments should be conducted as learning exercises for both donor and recipient institutions. They should be designed to diagnose areas of need so as to guide capacity building efforts. In the best sense, an evaluation serves as a reforming process, seeking ways to make the institution stronger and better.

A learning model of evaluation goes beyond the summative approach which measures the total impact of an organization’s programs, products, and services. IDRC’s approach ideally integrates these results with the techniques of formative evaluation, in which evaluators become involved with helping the organization become more effective in meeting its goals. Beyond merely observing and collecting data, IDRC would like to work alongside people in Southern partner institutions, learning with them how best to influence the development and performance of the organization.

To have meaning and credibility for the Southern organization, the process of developing an organizational profile should be conducted in partnership with individuals having intimate, day-to-day knowledge of the institution, particularly those in a position to act on the evaluation results. By evaluating in partnership, the means to understand and strengthen the institution can spring from practical realities and experience. Moreover, those working inside the institution stand to benefit from self-examination. Undergoing assessment can serve as an organizational stimulant.

No Blueprint for Evaluation

Institutions are normative structures. They are grounded in societies and thus can hardly be understood outside of their contexts. For this reason there can be no specific blueprint for conducting institutional evaluations nor for knowing ahead of time all of the issues that bear on institutional functioning. And since institutions are socially constructed, complex systems, neither the means nor the ends of the evaluation process can be fully known prior to implementation.

An evaluation methodology that relies on pre-determined instrumentation assumes that the social reality of an institution functions independently of the various environments and stakeholder groups, and yet these forces undoubtedly have a formative influence on institutional performance.

Just as IDRC’s personnel must go through considerable learning to know how to work with and relate to certain institutions, so IDRC must be supportive of the knowledge development process inherent in conducting each institutional evaluation, for the process as well as the outcomes will likely be in flux. Institutional assessments require experimentation and the continuous correction and adaptation of plans to keep pace with institutional complexity. IDRC’s own organizational culture indeed supports such a learning-process approach.

Institutional Assessment Methodology

There are many good texts on project and program evaluation, not to mention research methodologies and ways to ensure reliability and validity of data. We do not attempt to duplicate that work here, where we lack the space to do it justice; instead, we have annexed a short bibliography of useful sources. These are important subjects, however, and they form the foundation of sound institutional evaluations. Thus, while we have incorporated fundamental concepts in this text, we suggest that you look more carefully at the background sources.

(1) Specificity vs. Generalization

There is a strong temptation, when engaging in institutional evaluations, to over-generalize the issues (“all organizations should...”) or to apply, blanket-style, the latest prescriptions of the day (don’t all institutions need programs in “Total Quality Management”?). But by nature, each institution is unique, grounded in a particular history and housing a distinctive culture. Each institution’s mission is unlike that of any other institution and is designed to serve complex and unique stakeholder needs. Circumstances and needs evolve continuously; thus, institutions are never static entities.

The uniqueness of an institution does not of itself defeat or invalidate generalization. It does, however, necessitate the carrying out of analytical groundwork so that a proper understanding of the mission, culture, and context will become a lens through which performance is viewed. The ideas and concepts dealt with in each institutional evaluation should flow from and reflect the institution’s own ideas and its approach to these ideas — indeed the institution’s own way of knowing about itself.

(2) Choosing Institutional Issues to Explore

The various conceptual frameworks in use for evaluating organizations suggest diverse issues to explore in the course of evaluations. While the names of categories or areas differ slightly, many models share similar content, with some more comprehensive than others. At the close of this section we will propose a framework developed specifically by IDRC’s Evaluation Unit for profiling organizations. The framework notwithstanding, it is important to reiterate that the issues inherent in each institutional profile must be institution-specific, and their examination must be negotiated with key insiders so as to meet the needs of end users. Also, choices of issues must be congruent with the limitations of the evaluators’ resources and interests; examining the whole institution, for example, may be unfeasible.

For example, measuring the performance of a research institution is a central issue, but little agreement exists as to the meaning of performance or its measurement. Thus we need to develop the precise meaning of good performance for each institution. Fortunately, there are generally accepted constructs (such as effectiveness and efficiency) that can be used as a basis for determining institutional performance. However, specific criteria cannot be determined a priori but must be negotiated – for example, the relative importance of papers published in peer journals, the number of research grants, per unit costs, client satisfaction, the amount of contractual research conducted for clients, the number of patents produced, the amount of external support garnered, the success of those trained at the institution, and so on. Beyond performance issues, organizational capacity issues are similarly diverse and complex.

Finally, institutional issues to be explored are subject to shaping by the data that are available. The lack of valid data can be a constraint to evaluation, and making up data deficits can be an expensive process.

(3) Creating a Credible Design

Because of the complexity of the concepts and issues being discussed and the inherent interest of researchers in questions related to research design, design is an important issue. Institutional evaluations lend themselves to many of the most recent advances in methodologies from the social sciences, management and economics. They are less well served by experimental or quasi-experimental designs.

The most useful designs are descriptive and analytic, incorporating elements of historical time-series analysis, case study methodology, and frequently comparative analysis. They attempt to foster in-depth understanding based on a solid foundation of descriptive data. The challenge is often in data interpretation, which can only be fruitful when people believe in the data themselves.

(4) Who Collects Data?

The agents of data collection in the evaluation process are generally (1) peer review, (2) self-study, and (3) external experts. For evaluating research quality, peer review is widely considered the best method. Self-study is a methodology growing in popularity, particularly in the non-governmental organization (NGO) community. Recent work in Canada using on-site analysis has provided both a method and a methodology to support institutional self-study. When both these approaches are augmented by the evaluative expertise of outside consultants, the combination can provide a rigour of design and methodology that strengthens and adds objectivity to the exercise.

Evaluation on the basis of experts’ assessments is currently the most common method used by higher education and research centres; however, it is often not the most effective method for assessing a whole institution in all its complexity. Experts are defined as independent and distinguished peers of the same profession, or administrators, who examine an institution or unit with the help of documents and possibly a prior internal report, and who undertake on-site visits. Faults of this approach are that it tends to be overly selective in the issues examined and often ignores what the science of institutional evaluation can contribute. In some fields, accreditation standards and procedures that rely on visiting panels of outside experts provide thorough and valid institutional analyses.

(5) Sources of Data and Types of Instrumentation

Both quantitative and qualitative data are normally utilized in institutional evaluations, depending on the issues being explored. Sources can be both internal and external to the institution. A combination of qualitative and quantitative data is important, for unless tempered by other measures, quantitative measures considered in isolation can erode confidence in the evaluation process. By weaving qualitative with quantitative information, a deeper understanding of the institution will be achieved.

Certain quantitative indicators currently in vogue are justifiably criticized because they merely skim the surface of performance and are subject to over-interpretation. One example is the practice of counting the number of research papers published as a means of judging output, without considering their influence (as revealed in citation indexes) or their timing or relevance (i.e. the point of career of the researcher or the developmental progress of a new research group).

Quantitative data are important, however. These take many forms, ranging from counts and other descriptive statistics to ratio variables such as measures of unit cost or productivity. All such data should conform to the best available standards of reliability and validity.
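
As a simple illustration of the kind of ratio variables mentioned above, the following sketch (in Python) computes two such indicators from hypothetical figures. The names and numbers are assumptions made for illustration only and are not drawn from the framework itself.

# Illustrative sketch: two ratio indicators computed from hypothetical figures.
annual_operating_cost = 450_000   # total operating cost for the year, in USD (hypothetical)
completed_studies = 9             # research studies completed in the year (hypothetical)
professional_staff = 15           # professional research staff (hypothetical)

unit_cost = annual_operating_cost / completed_studies   # cost per completed study
productivity = completed_studies / professional_staff   # studies per staff member

print(f"Unit cost per completed study: {unit_cost:,.0f} USD")
print(f"Studies per professional staff member: {productivity:.2f}")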

Qualitative data have many forms and diverse sources. These include observational records of the research setting and its ambience, data from interviews and group discussions, and written data ranging from letters from clients to formal questionnaires and inventories on the organizational culture. These forms of data can be gleaned from individuals inside the institution as well as from peers and clients external to it.

(6) Interpretation of Data

One of the most difficult aspects of an evaluation is making judgments about the data, i.e. whether performance is “good.” In general, the organization must decide what types of performance should be measured and what standards are acceptable in its environment. Investors must ultimately decide whether or not the levels of performance that exist (or are potential) are worth the level of investment.

Since there are at least two main institutional interests involved in the institutional evaluation process (IDRC’s and the organization’s), and possibly others, the probability exists that many interpretations could arise from the same data. Therefore, it is important to take these potential differences of interpretation into account at the design stage.

In general, judgments about data are made by using four main decision-making tools: (1) benchmarking (using best practices to compare data), (2) reliance on experts’ opinions, (3) criterion measures (deviation from specific, stated goals and objectives), and (4) measurement of statistical differences (often with the use of tests of statistical significance). Using one or more of these tools, evaluators North and South must interpret the evaluation data collected.

It is ultimately the organization’s responsibility to accept or reject the analysis and judgments and decide whether to commit to making organizational change. IDRC must interpret and react to the data and the institutional response to the data in light of its own institutional objectives.
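
To make the decision-making tools above more concrete, the short sketch below (in Python) shows one way a criterion measure or benchmark comparison might be applied to a negotiated indicator. The function, indicator names, and figures are hypothetical, offered only to illustrate the logic; they are not part of the framework.

# Illustrative sketch: judging an observed indicator value against a negotiated
# target or benchmark. All names and numbers are hypothetical.
def judge_indicator(name, observed, target, tolerance=0.10):
    """Return a plain-language judgment of an observed value versus a target."""
    if observed >= target:
        return f"{name}: meets the negotiated target ({observed} vs {target})"
    if observed >= target * (1 - tolerance):
        return f"{name}: close to the negotiated target ({observed} vs {target})"
    return f"{name}: below the negotiated target ({observed} vs {target})"

print(judge_indicator("peer-reviewed papers per researcher", 1.8, 2.0))
print(judge_indicator("external research grants secured (USD)", 120_000, 100_000))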

(7) Institutional Scope and Stage of Development

Institutional assessments typically generate an array of complex information, all of which potentially contributes to understanding the performance and developmental progress of an organization. Clearly, the data must be contextualized and the limitations of both data and process acknowledged.

Data considered in isolation of context can be misleading. For proper interpretation, many results need to be placed into social, political, economic, and historical perspective and screened through the institutional lens. For instance, new institutions differ from more venerable ones in that their normative structures are not yet integrated into the national, regional, or local cultural systems. Some institutions are local in scope rather than international and should be assessed from this perspective. All institutions, whether local, regional, national, or international, will need to have their stage of development considered (as will sub-units within the institution), for given the nature of the research endeavour, it undoubtedly takes time to generate positive results.

(8) Costs: Expectations and Limitations

The expense of a full-blown institutional evaluation is a major issue. Collecting valid evaluation data entails a comprehensive process that can be difficult, time-consuming, and costly. Without such data, institutions must rely on the perceptions of experts, and the credibility of external people can become a focal issue. A large number of trade-off decisions need to be made by IDRC, the research institution, and other partners in the evaluation. Expectations need to match the scope of the exercise. Trade-off decisions need to be explained if they materially affect the validity or reliability of the data; limitations should be clearly identified.

A Framework for Profiling Organizations

IDRC’s Evaluation Unit has constructed a framework to help IDRC personnel achieve greater understanding of organizations funded by the Centre. Following this approach will help clarify important issues and guide the collection of data that will inform decisions about enhancing institutional performance and capacity.

In brief, the framework encompasses the following areas, each of which will be discussed in forthcoming chapters:

Forces in the External Environment

• Administrative/legal

• Technological

• Political

• Economic

• Social and cultural

• Stakeholders

Institutional Motivation

• History

• Mission

• Culture

• Incentives

Institutional Capacity

• Strategic Leadership

• Human Resources

• Other Core Resources

• Program Management

• Process Management

• Inter-institutional Linkages

Institutional Performance

• Movement towards Mission

• Efficient Use of Resources

• Relevance

Key forces in the environment which have a bearing on the institution’s performance must be understood. These could include the host country’s science/technology policy, the level (or lack) of basic infrastructure services such as electricity and water, or pressing social problems in the country which shape action research. The strategic environment is dealt with in Chapter 3.

Donors are interested in seeing the clear-cut results of their investments. Thus, their natural tendency is to intersect an organization at the level of “performance,” made visible through products, programs, and services. But before assessing an institution’s outputs, it is first necessary to gain an understanding of institutional motivation: its mission and goals, and insofar as possible, its culture and organizational incentives. These drive performance from within, and a performance assessment must address how well the organization is fulfilling its mission. Institutional motivation is discussed in Chapter 4, in which key concepts and potential indicators for use by IDRC are suggested.

For those wishing to examine the key components of institutional capacity which underlie performance, the complex area of organizational capacity is covered in Chapter 5. Six main areas of institutional capacity are detailed (strategic leadership, human resources, other core resources, program management, process management, and inter-institutional linkages) and components within each of these areas are discussed.

Performance is seen in the visible outputs of the research institution, namely its research and training products and services. Our framework asserts that performance is a function of the interplay of an institution’s unique motivation, its organizational capacity, and forces in the external environment.

Ways to approach performance are discussed in Chapter 6. Guides for conducting selected aspects of institutional evaluation have been described in a series of companion documents derived from this framework. They can help delineate approaches for organizational assessments lasting one to two days as well as for large-scale assessments.

Exhibit 2.1 Framework for assessing research institutions

• Understand the organization’s environment

• Determine organizational motivation

• Examine key areas of organizational capacity

• Measure organizational performance
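
As a rough aid to organizing findings under the framework summarized in Exhibit 2.1, the following sketch (in Python) records the four dimensions and their components as a simple checklist into which observations can be filed. The data structure and the sample finding are assumptions made for illustration; the component lists simply mirror the outline given earlier in this chapter.

# Illustrative sketch: the assessment framework as a checklist of dimensions
# and components. The structure itself is an assumption for illustration only.
FRAMEWORK = {
    "external environment": [
        "administrative/legal", "technological", "political",
        "economic", "social and cultural", "stakeholders"],
    "organizational motivation": ["history", "mission", "culture", "incentives"],
    "organizational capacity": [
        "strategic leadership", "human resources", "other core resources",
        "program management", "process management", "inter-institutional linkages"],
    "organizational performance": [
        "movement towards mission", "efficient use of resources", "relevance"],
}

def new_profile():
    """Return an empty profile: one list of findings per framework component."""
    return {dim: {comp: [] for comp in comps} for dim, comps in FRAMEWORK.items()}

profile = new_profile()
profile["organizational capacity"]["human resources"].append(
    "High researcher turnover reported in interviews (hypothetical finding).")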

Constructing the Institutional Profiling Process

For the institutional profiling process to become a learning experience for all parties, it is necessary for the key players to create and agree upon an appropriate model at the outset. Components of the profiling process include creating partnerships, developing terms of reference, utilizing a workplan, participating in data collection and analysis, obtaining evaluation feedback, validating the results, and developing action plans. Each is discussed below.

(1) Creating Partnerships

Partners in an organizational assessment initiated by IDRC are, of course, the Centre and the particular research organization. Additional partners might include other interested donors or granting organizations – in fact, any legitimate participant with a stake in the process, including those who might help fund it.

(2) Developing Terms of Reference

Each organization is unique, with its own mission to fulfil and its own stakeholders to satisfy. The terms of reference (TORs) of each evaluation will vary according to the situation (including the interests of the partners, above) and should be negotiated at the outset between IDRC and those within the partner institution in a position to effect organizational change.

The TORs describe the broad areas upon which the partners intend to focus, and each evaluation will need to have defined information needs. For example, will the spotlight be solely on performance? What is the time span in which performance will be considered? Will underlying institutional capacity be considered as well? Which areas of capacity? Who is doing what in the course of gathering data, i.e. what tasks fall to external experts and what might be topics for self-study? Finally, what will the budget be for the evaluation effort?

(3) Utilizing a Workplan

A specific plan should be set in writing, detailing the steps of how the terms of reference will be carried out. The workplan is the point at which partners come to agreement and formalize a contract regarding their working relationship. In the workplan, specific questions are identified, methodologies are settled upon, and values are clarified.

Factors to be negotiated include the specific types of data to be collected within each area and appropriate indicators of performance (which are only suggested in this guide and need to be refined and further developed, as befits each situation). It is essential that all parties agree on fair and legitimate indicators, otherwise the assessment process will have little credibility or positive potential for reform.

Value judgments will ultimately need to be imposed upon the performance indicators, and these, too, will need to be negotiated. For instance, how much published research constitutes an adequate output? What dollar figures attached to external funds garnered or research contracts are considered healthy?
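
One possible way to record the indicators and value judgments negotiated in the workplan is sketched below (in Python). The fields and example entries are hypothetical; as the text emphasizes, the actual indicators and thresholds must be negotiated for each assessment.

# Illustrative sketch: recording negotiated indicators in a workplan.
# Field names and example values are hypothetical.
from dataclasses import dataclass

@dataclass
class NegotiatedIndicator:
    area: str        # framework area the indicator belongs to
    indicator: str   # what will be measured
    source: str      # where the data will come from
    target: str      # the level the partners agree to treat as adequate

workplan_indicators = [
    NegotiatedIndicator(
        "performance (effectiveness)",
        "papers published in peer-reviewed journals per year",
        "institutional publication records",
        "agreed minimum of 10 per year"),
    NegotiatedIndicator(
        "performance (efficiency)",
        "external research funds garnered",
        "financial statements",
        "agreed minimum in local currency, set during negotiation"),
]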

(4) Participating in Data Collection and Analysis

Once the types of data to be collected are decided upon and delineated in the workplan, concerns typically arise about the complexity of the information and about the time and expense it will take to amass and analyze it. Approaches to data collection and analysis are custom-tailored for each institution based upon the type of data that is available and the financial feasibility of the effort, in accordance with the budget. Much can be done internally, drawing on existing management and administrative practices.

(5) Feedback

After the profiling process, transmitting the results of the exercise to interested stakeholders (both within the organization and external to it) is an essential step. Employing multiple media to get the message out is generally more successful than relying on people to read the written report. The main issue is to ensure that those who need to learn the results actually hear the feedback. Effective methods to convey information include formal and informal talks and workshops, which can be ongoing during the profiling process.

(6) Action Plans

Once the profiling process is complete, strategies to address the findings can be incorporated within the organization’s strategic planning process. Indeed, they may help to inspire it.


Chapter 3
Key Forces in the External Environment

Introduction

No organization can exist in a vacuum; each is set in a particular country and region to which it is inextricably linked. This setting provides multiple contexts that influence how the organization operates and how and what it produces. Thus, the concept of “external environment” is an important consideration for IDRC as it attempts to understand the research institutions it supports. An analysis of the external environment is an attempt to understand the forces outside organizational boundaries that are helping to shape the organization.

Forces outside the institution’s walls clearly have considerable bearing on that which transpires within. The external environment can provide both facilitating and inhibiting influences on organizational performance. Multiple influences in the immediate or proximal environment form the boundaries within which an organization is able to function; these influences likewise shape how the organization defines itself and how it articulates what is good and appropriate to achieve.

Key dimensions of the environment that bear on the institution include the administrative/legal, technological, political, economic, and social and cultural contexts, the demands and needs of external clients and stakeholders, and relations with other pertinent institutions. Some examples of environmental considerations that will be important to IDRC when profiling an institution are detailed below.

Administrative/Legal Environment

The administrative and legal environment in a country provides a framework within which an organization operates. In some countries this environment is very restrictive and has significant impact on all aspects of the organization; in other countries the administrative/legal context is more permissive. Understanding the administrative/legal environment is essential to determining whether organizational change can take place. The administrative context within which the organization operates may be shaped by a unique combination of forces, including international, governmental, and non-governmental policy, legislative, regulatory, and legal frameworks. An organization is affected by the policy or regulatory context that gave rise to it. This includes specific laws and regulations that support or inhibit the institution’s development.

Several specific dimensions of the administrative environment should be examined:

Whether there are constitutional restrictions on the organization: An assessment should first determine whether the organization is part of a government ministry or department, and whether it is under federal or provincial jurisdiction.

Whether specific regulations govern the goals and structures of the organization: It is important for IDRC to know if the organization has a specific mandate and/or a specific structure that has been imposed.

Whether there is a legislative mandate that restricts leadership of the organization: It is helpful to understand any parameters that have been set around who can lead an organization. This includes identifying the governing body of the organization, understanding how its members are selected, and understanding who has the mandate or authority to set goals for the organization and develop curriculum.

Technology Environment

Both the types and the level of technology in the society give insight into understanding an institution. Institutions dealing with Western paradigms are dependent on the state of national infrastructure, e.g. power, water, transport; those which concentrate on indigenous research paradigms may have totally different dependencies. Thus, it is important to understand the level of relevant technology in the institutional context and whether such technology is defined by computer literacy or by highly developed indigenous methods of verbal and nonverbal communication. It might also be helpful for an assessment to include a consideration of the process by which new technology comes into use, both to understand how difficult it is to acquire needed research technologies and to develop an appreciation for the society’s willingness to embrace both new knowledge and change.

Political Environment

At a general level, IDRC needs to understand the relationship between governmental strategy or development plans and the institution. Several specific dimensions of the political context should be scrutinized:

The extent to which the government and its bureaucracy support and contribute resources to the institution: It is imperative that IDRC and other funding agencies know whether significant governmental inputs are anticipated to support increased staffing, maintenance, or other recurring costs typical in research projects. The political context usually entails resource trade-off decisions at the government level.

The extent to which the political system is stable or poised to undergo significant change: This factor is vital; the foreign policy context and its effect on IDRC should also be considered.

Whether the political context of the institution directly involves the legal context: Some institutions require specific legal status to operate, to receive external funding, and to import equipment in support of research.

Economic Environment

In the economic environment, the organizational analysis should centre on those aspects of the economic system that directly impact the type of project being considered. For example, inflation, labour laws, and opportunity costs for researchers in public institutions directly impact organizational activities. Clearly, a country under a structural adjustment regime or one that is expecting to undergo restructuring presents an investment context that IDRC needs to understand. Countries with foreign currency restrictions represent different environments for institutions than countries without them, for such restrictions have ramifications for research, e.g. for equipment procurement and maintenance. It is important for IDRC to know how the organization the Centre is supporting is affected by these and other economic forces.

Social and Cultural Environments

Social and cultural forces at local, national, and often regional levels have profound influence on the way organizations conduct their work and on what they value in terms of outcomes and effects. For example, the mores of an indigenous culture have a bearing on the work ethic and on the way in which people relate to one another. Undoubtedly, the most profound cultural dimension is language. The extent to which organizational members can participate in the discourse of the major scientific language will determine the extent to which research efforts focus inwardly or contribute to regional and global research agendas. Understanding the national/regional/local values toward learning and research provides insight into the type and nature of research that is valued. For example, what is the relative priority placed on contract research in partnership with local clients, e.g. testing products and procedures with indigenous populations, as opposed to sharing information with academic peers internationally, or generating bio-statistical data that will shape national or regional policy? Arriving at these priorities involves culture-based decisions.

Stakeholder Environment

Although research institutions tend to be driven by the research mission and the process of achieving it, all institutions are dependent for their survival on various groups of stakeholders. The stakeholder environment consists of those people and organizations external to the research institution who are directly concerned with the organization and its performance. Examples of stakeholders are suppliers, clients, sponsors, donors, potential target groups, and other institutions doing similar or complementary work. An organizational analysis seeks to learn the identity of these groups in order to assess their potential impact on the organization. Because of its international, interdependent dimension, contemporary research relies on institutional relationships; formal and de facto relationships with universities, government departments and agencies, and other research institutions both within and outside the country therefore need to be understood.

Influences from these multiple environmental contexts can become major facilitating or constricting forces on the institution as it works to accomplish its mission. In the extreme, these forces can keep an institution alive artificially; conversely, they can thwart organizational survival.

Linking Environmental Forces to Key Questions

For IDRC to make effective investments in institutions, it needs a full and fair understanding of the organizational milieu and its bearing on organizational functioning. Only in this way can IDRC help support organizational efforts to overcome elements in the environment that may be impeding organizational performance.

The preceding section suggested a range of considerations for attempting to reach an understanding of the external environment. However, it is plain that the amount of data one could gather is enormous. In order to focus the environmental scan, organizational assessments tend to gather data around four basic questions that cut across various components of the external environment:

1. What are the major forces affecting the organization?

The major categories of forces described in the previous section need to be integrated into some sort of environmental profile. This profile can take various forms, but whatever form it takes, the profile should identify and characterize the main forces acting on the organization.

2. How predictable are the external forces that affect the organization?

How stable are the social, political, and economic forces in the institution’s immediate environment? A variety of factors can make the external environment unstable, therefore affecting the quality of organizational performance and the type of investment that IDRC might want to make.

3. How friendly or hostile is the external environment?

Clearly, the more hostile the external environment, the more the institution needs to respond to it, the more difficult it is to carry out work, and the more defensive the institution must become. A government that withholds funds, bureaucrats who prevent equipment from being imported, an IMF regime that reduces the purchasing power of staff – each of these environmental factors directly affects the organization and should be factored into the assessment.

4. How resilient is the organization?

Institutional resilience essentially relies on the autonomy of the institution within its environment. How dependent are the programs on external events and stimuli? Some institutions exist in complex environments in which their autonomy is subject to many forces, while others are less vulnerable. The more externally dependent or reliant an institution is for its programs, services, and performance, the more sophisticated and capable it must be about managing the external environment.

The institution’s reputation is a major defense against such external forces. IDRC should understand the perceptions of reputation held by the major stakeholders. Groups such as the research community, government legislators, government bureaucrats, and granting agencies all have perceptions of the research institution and its outputs. Each group has different criteria and influence, and these diverse “influencers” all contribute to the organization’s reputation. Obviously, the stronger the organization’s reputation and the more broadly based its support, the more resilient the organization will be regarding threats of all kinds, including reduction in financial support.

Exhibit 3.1 Questions typically asked about the environment

1. What are the major forces affecting the institution?

• Are the major issues political, financial, linguistic, cultural, technological?

2. How predictable are the external forces that affect the institution?

• Is the situation as it has been or are there recent or impending changes that will affect it?

3. How friendly or hostile is the external environment?

4. How resilient is the institution?

• To what extent do the mission and the programs of the research institution rely on the institution’s ability to link to its external environment? In other words, how dependent or independent is it regarding this environment?

• How diversified are its reference groups, both quantitatively and qualitatively?

Data Gathering

The following are key issues to consider within each of the institution’s important environments.

Administrative/Legal Environment

The policy environment

What, specifically, characterizes the country’s policy environment in this field, e.g. education, science/technology? Is an appropriate level of support given to the sector? Does the institution have a focused national role and function and links to national or sectoral programs?

The legislative system

To what extent is the country’s legislative system stable and functional? Do the laws that govern relationships function rationally, and is conflict arbitrated in a reasonable way, freeing individuals from extreme corruption or conflict? What are the wage laws and salary structures which directly affect the institution? For example, are university salaries tied to teacher or civil servant salaries? Do wage rates differ significantly between public organizations and private organizations?

Technology Environment

Is the technology needed to carry out the organization’s work supported by systems in the wider environment, e.g. maintenance systems?

What is the process by which new technology comes into use in the society? Does this make it difficult to acquire needed research technologies? Does it hinder the ability of the society to adopt the results of research?

Political/Economic Environment

Overall, what is the value placed on research by the nation? Specifically, do national authorities support the institution through large-scale support (such as operating funds)? Are decisions about allocations heavily political?

The political bureaucracy

To what extent are government bureaucrats able to carry out decisions? On what basis are resource allocations made? Does the bureaucracy facilitate or retard the development of the organization? For instance, are the rules governing the institution so stringent that donor participation is made difficult or impossible? (For example, must money from outside the country be administered through the country’s External Affairs Department rather than go directly to the institution? Does the country serve as gatekeeper of technology, inhibiting the transfer of equipment from one country to another?)

The history and amount of IDRC support and the goals of this support

What is the amount and nature of other donor support? Who, external to the country, is investing in the country, in this type of institution? Is there potential for coalitions or joint funding of projects by donors? Why has IDRC chosen to support this institution? What is the present mode of IDRC intervention (project support, multiple projects, other)? Why was this mode of intervention chosen? What are the goals of IDRC support?

Social and Cultural Environment

Do cultural values support the free intellectual exchange of ideas? Do they place value on the institution’s area of study and the work it produces, for example, scientific knowledge or information pertinent to women’s studies? Are the country’s human resources adequate to support the institution’s work (e.g. the quality of the labour pool, demographic trends)?

Stakeholder Environment

Does each of the institution’s stakeholders have an interest in expecting or demanding that the research institution make satisfactory progress in carrying out its mission? Do strategic decision makers in the organization understand the specific demands that each stakeholder group is making on the organization? Awareness of the market segments served and the products/services produced to serve them constitutes a “reality test” for the organization.

Does the organization adequately attempt to understand other organizations in the environment (local, regional, national, international) with a bearing on its niche? For example, what is the potential for losing employees to similar organizations offering better salaries? The potential for constructive collaborations and other partnerships that might enhance output? Are adequate networks and systems in place linking this organization to other organizations so as to enhance/support research or training products/services?

Data Gathering Methods and Sources

Obviously, the external environment within which institutions operate is large and complex, and culling data from this environment requires the ability to separate the important from the less important. It is critical that the organizational assessment capture the impact that the environment is having on the motivation, performance, and capacity of the organization.

The first place to look for pertinent data is an existing “environmental scan” that the organization may have carried out itself; as part of strategic planning, such scans are now common. If a recent scan has been carried out, it will be of great assistance. If not, the evaluators must attempt to identify, with the assistance of key organizational members, the external factors (e.g. social, political, economic) that are most supportive of, as well as most troubling to, the organization. These factors will form the starting point for discussion and analysis.

Exhibit 3.2 Methods of gathering environmental data

• Ask for existing environmental scans for the institution

• Obtain scans from other research institutions in the country

• Review recent studies by the World Bank and other donor institutions

• Read contextually (e.g. newspapers, magazines, historical analysis)

• Interview key informants inside the institution about the external factors influencing the institution

• Interview key informants outside the institution to understand how the external environment affects internal operations

• Ask those involved about key legal and governmental regulations that influence the institution (e.g. patent laws, development plans, labour codes)

• Collect and analyze data on the evolution of government and donor support

• Ask researchers about prevalent values regarding learning and research

• Analyze development plans and key policy documents

• Collect and analyze data on resource allocation trends for research and development in the country and region.

Relevance to Capacity and Performance

Both performance and capacity are heavily influenced by the external environment.

Performance is contextual, for it is the values of key organizational stakeholders that determine the short-term and long-term reputation of the organization. For example, government officials who see little evidence of immediate impact might view the research institution quite differently than does the research community, which applies international scientific norms as its referent. Local community residents might regard the institution as a helpful resource, but the scientific community of the country or region might find its work out-of-date. Understanding the external environment therefore helps to contextualize the understanding of performance.

With regard to capacity and its development, the institution’s context is an intervening variable in many management choices. For instance, the usefulness of a particular organizational strategy or structure can be directly influenced by the organization’s external environment. The extent to which resources are available is influenced by the external environment, as are the internal policies and procedures deployed by an organization to control these resources. The nature and type of inter-institutional linkages are similarly affected by the environment. Ultimately, the external environment influences the choices an organization makes regarding its programs, types of outputs, and the standards of judgment that are appropriate and acceptable by which to measure its progress in fulfilling its mission.

Chapter 4
Organizational Motivation

Introduction

Organizations, like people, have different rhythms and personalities. In the first place, each has a different purpose, or mission. Some are highly motivated by the opportunity “to do good” while others are driven to perform by other forces, including the personal ambitions of key players. Moreover, each institution has a unique working ambience or climate that is an amalgam of purpose, history, and personality. The organizational concepts that motivate and drive the institution include its mission and its internal culture and organizational incentives as well as the widespread values and beliefs about the role the institution plays in society.

History

An organization’s history is charted in its important milestones – the story of its inception, its rate of growth, awards of achievement or distinction, and notable changes in structure or leadership. While the evolution or history is often expressed through formal documents such as the charter, stated goals and objectives, and plans (strategic or otherwise), it is also told in an unwritten collection of important stories or legends that can be highly motivational to organizational members. For instance, accounts of the organization’s triumphs and achievements and memories of important obstacles overcome are often woven into a proud tradition to uphold.

Mission: Stated and Perceived

An organization’s mission is its raison d’être. It speaks to the questions: Why does this organization exist? Whom does it serve? By what means does it serve them? Those seeking to learn the mission of an organization often find they are dealing with two entities: that which is written down (the mission statement) and that which is conceived by organization members.

The mission statement is the written expression of the basic goals, characteristics, values, and philosophy that shape the organization and give it purpose. It seeks to distinguish the organization from others by articulating its scope of activities, its products/services and market, and the significant technologies and approaches it uses to meet its goals. By expressing the organization’s ultimate aims – essentially, what it values most – the mission statement provides members with a sense of shared purpose and direction. The long-term goals enshrined within it serve to inspire the organization’s strategic planning and major activities. These goals also form the basis for evaluating organizational performance.

Alongside the organizational mission that is formally written down is the perceived organizational mission. Often the latter does not correspond to the stated mission, being out-of-date or even misconstrued. But the perceived mission is nonetheless a powerful behavioural driver for those in the organization. One task of an organizational assessment is to assess the degree to which the formal mission statement is understood and has been internalized by members of the organization, i.e. the congruence of perceived and stated missions.

Mission as Tool

Not long ago, it was common for mission statements to gather dust on the shelf. They were largely symbolic documents and seldom referred to. More and more, however, organizations have realized the importance of making the mission statement a “living statement.” When formulated and used strategically, a mission statement is a powerful tool which communicates the organization’s fundamental verities to internal and external stakeholders. Used in this way, the mission statement becomes a driving force of the organization and a yardstick for measuring its accomplishments.

Culture

While the mission statement formally articulates organizational purpose, it is the organization’s culture that gives life to the mission and helps make its realization possible. Culture is the sum total of the values, beliefs, customs, traditions, and meanings related to mission fulfilment and developed over the history of the organization that make it unique, govern its character, and drive the organization.

Within the culture reside the organization’s distinguishing characteristics. The culture embodies the collective symbols, myths, visions, and heroes of the then-and-now. For instance, culture finds expression in the collective pride (and even embellishment) of the accomplishments of individuals. Values important to the organization are illustrated through stories about past successes and failures; these form a living history which guides managers.

The nature of research is such that researchers frequently reap the rewards only in the long term. Involvement in the research endeavour requires uncommon persistence and dedication. Certain aspects of the culture of research institutions serve to sustain and motivate those bent on a profession requiring painstaking work. These aspects include a learning climate, intellectual values, a sense of belonging, a sense of ownership for work done, and an acceptance of delayed rewards. Undoubtedly, one of the most attractive incentives for researchers is the opportunity to lead an active intellectual life.

Culture as Motivator

Organizational culture is a powerful motivating force: by embodying the values sanctioned by the organization, the culture frames the boundaries of acceptable attitudes and behaviour and creates a shared ethos. For instance, the culture helps determine the extent to which members of the organization will – and are expected to – extend themselves to fulfil tasks. Indeed, the culture can cause individuals to use or to push the very limits of organizational capacity. (“They said it was impossible, but we made it work!”)

Cultural values express what people believe the organization wants to happen. When individuals join an organization, besides learning about its formal aspects, they spend much of their time being socialized into the “informal organization,” namely, the culture. It takes time to absorb the organizational culture, for it generally cannot be spelled out in a document or directive.

In sum, an organization’s culture is the attitudinal and behavioral representation of the mission. Culture helps define its members’ attitudes and actions regarding tasks, roles, people, power, and change. It provides a framework through which the organization can acknowledge internal problems and resolve them, and analyze external challenges and meet them.

Incentives

An organization’s competitiveness depends in part on its ability to create an environment that motivates and stimulates its personnel. In addition to cultivating a culture of knowledge, research institutions must continually seek ways to keep research staff motivated. Organizational incentives refer to the way an organization’s system of rewards and punishments either encourages or discourages behaviours – in the case of research institutions, productivity and creativity. Incentives are important to individual research careers and to overall organizational success, and they can help compensate for the uncertainty, the dearth of specific or immediate products, and the long-term nature of results inherent in the research enterprise.

Some of the incentives within knowledge-based institutions in developing regions include the social value placed on scientific knowledge, and the importance of peer recognition to investigators. Scientific creativity has been found to flourish in an atmosphere that encourages wide communication and external stimulation, and that allows researchers to decide what to investigate. Appropriate remuneration is another important incentive – i.e. not too much less than what the researcher could earn in the productive sector or with the government.

Information to Gather to Assess Motivation

In carrying out an organizational assessment, the organization’s mission and the culture that drives the mission are important variables to consider. With nearly all evaluation activities, multiple data sources help improve the reliability and validity of the findings. This is particularly true in gathering data to assess the mission and culture of the institution. The evaluator’s goal is to understand the underlying dynamics of the organization – the extent to which organization members are motivated to work towards organizational goals and aspirations.

Mission

To start, it is important to understand the evolution of the organization as expressed through its formal documents, charter, stated goals and objectives, and plans (strategic or otherwise). Have the mission and goals been updated in the recent past? Do organizational members feel included in the updating process? Is there a formal mission statement? Do organizational members know what the mission is? Important organizational milestones also help profile the institution’s developmental progress. Gathering concrete data related to the organization’s mission helps contextualize these sometimes abstract concepts.

Obtaining such information helps provide insight into whether or not staff and stakeholders have a vision of the organization that is congruent with the stated mission.

Culture/Organizational Incentives

At the heart of gathering information about the institution’s mission is attempting to ascertain what drives organization members to strive for organizational goals; thus, closely aligned with the mission is the underlying culture of an organization. Unlike a mission or goal statement, which can be written down and analyzed, the culture of an organization represents the beliefs, values, and organizational incentives that drive individual members. In this respect it represents the collective subconscious of an organization.

There is no simple way to gather data on the organization’s culture. Some organizational evaluators use survey instruments; others use less formal interview and observational techniques. Regardless of the technique utilized, it is critical to arrive at a full understanding of organizational motivating forces. For example, it is important to understand if an organization is being driven by the belief that it should be staffed by a national or international staff. It is important to understand the extent to which the organization values basic research or community service. It is important to see if administrative rules dominate in the struggle between research productivity and bureaucratic formalities. Clearly, every institution has its own mission and cultural aspirations. It is the organizational evaluators’ responsibility to uncover and analyze these aspirations.

Exhibit 4.1 Suggested data gathering methodologies

• Organizational observations by experienced, knowledgeable external observers.

• Interviews of individuals and/or small groups about what drives the organization.

• Surveys which take a reading of culture by having members identify what they perceive as dominant beliefs, attitudes, and values in the organization.

• Scrutiny of selected organizational documents and promotional literature to see how the organization perceives itself and how it describes itself to others.
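Where a survey instrument is used (as in Exhibit 4.1), the responses still need to be tallied before they reveal anything about the culture. The following is only an illustrative sketch, written in Python with invented statements and ratings rather than data from any actual instrument; it averages agreement with each value statement so that the most and least widely shared beliefs can be reported back to the institution.

# Minimal sketch with invented data: tally Likert-scale responses about
# dominant beliefs, attitudes, and values in the organization.
from statistics import mean

# Each key is a value statement from the (hypothetical) survey; each list
# holds respondents' ratings from 1 (strongly disagree) to 5 (strongly agree).
responses = {
    "The mission guides my day-to-day work": [4, 5, 3, 4, 5],
    "Basic research is valued over community service": [2, 3, 2, 1, 3],
    "Administrative rules dominate research priorities": [4, 4, 5, 3, 4],
}

# Report statements from most to least widely shared.
for statement, ratings in sorted(responses.items(),
                                 key=lambda item: mean(item[1]),
                                 reverse=True):
    print(f"{mean(ratings):.1f}  {statement}")

A similar tally broken down by department or by seniority would indicate whether the perceived mission and values are shared across the organization or held only by particular groups.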

Linking the Mission and Culture to Performance and Capacity

In 1982 Peters and Waterman in In Search of Excellence reminded us of the importance of the relationship between mission, vision, values, and performing organizations. It stands to reason that an organization whose members passionately strive to improve their work has a higher probability of achieving its goals than one without such committed individuals. As long as its goals are appropriate, such organizations tend to be successful.

In profiling an institution, judgments need to be made regarding the extent to which a commonly held “mission and culture” is facilitating or detracting from performance. If the mission of the organization is outdated and its researchers unclear about organizational directions, IDRC might deem it appropriate to work with the institution in developing more suitable organizational motivation. In this way, the Centre would go beyond merely assessing culture to initiate a culture-building process as a means of improving organizational health. The intervention would aim to create a culture focused on and appropriately directed by the organization’s goals, as reflected in its strategic plan. This is discussed further in the “Strategic Leadership” section of Chapter 5.

Exhibit 4.2 Questions typically asked when assessing organizational motivation

• To what extent is there a clear mission that drives organizational members’ behaviour?

• How does the organization’s mission relate to IDRC’s goals?

• To what extent are the research institution’s values compatible with those of its partner institutions and major donors?

• To what extent have organizational members adopted the mission, and do they feel it is one they subscribe to?

• Is the mission updated and linked to a set of goals?

• Are the goals appropriate to the mission?

• What are the key values and beliefs that drive organizational members’ behaviour?

• To what extent are the senior researchers guided by mission and goals?

• Do new staff embody the mission?

In IDRC, the prevailing ethos has been to seek out motivated and bright individual researchers and to support them in building their capacity to carry out research. (The Centre has also gone beyond the project mode to support entire departments and institutions, but it has done this less frequently.) Arising from IDRC’s commitment to developing organizational capacity, the Centre is searching for motivated institutions that hold the promise to improve their performance. While support can be provided to help build motivation and capacity, in most organizations, helping to build a culture that supports excellence is a long-term and difficult intervention. IDRC would need to consider the extent of the resources available before attempting to intervene with this variable.


Chapter 5
Organizational Capacity

Introduction

Since 1970, IDRC has stressed that investment choices should focus on building the capacity of indigenous organizations and institutions to solve their development problems. The Centre’s recently defined strategy for the 1990s (“Approaches to Strengthening the Institution”) seeks to ensure sustainable organizational development through a focused and holistic effort to build the capacity of its funded partners.

The experience of IDRC and other agencies indicates that creating wider change at the organizational level is conceptually and practically a more difficult and complex undertaking than is project support. At the centre of this complexity is our embryonic understanding of institutions and of building organizational capacity.

Our framework for viewing organizational capacity entails six main, interrelated areas that underlie an institution’s performance: strategic leadership, human resources, other core resources, program management, process management, and inter-institutional linkages. Each of these areas contains various components (detailed in the table below) which range in importance among institutions.

Exhibit 5.1 Components of capacity in research institutions

Strategic Leadership:

Leadership, Strategic Planning, Governance, Structure, Niche Management

Human Resources:

Research, Teaching, Managerial Staff, Technical/Support Staff

Other Core Resources:

Infrastructure, Technology, Finance

Program Management:

Planning, Implementing, Monitoring

Process Management:

Problem-solving, Decision-making, Communications, Monitoring and Evaluation

Inter-Institutional Linkages:

Networks, Partnerships, External Communications

Strategic Leadership

Strategy refers to all those activities that set the course for the organization and help keep it on course, in service of its mission. Strategic leadership is associated with risk, with vision, and with ideas. It is the process of setting clear organizational goals and directing the efforts of staff and stakeholders alike toward fulfilling organizational objectives. Strategic leadership of the institution involves developing ways of procuring essential resources, inspiring organization members and stakeholders to perform in ways that attain the mission, and adapting to or buffering external forces.

Exhibit 5.2 Components of strategic leadership

• Leadership

• Strategic Planning

• Governance

• Structure

• Niche Management

The outcome of strategic leadership is aligned direction and action. A strategically led institution will be continuously engaged in the process of changing, adapting, and following a path that makes sense to its members and to the external stakeholders who fund the institution or confer reputation.

Leadership

Leadership can exist at many places inside the organization, both formally and informally. Formal leadership is exercised by those appointed or elected to positions of authority; it entails activities such as setting direction, providing symbols of mission, ensuring that tasks are done, and supporting resource development.

Informal leadership is exerted by persons who become influential because they possess special skills or resources valued or needed by others; examples of informal leadership include spearheading the reorganization of the professional library or initiating an innovative, multi-disciplinary approach to a research problem.

The more broadly that constructive leadership is assumed by members of the organization, the more vibrant and creative the organization.

Strategic Planning

Strategic planning refers to the pattern of calculated responses to the environment, including resource deployments, that enable an organization to achieve its goals. It entails formulating and implementing activities that lead to long-term organizational success. The strategic plan is a written document setting out the specific goals, priorities, and tactics that the institution intends to employ to ensure good performance.

The development, implementation, and monitoring of institutional strategies can emerge either centrally or within decentralized units. The issue for the organizational assessment is whether or not a realistic strategy is helping to guide decisions throughout the organization.

In a research organization, strategic planning is generally a participatory process that helps engender shared commitment to organizational directions. Formulating strategy begins with identifying and/or clarifying goals and objectives and determining methods for reaching them. It involves exploring the fundamental questions: What are the major services that we offer? Who are our clients and what services do they want us to provide? Do our researchers agree with organizational direction? What new directions should we be moving toward?

As detailed in Chapter 3, each element of strategy (objectives, activities, and resources) is constrained by political, social, technological and economic environmental variables, particularly in public organizations. For instance, in certain research institutions the science/technology policy of the government is a vitally important variable. Strategic planning thus typically includes a scan of both opportunities and constraints presented by the environment.

A central issue in the survival of an organization is acquiring core resources in the vital areas of funding, infrastructure, technology, and personnel. Leadership in this domain means anticipating and capitalizing on opportunities in the external environment that might yield or support needed resources. It also means predicting threats to organizational resources and intervening (typically, politically) to ensure that organizational performance and survival are safeguarded. This level of leadership generally transpires between the senior executive of the organization and the governing body.

Resource acquisition entails constantly being on the look-out to create opportunities that will augment the organization’s resources. This can be accomplished through forming new alliances and partnerships and by forging new ways of thinking about generating resources.

For strategies to become operational, they need to be communicated, explained, processed, and revised according to feedback from stakeholders, both internal and external. From the board on down, all members of the organization need to work toward making the institution’s strategy a reality. Implementing strategy requires matching resources and activities to objectives and, if required, scaling activities to fit resource constraints (human, financial, technological, infrastructure).

Exhibit 5.3 Questions typically asked in assessing strategy

• Is there an organizational strategy?

• Is it known by the board of governors, senior managers, researchers, and other staff?

• Is the strategy generally accepted and supported in the organization?

• Has the strategy helped clarify priorities, thus giving the organization a way to assess its performance?

• Is it used as a way of helping to make decisions?

• Is the strategy an impediment to capacity-building or improved performance?

• Is the strategy one that supports issues of equity?

• Is there a process for clarifying and revising the organization’s mission and beliefs, for working on its goals, and for understanding its clients and users?

• Is there a process for scanning the environment in order to consider potential threats and opportunities?

• Is the governing body active in acquiring and protecting core resources?

• Does the organizational strategy identify the opportunities and constraints regarding core resource areas?

• Does the organization lobby effectively in its actions to secure core resources?

• Do senior board and management officials understand their roles in core resource acquisition?

• Is there a process for monitoring application of the strategy?

• Is there a similar process for understanding client and stakeholder requirements and changes?

Governance

The board of directors and constitution provide the legal and policy framework and direction for organizational functioning. Governance can be conceived as the point at which the external and internal environments meet. A good board of directors has its finger on the pulse of both environments; it assesses whether or not organizational initiatives are supportable, whether they meet development goals nationally and/or regionally, whether the organization is responding appropriately to important forces and trends in the field of endeavour and within the wider environment, and whether it is meeting the needs of those it serves.

At the governance level, policy issues are discussed and resolved in a timely manner, organizational policies are set, and capital and operating budgets are approved. The power and politics of the organization inevitably reside here, for the governing structure is often a forum for airing internal demands and resolving them within funding realities. Strategic direction and priorities, stakeholder representation, equity, external environmental forces (both positive and negative), as well as core resources all concern the governing body.

In research institutions, the governing body must strive to create a framework that allows experts within the organization to have the resources they need to remain on the leading edge of their fields. For instance, the board might approve the organization’s acquisition of a new technology and related staff training by affirming its supportability in terms of relevance to the core mission and to the demands and needs of constituents.

Exhibit 5.4 Questions typically asked about governance

• Does the governing structure both clarify and support organizational direction?

• Does the charter provide an adequate framework for carrying out the mission of the organization? Is it adequate for dealing with the external forces challenging the organization?

• Does the governing body scan the external and internal environment in order to understand the forces affecting the organization?

• Does the governing body respond appropriately to important environmental trends and influences, be these social, political, or economic? For instance, are both quality and equality issues reflected in the minutes and discussions? Does the governing structure support principles of equity?

• Does the governing structure operate effectively and efficiently?

Structure

The structure of an organization is the system of working relationships arrived at to divide and coordinate the tasks of people and groups working toward a common purpose. Most people visualize an organization’s structure in terms of the familiar organigram. However, structure is far more: it involves the division of labour, including roles, responsibility, and authority, as well as the coordination of labour into units and inter- and intra-unit groupings. Structure must be assessed to see if it is facilitating or hindering movement towards the mission and goals.

The task of creating appropriate and manageable work units or departments has challenged managers and students of organizational development for decades. We now realize that the “ideal” structure is the one that best fits the situation. At issue is whether or not the organizational structure supports or inhibits the capacity of the organization to perform its work.

In looking at the structure of a research centre, we are interested in (a) departments’ or other groupings’ understanding of their roles in the organization, (b) whether they have the authority to carry out their roles, and (c) whether they are accountable for their work.

Coordination is the process of linking specialized activities of individuals or groups so that they can and will work toward common ends. The coordination process helps people to work in harmony by providing systems and mechanisms for understanding and communicating one another’s activities.

Perhaps more than in any other endeavour, interdisciplinary teamwork is a competitive advantage in research, where innovation and productivity are key. Entire networks are being formed in which the best minds collectively tackle difficult research problems, with each contributor bringing his or her special perspective and expertise. The ease with which the research institution facilitates interdisciplinary approaches to research projects is an indicator of organizational health.

Many variables influence organizational structure. History, organizational goals, strategy, governance, funding (and other) pressures from the external environment, the specific fields of research, and technology all play a role in influencing the type of structures that exist.

Another important structural consideration is the manner in which authority is shared. Organizations range from the decentralized to the centralized, from the highly participatory to the dictatorial. In assessing the organization’s functioning, determining which model is better becomes a matter of judgment, for in actual fact, the appropriateness of the model depends upon the situation and context.

Exhibit 5.5 Questions typically asked in assessing organizational structure

• Are the organization’s mission and goals supported by its structures?

• Are roles within the organization clearly defined, yet flexible enough to adapt to changing needs?

• Are departmental lines or divisions between groups crossed easily, particularly in cases when collaboration would mean an improved product? Or are departmental lines jealously guarded, serving as an impediment to collaboration?

• Is structural authority used to further issues of equity?

• Does staff have linkages with/access to other researchers and units in the organization that are important to their work?

• Are there coordinating mechanisms which facilitate access to other researchers or research units within the organization?

• Can staff create important coordinating units with ease?

• Are efficient means for coordinating staff and units fostered and encouraged?

• Are there clear lines of accountability (individual, group, and organizational)?

• Do people have the authority to set agendas that support accountability?

• Are there efficiently functioning work groups?

• How centralized (vs. de-centralized) is decision-making? If highly centralized, does this model appear to be having negative consequences such as impeded productivity, low morale, etc.?

• Who bears responsibility for performance? Does this structure make organizational sense and facilitate the work?

Niche Management

In today’s global society, the success of a research institution is in part predicated on being able to establish a unique role within the society. Niche management entails carving out a particular area for the organization in the “marketplace” that matches its particular expertise. In the private sector, the marketing function evaluates an organization’s image or position in the marketplace and reaches strategic decisions concerning target markets, services, and products. This model is not so far afield for research institutions, which also depend upon a client system for support – namely government funders, industrial contractors, and the general public (i.e. taxpayers). For the research institution’s survival, appropriate clients must be cultivated and the research products and services must meet their needs.

A research centre’s niche helps clarify where it stands in relation to the constellation of other local, regional, national, and international research organizations. The organization’s position helps determine the level and types of funders that can help it build capacity.

Niche management is an organizational function that forces managers to look beyond internal matters to consider the wider environment and the broader issues of our time. If this function is neglected, the organization’s ability to adapt to the changing global situation will be severely limited.

Within the area of niche management, external communications are important. These will be targeted to stimulate funding (e.g. research grant proposals, requests for donor funding) or to stimulate awareness and interest regarding the services, products, and capabilities of the organization (e.g. annual reports, research reports, and newsletters to stakeholders).

Exhibit 5.6 Questions typically asked in assessing niche management

• Has the organization defined a marketing program in which the philosophy, mission, goals, and resource strengths of the organization are matched with the needs of the market groups selected for service?

• Is equity served through this niche? For example, are women and other under-represented groups served within the niche?

• Does the organization seek information about the products and (research) services that clients want?

• How do potential clients or customers know or find out about programs/services?

• What promotional information about the research organization is generated and communicated to stakeholders?

• Does the organization appear to have sufficient financial support from outside the organization? If not, could a lack of aggressive marketing or promotion, resulting in a lack of awareness, be the cause?

• Does the organization seek a larger share of customers, clients, funders, or other constituents through the systematic collection of client and product information (market research)?

Human Resources

The human resources (HR) of an organization consist of all staff (research, teaching, managerial, and technical/support staff) engaged in any of the organization’s activities. It is well-recognized that the human resources of any organization are its most valuable asset. This is particularly true in research centres, where the people required to do the core work of the organization are highly trained individuals. IDRC has long been committed to supporting the continuing professional development of researchers in the Centre’s partner institutions.

The HR management function is charged with planning and controlling this resource to make sure that people’s needs are met. This is not merely an altruistic function, for it is highly likely that staff who are reasonably comfortable with working conditions and stimulated by the environment will be productive.

Managing human resources requires forecasting the demand and supply of staff needed to carry out the activities of the organization. HR management also entails keeping records of human resources so as to permit the creation of a more equitable employment system.

Besides assessing staffing needs, some of the specific tasks involved in HR management include recruiting and hiring the best people possible, creating an assessment system that rewards people and helps keep them in the organization, and providing for the ongoing learning and career development of employees.

Exhibit 5.7 Questions typically asked in assessing human resources

• Are the right people in the right jobs in the organization?

• Is adequate HR planning occurring? Does the organization forecast, recruit and select human resources effectively?

• Is there an adequate HR policy in place? Does the organization keep personnel records? Is there a performance assessment system in place?

• Is the workforce reflective of a fair gender and equity policy?

• Is equity dealt with appropriately, particularly as relates to issues of selection and promotion?

• Are the learning/professional development needs of staff provided for?

Other Core Resources

Whether a government or a private sector enterprise, whether a self-contained institution or a department within a larger institution, the research entity needs well-managed resources. Having treated human resources separately, above, due to IDRC’s special commitment to their development, we have grouped the other essential resources into three areas: infrastructure, technology, finance. Strategic leadership entails developing systems for their planning, acquisition, and control.

Exhibit 5.8 Other core resources

• Infrastructure

• Technological Resources

• Finance

Throughout the development literature, studies point to deficiencies in internal management capabilities. Stories abound about poor resource management – for example, equipment remaining in crates and getting ruined before it is used and buildings falling into disrepair due to the absence of maintenance systems.

The capacity to manage resources is crucial not only to the performance of institutions but also to organizational survival. As IDRC engages in the organizational assessment process, it is likely that assessments of the current status of resource management will provide insights into how future resources or grants will be used.

Infrastructure

Infrastructure refers to the basic environmental conditions which enable work to transpire – for example, reasonable space in a building equipped with adequate lighting, clean water, a dependable supply of electricity, and transportation to and from work. In the North we take these conditions for granted, for we have the wealth and the governmental structures to support adequate infrastructure. In certain developing countries in which IDRC works, some of these fundamental conditions are missing.

Each Southern institution has its own array of assets and liabilities with respect to infrastructure resources, and the positive and negative points in each represent the starting points for information-gathering. If an organization has its basic infrastructure in place, this area will represent a small component of an assessment; if infrastructure is debilitated, however, with electricity and water found to be problem areas, then infrastructure will become a major concern.

As part of understanding capacity, one has to consider the extent to which inadequate infrastructure interferes with the functioning or the potential functioning of a specific research institution. Most of the time, deficiencies in one or more elements of infrastructure do not interfere with day-to-day work; however, at some point, work will be impacted. Typically, the crux of the infrastructure issue is maintenance, which suffers due to the lack of recurrent budgets providing for upkeep.

As technology becomes more and more sophisticated, basic infrastructure will play an increasingly important role in the type of organizational support that IDRC and its partners can provide. For example, sensitive scientific equipment cannot tolerate intermittent electrical supply, so acquiring a generator may be necessary. And if water quality is poor, purification may be required or a new well may need to be drilled to rectify the situation.

Exhibit 5.9 Questions typically asked in assessing infrastructure

• Does the organizational strategy identify the opportunities and constraints regarding infrastructure?

• Are the buildings and internal services (e.g. water, electricity) adequate to support and facilitate daily work?

• Is there adequate transportation to and from work for employees?

• Are communications systems (hardware) functioning at the level required?

• Are there adequate maintenance systems and procedures that are supported by a maintenance budget?

• Is building and equipment maintenance being managed? Is infrastructure being managed?

• Is adequate planning ongoing to address infrastructure concerns? Is an individual or a group responsible?

Technology

The technological resources of an institution encompass all of the equipment, machinery, and systems, including library information system hardware and software, that are essential to the research and training function. It is important to keep in mind that the instruments of technology are merely tools for enhancing research endeavour: ideas must inspire the technology.

The technological resources of a research centre must be appropriate to the type of work the organization is doing and must keep pace with the emerging ideas in each discipline.

Inappropriate technology can drive significant gaps between Southern and Northern research institutions, particularly in the hard sciences and engineering. Simply put, it is difficult to publish in the leading scientific journals using old technology. And in all disciplines, lack of access to the sophisticated means of accessing information used by colleagues worldwide will mean that institutions will have difficulty building the networks required for global research.

Assessing the appropriateness of organizational technology is a complex endeavour. Providing technology without developing the corresponding ability to use it is a waste of valuable resources. In general, one has to assess the ability of the organization and its units to create realistic plans for technology and to manage against these plans. If the plans are either too ambitious or not ambitious enough, an organization can have difficulty. A clear understanding of the broader strategy of the organization and of the requirements of the field is needed in order to assess the appropriateness of a given technology.

Exhibit 5.10 Questions typically asked in assessing technological resources

• Is adequate technological planning occurring?

• Overall, is the organization’s level of technology appropriate to carry out its functions?

• Does any one unit seriously lag behind the others in the level of technology needed to carry out its work? Why?

• Is access to international information provided to all units through library and information management systems?

• Are there adequate systems in place for managing the organizational technology?

• Are there adequate information technologies in place to manage the organization?

Finance

Financial management includes the prediction of financial resource requirements (operating and capital budgets) and cash management as well as the financial accounting function. Good management of budgeting and financial record-keeping is critical to overall organizational functioning. It enables essential information to be provided to the board and to those managers responsible for organizational resources. Good financial management also inspires confidence in funders who are interested in financial accountability and sound financial management.

Financial statements are a barometer of organizational health. Sound internal financial procedures regarding the administration of the organization’s operating funds and likewise, of individual program grants, offer assurance to donors that their monies are being directed properly. Of particular interest, when scrutinizing an organization’s financial system, is assessing what information the financial system can provide to decision makers.

Overall, important organizational goals should be supported by the budget. For example, if international exchange of information is an organizational priority, there should be evidence of funds allocated for electronic data systems, for hosting international visitors, and other related activities in support of this goal.
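
One simple way to test whether important goals are supported by the budget is to compare the share of funds actually allocated to a priority area with a target share agreed during planning. The sketch below, in Python, uses invented figures and category names purely for illustration; it is not drawn from IDRC practice.

# Minimal sketch with invented figures: check whether budget allocations
# reflect stated organizational priorities.
budget = {                      # actual allocations, in local currency
    "research programs": 620_000,
    "international information exchange": 45_000,
    "administration": 180_000,
    "maintenance": 55_000,
}
priority_targets = {            # shares of total budget agreed in the plan
    "international information exchange": 0.10,
    "maintenance": 0.08,
}

total = sum(budget.values())
for item, target_share in priority_targets.items():
    actual_share = budget.get(item, 0) / total
    status = "meets target" if actual_share >= target_share else "below target"
    print(f"{item}: {actual_share:.1%} of budget (target {target_share:.0%}) - {status}")

A gap between stated priorities and actual allocations, such as the one this sketch would flag, is exactly the kind of evidence an assessment can bring to discussions with the governing body.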

Exhibit 5.11 Questions typically asked in assessing financial resources

• Is there adequate budgetary planning?

• Are budget plans timely?

• Are they updated as financial information comes in?

• Are financial information reports provided to senior managers, the board, and funders?

• Are members of the governing structure involved in financial planning and monitoring?

• Are technology and human resources adequate to ensure a good financial control and information system?

• Are the organization’s auditors satisfied with the controls over cash and assets used by the financial managers?

• Have the finances of previous grants been properly managed?

Program Management

A research institution’s ongoing programs of research are its central endeavour and, indeed, its main “product.” Research-supporting services and ongoing training are also vital programs within the organization. Program management is the ability to develop and administer these programs in a way that supports the mission.

Program management is vitally connected with all other areas of organizational capacity, for ultimately, the strength of the organization’s strategic leadership, human resources, other core resources, process management, and inter-institutional linkages affects the quality of the institution’s programs. Program performance is highly visible outside the organization and is often the major focus of organizational assessments.

Good program management sees to it that proper weight is given to each facet of mission fulfilment. For instance, if producing research and conducting ongoing training are both stated priorities, each should receive commensurate resources.

Exhibit 5.12 Components of program management

• Planning

• Implementing

• Monitoring

Research Program Planning

The planning function within research program management includes the following tasks:

• Identify and assess research needs – their relevance to national plans and priorities and any gaps in existing programs.

• Set goals and strategies; identify focus areas and activities.

• Develop plans that

- are consistent with needs, strategies, and areas of focus,

- address constraints and opportunities, and

- take into account technical and organizational capabilities.

• Account for technological, economic, social, and environmental aspects to ensure applicability of research outputs.

• Find/create opportunities for funding that is secure, diversified, and sustainable.

• Review, revise, and approve plans/budgets.

• Generate and review research proposals; submit to and negotiate with funding agencies, sponsors, clients.

• Assimilate reviewers’ comments; approve proposals, activities; allocate resources.

Research Program Implementation

Research program implementation entails some or all of the following tasks:

• Implement research objectives.

• Provide technical, administrative, and logistical support to projects.

• Identify and meet training needs.

• Disseminate/use research results, as appropriate.

• Maintain linkages with policy makers, research disseminators, and other users.

Research Program Monitoring and Evaluation

Monitoring and evaluating research programs are necessary elements in the planning cycle. These activities involve:

• Establishing performance measurement indicators and processes.

• Monitoring technical quality and scientific progress and providing feedback to researchers.

• Administrative and financial monitoring and reporting.

• Reviewing/revising procedures and resources; taking corrective measures or terminating.

• At project completion, evaluating:

- objectives – their overall relevance, adequacy, appropriateness, and degree of achievement

- cost effectiveness of activities

- quality of outputs produced (relevance, adequacy, and appropriateness vis-à-vis objectives)

- activities required to maximize utilization of outputs

- lessons learned

• Based on the assessment, identifying follow-up courses of action.

Research-Supporting Services

Research-supporting services in the organization which must be planned for, implemented, and monitored include:

• External linkages with relevant actors, decision-makers, and policy-makers

• Information and materials management

• Financial and administrative services

• Field-testing and disseminating research outputs (farm, community, and commercial trials, patents, marketing)

Process Management

Taking a vision and making it a reality through smooth-flowing, daily work in an organization is largely dependent on the ongoing “processes.” These are the internal management systems – the many mechanisms that guide interactions among people to ensure that ongoing work is accomplished rather than hindered or blocked. They include planning, communication, decision-making, problem-solving, monitoring, and evaluation. Every piece of work in an organization goes through these systems.

People interact to accomplish their work, and the way that organizational processes are set up dictates the tone of the interaction that takes place. If the processes of problem-solving, decision-making, and communication are all working, the outcome is that the organization is learning and accomplishing a great deal.

Process management takes place at every level of an organization. Boards of governors must know how to plan, problem-solve, and make timely decisions. If they are deficient in these areas, organizational direction is often hampered. These same processes are at work throughout the organization, albeit at more operational levels. For instance, project units and departments need to be able to set direction and create mechanisms to carry out activities in service of this direction.

Exhibit 5.13 Organizational processes

• Planning

• Problem-solving and Decision-making

• Communications

• Monitoring and Evaluation

Planning

Planning is the organizational process that helps predict how organization members will behave. The strategic plan sets the overall direction and, at operational levels, planning becomes the process by which strategy is translated into specific objectives and methodologies to accomplish goals. It entails optimally engaging resources of time and people (e.g. developing time-lines and schedules).

Policy and procedure development are special types of plans setting out courses of action for organization members. In research organizations, the degree to which plans, procedures, and policies are explicit varies considerably across the organization. Organization members need enough direction to know what to do to support the organization’s mission and goals. The planning of policies and procedures should provide this direction adequately at all levels of the organization: for projects, for departments, and for the organization as a whole.

Exhibit 5.14 Questions typically asked to assess planning resources

• Is adequate – or too much – planning and policy and procedure development occurring in the research institution? (at all levels, from the governing board to departments and individual projects)

• Is the process of planning contributing to the strategic direction of the organization?

• Do plans provide adequate direction to organizational members?

• Are plans, policies, and procedures generally followed? Why or why not?

Problem-Solving and Decision-Making

Plans, policies, and procedures set the course for organization members, but these systems do not cover the wide assortment of actions and behaviours that people are asked to take. This is particularly true in research institutes, where the performance of many activities relies on the creativity and personal judgment of researchers.

Problem-solving and decision-making are two interacting and mutually reinforcing processes that must function well at every level of an organization. These processes entail the ability to define important problems, gather the data to frame the issue, create a set of alternatives to deal with the problem, decide on solutions, create the conditions to carry out decisions, and monitor these decisions and the problem’s progression. Timeliness is a key element in this process: Organizations must be able to identify important issues and act in a timely fashion.

Exhibit 5.15 Questions typically asked to assess problem-solving and decision-making

• Is the implementation of work at various levels of the organization smooth-flowing or blocked? If blocked, are inadequate problem-solving and decision-making processes the causes?

• Are performance gaps and opportunities identified in sufficient time to resolve them to the benefit of the individuals involved and the productivity of the organization?

• Are there decision-making mechanisms in place?

• Are decisions made in a timely manner?

• Are adequate organizational problem-solving and decision-making skills found on the governing board and within the ranks of senior managers?

• Are problem-solving and decision-making adequate in departments and for important projects?

Communications

The exchange of information and the achievement of shared understanding among members of an organization are vital goals of the internal communications function. In research institutions, continuous communication, both formal and informal, about ongoing activities is a must.

Internal communications can serve as the glue holding an organization together; alternatively, they can break it apart – for both information and misinformation constantly flow in organizations. Accurate information is vital to keep employees informed as well as motivated: Aside from the specific information needed to carry out work, organization members also need information that makes them feel part of an important effort and a wider purpose. The organization must create mechanisms that help its members gain both types of information. Coordinating committees, newsletters, and meetings of various sorts all provide vehicles for transmitting correct messages. (Communications with external constituents will be dealt with below in the section on “Inter-Institutional Linkages.”)

Exhibit 5.16 Questions typically asked to assess communications

• What are the main vehicles of internal communications?

• Do people in the organization feel there is adequate, ongoing communication about the organization’s activities?

• Do staff members receive information related to the organization’s mission and about progress in fulfilling the mission?

• If information circulating in the organization about activities becomes distorted, are there corrective mechanisms to remedy this?

• Do people have easy access to those in the organization with whom they must deal? Can they communicate easily with them?

Monitoring and Evaluation

Monitoring and evaluation are the processes used by organizations to collect and use feedback. Theoretically, monitoring and evaluation are linked to planning and decision-making. In this context, feedback should permit comparisons of what has actually happened with what was planned and with the organization’s overall goals.

Monitoring and evaluation complement each other in several ways. Monitoring can help clarify program objectives, link activities and inputs to those objectives, set quantitative performance targets, collect data routinely, and feed results directly to those responsible. Evaluation looks at why and how results were or were not achieved, links specific activities to overall results, includes broader outcomes that are not readily quantifiable, explores unintended results, and provides generalizable lessons for adjustments to programs and policies to improve results.

Monitoring is the ongoing process of gathering, analyzing and reporting data on how an organization, department, or project is doing, for the purpose of managing and identifying problems at an early stage. Ideally, it is administratively light, part of the management process, and uses a small number of selected performance indicators. Designing a monitoring framework often helps to clarify objectives and program priorities. Data can be used to take corrective action to improve performance or to re-align activities to suit goals.

Monitoring is most often used in the financial arena to assess how well an organization is doing in relation to the planned budget. Increasingly, with the advent of better management information systems, organizations are creating monitoring processes to track progress in other crucial aspects of their work.
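
As a minimal illustration of what administratively light financial monitoring can look like in practice, the sketch below compares actual spending to the pro-rated plan and flags line items whose variance exceeds a tolerance. The budget lines, figures, and threshold are hypothetical assumptions for illustration only, not an IDRC instrument or a recommended format.

```python
# Illustrative sketch only: hypothetical budget lines and figures.
# Flags budget lines whose spending deviates from the pro-rated plan
# by more than a set tolerance, part-way through the year.

planned = {"salaries": 120_000, "field work": 40_000, "equipment": 25_000, "travel": 15_000}
actual_to_date = {"salaries": 58_000, "field work": 29_000, "equipment": 4_000, "travel": 11_000}

def flag_variances(planned, actual, period_elapsed=0.5, tolerance=0.15):
    """Return (line, % variance) pairs that are off the pro-rated plan by more than `tolerance`."""
    flags = []
    for line, budget in planned.items():
        expected = budget * period_elapsed          # pro-rated plan at this point in the year
        variance = (actual[line] - expected) / expected
        if abs(variance) > tolerance:
            flags.append((line, round(variance * 100, 1)))
    return flags

for line, pct in flag_variances(planned, actual_to_date):
    print(f"{line}: {pct:+.1f}% against pro-rated budget")
```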

Evaluation is typically a more comprehensive, summative process. It identifies factors that facilitated or hampered achievement of results and may trace the contribution of these results to broader objectives. Evaluation involves making judgments about the merit or worth of an activity at a given time, during or after implementation. It answers questions of relevance, effectiveness, and impact. For instance, should the research centre continue to support the women’s entrepreneurship centre and at what funding level? How can cooperation with the extension agency be improved? Is adequate attention being paid to gender dimensions in the research? What is the expected rate of return from this research? Was the research methodology/design appropriate to the research problem? Are people using the new technology; is it beneficial to the community?

Organizations may use their own staff as evaluators (internal assessment) or evaluators from outside the organization (external assessment). Both approaches can work, depending on the methods used to design and carry out the study and on the level of commitment in the organization to learning from the assessment exercise. The existence of regular formal or informal mechanisms for reviewing and using assessment findings is an indication of the extent to which they are valued in the organization.

Evaluations tend to require more resources and to be methodologically more complex than monitoring activities. Thus they occur less frequently and focus in greater depth on specific issues and activities. In the organizational assessment process, the important issues are (1) whether monitoring and evaluation are encouraged or discouraged, and (2) what use is made of the data these processes provide.

As organizations become more and more concerned about institutionalized learning – how individuals and the organization as a whole can improve and grow in knowledge – the processes of monitoring and evaluation become increasingly important. Attention is being paid to how data generated from these processes can be used for learning, improvement, and change. The assessment of monitoring and evaluation activities in an organization can be an important component of organizational learning.

Exhibit 5.17 Questions typically asked to assess monitoring and evaluation capacities

• Are there policies and procedures that guide evaluation and monitoring?

• Are resources assigned to monitoring and evaluation?

• Are monitoring and evaluation valued at all levels in the organization as ways to improve performance?

• How are data obtained and used to monitor and evaluate the organization’s units and activities?

• Are data gathered through organizational monitoring and evaluation activities utilized?

• Do evaluation plans or performance monitoring frameworks exist?

• Are evaluation results mentioned in strategy, program, policy and budgetary documents?

Inter-Institutional Linkages

For research organizations engaged in creating and utilizing knowledge, it is vital to cultivate contacts with other institutions, organizations, and groups of strategic importance to the work. These may be potential collaborators and collegial bodies, potential funders, or key constituents. Formal links with others can result in a healthy exchange of approaches and resources (including knowledge and expertise) and can serve as an important reality check.

Keeping up with advances in pertinent fields of research is of crucial importance to research organizations. This means having access to wide-ranging sources of up-to-date information within each discipline. New information and technology of importance in the field bear directly on the organization’s program management, from the choice of research topics to pursue to the types of training and services the institute will provide.

IDRC has been particularly strong in helping institutions capture information from beyond their boundaries. The Centre has vigorously supported libraries, information systems, and now, institutional networks and linkages to achieve this purpose and enable partner institutions to use scarce resources wisely.

Exhibit 5.18 Methods of linking institutions

• Networks

• Partnerships

• External Communications

The research endeavour requires external collaborative linkages of many types. Finding colleagues who share intellectual interests with whom to exchange and test ideas; linking with others able to fund research; sharing scarce resources (for example, libraries) with colleagues in other institutions; visiting other research institutions; and participating in external advisory committees for other organizations are all outreach activities.

Researchers have always found ways to communicate with their colleagues, whether in their own country or elsewhere in the world. Historically, contacts have occurred through attendance at conferences and through telephone and written communications, but these methods can be time-consuming and/or costly. Today, more accessible computerized networks are emerging to facilitate communication among investigators, enabling them to share data and experiences. Computer networks are indeed becoming a new organizational form. They are non-hierarchical, have no boundaries, and are easy to access. On the other hand, participating in these networks requires a commitment of resources.

IDRC has been a leader in supporting the networking of researchers in the developing world. Networking has reduced the isolation of researchers spread across wide geographical areas and has allowed researchers to stay in contact with colleagues around the world.

Networks

Networks are defined as groups of individuals or organizations that share a common interest and exchange information or resources in various forms on a regular or organized basis. Networks are effective ways to overcome the isolation of working in undeveloped research environments. Computerized information networks have become especially valuable facilitators of communication among investigators, enabling them to share data and experiences on-line. Indeed, in certain fields, participating in these networks is essential to keep up with fast-breaking developments; both participation and maintenance require a steady commitment of resources.

The advantages of scientific networks include the ability to pull together a critical mass of resources to address a particular research area; to serve as “institutional surrogates” for researchers in poor research environments; to coordinate the use of regional research resources; to transfer knowledge and expertise between countries, thereby broadening the national base of knowledge and experience; to reduce duplication of effort; to achieve economies of scale; and to allow contributions of greater impact through facilitating multi-country projects.

On the down side, networks can be costly to coordinate, the administrative tasks can be daunting, non-productive networking activities can proliferate, and networking activities sometimes compete with (rather than build on) national research priorities.

Since its inception, IDRC has funded a wide variety of networks and network-related activities. It has initiated networks itself, responded to requests from developing country institutions for network support, and joined with other donor agencies in creating and supporting research and research-supporting networks. These networks have enabled members to share information, germ plasm, technologies, or research methodologies, and to combine efforts in order to solve problems of mutual concern. IDRC has come to see networking as an indispensable tool in the efficient pursuit of scientific research and technological adaptation for development purposes. The Centre has found networks to be a highly adaptable mechanism for linking and meeting the needs of researchers in developing countries.

The form a network takes depends on its members’ needs, the resources and capacities available, and the kind of contacts established. Networks tend to evolve as participants learn more about each other, build relationships, and discover opportunities. In IDRC’s experience, networks move towards higher levels of integration and collaboration as they mature. The process reflects growth in research capacity, in mutual confidence, and in the flow of benefits from the network.

The literature abounds in advice on how to promote successful networks. Some important considerations:

Membership

Network members must share a common problem or objective and be able to jointly define a common approach or strategy for finding solutions. They should have long-term commitment as well as the technical competence to contribute to finding a solution. Weak members should be balanced by strong members; both formal and informal training can be provided through the network.

Direction

Participatory governance is the key to ensuring that the network continues to serve the shared interests of its participants. Leadership for the network can be provided by an advisory group or steering committee which defines the network’s research agenda, cooperatively plans how to use shared resources, and fosters a climate of trust among members.

Structure and Organization

A resilient, responsive structure is essential to facilitate communication, coordinate activities, manage resources, and ensure equal opportunity and the equitable distribution of benefits among network participants. Roles of network members and of structural units such as the coordinator, the steering committee or advisory group, project leaders, consultants, and network members must be well-defined and known to all. Roles must be able to evolve as the network matures.

Donor Support

Setting up and coordinating network activities require a long-term commitment of external resources to supplement the contributions of national participants. Research networks typically take two to three years to begin functioning effectively. Viability can require funding and effort for ten years or more.

Relationship to National Research Systems

While network structure and programming should reflect research priorities at the national level, it is unrealistic to expect national programs to re-allocate large amounts of their resources to fund network activities. Hence external support is necessary to augment the funding, resources, and staff that national research systems are able to commit to the network. Attention must be paid to the division of labour and responsibilities, and the flow of benefits, between international and national members.

Partnerships

Over the past decade, new alliances, consortia, and partnerships have formed in both the developing and developed world to enable like-minded organizations to come together and share resources to achieve common goals and objectives.

Partnerships can develop between funders and institutions, as often occurs when Northern NGOs want to support a particular type of work within a research institute. Or they can occur between two similar institutions, as found in the linkage arrangements between Northern and Southern institutes, or among Southern institutions. Partnerships can also be formed between an organization and its local stakeholder groups, as is often seen in health and agricultural research centres.

External Communications

Formal and informal communications with key external players and constituents are vital to help foster important linkages. A continuous flow of information to the outside world keeps those in the wider environment informed, be they the general public, identified constituents, or specialized technical audiences.

Exhibit 5.19 Questions typically asked about inter-institutional linkages

• To what extent is the research institution linked to the external world of colleagues, of clients, of markets (users)? Are these relationships active? Are they beneficial?

• Are existing networks supported financially? technically?

• Do existing networks effectively respond to the needs, shared interests, and capabilities of participants?

• Have networks had an effect on the way the organization functions? Why? Why not?

• Are there fruitful, ongoing partnerships with external organizations that bring new ideas and/or resources to the research institution?

• Is the research institution communicating information about its work to external stakeholders, including the general public?

In research, there is a continual need to communicate results – in the hard sciences, to remain credible in the field and competitive for funding, and in the social sciences, to contribute up-to-date information to the process of policy formulation.

External communications can take many forms. Indeed, they consist of any appropriate means to converse with the outside world. Besides journal articles, proven ways of communicating the organization’s work to the wider public are newsletters and promotional materials crafted to create awareness and interest in the organization’s work. Research reports and annual reports of activities serve to raise the organization’s profile and, by keeping important stakeholders informed, can play an important role in linking the organization to the wider community.

Chapter 6
Organizational Performance

Introduction

In our framework for profiling an organization, overall performance is seen as a function of the interplay of the organization’s unique motivation, its organizational capacity, and forces in the external environment.

Over the past 30 years there have been many attempts to define performance generally and to apply performance concepts to various organizational types. A number of ideas emerge from the organizational performance literature:

• In all organizations, performance relates to organizational purpose.

• Performance also needs to reflect achievements relative to the resources used by the organization.

• Performance must be considered within the environment in which the institution does its work.

The first component reflects the organization’s mission, the second component reflects how well the organization manages its resources, and the third, its adaptability within the context of external forces.

Within research institutions, the quantity and quality of research produced is fundamental to the achievement of the mission. But a research institution’s performance must also encompass aspects of organizational functioning that are the necessary underlying conditions for researchers to be productive.

To apply traditional assessment terminology to research organizations, organizational performance must integrate the concepts of “effectiveness” and “efficiency.” That is, the organization must be able to meet its goals (effectiveness) and to do so with an acceptable outlay of resources (efficiency). Vitally important as well, particularly to IDRC and other Northern granting agencies, is the Southern organization’s sustainability over the long term (ongoing relevance). The organization must be able to develop and implement strategies which will ensure research performance over extended periods of time. To do so, its activities and services must remain realistic and connected to stakeholder needs. When an organization’s services and activities are not relevant, or are too far-reaching and costly, organizational survival is at risk.

In summary, the performance of institutions can be conceived as falling within three broad areas: performance in activities that support the mission (effectiveness), performance in relation to the resources available (efficiency), and performance in relation to long term viability or sustainability (ongoing relevance).

Performance in Moving Towards Mission (Effectiveness)

A research organization’s performance is made visible through the totality of the research (and sometimes training) activities it generates in pursuit of the mission. These outputs and effects are the most discernible aspects of organizational performance. IDRC and others who support the endeavours of institutions are naturally interested in these outputs, which are seen as the tangible results of investment dollars.

Ideas associated with the performance of research organizations in fulfilment of their missions vary considerably. Each interest group or stakeholder may have a totally different conception of what counts. For instance, scholarly researchers might define performance in terms of the number of refereed articles, whereas senior administrators might define performance as the quantity of financial resources brought into the research centre through grants. Donors might define performance in terms of the beneficial impact of findings or activities on indigenous groups.

Researchers themselves seldom speak with one voice on such matters. Is applied research as valued as theoretical research? Are agronomic practices adapted to local farmers’ needs more or less valued than high-yield export technologies? Are publications valuable in themselves, or should they only be considered in relation to citation indices by other researchers?

Although few organizations have performance data readily available about their research and training programs and services, it is not difficult to develop mechanisms and approaches for gathering performance data about these outputs. The information used by organizations can take the form of input data (e.g. the number of people or students served), process data (e.g. the number of research projects in progress), output data (e.g. the number of articles accepted for publication), or impact data (e.g. the number of patients impacted by the application of a particular medical technique).
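
A minimal sketch of how such information might be organized by data type is given below; the indicators and figures are hypothetical, chosen only to illustrate the four categories, and do not represent a prescribed IDRC format.

```python
# Illustrative only: hypothetical indicators grouped by the four data types
# named above (input, process, output, impact).

performance_data = {
    "input":   {"students served": 140, "research staff": 24},
    "process": {"research projects in progress": 9},
    "output":  {"articles accepted for publication": 31},
    "impact":  {"patients reached by the new technique": 1_200},
}

for data_type, indicators in performance_data.items():
    for name, value in indicators.items():
        print(f"{data_type:>7}: {name} = {value}")
```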

While it is relatively easy to develop an information system to help institutions assess their performance, it is far more difficult to obtain consensus on the merits of particular performance indicators. It is more difficult yet to arrive at value judgments regarding acceptable levels of quantity and quality for each performance indicator. At issue is how the specific institution defines “good” performance and, perhaps most fundamental, whether good performance moves the organization towards attaining its mission.

Exhibit 6.1 Typical indicators of performance in research institutions – effectiveness

Effectiveness

• number of publications accepted by refereed journals

• number of citations

• number of patents and other intellectual property

• software developed

• collaborative links with other researchers

• external funds/contracts received

• number of requests for information/participation related to national or regional development initiatives

• interest/recognition of research results by other institutions

• demands for input to government policies

• number of people served (for action research)

• health, educational benefits

• peer ratings of relevance of research

• conferences attended in which papers/posters were presented

• client satisfaction

• social/economic effects (as per mandate)

• number of students supervised

• number of trainee researchers supervised

• origin of students and trainees (country, institution)

• links with higher education institutions

• number of publications in which students are co-authors

• students’/trainees’ assessments of training environment

Performance in Relation to Efficiency

In today’s economy, research institutions must not only be able to provide exceptional research and teaching services, but they must also be able to provide them within an appropriate cost structure. Tight times have meant that performance is increasingly judged by the efficiency of the organization, e.g. the cost per service, the number of outputs per researcher, publications per person per year, average value of grants per person. Whatever the overall size of the unit, performing organizations are viewed as those which provide good value for the dollars expended.

Exhibit 6.2 Typical indicators of performance in research institutions – efficiency

Efficiency

• ratios of internal and external funding

• comparative organizational costs for research, training, and other services

• overhead/program cost ratio

• number of outputs per researcher (publications per year, average value of grants per person)

• costs per client served

• costs per publication

• costs vs. benefits

• publication rates per staff
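
As a minimal sketch of how a few of the ratios listed in Exhibit 6.2 might be computed once the underlying figures have been assembled, the example below uses entirely hypothetical numbers; the field names and values are assumptions for illustration, not IDRC benchmarks or a prescribed instrument.

```python
# Illustrative sketch with hypothetical figures; the field names and values
# are assumptions for illustration only.

centre = {
    "researchers": 24,
    "publications_per_year": 31,
    "total_grant_value": 480_000,     # external funds received
    "internal_funding": 300_000,
    "overhead_costs": 150_000,
    "program_costs": 600_000,
}

indicators = {
    "publications per researcher": centre["publications_per_year"] / centre["researchers"],
    "average grant value per researcher": centre["total_grant_value"] / centre["researchers"],
    "external/internal funding ratio": centre["total_grant_value"] / centre["internal_funding"],
    "overhead/program cost ratio": centre["overhead_costs"] / centre["program_costs"],
    "cost per publication": (centre["overhead_costs"] + centre["program_costs"])
                            / centre["publications_per_year"],
}

for name, value in indicators.items():
    print(f"{name}: {value:,.2f}")
```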

Performance in Relation to Ongoing Relevance

Institutions in any society take time to evolve and develop, but over time they must institutionalize in ways that consolidate their strengths. While all organizations ultimately face internal and external crises, the survivors are those that succeed in adapting to changing contexts and capacities. Partly because of their relatively short organizational histories, and because of widely differing environmental contexts, research organizations in the South have varied dramatically in their ability to become institutionalized in society. Moreover, no organization is protected from the vagaries of being out of date, irrelevant, and subject to closure. In this volatile context, organizational performance relates to the ability of the organization to keep its mission, goals, programs, and activities aligned with its key stakeholders and constituents. Issues of organizational survival are broad in scope, ranging from the reputation of the organization in the wider community to the effects of the organization’s programs, services, and their management on staff morale.

Exhibit 6.3 Typical indicators of performance in research institutions – ongoing relevance

Ongoing Relevance

• relevance of work to national development

• relevance of work to field

• relevance of services to users

• support earmarked for professional development

• number of old and new financial contributors (risk of discontinuance, leverage of funding)

• organizational innovation and adaptiveness (appropriate changes to needs, methodologies)

• institutional reputation among key stakeholders

• number of new services and programs

• changes in services and programs related to changing client systems

Measurement

Four major questions permeate the performance literature and should be considered by IDRC when formulating an approach to evaluating its partner institutions:

1. What areas of performance should be measured?

In the framework implicit in this section, performance of an organization should be assessed in three domains: effectiveness, efficiency, and adaptability (ongoing relevance). Identifying the performance areas in all three domains that are key in a particular organization is a crucial step for both IDRC and the partner institution at the outset of the assessment process.

Regarding the effectiveness of research output, organizational goals and priorities provide the starting point for performance measurement. Performance indicators can and should include both quantitative and qualitative measures. A matter of some concern is that certain institutions have exclusively adopted numerical measures (e.g. the number of publications in peer-reviewed journals, the number of citations per author, the amount of money received by the organization for contract research, the number of patents earned, the number of students receiving graduate and post-graduate training, and so on). In IDRC’s view, qualitative judgments by stakeholders on the impact of research production are equally vital data.

IDRC also needs data on the management of institutions – the efficiency domain. There are many approaches to such analysis, ranging from financial audits to surveys of organizational culture.

As well, there should be some analysis of the organization’s ability to adapt to changing conditions. Organizational priorities, either written or inferred, transcend the individual programs and services being provided and include broad issues vital to organizational survival. The extent to which issues on this dimension will be measured is a matter for negotiation.

2. How should performance be measured?

Once “what to measure” has been decided, how to conduct the actual measurement is the next consideration. Which components within various performance areas should be measured, what kinds of data are appropriate to collect, and how should this be done? The consensus of the international evaluation community is that multiple sources of information, including a mixture of qualitative and quantitative data, should be employed in order to obtain an adequate and valid understanding of performance.

Some areas are more difficult than others to measure. For instance, while productivity is relatively simple to assess using numerical data, more abstract performance concepts such as creativity or adaptiveness elude clear-cut measurement. (Their measurement is not impossible, however, as observable qualities can be delineated for both.) The costs of various measurement methodologies are another crucial consideration. Performance measures are politically sensitive and must be open to careful scrutiny. Surveys can be laborious and expensive to construct and administer.

3. When should measurement be conducted?

Timing is an important consideration in evaluating research institutions. The conduct of research is, by nature, a slow-moving, laborious process. But practical considerations often dictate short-, medium-, and long-term performance measurement strategies. These needs do not alter the fundamental character of the research endeavour, however, and this should always be taken into consideration in evaluating performance.

Historical trends, be they within the institution as a whole, in the evolution of the research group, or in the career of the individual researcher, all influence research output. When measuring the present (i.e. recent) performance of research centres, it is important to consider the historical context of performance for each of these entities. A sketch of the evolutionary progress of the organization or of individual groups within the organization or of the career of a particular researcher can be revealing. Considerations such as whether any of these is in nascent stages, or whether significant milestones have occurred in the organization or in the field, have a bearing on output; sensitivity to such contextual issues will enable more thoughtful interpretation of performance.

On a more practical level, the timing of the organizational assessment process should respect built-in organizational cycles. Assessments should not be conducted at a time of the year when grant applications are due, when staff are unavailable due to vacation season or attendance at conferences, and so forth.

4. What standards ought to apply?

Once data are obtained, issues of performance standards arise, namely, what constitutes “good” or “acceptable” research and training activities? For a fair assessment of specific research organizations, the level of acceptability for each performance indicator should be negotiated on a case-by-case basis between IDRC and the partner institutions. Abstract norms arrived at in isolation from real environmental and historical events are inappropriate to apply to any research institute.

Over time, it might be useful for IDRC and other international granting agencies to develop a data base of normative information about the performance indicators and standards that a wide variety of research centres worldwide have adopted. For example, what are reasonable expectations regarding the number and type of publications? For non-core project funding? For training researchers? Such cumulative, normative data would perhaps help future evaluators (as well as those within the research institution) make judgments. Unfortunately, the state of our knowledge presently causes us to rely on expert judgment as the primary tool for setting standards.

Conducting an Organizational Assessment

There are various reasons for conducting organizational assessments. They are conducted to verify that the organization has fulfilled its terms of agreement, or to review a request for institutional funding. They might be conducted when the nature of a funding agreement changes to multi-year or core funding, or to review organizational changes that might affect eligibility for funding. They are often conducted simply to satisfy stipulations that an evaluation must take place.

Although the steps taken in conducting an organizational assessment may vary from case to case, there is a general sequence of activities that take place. The following is a list of the basic steps in an organizational assessment. These steps may be carried out by a team or combination of individuals that includes evaluators, the donor, the organizational members, and various stakeholders.

Exhibit 6.4 Basic steps in organizational assessment

1. Determine purposes of the assessment.

2. Develop a constructive working relationship between the research institution and the assessment team for the duration of the assessment.

3. Identify main issues for the assessment.

4. Identify main questions and sub-questions for the assessment.

5. Determine roles and responsibilities for evaluators, organizational members, IDRC personnel or other donor agency, and other stakeholders.

6. Develop and write terms of reference.

7. Prepare costing for the assessment.

8. Identify and select evaluators.

9. Develop workplans.

10. Implement workplan.

11. Monitor quality control measures to ensure collection of reliable and valid data.

12. Provide ongoing feedback.

13. Draft the report.

14. Identify general lessons learned.

15. Communicate conclusions (i.e. debriefing sessions, written reports, workshops).

Sources of Data

Much pertinent information may already exist in the institution in one form or another, and all potential sources should be mined. Some suggestions of where to look:

• organizational documents: financial statements, annual reports, strategy documents, and so on

• bibliographic citation analyses

• reviews of research by scholarly groups

• interviews with key informants who affect organizational activities

• surveys appraising the organization’s reputation

• CVs of staff

Performance as It Relates to Capacity

Performance and capacity are interrelated concepts. Organizational performance arises from the use of capacity. Assessing performance also leads us to areas where capacity needs building (the subject of Chapter 5).

It is important that organizational performance be viewed as more than the sum of organizational products. Performance should have a synergetic quality. Institutions ideally give back to society outputs whose value is greater than the total resources invested; they are organized to realize these gains, and they should be accountable for providing added value to the investments made in them.

Assessment of performance occurs informally, on an ongoing basis, whether or not the organization engages formally in performance assessments. Such assessments can be driven by various stakeholders and clients. For instance, governments may decide to increase or decrease funds to an institution in part because of perceptions of the organization’s existing or potential performance. Clients can decide to use or not to use the services of a research institution because of their own assessments of its performance. Stakeholders often either implicitly or explicitly link funding to performance and perceived capacity.

By making the informal formal, IDRC, in addition to guiding its own funding strategies, is supporting the development of more transparent and open institutions. By approaching an assessment as a learning process conducted in partnership with its funded institutions, IDRC is fostering their adaptability and sustainable development.

Conclusion

This book has outlined a framework with which to assess the capacity and performance of an organization within the context of the organization’s motivation and its unique environment. Experience with a wide range of research institutions worldwide suggests that understanding the environmental context is fundamental to a sympathetic analysis of how an organization operates. The environment may present difficult constraints, yet the organization may still be doing important and relevant work. Environmental analysis leads to a determination of capacity and performance relative to the context.

The organization’s motivation relates in many ways to the environment, but supersedes it in the sense that many successful organizations rise above the constraints of their context. Through leadership and collective vision, such organizations are able to gather resources and produce quality research despite a non-supportive context. Such organizations are often nourished by external funding, which makes analysis and understanding of the context and motivation essential if IDRC and other donors are to invest strategically.

Because performance is relative to an organization’s basic capacity, the analysis of capacity sets the stage for understanding organizational performance. Capacity is a quantitative notion, whereas performance is both absolute and relative. Performance needs to be assessed in qualitative terms, quantitative terms, and in terms that relate performance to basic organizational capacity.

Given sufficient time and resources, external experts can do a good job of assessing organizations. Such assessments might serve IDRC’s and other donors’ short-term needs, but the process can be far stronger when the organization becomes a partner in the assessment and participates in the analysis. Ideally, the process contributes to the development of a learning organization capable of improving its own performance through critical self-analysis.

For use in field work

IDRC

Short Guide for Institutional Assessment

This Guide is intended to provide a framework for rapid institutional assessment during brief (one to two day) visits to an institution. For in-depth assessments, more comprehensive instruments are available from IDRC.

The Guide provides some key concepts for you to reflect on as you analyze the institution’s environment, motivation, capacity, and performance. Use these concepts in writing your institutional assessment report.

Data sources

Think about your data needs as your visit progresses. In the assessment process, attempt to:

Meet a suitable spectrum of people and record their names

• Administrators

• Researchers/teachers/support staff

• Clients/stakeholders/institutional representatives

• Government officials

Obtain available key documents

• Institution handbook/calendar/prospectus

• Mission statement

• Annual Report/financial reports

• Program descriptions

Observe relevant facilities

• Buildings/grounds

• Laboratories

• Teaching areas

• Program or project sites

Observe the dynamics among people

• Nature of meetings with you; who attends; who presides

• Processes for teaching and learning

• Nature of dealings with institution’s clients

• How research is conducted; dominant paradigm

The Institution’s Environment

Every institution is affected by its external environment: its region, country, part of the world. Six of the major influences are noted below. Characterize the institution’s environment using the following guidelines.

Describe and assess the administrative/legal environment within which the institution operates:

• policy

• legislative

• regulatory

• legal

Describe and assess the external political environment within which the institution operates:

• form of government

• distribution of power

• access to government resources

• allocation decisions

Describe and assess the technological environment within which the institution operates:

• infrastructure

• utilities

• technological literacy

• information technology

• links to national issues

Describe and assess the economic environment within which the institution operates:

• GDP, inflation, growth, debt

• IMF conditionality

• wage/price structure

• community economics

• hard currency access

• gov’t. funding distribution

Describe and assess the social/cultural environment within which the institution operates:

• norms

• values

• attitudes in society

• literacy

Describe and assess the major stakeholders of the institution:

• clients

• donors

• beneficiaries

• volunteers

• government bodies

• other institutions

What is the impact of these environmental forces on the mission, performance and capacity of the institution? In what ways is the environment friendly or hostile? What are the major opportunities and risks resulting from the environment?

Institutional Motivation

No two research institutions are alike. Each has a distinct history, mission, culture and incentive/reward system, which are all powerful motivators of behaviour. Characterize the level of institutional motivation as determined by the following components.

Analyze the institution’s history

• Date and process of founding

• Major historical achievements/milestones

• Major struggles

• Changes in size, growth, programs, leadership, structure

• Associations with IDRC, with other donors

Understand the institution’s mission

• Evolution of the mission statement

• Role of mission in shaping organization, giving it purpose, giving it direction

• Institutional goals

• Types of research/research products that are valued

Understand the institution’s culture

• Attitudes about work and working

• Attitudes about colleagues, clients, other stakeholders

• Attitudes towards women, gender issues

• Values, beliefs, customs, traditions affecting mission fulfilment

• Underlying organizational norms that guide operations

Understand the institution’s incentive/reward structure

• Key factors, values, motivations to promote productivity

• Intellectual freedom, stimulation, autonomy

• Remuneration, grant access, opportunity for advancement

• Peer recognition, prestige

How does motivation affect institutional performance? In what ways do the history, mission, culture and incentive system positively and negatively influence the institution?

Institutional Capacity

Institutional capacity underlies an institution’s performance. Capacity is understood in terms of the six interrelated areas detailed below. Characterize the institutional capacity using the following conceptual guidelines.

Assess the strengths & weaknesses of strategic leadership in the institution:

• Leadership (managing culture, setting direction, supporting resource development, ensuring tasks are done)

• Strategic planning (scanning environment, developing tactics to attain objectives, goals, mission)

• Governance (legal framework, decision-making process, methods for setting direction, external links)

• Structure (roles and responsibilities, coordinating systems, authority systems, accountability systems)

• Niche management (area of expertise, uniqueness, recognition of uniqueness)

Assess the strengths & weaknesses of the following systems, processes or dimensions of human resources (managerial, research, teaching, technical/support staff):

• Human resource planning (recruiting, selecting, orientation)

• Training and professional development (performance management, monitoring and evaluation)

• Career management (record-keeping, merit)

• Compensation (wage rates, incentives)

• Equity (gender, minority issues)

Assess the strengths & weaknesses of other core resources:

• Infrastructure (facilities, equipment, maintenance systems, utilities)

• Technology (information, communication technologies, levels of technology needed/acquired to perform work)

• Finance (planning, managing, and monitoring cash flow and budget; ensuring an accountable and auditable financial system)

Assess the strengths & weaknesses of program management of research, teaching and service programs in the institution:

• Planning (identifying needs, setting objectives, costing alternatives and developing evaluation systems)

• Implementing (adherence to schedules, coordination of activities)

• Monitoring (systems for evaluating progress, communicating feedback to stakeholders)

Assess the strengths & weaknesses of process management in the institution:

• Planning (identifying needs, looking at alternatives, setting objectives and priorities, costing activities and developing evaluation systems)

• Problem-solving and decision-making (defining problems, gathering data, creating alternatives, deciding on solutions, monitoring decisions)

• Communications (exchanging information, achieving shared understanding among organizational members)

• Monitoring and evaluation (generating data, tracking progress, making judgments about performance, utilizing information, changing and improving organization, program, etc)

Assess the strengths & weaknesses of inter-institutional linkages:

• Networks (type, nature, number; utility, recruitment of appropriate members, coordination, participatory governance, management structure, technology, donor support, participation of national research systems, cost-benefit, sustainability)

• Partnerships (type, nature, number; utilization, cost-benefit, needs met, sustainability)

• External communications (type, nature, number; utilization, frequency, cost-benefit, needs met)

How does institutional capacity affect institutional performance? What are the overall strengths and weaknesses of the institutional capacity?

Institutional Performance

Every institution should attempt to meet its goals with an acceptable outlay of resources while ensuring sustainability over the long term. “Good performance” means the work is done effectively and efficiently and remains relevant to stakeholders. Characterize the institutional performance by answering the following questions:

How effective is the institution in moving towards fulfilment of its mission?

• Research performance (major achievements, general level of research productivity defined according to the institution’s mission and values, utilization of results)

• Teaching performance (training researchers, serving clients’ learning needs)

• Service performance (development of community activities, support to the research community, technology transfer)

• Policy influence

How efficiently are resources used?

• Stretching the financial allocations

• Staff productivity (turnover, absenteeism, research outputs)

• Clients (program completion rates, long term association with institution)

• Administrative system efficiency

Has the institution kept its relevance over time?

• program revisions

• adaptation of mission

• meeting stakeholders’ needs

• adapting to environment

• reputation

• sustainability over time

• entrepreneurship

How well is the institution performing?

Prepared for IDRC by Universalia Management Group

Bibliography

Alvarez, B. (1994). The success of researchers and the capacity of research institutions in developing countries, Ottawa, ON: International Development Research Centre.

Alvarez, B., & Gomez, H. (1994). Laying the foundation: The institutions of knowledge in developing countries. Ottawa, ON: International Development Research Centre.

Amoussou, J. (1991). Appui institutionnel en matière de formation et de recherche. Dakar: Centre de recherches pour le développement international.

Anderson, G. (1990). Fundamentals of Educational Research. New York: Falmer Press.

Anderson, G., & Lauwerys, J. (1978). Institutional Leadership for Educational Reform: The Atlantic Institute of Education. Paris: UNESCO.

Anderson, R.S. (1993). Improving research effectiveness in IDRC. Ottawa, ON: International Development Research Centre.

Ballesteros, J. (1991). La internacionalización de la economía. Bogotá: Universidad Javeriana.

Barbe, J.P., & Pearce, D.W. (1991). Valuing the environment: Six case studies. London: Earthscan Publications. Paris: OECD

Barnett, S., & Engel, N. (1982). Effective institution-building: A guide for project designers and project managers. Based on lessons learned from the AID portfolio (USAID Program Evaluation Discussion Paper No. 11). Washington, DC: USAID.

Becker, S.W. (1975). The Efficient Organization. New York: Elsevier.

Berlage, L., & Stokke, O. (1992). Evaluating development assistance: Approaches and methods (European Association of Development Research and Training Institutes, Norsk Utenrikspolitisk Institutt, EADI book series 14, Norwegian foreign policy studies, No. 77). London: Frank Cass.

Bernard, A.K. (1991). Consortium Graduate School of the Social Sciences: The process of building an institution. Ottawa, ON: International Development Research Centre.

Bezanson, K.A. (1992, October). Capacity building for change. Proceedings of ECO-ED Conference. Ottawa, ON: International Development Research Centre.

Bhagavan, M.R. (1992). SAREC model: Institutional cooperation and the strengthening of national research capacity in developing countries (Report No. 1/1992). Stockholm: Swedish Agency for Research Cooperation with Developing Countries.

Black, R. (Ed). (1980, April). Mechanisms for strengthening applied research institutes in developing countries (Conference Report). Denver: Denver Research Institute.

Blase, M. (1986). Institution Building: A source book. Columbia, MO: University of Missouri Press.

Bracegirdle, P. (1992). The social relations of education in UNO’s Nicaragua (1990-1991). Canadian and International Education, 21(2), 23-39.

Brinkerhoff, D.W. (1991). Improving development program performance: Guidelines for managers. Boulder, CO: Lynne Rienner.

Broadbent, K.P. (1985, October). Institution building with development aid: The role of IDRC. Proceedings of the International Association of Marine Science Libraries and Information Centres Eleventh Annual Meeting, Williamsburg, VA. Ottawa, ON: International Development Research Centre.

Brunner, J.J. (1991). Investing in knowledge: Strengthening the foundation for research in Latin America. Ottawa, ON: International Development Research Centre.

Burke, W.W., & Litwin, G.H. (1992). A causal model of organizational performance and change. Journal of Management, 18(3), 523-545.

Campbell J.P., and others. (1970). Managerial behaviour, performance and effectiveness. New York: McGraw-Hill.

Carpenter, Vivian L. (1990, Fall). Improving accountability: Evaluating the performance of public health agencies. Government Accountants Journal, 39(3), 43-54.

Canadian International Development Agency. (1990). Standards for Bilateral Project Evaluations. Ottawa, ON: Author

Clark, N., & McCaffery, J. (1979). Demystifying evaluation: Training program staff in assessment of community based programs through a field operational seminar. New York, NY: World Education.

Deming, W. Edwards. (1982). Quality, productivity, and competitive position. Boston: MIT Center for Advanced Engineering Study.

Drucker, P. (1994, November). The age of social transformation. The Atlantic Monthly, 274 (5), 54-80.

Drucker, P. (1991, November/December) The new productivity challenge. Harvard Business Review, 69 (6), 72.

Edgcomb, E., & Crawley, J. (1993). An institutional guide for enterprise development organizations. New York: The Small Enterprise Education and Promotion Network.

Esman, M. J. (1972). Administration and development in Malaysia: Institution building and reform in a plural society. Ithaca, NY: Cornell University Press.

Fetterman, D.M. (1984). Ethnography in educational evaluation. Beverly Hills, CA: Sage Publications.

Fisher, A.H., Jr. (1991). A quick way to tell if your organization is meeting its goals. Nonprofit World (NWR), 9 (3), 25-28.

Fowler, A. (1992). Prioritizing institutional development: A new role for NGO centres for study and development. London: Sustainable Agriculture Programme, IIED.

Gibbons, M., & Georghiou, L. (1987). Evaluation de la recherche. [Evaluation of research: a selection of current practices]. Paris: OECD.

Clyde, H.R., & Virulh, S. (1985). Institutional links: An example in science and technology. Unpublished working papers. University of Ottawa, Institute for International Cooperation no. 852, Ottawa, ON.

Groth, R. H., Brown, K.G., & Leslie, L.L. (1992). Research activity in major research universities: An alternative ranking system. SRA Journal, 23 (4), 23-33.

Guba, E.G., & Lincoln, Y.S. (1989). Fourth generation evaluation. Newbury Park, CA: Sage Publications.

Harvey, J. W., & McCrohan, K.F. (1988). Voluntary compliance and the effectiveness of public and non-profit institutions: American philanthropy and taxation. Journal of Economic Psychology, 9 (3), 369-386.

Head, I.L. (1990, June). Notes for an address by I.L. Head, President IDRC, to the Nigerian Institute of International Affairs. Lagos, Nigeria. Ottawa, ON: International Development Research Centre.

Higgins, J. C. (1989). Performance measurement in universities. European Journal of Operational Research (EJO), 38 (3), 358-368.

Hudson, J., Mayne, J., & Thomlison, R. (1992). Action oriented evaluation in organizations: Canadian practices. Toronto, ON: Wall & Emerson.

IDRC. (1987). Approaches to strengthening research institutions (Discussion paper). Ottawa, ON: International Development Research Centre, Office of Planning and Evaluation.

International Service for National Agricultural Research, Den Haag. (1986). Review of the Nigerian Institute for Oil Palm Research (Report 37 to the Ministry of Science and Technology of the Federal Republic of Nigeria). The Hague: ISNAR.

Israel, A. (1987). Institutional development: Incentives to performance. Baltimore, MD: Johns Hopkins University Press.

Johnson, J. G. (1992). Developing a hierarchy of measurements. Tapping the Network Journal (TNJ), 3 (2), 18-20.

Kaydos, W. (1991). Measuring, managing, and maximizing performance. Cambridge, MA: Productivity Press.

Kiggundu, M.N. (1989). Managing organizations in developing countries. West Hartford, CT: Kumarian Press.

Kiggundu, M.N. (1991). Towards a strategic approach to institutional support for research institutions in developing countries. Unpublished manuscript. School of Business, Carleton University, Ottawa, ON.

Kiggundu, M.N. (1994). Managing research institutions in developing countries: test of a model. Public Administration and Development,14, 201-222.

Kimberly, J.R., Miles, R.H., & Associates. (1980). The organizational life cycle: Issues in the creation, transformation, and decline of organizations. San Francisco: Jossey-Bass.

Kretlow, W. J., & Holland, W.E. (1988). Implementing management by objectives in research administration. Journal of the Society of Research Administrators (SRA), 20 (1),135-141.

Kumar, S., Jain, A., & Bruce, J. (1989). Assessing the quality of family planning services in developing countries (Working papers, Programs Division, No. 2). New York: Population Council.

Likert, R. (1958). Measuring organizational performance. Harvard Business Review, 36 (2), 41-51.

Lockheed, M.E. (1992). World Bank support for capacity building: the challenge of educational assessment. Washington, DC: World Bank.

Lusthaus, C. (1992). Administration of adult education. Unpublished manuscript. McGill University, Faculty of Education, Montreal, QC.

Lusthaus, C., & Anderson, G. (1985). Lessons learned about the support of third world institutions (CIDA Policy Paper). Ottawa, ON: Canadian International Development Agency.

Lusthaus, C., Anderson, G., & Murphy, E. (1993). A framework for conducting assessments of IDRC-funded research institutions. Montreal, QC: Universalia Management Group.

MacAulay, J.B. (1985). Indicators of excellence in Canadian science. Ottawa, ON: Statistics Canada.

Mintzberg, H., & Quinn, J.B. (1991). The strategy process: Concepts, contexts, cases. Englewood Cliffs, NJ: Prentice Hall.

Morales-Gómez, D.A., & Shaeffer, S. (1985). Building individual and institutional capacity in educational research and development. Ottawa, ON: International Development Research Centre.

Morales-Gómez, D.A. (1990). Issues on capacity building in education research: Experiences from the perspective of IDRC. Ottawa, ON: International Development Research Centre, Social Sciences Division.

Morgan, P. (1993). Testing the capacity development framework – the Pakistan case (Paper prepared for CIDA Working Group on Capacity Development). Hull, QC: Canadian International Development Agency.

Murphy, J. (1983). Strengthening the agricultural research capacity of LDCs: Lessons from AID experience (Report 10, USAID Program Evaluation). Washington, DC: USAID.

Murphy, J. (1993). Monitoring and evaluation in agricultural research: Concepts, organization, and methods. ISNAR Informal Report. The Hague: International Service for National Agricultural Research.

Myers, R.J., Ufford, P., & Magill, M. (1988). On-site analysis: A practical approach to organizational change. Etobicoke, ON: OSCA Ltd.

Nyiira, Z.M. (1991). Research resources in national research institutions in Eastern and Southern Africa. Ottawa, ON: International Development Research Centre.

Ravallion, M. (1992). Poverty comparisons: A guide to concepts and methods (World Bank, LSMS working paper no. 88). Washington, DC: World Bank.

Rawkins, P. (1994). An institutional analysis of CIDA. In C. Pratt (Ed.), Canadian development assistance policies: An appraisal. Montreal: McGill-Queen’s University Press.

Schalkwyk, J. (1993). Capacity development: Women and sustainable development (Paper prepared for CIDA Working Group on Capacity Development). Montreal, QC: Author.

Senge, P.M. (1990). The leader’s new work: Building learning organizations. Sloan Management Review, 32 (1), 7-23.

Simpson, D., & Sissons, C. (1989). Entrepreneurs in education: Canada’s response to the international human resource development challenge. Ottawa, ON: International Development Research Centre.

Smutylo, S., & Koala, S. (1993). Research networks: Evolution and evaluation from a donor’s perspective. In C. Alders, B. Haverkort, & L. van Veldhuizen (Eds.), Linking with farmers: Networking for low-external-input and sustainable agriculture (pp. 231-247). London: Intermediate Technology Publications.

Treasury Board of Canada. (1991). Principles for the evaluation of programs by federal departments and agencies. Ottawa, ON: Comptroller General.

Universalia. (1985). Manager’s guide to institutional evaluations. Hull, QC: Canadian International Development Agency.

Universalia. (1994). Evaluation of SEAMEO-Canada program of cooperation in human resource development, phase II (Report to the Canadian International Development Agency). Montreal, QC: Universalia Management Group.

Universalia. (1994). CUSO organizational evaluation (Report to the Canadian International Development Agency). Montreal, QC: Universalia Management Group.

Uphoff, N. (1986). Local institutional development: An analytical sourcebook with cases. West Hartford, CT: Kumarian Press.

Uphoff, N. (1992, December). Meta-methodological approaches to institutional development. Paper presented at International Symposium on “Sharing Experiences of Technical Cooperation: Institutional Development in Asia”, Tokyo, Japan.

USAID. (1982). Turning private voluntary organizations into development agencies: Questions for evaluation (AID Program Evaluation Discussion Paper No. 12). Washington, DC: Author.

Index

Action Plans 13

Adaptability xiv, 51, 54

Administrative/Legal Environment 15, 19

Capacity 11, 26, 29, 30, 32-39, 41, 45

-building 1

-gaps 3

and environment 21

and performance 2, 57

strengthening xv

Communications 43, 48

Core resources 36-38

Costs of Evaluation 9

Culture 24-26

Data

collection and analysis 12

institutional evaluation data sources 57

interpretation 8, 9

qualitative indicators 8

quantitative indicators 8

sources 8

Data gathering

adaptability/relevance 54

administrative/legal environment 19

culture/organizational incentives 26

effectiveness 53

efficiency 53

environment 19, 21

financial resources 39

governance 33

human resources 36

infrastructure 37

inter-institutional linkages 48

mission 25

monitoring and evaluation 45

motivation 27

niche management 35

planning resources 42

policy environment 19

political/economic environment 20

problem-solving and decision-making 43

social and cultural environment 20

stakeholder environment 20

strategy 32

structure 34

technological resources 38

technology environment 19

Economic Environment 17

Effectiveness 52, 53

Efficiency 53

Environment 10

administrative/legal 15

data gathering 19

data gathering methods and sources 21

economic 17

key forces 15

key questions 18

political 16

relevance to capacity and performance 21

social and cultural 17

stakeholder 17

technology 16

Evaluation framework 3, 9

Evaluation Methodology 6, 56, 57

costs 9

data collection 7

design 7

external experts 7

institution’s stage of development 9

instrumentation 8

interpretation of data 8

issues to explore 6

peer review 7

self-study 7

sources of data 8, 57

Feedback 12

Finance 38, 39

Gathering data

communications 43

Governance 32

History 23

Human Resources 35, 36

Incentives 25

Infrastructure 37

Inter-institutional linkages 45-48

Leadership 30

Learning

model of evaluation 5

organization 3, 58

partnership 5

Mission 23, 25, 26

Monitoring and Evaluation 40, 44, 45

Motivation 10, 23-27

and culture 24

and mission 23

Networks 46, 47

Niche Management 34

Partnership 5, 11, 48

Performance xiv, 11, 26, 51-54

and adaptability 54

and capacity 2, 57

and effectiveness xiv

and efficiency xiv

and environment 21

Performance measurement 54-56

Planning 42

Political Environment 16

Problem-Solving and Decision-Making 42, 43

Process Management 41-45

Profiling an institution 5, 7, 11

Program Management 39-41

Research Program Implementation 40

Research Program Planning 40

Research-Supporting Services 41

Social and Cultural Environments 17

Sources of data

qualitative 8

quantitative 8

Stakeholder Environment 17

Strategic Leadership 29, 30

Strategic planning 30

Strategy 31, 32

Structure 33

Technology 38

Technology Environment 16

Terms of Reference 12

Workplan 12


The Authors

Charles Lusthaus, Ph.D. is an Associate Professor in the Department of Administration and Policy Studies, McGill University, and a partner in Universalia Management Group, a Montreal-based management consulting firm. His expertise lies in the areas of educational administration, organizational theory, and institutional evaluation and change. Dr. Lusthaus is also Faculty Advisor to the Centre for Educational Leadership, McGill University.

Gary Anderson, Ed.D. is the Chairman of the Department of Administration and Policy Studies in Education, Faculty of Education, McGill University, and the president of Universalia Management Group. Dr. Anderson has designed, monitored, and evaluated education and training programs for more than a decade and is the author of Fundamentals of Educational Research (1990). He is an expert in institutional development and analysis, and has extensive experience in policy research and analysis.

Elaine Murphy, M.Ed. is a consultant with Universalia Management Group and the principal of Elaine Murphy and Associates, a communications firm. Ms. Murphy has extensive experience as a writer with a background in teaching and educational administration. For the past eight years she has researched and written a wide variety of publications for Canadian universities, federal and provincial agencies, and private sector businesses.


About the Institution

The International Development Research Centre (IDRC) is a public corporation created by the Parliament of Canada in 1970 to support technical and policy research to help meet the needs of developing countries. The Centre is active in the fields of environment and natural resources, social sciences, health sciences, and information sciences and systems. Regional offices are located in Africa, Asia, Latin America, and the Middle East.

About the Publisher

IDRC Books publishes research results and scholarly studies on global and regional issues related to sustainable and equitable development. As a specialist in development literature, IDRC Books contributes to the body of knowledge on these issues to further the cause of global understanding and equity. IDRC publications are sold through its head office in Ottawa, Canada, as well as by IDRC’s agents and distributors around the world.