
Creating Practical Results Indicators for Development Projects and Programmes

An Indicator Assessment Guide Document 6

Prepared by Greg Armstrong

This document is intended to be used in the context of group analysis of indicators, during the Results-Based Management workshop. Updated, with links November 2009

For more information on RBM workshops and training, see www.rbmtraining.com

©Greg Armstrong 2007-2009


Finding Indicators that really work

We will be discussing indicator development and assessment for your projects and programmes tomorrow. What we will be looking for are indicators that will really work - not just indicators that look impressive, but practical indicators you can use to confirm results or redirect activities as you monitor progress in your own projects or programmes. In preparation, some of the ideas in this indicator assessment guide might help us work through the process. You might want to think about them before tomorrow's session.

Where Indicators fit in the Results-Based Management Process

Clearly, there is no point in trying to select indicators if results have not been properly identified, at least in a preliminary way. As the earlier discussions in this RBM workshop have made clear, before we can develop useful indicators for results, we need to clarify both the development problem we are trying to deal with, and our assumptions about what works, what does not work in development programming, and why. These initial steps lay the foundation for developing any workable indicator, and include answering a series of inter-related questions, which we have already explored in the first part of the workshop:

1. Problem identification: Why do we need to do anything? What is the development problem we are trying to deal with?

2. Clarifying underlying assumptions:
   • What do we assume about the nature, causes, and effects of the problem?
   • What are we assuming needs to change, if we are going to solve the problem?
   • What are our assumptions about the links between the problem and the development activities being planned - our implicit and often unacknowledged "theories of action"?
   • What assumptions are we making about the particular context or situation we are working in, and its relation to the problem and to a solution?

3. Identifying possible results chains: What do we think are the relationships between the available resources (human, financial etc.), the activities we plan to undertake, and the short, medium and long-term results we hope to see?

The earlier discussions in this RBM workshop have reviewed a number of examples, to illustrate relatively simple ways of making sure that:


• We understand the problem before we define results;
• We have an understanding of our own and other stakeholders' assumptions about what is and is not likely to work in solving the problem;
• We have identified the four basic types of results we can expect from these (and all) development activities, making sure we express them in simple, jargon-free language that everybody can understand.

The next logical step is to explore and agree on indicators that will tell us something about the results.

Identifying Indicators is a Group Process

Identifying, agreeing upon, and testing the validity of indicators is most effective as a group process. If it is done alone by anyone - senior policy maker, project manager or project implementer - the result is almost always a set of indicators that are theoretically attractive, but ultimately unusable.

Indicator development is, in a sense, a further process of making clear our understanding of the problem, our assumptions as to its resolution, and what we think the changed situation (the result) will look like. By trying to define indicators, we are exploring the real, operational meanings of the results we identify. Often during the discussion of indicators, policy makers, project implementers and stakeholders will find that their perceptions of what makes a useful indicator differ, because there are underlying disagreements about the nature of the originally defined problem, different assumptions about what works, and different perceptions of what even the simplest of results statements mean.

The only viable means of developing workable indicators, then, is to take the time to work through the range of potential indicators together, with as many of the stakeholders - policy makers, managers, implementers and beneficiaries - as we can.

What is an indicator?

An indicator is just information - data, evidence, descriptors - that can tell us whether a result has been achieved, whether we are making progress toward it, what type of result has been achieved, and what has not been achieved. A good indicator is key in programme, project and activity planning because it clarifies what we mean by results, and it keeps us focused on what we need to achieve. That, essentially, is what results-based management is about.


Indicators are not proof

What is important to remember is that a results indicator is not necessarily definitive. If it were, it would be appropriate to call it a "proof", as DNA (we are told - although even this is now questionable) might be called proof of someone's presence at a crime site. But an indicator is usually not as reliable as that. Most indicators, to continue the analogy, are imperfect but reasonable suggestions that something has occurred - hopefully something that shows progress toward an expected result. Because most indicators are imperfect, making a convincing case that progress is being made toward the expected result almost always requires more than one indicator.

Indicator Checklist: 8 steps to developing practical indicators for development results

In determining what indicators we are going to use in monitoring and reporting on results, we need to consider both their technical utility and the feasibility of collecting the indicator data. A technically beautiful indicator will be worthless if the data cannot be collected, or cannot be collected in a timely, cost-effective, consistent way and on a regular basis.

Field experience in working through the indicator development process, for a wide range of projects - parliamentary development, justice reform, education, rural development, health, conflict resolution, public service reform and environmental management - has confirmed eight steps as necessary for assessing the viability of indicators. Skipping any one of them, even if the answers might seem obvious, can cause problems in data collection and reporting in the long run. It is important to keep in mind that just because one member of a group thinks the answer to a question raised about an indicator is obvious, this does not mean that everybody will agree. Working as a group, and letting the group work, are key to establishing realistic indicators with data that are collectable, useful, and, ultimately, convincing.

1. Technical validity of the indicator: Agree on the technical validity of the proposed indicators - will the suggested data actually measure or accurately describe the completed activities or results?

While it is appropriate to collect data to determine whether and how activities have been completed in order to learn how variations in process affect results, there should nevertheless be a clear distinction between indicators designed to describe completed activities, and those designed to describe or measure changes produced by these activities -- the results.


Many quantitative indicators are likely to be accepted as technically valid without much dissent, primarily because many people:

a) tend to see numbers as necessarily reflecting reality;
b) simply do not understand them, and do not want to lose face by asking for clarification; or
c) feel more comfortable with the idea of "measuring" or counting products as results than they do with "describing" results, interpreting what the numbers mean, and investigating the heart of results - what has changed.

• But quantitative indicators should be very carefully examined. As we will discuss during the assessment process, much purported quantitative data is in effect just aggregated qualitative information, often hiding its own internal biases.

• Qualitative indicators, although they may appear initially to be more problematic from a technical point of view, are necessary to demonstrate how process affects results.

• If we remember that no single indicator is likely, on its own, to provide enough evidence to confirm achievement of a result, then we will be looking for multiple sources of information, including qualitative data, which can often provide a much richer understanding of progress towards results than simple quantitative data can.
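To make this concrete, here is a minimal sketch, in Python, of one way a team might list a result with both a quantitative and a qualitative indicator attached, so that no result rests on a single source of data. The result statement, indicator descriptions and data sources are invented purely for illustration.

```python
# Illustrative sketch only: one result, several complementary indicators.
# The result statement, descriptions and sources below are hypothetical.

result = {
    "statement": "District extension offices provide timely advice to women farmers",
    "indicators": [
        {
            "type": "quantitative",
            "description": "Number of advisory visits to women farmers per quarter",
            "source": "extension office visit logs",
        },
        {
            "type": "qualitative",
            "description": "Women farmers' accounts of whether the advice was timely and usable",
            "source": "semi-structured interviews with a sample of women farmers",
        },
    ],
}

# A quick check a group might apply before accepting the indicator set:
# more than one indicator, and at least one qualitative source of information.
indicator_types = {i["type"] for i in result["indicators"]}
if len(result["indicators"]) > 1 and "qualitative" in indicator_types:
    print(f"{len(result['indicators'])} complementary indicators proposed for: {result['statement']}")
else:
    print("Consider adding another indicator, including a qualitative one.")
```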

2. Availability of a baseline for the indicator: A result is a change - and if we have indicators to tell us what the initial problem was, then we have a baseline to tell us whether we are making progress in working toward results. We need to ensure that information for each indicator is currently available, or could be collected in a cost-effective way within a relatively short period of time, to serve as the baseline for the indicator.

• If data cannot be collected for the baseline, the indicator's utility for performance assessment will be reduced. There are ways to mitigate this, particularly through qualitative research, but they require additional time and resources. It is always better to have actual baseline data than to try to "reconstruct" it (as many development projects do).

• Difficulties in collecting data on indicators at the baseline stage may show that data collection will never be feasible. If we cannot collect the data now, why do we think we will be able to collect it later?


• And if we cannot collect baseline data, then the indicator is relatively meaningless, and we should look for one we can actually use in practice.

• This is true for qualitative data as much as it is for quantitative data. It is risky to assume that we will actually be able to have useful interviews with stakeholders (that is, interviews that actually give us useful data) a year from now or five years from now. We need to test whether people will talk to us, and assess the clarity, reliability and validity of what they say, as part of the baseline data collection process.

• We should not assume that documentary data will actually give us a baseline unless we know when the documents were published, how and by whom the data were collected, where they were collected, and when.
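To make the idea of a baseline concrete, the short sketch below compares a later observation with a baseline value for the same indicator. The indicator name, dates and percentages are hypothetical, chosen only to show why the later figure means little without the earlier one.

```python
# Illustrative sketch only: a later observation compared with its baseline.
# The indicator name, dates and values are hypothetical.

baseline = {
    "indicator": "Share of trained extension staff using the new advisory manual",
    "date": "2009-01",
    "value": 0.10,   # 10% at the start of the project
}
follow_up = {
    "indicator": baseline["indicator"],
    "date": "2010-01",
    "value": 0.35,   # 35% one year later
}

change = follow_up["value"] - baseline["value"]
print(f"{baseline['indicator']}: {baseline['value']:.0%} -> {follow_up['value']:.0%} "
      f"({change:+.0%})")
# Without the baseline figure, the 35% on its own tells us nothing about progress.
```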

3. Accessibility of the indicator data: This is related to the issue of baseline indicator data. We need to ensure that the data are accessible - that they can, in fact, be collected in a cost-effective, timely and regular way. Information may exist, but if it is not accessible in a way that allows consistent collection, it should not be used as a results indicator. There are any number of reasons why the data may not really be accessible, including:

• People we think will give interviews, or allow themselves to be observed, may for their own quite legitimate reasons (e.g. culture, gender, fear) not do so.
• Documents may not exist, may be restricted in circulation for privacy or security reasons, or may be in an inaccessible format or language.
• There may be problems of weather or geography that make it difficult to collect indicator data on a meaningful and relevant schedule.

4. Relevance of the indicator: We need to ensure that the indicators relate to activities and results for the appropriate target group, in the time when the development activity is occurring, and in the geographic area we are targeting. Indicators should tell us something about the changes in understanding or actions which we expect, or hope for, in the specific groups affected by training or other interventions - over the short term, or over a longer period. Indicator data should also, where this is the point of the intervention, have specific relevance to the sub-groups within the wider community where we hope to see results - women and children, or minorities, for example, as distinct from men within a broader "farming" community.

• Superficial analysis can often miss the fact that information selected just because it is available often does not tell us much about the groups we are interested in, the physical area we are targeting, or the specific time in which our activities are happening and results are expected to occur.
• Documentary data - apparently easy to collect - can often provide misleading data if we do not examine it carefully. We will talk about examples of this in the workshop.

5. Research methods: It is easy to skip this step, but it is essential. We must identify the research methods that will be used to collect the indicator data. This is an important step in determining the feasibility of data collection.

• Will the indicator data be collected through surveys, documentary analysis, observation, or interviews?
• Will more than one method be used to collect data for an indicator and, if so, will the methods differ in the importance given to the data, its interpretation, the sources of data, or the frequency of collection?

6. Frequency: To understand whether the indicator data will be useful, we need to understand who will use it, when they will use it, and whether we can provide the data in a timely way. We should therefore specify the frequency of indicator data collection, analysis and reporting. This will depend on both the complexity of the research methods, and the availability of the people or groups identified to collect the information, interpret it and then report it in ways able to guide action.

• Some data can and should be easily collected quarterly or semiannually.

• Other data will be difficult to collect - remembering the threats to accessibility - and data collection and reporting may only be practical on a yearly basis. Some may be available only in a summative way, at the beginning and end of the project or programme period. Such data may tell us something about the beginning and end of a project, but will not help much if we are trying to learn if we are making progress, incrementally, towards a result.

• It is important to match the availability of people or groups with appropriate research skills to the frequency of data collection.

7. Responsibility: Who will collect the indicator data, who will interpret it, who will manage the data collection process, and who will use the indicator data for policy or management decisions? Following from #5 (research methods) and #6 (frequency of data collection) above, it is key to identify the specific people or groups who will be responsible for indicator data collection, analysis and reporting, and to determine whether they have the research skills appropriate for the agreed research methods.


• It is a common but dangerous error to make assumptions about the availability of research skills relevant to the collection of indicator data. It is important to verify, not simply assume, the availability of people who know how to conduct and interpret interviews; people who know how to intelligently and thoroughly analyse documents and check the data trail in those documents; and people who know how to conduct surveys and provide usable data from them.

• If the people identified as responsible for collecting indicator data do not have the appropriate research skills, we can:
  1. provide training on research methods;
  2. find other research methods;
  3. find other (external) researchers; or
  4. choose other indicators that require simpler research methods for which we know we have available expertise.

8. Cost: Determine how much it will cost to collect the indicator data. Money is not the only important resource: time and opportunity costs are vitally important.

The cost of indicator data collection includes:

• our organization's staff time for doing the actual data collection;
• time for data analysis and reporting;
• money, for the logistics of the collection and for any external expertise required either to do the research or to train staff to do it.



• Even if we can find appropriate people to collect data, the cost may be too high - in time, money or interference with other work - to make this practical, given the realities of our organization's capacity and budget. If this is the case, we may need to find another indicator.
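As a rough illustration of how these cost elements add up, the sketch below totals hypothetical staff time, logistics and external expertise for one indicator and compares the total with an assumed monitoring budget. Every rate, day count and budget figure is invented for illustration.

```python
# Illustrative sketch only: rough annual cost of collecting data for one indicator.
# All day counts, rates and budget figures are hypothetical.

staff_days_collection = 6        # field visits and interviews per year
staff_days_analysis = 4          # analysis and reporting per year
staff_day_rate = 150             # cost of one staff day
logistics = 800                  # travel, translation, printing
external_expertise = 1200        # e.g. a survey specialist or a trainer

total_cost = ((staff_days_collection + staff_days_analysis) * staff_day_rate
              + logistics + external_expertise)

budget_for_this_indicator = 2500
print(f"Estimated annual cost of the indicator data: {total_cost}")
if total_cost > budget_for_this_indicator:
    print("Too costly for the monitoring budget - consider a simpler indicator or method.")
```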

Attached is an indicator development worksheet which may help in assessing the viability of proposed indicators.

Next steps

When we have finished this analysis, the next session will introduce simple ways of organizing and reporting indicator data.


Indicator Assessment Worksheet

Results and Proposed Indicator
• Result: What will change because of the activity or programme?
• Indicator: What is the proposed information that will tell us if the change is occurring?

1. Performance Indicators - technical validity
• Will the data be useful to judge performance?

2. Baseline Indicators
• Can information be collected soon for the baseline?
• Test this by collecting it.

3. Information Accessibility
• Do you know where the information will come from?
• Is the information really accessible?
• What are the potential barriers to data collection: language, culture, security, geography, physical availability, format?
• Test accessibility by collecting data for the baseline.

4. Relevance & Beneficiaries
• Who will be reached by the development activity directly?
• Who will benefit indirectly?
• Do the indicator data relate to these people, or to a broader population?

5. Methods
• What research methods will be used to collect the information?

6. Frequency of data collection
• How often will the data be collected and reported?
• Is this schedule feasible?

7. Responsibility
• Who will collect the data?
• Who will analyse and report on the information?
• Who will use the indicator data?
• Do these people have the skills and the time to do the job?

8. Cost of data collection and analysis
• How much will it cost, in time or money, for your own staff to collect and analyse indicator data?
• How much will it cost if you have to train your staff?
• How much will it cost if you have to hire external experts to do either data collection or training?
• Are total costs for collecting and analysing data on this indicator reasonable, given your total budget and the limitations on your staff's time?

Blank copies of the worksheet follow, with rows for Result 1, Result 2 and Result 3 under the same eight column headings, for completion during the group session.
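For participants who prefer to keep the worksheet electronically, the short sketch below writes the same column headings, with one blank row per result, to a CSV file that can be opened in any spreadsheet program. The file name is an arbitrary example.

```python
# Illustrative sketch only: the worksheet columns as a blank CSV template.
# The file name and the three placeholder rows are arbitrary examples.
import csv

columns = [
    "Result and proposed indicator",
    "1. Technical validity",
    "2. Baseline",
    "3. Information accessibility",
    "4. Relevance & beneficiaries",
    "5. Methods",
    "6. Frequency of data collection",
    "7. Responsibility",
    "8. Cost of data collection and analysis",
]

with open("indicator_assessment_worksheet.csv", "w", newline="") as worksheet:
    writer = csv.writer(worksheet)
    writer.writerow(columns)
    # One blank row per result, to be filled in during the group session.
    for result in ("Result 1", "Result 2", "Result 3"):
        writer.writerow([result] + [""] * (len(columns) - 1))
```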


Resources:

• UNDP's Oslo Governance Centre, Governance Assessment Portal, which has a number of papers on indicator development for democratic governance, human-rights based programming, local governance and public administration reform, among others.
• Other planning and management tools relevant to achieving and assessing development results, from the Overseas Development Institute.
• The Monitoring and Evaluation News website, which has a wide range of tools and discussion of issues relevant to the evaluation of development results, including indicators and RBM.
• Examples of evaluations on a range of development topics by different donor agencies, from the UNESCO Internal Oversight portal.
• Four trusted professional resources for practical monitoring and evaluation, and field management of governance, conflict resolution and rural development projects, all incorporating a results-based management approach.
• Common definitions of results-based management terms.
• A discussion of the compatibility of results-based management and participatory evaluations.
• From SIDA, a publication on how Appreciative Inquiry could be used with the logical framework approach (which is commonly used in results-based management), and another on the basics of the logical framework method.
• More detail about the results-based management training from which this indicator development guide is derived.
• Greg Armstrong's profile and curriculum vitae.

