Introducing evaluation

The word 'evaluation' has different meanings, but all of them refer to assessing the value, or worth, of something. It has been defined by the Organisation for Economic Co-operation and Development (OECD) as:

The systematic and objective assessment of an on-going or completed project, programme or policy, its design, implementation and results. The aim is to determine the relevance and fulfilment of objectives, development efficiency, effectiveness, impact and sustainability. An evaluation should provide information that is credible and useful, enabling the incorporation of lessons learned into the decision-making process… (OECD 2002)

This definition is helpful because:
- It emphasises the systematic collection of information: information gathering must be purposeful, systematic and empirical.
- It specifies that a range of topics can be evaluated: projects, programmes and policies, together with their design, implementation and results.
- It says that evaluation has a purpose and is part of a process: evaluation is not an end in itself but is used to improve programmes and decision-making.

Evaluation is a specific type of applied research whose clear purpose is to inform practice development and decision-making. The process and methods used in evaluation are the same as those used in social research and, importantly, the same principles for assuring data quality apply. Social research methods provide a toolbox for the evaluator: each tool has a particular function and benefit, and tools should be chosen to best meet the purpose and needs of the particular evaluation. Detailed guidance for choosing tools is available.

What's special about evaluation?

Data collection and analysis are required for other procedures, not just for evaluation. This section introduces these other procedures and explains how they differ from evaluation. Sometimes people say that they are 'doing evaluation' when their activity is actually closer to research, monitoring or inspection.

Research

Research is generally seen as an academic exercise designed to produce knowledge. This may be useful in different ways, but it is not necessarily linked to any organisational or strategic objective. Such research is essential to test and develop the theories and understanding that can form the basis of practice developments. The current research on desistance being undertaken in the UK is an example of such work.

Social research seeks to answer questions about the social world. This brings its own particular problems and potentials, some of which will be addressed at appropriate points within this handbook. Exploratory social research in particular generates understanding in areas where knowledge does not already exist, to address questions such as ‘why do people supervised in the community not keep appointments as required?’

Applied research involves the same rigour as social research, using its methods to generate knowledge for a specific purpose, often to inform policy decisions. Evaluation is one kind of applied research, and adherence to the standards of rigour of social research is critical to the success of an evaluation. An example of the sort of question that would be addressed by evaluation is ‘does this way of working increase offenders’ appointment-keeping?’

Monitoring

Monitoring is the systematic and continuous collection of data. The data collected is primarily focused on questions of process, but can include outcomes. It has been defined as ‘Keeping track of inputs and outputs – a rudimentary form of evaluation’ (Underdown, 1998). It is essentially a tool to assist management, whether management of an individual case or management of a project or service.

The sort of question addressed by monitoring is ‘what proportion of offenders are not keeping appointments?’ Progress towards the achievement of targets is frequently a component of monitoring, where it is referred to as performance monitoring. In this context the relevant question would be ‘has the target of 85% of offenders keeping their appointments been achieved?’ Key Performance Indicators are a very specific set of performance targets. Monitoring data can play an important part in process evaluation: data that is routinely collected in this way can often be analysed in more detail, and with different questions, as a specific evaluation exercise.
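
To make the arithmetic of performance monitoring concrete, the sketch below shows how a question of this kind might be answered from case records. It is a minimal illustration only: the record structure, the figures and the 85% target are all hypothetical, not drawn from any real monitoring system.

```python
# Minimal sketch of a performance-monitoring check.
# All records and the 85% target are hypothetical.
appointments = [
    {"offender_id": 1, "kept": True},
    {"offender_id": 2, "kept": False},
    {"offender_id": 3, "kept": True},
    {"offender_id": 4, "kept": True},
]

TARGET = 0.85  # hypothetical performance target: 85% of appointments kept

kept = sum(1 for record in appointments if record["kept"])
rate = kept / len(appointments)

print(f"Appointments kept: {rate:.0%} (target {TARGET:.0%})")
print("Target met" if rate >= TARGET else "Target not met")
```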

Inspection

Inspection is a quality assurance process that aims to check that practice conforms to required standards, for example, ‘are failures to attend appointments being followed up appropriately?’ Inspection checks that codes of practice are adhered to and minimum standards are achieved. It can be seen as a special kind of tightly focused evaluation, and it frequently generates data that can inform wider evaluations. In England & Wales, inspection reports on a wide range of topics are available from Her Majesty's Inspectorate of Probation and are a useful source of comparative data.

Types of evaluation

The broad purposes for evaluation, above, link to the three most frequently cited types of evaluation: process evaluation, impact evaluation and outcome evaluation.

Process Evaluation focuses on the process of the intervention or way of working with the offender: it is concerned with the way in which a piece of work was carried out and how this may have affected the outcomes. Evaluation of programme integrity (delivering programmes as intended) is an important aspect of process evaluation, as is targeting: whether programmes are attracting the people for whom they were intended.

Impact Evaluation addresses the immediate impact of the programme or way of working, essentially looking at positive changes in behaviour associated with offending, such as changes in attitudes, improvements in social circumstances and compliance with the requirements of supervision. These factors are sometimes known as ‘intermediate outcomes’ and are important because they are directly addressed in interventions and programmes for offenders in the hope that they will reduce further offending.

Outcome Evaluation relates to assessment of the ultimate intended outcome of the programme or way of working, i.e. does it reduce offending or reoffending? Outcome evaluation attempts to demonstrate that the results of a piece of work or intervention meet the predetermined objectives. As such, it is concerned broadly with notions of cause and effect.

These three types of evaluation can be seen as three key stages in the development of effective ways of working with offenders. Process evaluation is best undertaken in the early stages of implementation of a new piece of work, to assess how best to deliver the programme successfully. Linked to this may be an impact evaluation, to assess whether the successfully delivered programme has achieved its immediate objectives, together with an outcome evaluation to assess whether the successful implementation and impact then reduce offending.

The different types of evaluation need not be undertaken in this order, but if they are, the mechanisms of change and the best way of achieving that change will be well understood, with each stage building on the last. Some of the ‘What Works’ evaluation in England & Wales has not incorporated each of these types, which has meant that although it is known whether a particular programme or way of working reduces offending, the means by which it achieves this is not evidenced. For instance, if a programme designed to improve the literacy and numeracy of offenders is shown to reduce reoffending without checking whether, and to what level, literacy and numeracy have actually improved, it may be something else about the education that has reduced offending. In these circumstances it may be necessary to undertake an impact evaluation afterwards to fill that knowledge gap.

The overall aim of most work with offenders is to help them to live an offence-free life, and, certainly in England & Wales, the ultimate focus of evaluation is whether the work has reduced reconviction rates. An evaluation that merely answers this question has limited usefulness: it is important to have some understanding of the context of the outcome and of why that particular outcome occurred. Evaluation can address a range of questions in a variety of ways, and an evaluation designed primarily to answer questions about outcome will usually also collect information to address supplementary questions.

Two other types of evaluation are cost effectiveness evaluation and realistic evaluation:

Cost effectiveness evaluation considers the value-for-money aspects of programmes and ways of working: are the outcomes of the programme worth the inputs? Could similar results be achieved for less cost? Cost effectiveness is the relationship between the costs of delivering a programme and its effectiveness, expressed in non-monetary terms. This can be taken a stage further and become cost benefit analysis, where the costs of a programme are related to the value of the benefits, measured in monetary terms. A monetary value has to be placed on the outcome measures, which in relation to work with offenders primarily focus on reduced offending (Dhiri and Brand 1999). An assessment of the number of offences prevented is made and a monetary value attached to each of these. Although wider social costs are acknowledged, the analysis is generally undertaken at the simple level of costs to the criminal justice system.
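
As a worked illustration of the distinction between cost effectiveness and cost benefit analysis, the sketch below computes both measures. All figures are invented for the purpose of the example; real costings, such as those discussed by Dhiri and Brand (1999), are considerably more involved.

```python
# Illustrative sketch only: all figures are invented.
programme_cost = 250_000.0    # hypothetical total cost of delivering the programme
offences_prevented = 40       # hypothetical estimate of offences prevented
value_per_offence = 12_000.0  # hypothetical monetary value per offence prevented

# Cost effectiveness: cost per unit of outcome, with the outcome left in
# non-monetary terms (offences prevented).
cost_per_offence_prevented = programme_cost / offences_prevented
print(f"Cost per offence prevented: £{cost_per_offence_prevented:,.0f}")

# Cost benefit: place a monetary value on the outcome and compare it with costs.
monetised_benefit = offences_prevented * value_per_offence
print(f"Benefit-cost ratio: {monetised_benefit / programme_cost:.2f}")
```
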
Realistic evaluation is the term given to a model of evaluation presented by Pawson & Tilley (1997). Essentially, they argue that it is not enough to know ‘What Works’: the question that policy makers and service delivery staff need to address is ‘What works, for whom, in what circumstances?’ This handbook supports this model, which emphasises the role of theory, and the development of theory, as a means of improving understanding. Pawson and Tilley argue that evaluators need to think through the whole process of why a programme might work, including the inputs, the outputs and the context in which the work takes place. They argue that it is the interrelationship between these factors that creates mechanisms for change, and that without understanding the nature of those mechanisms it is very difficult to understand why a programme is or is not successful. This model is useful because it enables some understanding of the effectiveness of programmes with different types of offenders.

Overview of the process

- Clarify the objectives of the evaluation: What is the purpose of the evaluation? Who is the audience? What questions will the evaluation address?

- Identify the data needed to best address the purpose and questions of the evaluation.

- Design the evaluation and data collection procedures to meet quality and ethical principles within the resources available.

- Collect and process the data.

- Analyse the data and interpret the results.

- Summarise and report the findings.