Quantitative analysis rarely works with raw numbers alone; it is more concerned with statistics such as averages and percentages, and with the statistical comparison of groups and sub-groups. It is first necessary to distinguish between different types of data, or ‘levels of measurement’.
Levels of measurement
A basic requirement for the application of appropriate statistics is an understanding of different levels of measurement, which will enable the correct identification of variable type and thus the correct choice of statistic. There are four levels of measurement: nominal, ordinal, interval and ratio. These levels represent a hierarchy, with nominal data being the lowest level of measurement and ratio data being the highest. The higher the level of measurement the more the numbers used have real meaning and the more statistical procedures are available for use.
Nominal, sometimes called categorical, variables are split into simple descriptive categories. Such categories can have numbers allocated as codes, but the number has no numerical meaning. A simple example is sex, where the two categories are male and female. If the two categories were coded 1 and 2, those numbers would merely identify the category. The range of statistical procedures for such data is very limited.
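The point that nominal codes are labels rather than quantities can be illustrated with a short sketch (Python, not part of the handbook itself; the category names and codings are the illustrative ones from the text). Frequencies and the mode are valid statistics for nominal data, whereas a mean of the codes changes with the arbitrary coding and so is meaningless:

```python
# Sketch: nominal codes identify categories; they carry no numerical meaning.
from collections import Counter

responses = ["male", "female", "female", "male", "female"]

# Two equally valid codings of the same two categories.
codes_a = {"male": 1, "female": 2}
codes_b = {"male": 2, "female": 1}

counts = Counter(responses)
mode_category = counts.most_common(1)[0][0]
print(counts)          # frequencies are a valid statistic
print(mode_category)   # so is the mode: "female"

# A mean of the codes is arithmetic on arbitrary labels:
mean_a = sum(codes_a[r] for r in responses) / len(responses)
mean_b = sum(codes_b[r] for r in responses) / len(responses)
print(mean_a, mean_b)  # 1.6 vs 1.4 -- the "result" depends on the coding
```

The change in the ‘mean’ under recoding is exactly why the range of statistical procedures for nominal data is so limited.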
Ordinal variables have categories of data where there is a relationship or potential order between the categories, such that some categories are higher or lower than others. A simple example is a pack of playing cards, where the king has a higher value than the queen, which in turn has a higher value than the jack. The ace is interesting in that sometimes it has a higher value than these three cards, and sometimes a lower one. This is a useful reminder that such ordering is frequently related to the context within which the data occurs and is being analysed. A relevant example from probation practice is the classification of offence types. We sometimes place them in an order where some offences are more serious than others; for instance, violence offences are often classed as more serious than theft offences. The ordering is not that simple, however, and some theft offences can be more serious than some violence offences. The answers to questions on attitude scales are usually ordinal, with a range of five categories from ‘strongly disagree’ to ‘strongly agree’. These categories are given numbers, though the detail of the numbers does not matter: they can be coded 1, 2, 3, 4, 5 or 5, 4, 3, 2, 1 or 0, 1, 2, 3, 4 or even –2, –1, 0, 1, 2. The important feature is the order rather than the specific numbers, which demonstrates that numerical analysis of such data is limited, though a few more statistical procedures are available than for nominal data.
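The irrelevance of the specific numbers can be demonstrated with a small sketch (Python, illustrative only; the Likert labels and alternative codings 1–5, 0–4 and –2 to 2 are those mentioned in the text). Any order-preserving coding gives the same median category, because the median depends only on the ordering, whereas a mean of the codes shifts with the arbitrary choice of numbers:

```python
# Sketch: for ordinal data only the order matters, not the numbers chosen.
labels = ["strongly disagree", "disagree", "neutral", "agree", "strongly agree"]

coding_a = {lab: i + 1 for i, lab in enumerate(labels)}   # 1..5
coding_b = {lab: i for i, lab in enumerate(labels)}       # 0..4
coding_c = {lab: i - 2 for i, lab in enumerate(labels)}   # -2..2

answers = ["agree", "neutral", "strongly agree", "agree", "disagree"]

def median_label(answers, coding):
    # Sort the answers by their coded value and take the middle one.
    ordered = sorted(answers, key=lambda a: coding[a])
    return ordered[len(ordered) // 2]

# Every order-preserving coding yields the same median category ...
assert (median_label(answers, coding_a)
        == median_label(answers, coding_b)
        == median_label(answers, coding_c) == "agree")

# ... whereas the mean of the codes depends on which coding was chosen.
mean_a = sum(coding_a[a] for a in answers) / len(answers)
mean_c = sum(coding_c[a] for a in answers) / len(answers)
print(mean_a, mean_c)  # 3.6 vs 0.6
```

This is why order-based statistics such as the median, and rank-based procedures generally, are the appropriate tools for ordinal data.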
Interval and ratio variables have real numbers. In both types the ‘intervals’ between the numbers are equal, such that the difference between a score of 10 and a score of 11 is the same as the difference between a score of 69 and a score of 70. Ratio scales have an additional property in that ratios between the numbers are meaningful. This can best be understood by considering temperature, which is interval measurement, and age, which is ratio measurement. In temperature, whether Centigrade or Fahrenheit, the difference between 20 degrees and 25 degrees is the same as the difference between 50 and 55 degrees, but it does not make sense to say that a temperature of 50 degrees is twice as hot as a temperature of 25 degrees. With age, on the other hand, it does make sense to say that someone aged 50 is twice as old as someone aged 25. A simple means of identifying the difference is to consider whether a negative value would make sense. Interval measurement can take a negative value (sub-zero temperatures are common), whereas ratio measurement cannot (a negative age is not possible). For statistical purposes, the distinction between interval and ratio measurement is irrelevant.
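The temperature example can be checked arithmetically (a Python sketch, illustrative only). Equal intervals survive the change from Centigrade to Fahrenheit, but the ratio ‘twice as hot’ does not, because the zero point of each temperature scale is arbitrary; age, with its real zero, keeps its ratios under any rescaling:

```python
# Sketch: ratios are meaningful on a ratio scale but not on an interval one.
def c_to_f(c):
    """Convert degrees Centigrade to degrees Fahrenheit."""
    return c * 9 / 5 + 32

# Equal differences stay equal under the scale change (scaled by 9/5):
assert c_to_f(25) - c_to_f(20) == c_to_f(55) - c_to_f(50)

# Ratios do not: "twice as hot" depends on where zero happens to sit.
print(50 / 25)                  # 2.0 in Centigrade
print(c_to_f(50) / c_to_f(25))  # 122/77, about 1.58, in Fahrenheit

# Age is ratio measurement: zero is real, so the ratio survives rescaling.
def months(years):
    return years * 12

assert (50 / 25) == (months(50) / months(25))
```

The same arithmetic underlies the negative-value rule of thumb: a scale whose zero is arbitrary can sensibly run below it, while a scale with a true zero cannot.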
The identification of appropriate levels of measurement is not without controversy, particularly with respect to the use of statistics on scores obtained from evaluation instruments. An interesting example here is IQ scores, which are frequently treated as interval measurement, but arguably can only be ordinal. We cannot be sure that the difference between a score of 70 and 75 is equivalent to the difference between a score of 140 and 145. However, most researchers treat psychometric and other multiple-item scales as interval (for a discussion, see Bryman and Cramer 1990 Chapter 4).
Analysis of qualitative data
The aims of the analysis of qualitative data are the same as those for the analysis of quantitative data: to make sense of the data collected and to produce robust results that can be substantiated as a valid representation of the real world, rather than the idiosyncratic ‘subjective’ perspective of the evaluator. A systematic approach to analysis is particularly important in qualitative studies. As with quantitative analysis, flaws in design cannot be redressed by analysis, no matter how good or comprehensive that analysis is. If a quantitative picture were required, open interviews would not be the best method to achieve it, and attempts at quantification at the analysis stage could be futile.
The analysis of qualitative data is time consuming, and its techniques are not as clear-cut as those for quantitative data.
Computer programs are available to analyse qualitative data, but they require considerable input to establish appropriate coding frames, and the task should not be underestimated. As outlined in the previous section, data collection and analysis are not such separate processes in qualitative work; indeed, it is important to undertake some limited analysis and reflection during the collection of qualitative data to inform the detail of the data being collected.
Although some analysis will have been undertaken during data collection there will remain a substantial task of analysis at the end of that process. Qualitative analysis is an iterative process, where data is worked, reworked and refined as the emerging picture becomes clearer. There are two broad approaches to the task.
The theory-driven approach – this starts with the theoretical framework which underpinned the design of the evaluation, and assesses the extent to which the ‘theory’ was found in practice.
The descriptive framework – where a theoretical framework does not really exist, the data is used to construct one. A frequently used approach is ‘issues analysis’, where the issues that drove the design of the study, or that emerge during data collection and analysis, are used to focus the selection and organisation of material.
Within these basic approaches there is a range of techniques that can be employed to assist the analytic process; the choice will depend on the evaluation design and the question being addressed.
Time series analysis (not to be confused with the statistical technique of the same name) looks at patterning of events over time, with a particular focus on changes in pattern. For instance, a single case study design would use this approach to assess the impact of a particular programme intervention with an offender, in order to investigate whether the pattern of offending after treatment had changed in the desired direction.
Chronology is a useful approach for analysing the life history of an individual or institution.
Triangulation can be used in multi-method approaches, where themes emerging from quantitative data can be examined in more detail in the qualitative data.
Key events can be used as the means of organising data, for instance the nature of follow up for a missed appointment. The choice of key events may be guided by the theory driving the evaluation, or emerge from the data collected.
There is a range of texts dedicated to qualitative methodology and analysis. Recommended reading in this area is presented at the end of the handbook.