The CIRO Model of Evaluation

Warr, Bird and Rackham’s four-stage CIRO (context, input, reaction, outcome) model remains one of the most widely used training evaluation models.

Developed in 1970 by Warr, Bird and Rackham, CIRO remains one of the most widely used training evaluation models.[1] Although over 30 years old, it addresses key issues that trainers are still grappling with today: how to demonstrate the impact of training on organisational objectives and, in turn, how to make a strong business case for training.

As the authors explain:

The training department, like all other departments, will be expected to play its part in the achievement of the organisation’s objectives. If trainers can demonstrate factually that they are making a genuine contribution to the organisation’s goals, this can lead to an increase in both the standing and influence of the training department within that organisation. The amount of support given by other members of the organisation will rest largely on the regard they have for the training department staff. So any activity which heightens that regard will ultimately benefit the training function.[2]

To raise the profile of the training function, the authors propose a robust framework of evaluation that considers the organisational context in which the training takes place and builds evaluation into the entire training cycle, from start to finish.

The model is based on four stages or types of evaluation – context, input, reaction and outcome – and is underpinned by a set of three questions which, according to the authors, the trainer should always bear in mind:

1. What needs to be changed?

2. What procedures are most likely to bring about this change?

3. What evidence is there that change has occurred?[3]

The first two questions must be answered before the training begins; the third should also be considered beforehand, but can only be answered afterwards. According to the authors, it is the ‘collection, assessment and effective use of information concerning these three questions which constitutes evaluation’.[4]

Stage one: context – what needs to be changed?

The first stage involves identifying training needs by collecting information on the current individual and organisational context in order to draft objectives at three levels. This will provide solid guidance for the design of the training as well as a robust framework for evaluation post-training. The three levels of objectives are:


1. Ultimate: the performance issue that the organisation is aiming to improve at departmental/organisational level.

2. Intermediate: the changes in the behaviour of the trainees that will be required in order to achieve the ultimate objective.

3. Immediate: the new knowledge, skills or attitudes that trainees should be aiming to acquire in order to be able to change their behaviour.

Stage two: input – what procedures are most likely to bring about change?

At this stage, the trainer considers the resources available and decides which ‘input’ or method will be most likely to achieve the objectives, e.g. coaching, training course or e-learning.

Stage three: reaction – what evidence is there that change has occurred?

As with the first level of Kirkpatrick’s evaluation model,[5] the trainer collates feedback from the trainees on how useful they found the training. The most commonly used method of collating this type of information is the evaluation questionnaire or ‘happy sheet’.

Stage four: outcome – what evidence is there that change has occurred?

Often regarded as the traditional evaluation process, this stage involves collating and analysing information on the effectiveness of the training in order to improve subsequent training initiatives. The question relating to this stage (‘What evidence is there that change has occurred?’) can only be answered after the training. However, advance planning and preparation are required to ensure that it can be answered afterwards. This means defining objectives (as in the ‘context’ stage) and constructing methods for measuring the achievement of these objectives.

There are three levels of outcome evaluation, which are defined in terms of the objectives drafted at the ‘context’ stage. These levels are similar to Kirkpatrick’s second, third and fourth levels, as illustrated below:

CIRO Model | Kirkpatrick Model
Immediate – the new knowledge, skills or attitudes that trainees need to acquire in order to be able to change their behaviour. | Level Two – what the trainees have learned, in terms of knowledge, skills and attitude.
Intermediate – the changes in on-the-job behaviour that will lead to the achievement of the ultimate objectives. | Level Three – the transfer of learning, i.e. the extent to which trainees have applied it to their job.
Ultimate – improvements at departmental/organisational level, e.g. increased productivity, reduced costs or fewer accidents. | Level Four – improvements at departmental/organisational level, e.g. increased productivity, reduced costs or fewer accidents.

At the ‘ultimate’ outcome level, the authors make a point that strikes a chord with many trainers today and is at the heart of the great evaluation debate: is it possible to measure at this level?

[The ultimate outcomes] represent for the most part major departmental or organisational objectives, so that many other members of the organisation over and above the training staff will be working towards them. When it happens that such objectives are attained it is hardly possible to decide who, in particular, is responsible; the answer must be that many people together contributed to their achievement. This is why it is rarely practicable to evaluate specific training programmes at this ultimate level.[6]

Conclusion

Despite its age and its many similarities to the more familiar Kirkpatrick model, the CIRO model makes valuable points that are still relevant today, such as the importance of making evaluation an integral part of the training process and the need for trainers to make a strong business case to demonstrate their contribution to organisational objectives. It also acknowledges the difficulty of evaluating in terms of organisational benefits, which is explored further in the leading thinking article ‘Key Evaluation Issues’.

The CIRO Framework for the Evaluation of Training[7]

Evaluation Type | Definition
Context Evaluation | Obtaining and using information about the current operational context in order to determine training needs and objectives.
Input Evaluation | Obtaining and using information about possible training resources in order to choose between alternative ‘inputs’ to training.
Reaction Evaluation | Obtaining and using information about trainees’ expressed current or subsequent reactions in order to improve training.
Outcome Evaluation | Obtaining and using information about the outcomes of training in order to improve subsequent training. Outcomes are evaluated at three levels: immediate, intermediate and ultimate.

[1] According to Jacquie Findlay in Evaluation: Making it Work, a CIPD conference seminar booklet, CIRO is ‘the most used evaluation model in [the] USA’.

[2] Peter Warr, Michael Bird and Neil Rackham, Evaluation of Management Training: A Practical Framework, with Cases, for Evaluating Training Needs and Results (Gower Press, 1970), p 9.

[3] Warr, Bird and Rackham, p 15.

[4] Warr, Bird and Rackham, p 15.

[5] D Kirkpatrick, Evaluating Training Programs (Berrett-Koehler, 1994).

[6] Warr, Bird and Rackham, p 19.

[7] Warr, Bird and Rackham, p 20, Figure 1:1.
