Now we come to the ultimate reason for my three posts about evaluation in L&D (see also parts one and two for more details): to recommend the Success Case Method. I first came across it six years ago when I read Robert Brinkerhoff's book Telling Training's Story. It was the first time I had read a book about L&D evaluation that was both theoretically solid and practical. Since then, I've used it and recommended it to others on dozens of occasions. The starting premise is simple: for any learning intervention there will be some people who apply what they've learned enthusiastically, some who apply it half-heartedly or don't get any results, and some who don't apply it at all.

From this basic starting point, Brinkerhoff builds up an easy-to-use methodology that can be applied to evaluate any learning intervention and deliver information that's useful on a number of levels. Using this approach will give you:

  • a robust analysis of the results of a learning intervention
  • a useable ROI figure without the usual enormous effort (see the note after this list)
  • stories and examples of the outcomes of applying the learning
  • recommendations for improving the intervention
  • a true picture of how widely the learning has been adopted
  • recommendations for widening adoption

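A quick aside on that ROI point: as I understand it, the Success Case Method doesn't prescribe its own formula. The figure comes from the standard ROI arithmetic, fed with the verified benefits you uncover by interviewing and documenting the success cases. The generic calculation, with figures invented purely for illustration, looks like this:

    ROI (%) = 100 × (verified benefits − cost of the intervention) / cost of the intervention

    e.g. £60,000 of verified benefits against a £20,000 intervention gives 100 × (60,000 − 20,000) / 20,000 = 200%

The saving in effort comes from where the benefits number originates: a handful of documented, corroborated success cases rather than an attempt to measure every participant.
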
This isn't intended as a 'how to' guide, just a personal recommendation. There are a few good articles about the Success Case Method, including worked examples. However, nothing beats reading Brinkerhoff's books: Telling Training's Story and The Success Case Method. They should be core texts for any L&D professional, and they're of far more practical use than Kirkpatrick's four levels.

Everything else is flannel

For as long as I've been involved in L&D there's been an excessively introspective debate about evaluation. What should we evaluate and why? How do we evaluate? What do we report on? What do we need? What are our customers asking for? Should we be fulfilling all their measurement requests? The whole discussion about ROI vs the newer buzz term 'Return on Expectations' is flawed. Any learning intervention should be put in place either to meet compliance requirements or to improve the performance of the target population. There is no other valid reason. That's what should be evaluated, and the best tool I've come across for doing it to date is the Success Case Method.