It’s not often I feel compelled to single out a non-L&D book and recommend it to my peers, but every now and then someone pulls together different strands of thought that seem to be floating around and weaves them into a coherent whole. Dan Pink did it with his excellent book on motivation, Drive: The Surprising Truth About What Motivates Us, and now Tim Harford [1] has done something similar for solving complex problems with his new book Adapt: Why Success Always Starts with Failure.

In Adapt, Tim argues that in order to solve complex problems we need to look beyond leaders with grand visions or experts with their clever predictions. Instead we need to look to a much maligned but incredibly powerful problem-solving tool that’s proved itself time and time again when faced with seemingly insurmountable obstacles: trial and error.

The stories told in Adapt have a grand scale: the war in Iraq, the creation of the Spitfire, climate change, the 2007 financial crisis. However, the messages are applicable in our everyday work, whenever we come across a complex problem which, if you're a knowledge worker, tends to be your raison d'être.

Three key recommendations come out of reading Adapt:

  1. you should have a lot of experiments running
  2. the experiments need to be at the right scale so that failure, which will occur, is acceptable
  3. you need to be able to tell what’s working and what isn’t quickly, so you can kill the experiments that aren’t working

So, the question for some might be 'what on earth does this have to do with L&D and how is it applicable in our everyday working practice?' In order to get a clear picture of how we might go about this, we first need to take a step back and ask ourselves what we're here for. I'd argue that we're here to build capability and improve performance. So, the question then becomes: what experimental initiatives can we run, at a survivable scale, that would test out new approaches to building capability and improving performance? How would we know if they had been successful? What's stopping us from running these experiments? [2]

To give you an indication of how we apply this type of thinking at GoodPractice, here is a small selection of the questions we're currently running experiments to answer:

  • Is a client branded email to end users more effective than a plain email?
  • Do icons for different content types help the user find what they're looking for more quickly, or slow them down?
  • Does highlighting our audio content on the home page of our sites mean users actually listen to more audio content (rather than just visit the pages)?
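Knowing quickly whether an experiment is working is, at root, a statistics question. As a minimal sketch of how you might answer the first question above (the numbers are invented purely for illustration), a standard two-proportion z-test can tell you whether the difference in click-through between a branded and a plain email is more than noise:

```python
import math

def two_proportion_z_test(clicks_a, sent_a, clicks_b, sent_b):
    """Two-sided z-test for the difference between two click-through rates."""
    rate_a = clicks_a / sent_a
    rate_b = clicks_b / sent_b
    # Pooled rate under the null hypothesis that the two emails perform the same
    pooled = (clicks_a + clicks_b) / (sent_a + sent_b)
    se = math.sqrt(pooled * (1 - pooled) * (1 / sent_a + 1 / sent_b))
    z = (rate_a - rate_b) / se
    # Two-sided p-value from the standard normal distribution
    p_value = math.erfc(abs(z) / math.sqrt(2))
    return z, p_value

# Hypothetical numbers: 120 of 1,000 recipients clicked the branded email,
# 90 of 1,000 clicked the plain one.
z, p = two_proportion_z_test(120, 1000, 90, 1000)
print(f"z = {z:.2f}, p = {p:.4f}")
```

With those made-up figures the p-value comes out below 0.05, so you would have grounds to keep the branded email; with a smaller gap or a smaller sample, you would kill or keep running the experiment instead.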

If you’re thinking that you can already guess the answers to these questions, you’re missing the point. People act in counter-intuitive ways; they’re complex. We can’t predict what they’re going to like or what they’re going to do. That includes you. The best we can do is experiment, and adapt. We should do more of it.

If you want to get notified about future posts, be sure to subscribe to this blog.

[1] I've been a fan of his for a while, since I read his first book The Undercover Economist, and his status as a favourite commentator was cemented when he took over presenting Radio 4's excellent programme about statistics and numbers, More or Less (you read that correctly: it's an excellent programme about statistics).

[2] You could replace the word ‘experiment’ with ‘pilot’. But ‘pilot’ seems to indicate that you expect something to progress, so I prefer ‘experiment’. Must be the geek inside me.