Most people in positions where they manage people or processes are used to the constant flow of new ideas that come their way. Whether you work in human resources, finance or sales, there's a management guru ready to tell you there's a newer, better way of doing things. These new management ideas and strategies are like iPhone apps: there are thousands of them, but only a few are any good. Implementing the latest fad or revolutionary concept without evaluating its true worth can be costly. Not only that, but for those of us who work in the knowledge economy, getting the right information can be critical to our success. No matter how excellent a concept sounds at first, it pays to look beneath the surface. Whenever I read about the latest cure-it-all initiative, I always ask the following questions:

Does the idea seem too good to be true?

If a 'new' management idea seems too wonderfully simple and easy to implement, then it probably is too good to be true. In the words of the late, great Douglas Adams: "If it looks like a duck, and quacks like a duck, we have at least to consider the possibility that we have a small aquatic bird of the family Anatidae on our hands." Very rarely, someone comes up with a simple, really effective idea that no one else has thought of. Most of the time, however, it's been tried already and found wanting.

Is the author a credible source?

'Credible' will mean different things to different people. A credible source to me means one or more of the following:

  • I've read something the author has produced before and found it of value.
  • Someone I trust has made reference to the author.
  • If they're an academic, I've heard of their institution or their work is published in a peer-reviewed journal.
  • If they're a practitioner, they have a history of achievement in a range of different contexts (achieving amazing results in one organisation does not necessarily translate into success in a different work environment).
  • The author has no vested interest in the idea being adopted. I'm always highly sceptical of expensive-to-implement models or strategies that come from consultants who stand to make money from the implementation.

If the author doesn't tick any of those boxes, it doesn't mean that I automatically discount their idea; it just means that I'm less likely to forgive other oversights.

Does the author support their claims with any evidence?

If a new book, article or column does not offer any evidence backing up its claims, then I treat it with extreme caution. It's quite likely in this case that the concept being shared is theoretical, or based purely on anecdotal evidence, and hasn't been tested in the chaotic arena that is real life. In addition, the evidence should be referenced or easily discoverable in the public domain. Oblique references to 'studies' or 'research' can't count as evidence unless there's a way of looking at the original study itself. For instance, everyone knows that you should drink eight glasses of water a day, right? This idea floated around for decades, and no one questioned its original source. If they had, as two researchers writing in the British Medical Journal did, they would have found that it's a total myth. This is not to say that new concepts lacking evidence shouldn't be tried out, but a more circumspect approach is advisable, such as trialling the new model or strategy with a small number of people, teams or departments first. I'd never recommend implementing a costly initiative that has never been tested. That's what pilots are for.

Does the evidence used appear to be cherry-picked?

It's an ingrained human trait to pay more attention to the evidence that supports our personal viewpoint than to the evidence that contradicts it. Let's say I claim that the most creative companies give their staff the freedom to spend a certain amount of their time on personal projects that have nothing to do with their main job. I would point you in the direction of Google and their famous 20% time, and 3M with their 15% rule, as examples. I could probably uncover a few more and give you some historical examples, such as Edison's Menlo Park laboratory. But what's missing from this wonderful picture are the organisations that implemented something similar and are no more creative for it. This is because, as a management guru with an idea, I'm unlikely to look for contradictory evidence. Great ideas go through a rigorous process of challenge and come out stronger as a result. If the evidence supporting a new technique or concept seems to be cherry-picked, it's always worth looking for contradictory evidence.

Do the numbers (if there are any) stand up to scrutiny?

Often, the author of a business book will back up their claims with statistical evidence or use a well-known 'fact' as the basis of their idea. These numbers can come from research studies, real-life surveys or, occasionally, out of thin air. We're predisposed to be persuaded by numbers, but nine times out of ten,[1] these statistics don't stand up to scrutiny. Before accepting an idea based on the statistics that 'support' it, you should prod the data to see if it's of any value. Typical problems include:

  • there's no reference to where details of the study/survey can be found (and therefore it might have been made up)
  • the reference simply leads to another article that references the study/survey but the actual study/survey details cannot be found (and, again, might have been made up)
  • the sample size of the study/survey was too small to be of any value
  • the subjects of the study/survey were not a truly representative sample of the wider population
  • there was no control group to compare results with
  • the study was conducted by a person or organisation with vested interests in a particular result

We see numbers used this way all the time in business and politics:

  • "80% of women found their skin improved" where the sample size was 20, which is not particularly representative of the 3+ billion female inhabitants of this planet (see the sketch after this list for just how little a sample of 20 can tell you)
  • "crime has increased/decreased x%" where data is taken from different reports measuring totally different aspects of crime
  • "90% of respondents said they used Twitter to learn" where the results came from an online survey found on a Twitter enthusiast site
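To put a number on that first example, here's a minimal Python sketch (my own illustration, not part of the original claim) that computes a 95% Wilson score confidence interval for 16 'improved' out of a sample of 20, i.e. the advertised 80%:

    import math

    def wilson_interval(successes, n, z=1.96):
        """95% Wilson score confidence interval for a proportion."""
        p = successes / n
        denom = 1 + z**2 / n
        centre = (p + z**2 / (2 * n)) / denom
        half = (z / denom) * math.sqrt(p * (1 - p) / n + z**2 / (4 * n**2))
        return centre - half, centre + half

    # "80% of women found their skin improved": 16 out of a sample of 20
    low, high = wilson_interval(16, 20)
    print(f"headline: 80%, plausible range: {low:.0%} to {high:.0%}")
    # prints roughly: headline: 80%, plausible range: 58% to 92%

In other words, with only 20 participants the honest headline would be "somewhere between 58% and 92% of women found their skin improved", which sells rather fewer skin creams.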

Using evidence that falls foul of these problems without acknowledging it can mean one of two things: sloppiness or deceitful intent. In either case, it's a mark against the author's ideas.

Are there any criticisms of the author's idea available, and do they seem valid?

Popular ideas always attract some form of criticism. These can range from the wild ramblings of ill-informed bloggers through to devastating point-by-point refutations of the author's key points by highly respected individuals. Even if you like an idea and it seems well considered, it always pays to investigate whether anyone else has found problems with it. The chances are that if you're interested in the author and their idea, someone else has been too, and they've probably done some research you can make use of. The skill here is to be able to sort the useful critiques from the ill-informed stuff, and to do this I tend to ask myself... yep, the same questions.

What else?

I'm sure there are loads of other ways people sort through the myriad of new concepts that come our way. If you've got any methods that you find particularly effective, I'd love to hear them.


[1] This statement is made with ironic intent.