SEVERAL AUTHORS HAVE identified Tom Gilbert as
the father of human performance technology. We respectfully
disagree—and think he would have too. Gilbert was
a brilliant and creative professional who made major contributions
to the field, and it seems safe to say that HPT
would not have progressed as far as it has without him.
But he was not alone. Those of us who have been in the
field since the early 1960s can identify a number of other
contributors who were significant and even crucial to the
early development and expansion of the technology.
The early days of HPT were an exciting time. We had a
powerful technology that many of us believed could
change the world, but it was a new technology and limited
in its scope and applications at that stage. Conferences
and meetings were characterized by lengthy discussions
(sometimes heated, and often extending into the hallways,
coffee shops, and bars) about the technology itself, what
the field really was, and its potential applications.
If we were forced to point to a single initiator of HPT,
it would probably be B. F. Skinner, with the publication of
The Science of Learning and the Art of Teaching in 1954.
But the development of the basic principles that Skinner
articulated into a far-reaching technology was the work
of many. A number of those people (including Gilbert,
Ogden Lindsley, and Dale Brethower) were graduate students
under Skinner; others were attracted by the potential
power of the principles in Skinner’s writings.
I’m in Toronto this week, at the Canadian Society for Training and Development’s conference. (On Thursday I’m giving a session: Using Job Aids: How, When, Why.)
I’ve been wanting for some time to rethink how I present examples of job aids, and after some experimentation at Whiteboard Labs, I’m launching Dave’s Ensampler.
“Ensample” is an archaic word with the same root as “example.” A long time ago, I saw a collection of organizing diagrams that Sivasailam Thiagarajan made, giving them the title An Ensampler of Hierarchical Information.
The job aids at the Ensampler have more consistent tagging, and I have a page that automatically displays the titles by category. This is new, and it’s a work in progress. For example, I’m trying out a way to have a tab in the menu here at the Whiteboard link directly to the Ensampler. If that works (or works well enough), I’ll put a similar tab up at the Ensampler to teleport back here.
I recently found myself caught up in a podcast about predictions and how to become better at making them. Part of my fascination was simply that I enjoyed the content of the show. Part of it was that, to my surprise, I heard some great tips and techniques that spoke directly to my work in the world of human performance technology (HPT).
If you work as a training consultant, an instructional designer, an evaluation or measurement expert, or any professional interested in improving your evidence-based practices, I think—the probability is greater than two-in-three (ahem)—that you’ll find value in the podcast.
Here are my top-ten takeaways from the show:
1 – We should hold experts and pundits more accountable for the accuracy of their predictions.
“[Pundits] are notoriously bad at forecasting, in part because they aren’t punished for bad predictions. Also, they tend to be deeply unscientific.” (Dubner)
“I think in your guys’ profession [sports reporting, punditry], you can easily take back what you say… there’s no danger when somebody says it. Y’know, if there was a pay cut or if there was an incentive, if picking teams each and every week, you may get a raise, I guarantee people would be watching what they say then.” —Cam Newton (football player) on the lack of accountability for sports reporters on their predictions
“When you don’t have skin in the game, and you aren’t held accountable for your predictions, you can say pretty much whatever you want.” (Dubner)
2 – For far too long, very smart people have been content to have little accountability for accuracy in forecasting.
“A lot of the experts that we encounter, in the media and elsewhere, aren’t very good at making forecasts. Not much better, in fact, than a monkey with a dart board.” (Dubner)
3 – One of the distinguishing characteristics of bad, overconfident forecasters is dogmatism.
A bad forecaster tends to have an unwillingness to change his/her mind in a reasonably timely way in response to new evidence. “They have a tendency, when asked to explain their predictions, to generate only reasons that favor their preferred prediction and not to generate reasons opposed to it.” (Tetlock)
We are predisposed toward interpreting data in a way that confirms our bias or our priors or the decision we want to make. –Stephen Dubner
4 – Forecasting is everywhere. We do it, and rely on it, far more than we realize. And yet we rarely measure the accuracy of our forecasts.
“People often don’t recognize how pervasive forecasting is in their lives — that they’re doing forecasting every time they make a decision about whether to take a job or whom to marry or whether to take a mortgage or move to another city. We make those decisions based on implicit or explicit expectations about how the future will unfold. We spend a lot of money on these forecasts. We base important decisions on these forecasts. And we very rarely think about measuring the accuracy of the forecasts.” (Tetlock)
People often don’t recognize how pervasive forecasting is in their lives. And yet we very rarely think about measuring the accuracy of the forecasts. –Philip Tetlock
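Tetlock’s point about measuring accuracy has a standard quantitative form: the Brier score, which his forecasting tournaments use to grade probabilistic predictions. Here is a minimal sketch; the forecasts and outcomes in it are invented purely for illustration.

```python
# Brier score: the mean squared error between probabilistic forecasts
# and the outcomes that actually occurred (0 is perfect; lower is better).
# The example forecasts and outcomes below are invented for illustration.

def brier_score(forecasts, outcomes):
    """Each forecast is a probability in [0, 1]; each outcome is 0 or 1."""
    if len(forecasts) != len(outcomes):
        raise ValueError("forecasts and outcomes must have equal length")
    return sum((f - o) ** 2 for f, o in zip(forecasts, outcomes)) / len(forecasts)

# A confident, well-calibrated forecaster...
good = brier_score([0.9, 0.8, 0.1], [1, 1, 0])
# ...versus someone who hedges everything at 50/50.
hedger = brier_score([0.5, 0.5, 0.5], [1, 1, 0])
print(good, hedger)  # 0.02 vs 0.25
```

Scoring forecasts this way is exactly what punishes vague verbiage: a pundit who never commits to a number can never be scored at all.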
5 – One of the great historical examples of bad forecasting with dire consequences was the Bay of Pigs Invasion (1961).
“The Kennedy administration asked the Joint Chiefs of Staff to do an independent review of the plan and offer an assessment of how likely this plan was to succeed. And I believe the vague-verbiage phrase that the Joint Chiefs analysts used was they thought there was a ‘fair chance of success.’ It was later discovered that by ‘fair chance of success’ they meant about one-in-three. But the Kennedy administration did not interpret ‘fair chance’ as being one-in-three. They thought it was considerably higher. So, it’s an interesting question of whether they would have been willing to support that invasion if they thought the probability were as low as one-in-three.” (Tetlock)
“We are predisposed toward interpreting data in a way that confirms our bias or our priors or the decision we want to make. So, if I am inclined toward action and I see the words ‘fair chance of success,’ even if attached to that is the probability of 33 percent, I might still interpret it as a move to go forward.” (Dubner)
Foresight isn’t a mysterious gift bestowed at birth. It is the product of particular ways of thinking, of gathering information, of updating beliefs. These habits of thought can be learned and cultivated by any intelligent, thoughtful, determined person. –Philip Tetlock
6 – Beware the vague-verbiage forecast.
Forecasts that contain fuzzy words (e.g., “a fair chance of success”) can be misleading and used mischievously.
“In a vague-verbiage forecast it is very easy to hear what we want to hear. There’s less room for distortion if you say ‘one-in-three’ or ‘two-in-three’ chance. There’s a big difference between a one-in-three chance of success and a two-in-three chance of success.” (Tetlock)
7 – Super-forecasters tend to have the following characteristics:
Do not believe in fate, but do believe in chance
Are humble about their judgements
Are good with numbers, but don’t necessarily know deep math
Take the outside view rather than the inside view
Super-forecasters tend to be open-minded, curious, and humble about their judgements. They also understand probability and tend to believe in chance but not fate.
8 – Practical Recommendations for Aspiring Super-forecasters:
Focus on questions where your hard work is likely to pay off.
Break seemingly intractable problems into tractable sub-problems.
Strike a balance between under- and over-reacting to the evidence.
Look for the errors behind your mistakes, but beware of rear-view-mirror hindsight biases.
Bring out the best in others and let others bring out the best in you.
9 – Super-forecasting is a set of skills that can be acquired and improved upon with practice.
“Just as you can’t learn to ride a bicycle by reading a physics textbook, you can’t become a super-forecaster by reading training manuals. Learning requires doing, with good feedback that leaves no ambiguity about whether you are succeeding or failing.” (Dubner)
“Forecasters believe that probability estimation of messy real-world events is a skill that can be cultivated and is worth cultivating. And hence they dedicate real effort to it. But if you shrug your shoulders and say, ‘Look, there’s no way we can make predictions about unique historical events,’ you’re never going to try.” (Tetlock)
Forecasters believe that probability estimation of messy real-world events is a skill that can be cultivated and is worth cultivating. And hence they dedicate real effort to it. –Philip Tetlock
10 – If, as a culture, we placed greater value on the accuracy of our predictions, we would improve the quality of public debate.
“If partisans in debates felt that they were participating in [events] in which their accuracy could be compared against that of their competitors, we would quite quickly observe the depolarization of many polarized political debates. People would become more circumspect, more thoughtful and I think that would on balance be a better thing for our society and for the world.” (Tetlock)
Here’s another link to the original podcast. Enjoy!
The HPT Video Weekend Matinee series is intended to introduce you to the library of more than 100 videos, in the hope that you’ll share them further into your professional networks as you see appropriate. And if you have videos to share with us, please forward them to the site administrators.
A Conversation on LinkedIn about the Roots of HPT Sparked This Post
November 1, 2003
I want to comment on Tony O’Driscoll’s ambitious, but risky, undertaking in the July issue of Performance Improvement – that is, to chronicle the emergence of Human Performance Technology. The effort is ambitious in trying to provide an accurate synthesis of a very complex and diverse field of endeavor in a few pages. It is risky because so much of the critical history of the field of HPT is buried deep in the relatively unpublished activities of the 1960s. Dale Brethower has suggested (and I agree) that almost all of the “discovery” that is the foundation of what has become HPT was done in the period 1958-69.
And from 1970 to the present, the rest of the world has been learning and applying the important notions developed in the ’60s, as they were slowly made public through various publications, presentations, and workshops. These two distinct phases in the history of HPT – first, the discovery of the basic principles of HPT by the original thinkers/innovators and, second, the “discovery” of the power and application of the principles by the rest of the world – emphasize that the “history” of an idea/invention does not begin with its public acceptance.
***** ***** ***** ***** *****
For the rest of Geary’s 2003 letter to the editor – which he had shared with me before it was published, as I was then the President of ISPI when he was prompted to write it – please go to this PDF: Perils – Rummler and his history of HPT.