Are You Too Nice to Train?

By Sarah Boehle – Training Magazine August 2006

A little evaluation can be a dangerous thing. Just ask Neil Rackham. Years ago, the best-selling author of SPIN Selling and Major Account Sales Strategy was asked by a European technology company to examine the way it evaluated trainer performance. “They were using Level I methods, which consisted of giving students a questionnaire at the end of each program that basically asked, ‘How do you feel about the trainer?’ and ‘Do you think he or she was effective?’”

Two trainers in particular consistently received poor ratings. As one might expect, management began to wonder aloud about those trainers’ futures. “One of the trainers had applied for a management position,” Rackham recalls, “and managers were wondering whether they should even consider him for a promotion if his evaluations didn’t seem to be any good, and whether consistently high evaluation scores from students should be a qualification for moving to the next level.”

Rackham decided to dig deeper. The results of his research were startling, to say the least. It turned out that the two most abysmally rated trainers in the company were actually the best in their quartiles, and often the best on staff, when it came to learning gains for their students. “In the end,” Rackham says, “Level I smile sheets had given management the exact wrong impression.”

If you think Rackham’s story is an anomaly in the training biz, consider the case of Century 21 Real Estate. When Roger Chevalier joined the organization as vice president of performance in 1995, the company trained approximately 20,000 new agents annually using more than 100 trainers in various U.S. locations. At the time, the real estate giant’s only methods of evaluating this training’s effectiveness (and trainer performance, for that matter) were Level I smile sheets and Level II pre- and post-tests. When Chevalier assumed his role with the company, he was informed that a number of instructors were suspect based on Level I and II student feedback.

Chevalier set out to change the system. His team tracked graduates of each course by the number of listings, sales, and commissions they generated post-training (Level IV). These numbers were then cross-referenced with the office where the agents worked and the instructor who delivered their training. What did he find? A Century 21 trainer with some of the lowest Level I scores was responsible for the highest performance outcomes post-training, as measured by his graduates’ productivity. That trainer, rated in the bottom third of all trainers on Level I satisfaction evaluations, proved to be one of the most effective as measured by how his students performed during their first three months in the field.

“There turned out to be very little correlation between Level I evaluations and how well people actually did when they reached the field,” says Chevalier, now an independent performance consultant in California. “The problem is not with doing Level I and II evaluations; the problem is that too many organizations make decisions without the benefit of Level III and IV results.”

Just how common is it for Level I results to give management the wrong impression? According to the research, very.

*** Here is the rest of this as a PDF ***

Some videos featuring the four people mentioned in this article:

Neil Rackham: 2020, 1981

Roger Chevalier: 2008, 2011, 2012

Will Thalheimer: 2018, 2022

Richard E. Clark: 2012, 2019, 2020, 2020, 2021, 2023

###
