I hate ROI. There, I said it. Every learning professional reading this article can send me irate emails. I’m fine with that. I’m trying to get a rise out of you and see if you share my frustration.
I have been chasing ROI for the past 25 years. The most frustrating part has been that I have historically lost the race, and I am fed up. What can I do to finally associate employees attending or consuming one of my training programs — be it instructor-led or virtual — with their ability to truly produce business results?
I’ve come to realize the relationship between employees participating in training and their ability to apply what was learned is a loose association at best. I’m not saying training does not contribute to a learner’s on-the-job performance, but when it comes to measuring impact at Kirkpatrick level 4 or Phillips level 5, we all know “contributing” is not a strong word. Training does not guarantee doing. If we want a true ROI measure, what matters is proximity to performance.
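To be clear about what that level 5 measure asks of us: Phillips expresses ROI as net program benefits divided by program costs, times 100, with the benefits converted to money. With purely illustrative figures, a program that costs $50,000 and is credited with $65,000 in monetized benefits returns ($65,000 − $50,000) ÷ $50,000 × 100 = 30 percent. The arithmetic is the easy part; the hard part is the word “credited,” and the further training sits from the actual work, the shakier that attribution becomes.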
Proximity to performance is, at its simplest, a basic cause-and-effect relationship: the closer and more direct a cause is to an effect, the higher the probability that it actually produced that effect. Training, especially instructor-led training, is by definition not in close proximity to performing on the job. It does a wonderful job of building knowledge and providing practice, but it is rarely integrated directly into the workflow or designed in a way that supports performance. We need to look at this situation differently, and that begins with aligning the means with the ends.
First, let’s be fair to training. In the classic four-level Kirkpatrick model, training is best assessed at levels one and two. It is fair to ask learners about the overall experience. Did they enjoy it? Did they find it helpful? Was the room comfortable and the instructor effective? These perceptions are reasonable measures and contributing factors to the program’s overall success, but there is more to measuring ROI.
The next fair metric for training is knowledge gain. We can pre- and post-assess learners to see whether new knowledge was acquired during the experience. Both of these measures fall comfortably within levels one and two, and given the proximity and intent of training, they are fair metrics for gauging its effectiveness.
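If it helps to picture what that level-two roll-up can look like, below is a minimal sketch. The scores are invented and the helper function is mine, not any assessment platform’s API; it simply pairs each learner’s pre- and post-test percentages and reports the raw gain alongside the normalized gain (the share of the possible improvement actually achieved).

```python
# Minimal sketch: rolling up pre/post assessment scores into a knowledge-gain metric.
# Scores and helper are illustrative only, not tied to any particular LMS or tool.

def average_gain(pre_scores, post_scores):
    """Return the average raw gain and the average normalized gain.

    pre_scores / post_scores: percent-correct values (0-100), paired by learner.
    Normalized gain = (post - pre) / (100 - pre), i.e. the share of the
    available improvement the learner actually achieved.
    """
    raw_gains, norm_gains = [], []
    for pre, post in zip(pre_scores, post_scores):
        raw_gains.append(post - pre)
        if pre < 100:  # skip the divide-by-zero case of a perfect pre-test
            norm_gains.append((post - pre) / (100 - pre))
    avg_raw = sum(raw_gains) / len(raw_gains)
    avg_norm = sum(norm_gains) / len(norm_gains) if norm_gains else 0.0
    return avg_raw, avg_norm

# Illustrative numbers only: five learners' pre- and post-test percentages.
pre = [40, 55, 62, 70, 35]
post = [72, 80, 78, 88, 60]
raw, norm = average_gain(pre, post)
print(f"Average raw gain: {raw:.1f} points; average normalized gain: {norm:.2f}")
```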
But this is where things begin to get tough. Once a learner leaves training and is thrust back into the workflow, with all of its distractions and other variables, it becomes much harder to assume that training alone is responsible for the transfer and application of what was learned. To measure Kirkpatrick levels three and four and Phillips level five, we need learning and support tools that live in that environment.
Performance support is an ideal discipline for measuring these higher levels. By design, it is not intended to teach. Its purpose is to intentionally and systematically support a learner in applying knowledge at the moment he or she faces a performance problem.
For example, if someone accesses a support tool while attempting to close an account in a customer relationship management system, our ability to directly associate use of that tool with the learner’s ability to complete the task is very high. We can also measure other variables that affect business effectiveness, such as a learner’s ability to self-support through fewer support calls, less reliance on peers and mentors, and the ability to complete tasks more quickly and independently.
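To make that concrete, here is a rough sketch of the kind of roll-up such workflow data allows. The record structure, field names, and figures are all hypothetical; the point is that because the support tool sits inside the task, the comparison between tasks done with and without it is direct.

```python
# Rough sketch, assuming support-tool access logs can be joined to task outcomes.
# Every field and figure here is hypothetical.

from dataclasses import dataclass

@dataclass
class TaskRecord:
    used_support: bool          # did the learner open the support tool during the task?
    completed: bool             # was the task (e.g., closing an account in the CRM) finished?
    minutes_to_complete: float  # time on task
    escalated_to_helpdesk: bool # did the learner still need a support call?

def summarize(records):
    """Compare completion rate, time on task, and help-desk escalations
    for tasks done with vs. without the support tool."""
    groups = {True: [], False: []}
    for r in records:
        groups[r.used_support].append(r)
    summary = {}
    for used, group in groups.items():
        if not group:
            continue
        label = "with support" if used else "without support"
        summary[label] = {
            "completion rate": sum(r.completed for r in group) / len(group),
            "avg minutes": sum(r.minutes_to_complete for r in group) / len(group),
            "escalation rate": sum(r.escalated_to_helpdesk for r in group) / len(group),
        }
    return summary

# Illustrative records only.
records = [
    TaskRecord(True, True, 12.0, False),
    TaskRecord(True, True, 15.5, False),
    TaskRecord(False, True, 22.0, True),
    TaskRecord(False, False, 30.0, True),
]
print(summarize(records))
```

Deltas like these (completion rate, time on task, help-desk escalations) are the figures that can be monetized and fed into the ROI calculation described earlier.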
In fairness to our overall learning programs, we have to become better at associating learning assets with the appropriate outcomes. When we do, we can measure our overall effectiveness and impact on the business in a much more powerful way. We need to extend our reach and impact into the workflow, where true performance is measured. Only then can we truly assess the business impact for which we’re all held accountable.