A perennial challenge for learning and development is proving that it's a benefit to the company and not just a necessary cost. What L&D leaders want to demonstrate is that their programs, courses and other interventions improve the quality and quantity of the workforce's output, and that this improvement moves the company's bottom-line metrics.
We can do this by focusing on the higher-level kinds of evaluations: Kirkpatrick model levels three and four, or Thalheimer's LTEM model levels seven and eight. However, while over 66 percent of senior learning leaders want to perform high-level evaluation, only 27 percent are actually doing it. That's because high-level objective data has been hard to get.
How do you get to every employee and measure their improvement? And who has time for that, if you’ve already had hundreds of participants go through the course?
The fact that it doesn’t get done doesn’t mean it shouldn’t get done. We just need a way to make it possible and practical to do.
What if we started with a measurement of the individual employee? What value does that employee bring to the organization? There was a reason they got hired. Are they meeting the expected need? And how do we know? The annual appraisal, or sometimes a more frequent quarterly appraisal, has been the typical mechanism for looking at the employee’s value. But rarely is that based on actual data. Most of the time, it is based on the subjective impression of the manager.
If we could get an objective measurement of the actual value the employee is bringing to the company and measure that over time to gauge improvement, then we’d be onto something useful to L&D, because you could compare the value produced before the learning event to after.
But, again, how do you get that on every employee who goes through a course? What if you set up a system, a culture, in which employees keep their own stats, and are motivated to do so because it benefits them?
Think about baseball: Stats are everywhere. Batting average is one. The number of RBIs is another. The various positions have their own stats. The player — and everyone else looking at the stats — can see how well the player is doing. And the better he’s doing, the better it is for the whole team, the owners and the fans (the customers).
Is it possible to come up with something like a basic batting average for every employee in their individual role, even the soft-skill jobs? It's not only possible; in a work environment built on nebulous kinds of services, it's essential.
This can't be a metric you choose just because it's easy to count. It has to be relevant to the bottom line, the way a batting average is relevant to the money-making metric of a full stadium. So let's start with what matters to the higher-ups.
Relevant business results
You need a way to identify and track the metrics that matter to the executives, because you’ll need to know whether the employees you’re training are affecting those metrics. You’re looking for a way your learners and their managers can answer the question: “How do I see the value that I’ve created?” The answer they get is the set of relevant business results. From the individual employee’s point of view, these are the bottom-line metrics connected to the organization’s core mission that your efforts influence.
Here’s what makes these metrics valuable to you, the learning leader:
- The manager and participant have defined what they’ll both be watching and working to improve — the very things that will prove the success of implemented learning.
- Relevant business results generally produce trend data. You’ll easily be able to chart and see the improvement achieved.
- The participant influences the relevant business results. That gives the participant motivation to track the data for use in employee evaluations.
But here's an issue to consider: even though this metric is a good KPI for learning, it's a lagging indicator, and it can be difficult to see an individual participant's true impact on it. So you also need a leading indicator, the equivalent of the individual's batting average.
Value-added outputs
As you look for the value each employee brings, the manager and the employee will work to answer the question, “What does the value you produce look like?”
They should be able to come up with one to three value-added outputs for that employee. From the individual employee’s point of view, a value-added output is something you produce — a service, physical product, event, document or other countable output — that is proven to influence relevant business results.
A solid value-added output includes a countable noun and clarifying criteria. Write the criteria so that only the top performers can produce the output consistently, say about 80 percent of the time; for example, "a proposal the client accepts without major revisions." Because these value-added outputs allow for failure, employees can better see their improvement.
Here’s what can happen when these value-added outputs are defined:
- Each participant creates baseline data before training and continues to collect data afterward, so they can see their improvement rather than guess at it.
- Because it’s now easy to see which managers develop great employees, managers are more engaged in supporting L&D programs overall.
- After training has been implemented, managers and employees know the value your L&D programs provide, and they — and their leaders, the executives — provide more support for the learning organization.
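To make the measurement concrete, here is a minimal sketch of what "baseline before, trend after" looks like in practice. Everything in it — the dates, the records, the training date — is invented for illustration; it simply computes the share of attempted value-added outputs that met the clarifying criteria before and after a learning event.

```python
# Hypothetical sketch: comparing a participant's value-added output
# success rate before and after a training date. All data is invented
# for illustration, not drawn from any real tracking system.
from datetime import date

# Each record: (date the output was attempted, whether it met the criteria)
records = [
    (date(2023, 1, 10), True),  (date(2023, 1, 24), False),
    (date(2023, 2, 7),  True),  (date(2023, 2, 21), False),
    (date(2023, 3, 14), True),  (date(2023, 3, 28), True),
    (date(2023, 4, 11), True),  (date(2023, 4, 25), True),
]

training_date = date(2023, 3, 1)  # the learning event

def success_rate(rows):
    """Share of attempted outputs that met the clarifying criteria."""
    return sum(ok for _, ok in rows) / len(rows)

baseline = success_rate([r for r in records if r[0] < training_date])
post = success_rate([r for r in records if r[0] >= training_date])

print(f"Baseline: {baseline:.0%}, after training: {post:.0%}")
```

The point isn't the code; it's that once employees log each output attempt, the before/after comparison (and a running trend) falls out automatically, with no questionnaires required.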
Your learners and their managers now identify and track the participant’s relevant business results and their value-added outputs — both lagging and leading indicators.
So the next question is: How do you maintain a culture in which tracking these relevant business results and value-added outputs continues? And, additionally, how do you get these results to support your learning KPIs?
Built-in reporting system
As an example, say you have two excellent L&D programs. One gets recognized for its excellence and the other one gets shelved and forgotten. What’s the difference? The recognition. I don’t mean an award program. I mean recognizing the value of the program. You have data to show the value being produced for the company. In that data is evidence the L&D programs you offered are paying off by bringing more value to the company.
Data doesn’t do any good if it isn’t seen, so create a culture in which the employee benefits from tracking their own stats and showing them frequently to their manager. It’s not bragging, it’s data. It’s objective. And it’s continual. You get the employees and their managers to see the value of being able to show evidence for the company’s faith in them.
Here’s what that can mean for you:
- The manager and learner will review trends and work to improve them. The manager may even find time to coach, showing interest in the employee's improvement. When the value an employee provides needs to improve, that's an indicator that a learning intervention might be needed (an automatic form of needs analysis).
- You can get looped into data reports that show your program's influence. You'll expend no effort creating questionnaires, hosting focus groups or running other activities to try to prove your program's success. Setting up the reporting system does the work for you.
- You’ll not only have general trends to share with leadership, but individual stories of success from your training program.
An example: The front-line engineering manager
Let’s take a look at how this approach works in an especially high-EQ, hard-to-measure job.
I was working with a front-line engineering manager who told me he was reluctant to increase the amount of training for his employees because, he said, "They always leave within two years. It doesn't seem worth it." He also mentioned, because of his frustrations, that he might be looking for a new role in the organization.
I could see that his role in the organization was a concern to him. I could also see that his employee turnover affected what his group could produce (a relevant business result). I asked him questions that revealed he was the entry point for new engineers, and discovered that his employees often left for promotions into other groups within the company.
So I told him what I thought his value-added output was: a promotable employee, and he was producing plenty of them. He shared that perspective with his manager, and they agreed that his contribution was of real value to the company. He felt valued, and he ended up staying in that role for three more years and became a training advocate. Ultimately, he was able to be much more precise with future analyses and L&D programs.
Putting these tools to work
For years, we’ve insisted that we need to plan evaluation during the analysis phase of course development. With these three tools, we can not only plan evaluation, but we can set up the evaluation and work to identify and collect baseline metrics that matter to the managers and employees we work with. We can get feedback that is far superior to what we are getting today.
When you put these tools to work, you’ll also get more benefits:
- Those who have worked with you before may already know their relevant business results and value-added outputs, meaning you have less work to do and, possibly, a lot of information about the true needs of the organization.
- Your KPIs will be more closely tied to the value employees produce for the organization than the KPIs other leaders provide.
- These metrics have the bonus of utility in hiring, promoting, appraisals, competencies and everything related to value to the company.
These metrics give you objective data showing the value you’re bringing to the company. But metrics aren’t just numbers; they’re the stories you can tell about the value you bring to your organization.