The Learning Impact Measurement Framework

Kirkpatrick’s four levels of measurement are practically an institution in learning and development industry metrics. Josh Bersin, principal and founder of Bersin & Associates, has compiled several years’ worth of research into his Learning Impact Measurement Framework, which offers CLOs new ways to measure learning and evaluate ROI.

Bersin said most training executives view measurement through the lens of Kirkpatrick’s four-level model, which covers satisfaction, learning, job impact and business impact. Yet he also said the model is problematic: about 92 percent of the companies Bersin & Associates surveys say measurement is the No. 1 problem they face.

“Many companies extend Kirkpatrick and try to measure the ROI,” Bersin said. “Training executives struggle to use that model. They try to essentially traverse up the levels. Measuring the first two levels — satisfaction and learning — is usually fairly easy. But almost everyone struggles with measuring job impact and business value.”

While Bersin and his team were gathering information for this new measurement framework, they spoke with companies that were very happy with their measurement programs. These organizations were not using the Kirkpatrick model. Instead, they were using measurement to understand how well they were aligning learning to their business needs, and using metrics to track more tactical, yet still important, items such as program adoption, efficiency and program reach.

Bersin said the Kirkpatrick model, as nice and easy to understand as it is, actually constrains learning and development executives, forcing them to think only in terms of the four levels.

“There are several problems with the Kirkpatrick model,” Bersin said. “First, it assumes that these four levels are essentially related to each other and are hierarchical. They’re not. In many cases, you don’t want to measure learning — it doesn’t really matter. Sometimes you’re just measuring compliance. In some cases, satisfaction is a tremendously valuable measure. To denigrate it and say, ‘It’s only Level 1 — we need to move beyond that’ is actually a mistake because sometimes companies can gain much by focusing on different kinds of satisfaction measures.

“Second, the Kirkpatrick model is built around the concept that training is a teaching tool. Therefore, in training you’re trying to measure the impact on an individual student. In reality, training in corporate America is a support function like IT or HR. Training doesn’t generate any revenue or any profit — it supports other lines of business that generate revenue and profit. If you want to think about how to measure training, you need to think about it in terms of how well it supports business initiatives that have financial goals.”

The Kirkpatrick model does force training executives to think about line-of-business problems, Bersin said, but not necessarily in a way that is practical or credible.

“The existing model misses a lot of things. It has no concept of alignment. It has no concept of financial efficiency, delivering programs on time or meeting customer needs, the very things that companies have told us are very, very important,” Bersin said. “Companies with successful measurement programs view measurement as a process, not as a project. Rather than saying ‘Let’s figure out how to measure the impact of this new sales training program,’ which is a project, they say, ‘Let’s figure out what we can measure across all of our sales training to determine how well we’re aligned with the goals of the sales organization.’ Rather than trying to measure the number of sales generated by a training program, which is virtually impossible, these organizations rely on a variety of different indicators that are more readily available such as sales revenue trends.

“For instance, if the end goal of the sales training program is to increase sales, then the job of training professionals is to identify the specific learning objectives and/or behaviors that would potentially impact sales performance and then determine the practical measurements that point to success. Companies that take this approach never get questioned because their learning programs are completely aligned with their business units. The consulting process and agreed-upon metrics ensure that the training programs address specific, business-related issues.”

CLOs often are asked to calculate ROI, which has evolved from the Kirkpatrick model into what’s commonly known in the learning community as Level 5. Bersin said many companies try to measure ROI on a project basis and might even get positive results, but these measures rarely generate actionable information.
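For readers unfamiliar with what a Level 5 calculation actually produces, the arithmetic itself is simple; the difficulty Bersin describes lies in attributing the benefit, not in the formula. A minimal sketch, with all dollar figures invented for illustration:

```python
def roi_percent(benefit: float, cost: float) -> float:
    """Classic training ROI: net benefit as a percentage of program cost."""
    return (benefit - cost) / cost * 100

# Hypothetical example: a $2M program credited with $2.5M in measured benefit.
print(roi_percent(2_500_000, 2_000_000))  # 25.0
```

As the article notes, a number like 25 percent answers little on its own: whether it is high or low, and which program elements drove it, are exactly the questions the raw figure leaves open.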

“American Express has done extensive ROI analysis of its leadership development program. Measurements have shown that with a multimillion-dollar investment in leadership development, the company has improved the workforce efficiency of its sales call center managers by 22 percent. The company even computed the financial value of this improvement, and that’s wonderful,” Bersin said. “But what does that tell you? Is that a high ROI or a low ROI? Maybe the ROI should have been 200 percent. What are the elements of the program that would have made it higher? The bottom line is this data is interesting, but it’s not actionable.

“Companies that generate actionable ROI-type measurements actually do the ROI analysis before they develop the programs. During the performance-consulting process, they go to their business counterparts or their customers, they identify the root cause of the problem, quantify the problems and then use that information upfront to help justify the training program and better understand the potential benefit.”

The Learning Impact Measurement Framework has nine areas of concentration: adoption, utility, efficiency, alignment, attainment, satisfaction, learning, individual performance and organizational performance.

“In a way, these areas are all relatively easy to measure, but it’s a different way of thinking about the problem — we’re getting beyond measuring individual programs,” Bersin said. “The framework is really a thinking tool. You need to think about metrics throughout the life cycle of a learning program and focus on those metrics that are meaningful to the business and are already being captured. Let the business units tell you what metrics are important to them and incorporate these into your measurement programs.”

The Bersin & Associates study, “High-Impact Learning Measurement: Best Practices, Models, and Business-Driven Solutions for the Measurement and Evaluation of Corporate Training,” is available now.