Identifying Tangible Business Results

Linking learning to ROI and tangible business results is a key priority. By knowing their stakeholders, CLOs can make the connection in a way that not only shows value, but also justifies the learning budget.

The need to link the strategy, impact and return on investment of learning to tangible business results is a key priority for learning executives. CLOs hear this continually from the C-level executives who fund their initiatives, so it’s no surprise that it has percolated to the top of the priority list.

In July 2003, analyst firm IDC predicted that providing measures of learning’s impact on business performance would become a standard offering included in vendors’ training tools. This prediction is now very much a reality. Sophisticated stakeholders of learning—whether it’s e-learning, coaching, in-house custom-designed training or traditional instructor-led—demand metrics that show how education impacts tangible business results.

For example, when PeopleSoft Education (since acquired by Oracle) saw a need in the marketplace to link its learning to tangible business results, it asked several questions: How does the training impact cycle time, productivity and quality? Does the learning result in fewer calls to the call center, less customization or lower implementation costs?

PeopleSoft adapted the ROI process from Jack Phillips to measure the impact of learning on customers’ business results, leveraging a learning analytics technology to collect, store, process and report on this data in an automated manner. As a result, PeopleSoft announced in March 2004 that its research had shown productivity increases of 20 percent as a result of training. More than 13,000 evaluations were conducted both at the end of the learning program and 60 days later. Other examples of impact included a 24 percent improvement in cycle time and a 22 percent improvement in quality.
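For readers unfamiliar with the Phillips methodology, the core calculation compares monetized program benefits with fully loaded program costs. The following Python sketch shows the standard ROI and benefit-cost ratio formulas with purely illustrative numbers; it is not based on PeopleSoft’s actual data.

```python
# Sketch of the ROI calculation popularized by Jack Phillips:
#   ROI (%) = (net program benefits / program costs) x 100
# The figures below are placeholders for illustration only.

def benefit_cost_ratio(benefits: float, costs: float) -> float:
    """Benefit-cost ratio (BCR), often reported alongside ROI."""
    return benefits / costs

def roi_percent(benefits: float, costs: float) -> float:
    """ROI expressed as a percentage of fully loaded program costs."""
    return (benefits - costs) / costs * 100

if __name__ == "__main__":
    benefits = 250_000.0  # hypothetical monetized gains (productivity, quality, cycle time)
    costs = 100_000.0     # hypothetical fully loaded training costs
    print(f"BCR: {benefit_cost_ratio(benefits, costs):.2f}")  # 2.50
    print(f"ROI: {roi_percent(benefits, costs):.0f}%")        # 150%
```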

Through the use of learning analytics, many learning organizations have worked to demonstrate the impact of their programs on the success of their organizations’ mission and strategy. Learning analytics is a powerful tool for CLOs who want to determine and prove the tangible business impact of corporate education.

Defining Learning Analytics
What is learning analytics, and how does linking learning to tangible business results fit into this model?

Learning analytics has been defined as a technology that helps organizations understand how to better train and develop employees, partners and customers. There are four components of a comprehensive learning analytics system: evaluation, testing and assessment, operations and impact. (See Figure 1.)

Evaluation Data
Never underestimate the power of evaluation data. Rather than squander an opportunity to collect data from a participant, expert, manager, instructor or other stakeholder by asking only for typical smile-sheet indicators such as instructor performance, courseware quality or how conducive the environment was to learning, let respondents document their predicted and perceived learning effectiveness, job impact, linkage to significant business results and even a financial return on investment. Evaluation is not always the preferred method of gathering data, but it is often the only realistic method given financial, physical and human resource constraints. Learning executives should therefore take full advantage of the data acquired through evaluations to get reasonable indicators of linkage to business results.
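As a rough illustration of what such an expanded evaluation might capture, the Python sketch below defines a single response record that carries both the traditional smile-sheet items and the predictive impact fields. The field names are hypothetical, not drawn from any particular analytics product.

```python
# A minimal, hypothetical evaluation record that goes beyond smile-sheet items.
from dataclasses import dataclass, field
from typing import List

@dataclass
class EvaluationResponse:
    # Traditional smile-sheet indicators (e.g., on a 1-5 scale)
    instructor_rating: int
    courseware_rating: int
    environment_rating: int
    # Expanded indicators that support linkage to business results
    predicted_learning_effectiveness: int   # respondent's estimate of skill/knowledge gain
    predicted_job_impact: int               # expected improvement in on-the-job performance
    business_results_affected: List[str] = field(default_factory=list)  # e.g., ["cycle time", "quality"]
    estimated_roi_percent: float = 0.0      # respondent's conservative ROI estimate
```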

One example of how organizations can use evaluation data creatively comes from Eaton Corp. Eaton University’s award-winning evaluation technique is based on a completely automated process: the company’s learning management system feeds its learning analytics system nightly via XML FTP feeds, supplying class and respondent details. Evaluations are sent to respondents the next day for electronic data collection, providing real-time business intelligence that helps Eaton’s learning leaders monitor satisfaction, understand knowledge transfer, predict job impact and linkage to business results, and calculate ROI based on human capital performance gains. The process is repeated two months later, when the system automatically sends a follow-up evaluation to the respondents and a manager evaluation to the supervisor. This follow-up “trues up” items such as time-to-job-impact and the business results affected on the job, and recalculates ROI, in an automated and scalable manner.
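A simplified sketch of that kind of automated cycle appears below: nightly class and respondent records arrive from the LMS, an evaluation goes out the next day, and follow-up evaluations are scheduled roughly 60 days later. The record fields and function names are hypothetical, not Eaton’s actual implementation.

```python
# Hypothetical sketch of an automated evaluation cycle driven by a nightly LMS feed.
from datetime import date, timedelta
from typing import Dict, List

FOLLOW_UP_DELAY = timedelta(days=60)

def schedule_evaluations(nightly_records: List[Dict]) -> List[Dict]:
    """Turn nightly feed records into evaluation dispatch tasks."""
    tasks = []
    for record in nightly_records:
        class_end = date.fromisoformat(record["class_end_date"])
        # Post-event evaluation goes out the day after class ends.
        tasks.append({"to": record["respondent_email"],
                      "type": "post-event",
                      "send_on": class_end + timedelta(days=1)})
        # Follow-up evaluations to the learner and the manager ~60 days later.
        tasks.append({"to": record["respondent_email"],
                      "type": "follow-up",
                      "send_on": class_end + FOLLOW_UP_DELAY})
        tasks.append({"to": record["manager_email"],
                      "type": "manager follow-up",
                      "send_on": class_end + FOLLOW_UP_DELAY})
    return tasks
```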

Testing and Assessment Data
The next component of a learning analytics system is testing and assessment data. When a program is driven by safety issues, business risk or regulatory compliance, a test may be reasonable. Testing data should comprise a component of overall analytics, albeit a small component. The notion that a learning organization cannot achieve a link to business results (inherent in Kirkpatrick’s Level 4) without first traversing learning effectiveness (inherent in Kirkpatrick’s Level 2) is misguided, harmful and counterproductive.

For example, a major financial institution collected smile-sheet indicators, but then management requested linkage to tangible business results. The learning organization felt that to get there, it had to test first to achieve Level 2; only then would its data be one step closer to what management really wanted. The learning organization pulled instructors out of the classroom, pulled course designers off new course design and focused on creating and delivering tests for a significant number of classes. Not only did this create frustration within the organization, but the learners also expressed distrust and dissatisfaction over being tested on minor details. In the end, management never understood why its request was not honored: these executives did not know who Kirkpatrick was or why his Level 2 had to be completed before learning and development could tell whether sales training really resulted in increased sales.

Operational Data
The third component of learning analytics, operational data, focuses on answering the question, “How much do you train?” This contrasts with the other components of analytics, which focus on the question, “How well do you train?” Line-of-business leaders monitor their own operations and metrics, and the learning function should do the same, so the learning analytics system should include an operational data component. Operational data includes items such as enrollment rates, cancellation rates, instructor utilization rates, course or location fill rates, and evaluation response rates. Many of these metrics are derived from basic registration data, which is typically housed in the learning management system but is often poorly reported. Many learning organizations struggle to provide this basic data, engaging their IT departments to write numerous queries using tools that are cumbersome and not built for the task.

Caterpillar Inc., which won an award for best evaluation technique in 2004, provides a great example of leveraging learning analytics for this type of analysis. Caterpillar’s learning management system sends the company’s analytics system nightly files containing course and enrollment details. This allows the analytics tool to easily calculate fill rates, response rates, enrollment rates and utilization rates. Further, all of these rates can be filtered by components of the learning operation (by course, location, instructor, delivery method, etc.).
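For illustration, the operational rates mentioned above reduce to simple ratios over course and enrollment records. The Python sketch below shows the arithmetic with made-up numbers; the inputs are assumptions, not Caterpillar’s actual feed format.

```python
# Illustrative operational-rate calculations from basic registration data.

def fill_rate(enrolled: int, capacity: int) -> float:
    """Seats filled as a share of available seats."""
    return enrolled / capacity if capacity else 0.0

def cancellation_rate(cancelled: int, enrolled: int) -> float:
    """Cancellations as a share of enrollments."""
    return cancelled / enrolled if enrolled else 0.0

def response_rate(returned: int, sent: int) -> float:
    """Evaluations returned as a share of evaluations sent."""
    return returned / sent if sent else 0.0

# Example: a 20-seat class with 17 enrollments, 2 cancellations,
# and 12 of 17 evaluations returned.
print(f"fill rate: {fill_rate(17, 20):.0%}")                 # 85%
print(f"cancellation rate: {cancellation_rate(2, 17):.0%}")  # 12%
print(f"response rate: {response_rate(12, 17):.0%}")         # 71%
```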

Impact Data
Impact data is a key component of learning analytics and is at the heart of determining the effects of learning initiatives on tangible business results. A learning analytics tool should allow impact data to be captured in multiple ways. One of these is through evaluations: expanding evaluations to predict and estimate impact, while applying appropriate methodologies, is effective, practical and scalable even for a large learning operation. However, organizations both large and small find that certain stakeholders already gather results-oriented data, which can complement or substitute for evaluation data.

Impact data should be carefully gathered and analyzed in the appropriate context and scope of the learning program and the environment in which the business uses the learning. Ideally, a learning analytics tool does not simply take data from another system, match it up against learning and declare impact. That can be harmful. For example, one article on sales training promoted an ROI beyond 4,000 percent, but the real reason behind this massive ROI was that a major competitor had gone out of business at the same time the sales training was conducted. A learning organization can quickly lose credibility if it does not isolate the impact of training.

A learning analytics tool that lets the learning organization engage stakeholders consultatively, gathering metrics before and after the event, narrowing down the root cause of change and adjusting the data for conservatism, yields more credible business impact and ROI figures. For example, when a bank conducts IT training, an expected macro business result might be increased productivity as defined by the vice president of IT. The micro-definition of this business result is the time needed to complete a specific set of tasks. The bank already has sample time-study results set aside to compare with pre- and post-training data. Because the entire workforce does not go through the program at once, the bank has a naturally occurring control group and can place that data alongside the information from the trained group for additional analysis.

The learning organization can then input the sample results, including pre-training, post-training and control-group data, into the analytics tool. Rather than claim that the entire productivity gain (if there was one) was due to the training, learning executives consultatively walk IT managers through a root-cause analysis in the tool to determine how much of the net change was due to training versus technology, policies and procedures, external factors and so on. Finally, a percentage is applied in the tool to adjust the result downward for conservatism. In the end, what may start out as a 30 percent gain in productivity is reduced to a 7 percent gain credited directly to the learning, adjusted for conservatism.
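The arithmetic behind that reduction is straightforward once stakeholders have supplied their estimates. The sketch below uses illustrative percentages (one-third of the gain attributed to training, with a further discount for conservatism) to show how a 30 percent observed gain becomes roughly a 7 percent gain credited to learning; the actual percentages come from the consultative root-cause analysis, not from this example.

```python
# Worked sketch of the isolation-and-conservatism adjustment (illustrative numbers).

observed_gain = 0.30          # 30% productivity gain measured pre vs. post (and vs. control)
share_due_to_training = 0.33  # stakeholders attribute one-third of the change to training
confidence_adjustment = 0.70  # further downward adjustment for conservatism

net_gain = observed_gain * share_due_to_training * confidence_adjustment
print(f"Gain credited to training: {net_gain:.0%}")  # roughly 7%
```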

The key to this analysis is having the right tools and templates to steer the stakeholders and the learning organization through the appropriate calculations. It is also important to recognize that this depth of analysis will not be done every time; evaluation predictions can be used the rest of the time, rather than smile-sheeting 90 percent of your learning programs and providing business-result linkage on only 10 percent.

Another important point is that linkage to business results requires a concentrated effort to know the real macro results (revenue, quality, business risk, productivity, cycle time, cost, customer satisfaction or employee turnover) and the micro-definition (for quality, for example, the micro-definition may be order-fulfillment error rates per person per day). The analysis is only as good as the data and its source. Finding another system within the organization that tracks stakeholders’ micro business results can be challenging. Equally challenging is integrating that system with analytics tools, often a costly and time-consuming task. So pick and choose the battles wisely when linking training to tangible business results.

Concluding Thoughts
A learning leader for The Linde Corp., a major manufacturer of welding equipment, once declared, “We are very interested in measuring ROI of training and certification, but I don’t know of a feasible method of doing this.” Another executive, from Target Corp., a major mass-merchandise retailer, stated, “Having someone in training for five days is a huge cost to us. I can’t do that unless I know that 95 percent of that is job-relevant.” These comments arise all the time when budgets are up for renewal or a new program is being proposed. Having a practical, scalable and repeatable model to link training to business results is key to selling the budget and the program to the stakeholder. The analysis does not have to be scientific and over-engineered, but it does need a sense of reasonableness and credibility. For a learning organization, the first step is to recognize that business impact is important and that the linkage problem must be solved in some reasonable manner, allowing stakeholders to come back for more detailed analysis where appropriate.

Nextel Corp. uses a learning analytics tool to automatically collect, store, process and report thousands of data points continuously across all of its training. At first, the learning organization felt compelled not just to provide reasonable indicators of the results of training, but also to dive into more in-depth analysis, on the assumption that stakeholders would value that deep dive. In reality, the reasonable indicators were convincing enough to show how learning impacts business results. So know your stakeholders, and link training to tangible results in a way that shows value without diverting resources from your core business: learning.

Jeffrey A. Berk is vice president of products and strategy for KnowledgeAdvisors, a learning analytics technology company. For more information, e-mail Jeffrey at jberk@clomedia.com.
