The Real Reasons We Don’t Evaluate

Whether evaluation is required of you or not, it’s important – and beneficial – to get in front of the process, not lag behind it.

Anxiety surrounds learning and development evaluation. Some do it well, some do it uneasily, and some don’t do it at all.

Recently, in the space of one day in our purview at the ROI Institute, we received both troublesome and encouraging news about learning and development budgets and the recession. One e-mail, from a learning manager at a large health care equipment supplier, read, “The CEO has eliminated the entire corporate learning function, stating that he’s not convinced that the value is there.”

The firm is not in a desperate situation. Yes, revenue is down, but profits are still there. As with many other companies, it is experiencing some belt-tightening to make sure it’s efficient in the future, and formal learning and development at the corporate level does not fit into its plans.

Then we spoke with a chief learning officer at another company. She told us, “Our CEO understands what we do and the value we deliver, and there’s no request to trim our budget.” This company, a large communications technology firm, is experiencing the same belt-tightening. The difference is that one firm can show the business value of learning and development, and the other cannot. So why do some CLOs show the value of learning and development while others choose not to? Unfortunately, most do not.

Many professionals in this field know how to connect learning and development to the business. Practitioners have been exploring and discussing this issue for more than half a century. Over that time, practitioners, researchers and professors have refined and modified the concept and have written dozens of books on the evaluation of learning and development. While some of these books may not add a tremendous amount of value, most do. The bottom line is that people have the knowledge to do it, or at least the information is available to them.

Still, most observers in this field would conclude that we’ve made little progress when it comes to connecting learning and development to the business. Our No. 1 stakeholder, the top executive who must fund, support and commit to learning and development, is not necessarily happy about it. A recent ROI Institute survey of 96 Fortune 500 CEOs found that the data executives most want to see from learning and development is business impact; ROI is second.

The Perceived Barriers
In demonstrating the business contribution of learning and development, there are both real and perceived barriers. The perceived barriers are the reasons provided for not addressing this issue. For the most part, they are mythical or absolutely untrue (see Figure 2).

The Real Barriers
The real barriers often are not identified directly and explicitly in benchmarking and survey data. Here are the top barriers:

Fear of results: Perhaps the strongest barrier is fear of disappointing results, whether it’s a negative ROI study or results that fall far short of executives’ expectations. It’s a fear of the consequences. We often hear comments such as, “If my key program is not delivering value, why should I conduct a study to show my top executives that it’s not working?”

The fear is that the results may be used in a performance review of the CLO, may lead to budget cuts, or may leave particular team members being held accountable. It’s as if learning and development leaders in this circumstance are saying, “In terms of the business results, we’re in the fog now, and we’d like to stay in the fog. If the fog clears, the view might not be what we want it to be.” Obviously, this is short-sighted thinking.

Here’s the reality: If a program is not delivering the results, the executive probably already knows it. He or she just doesn’t have the study to show the detail yet. After all, business results are generated in the business units, and if the business measures are not improving, the executives are aware of it. So a negative or disappointing study would not surprise them.

A much better approach is to tackle the issue on a proactive basis. Take a high-profile strategic program and conduct an analysis of its success with the goal of process improvement (i.e., if it’s not working, we will make changes to make it successful in the future). This proactive approach wins points with senior executives.

Waiting for the request: Closely linked to the previous barrier is the mistake of waiting for the request to show the business contribution. Often, we see CLOs interviewed on the issue of ROI and the business connection. Some CLOs state that they’ve never been asked for ROI, so there’s no need to pursue it. There’s faulty logic here. If you wait for the request, it’s often too late.

When a request for ROI (or business impact) on a particular program is made, results are often expected as soon as possible. Without prior planning for a study, an executive waiting for results will become impatient. It takes time to develop processes, build capabilities and change practices, as well as collect and analyze data. When we wait for the request, we are on the top executive’s timeline and agenda, which is an uncomfortable place to be.

In this economic climate, top executives are more concerned than ever about funding and budgets; they are examining every part of the organization for potential budget reductions or more efficient ways of doing things. Because they lack data about the success of learning and development, it’s a natural place to ask for results. They often go to the most expensive, highest-profile and perhaps most controversial program and ask for data about its contribution.

Obviously, they want data quickly. But if there’s been no serious evaluation at this level, it may take time to build the capacity. Even if external resources are engaged to do the study, it often takes time because, ideally, evaluation should be planned before a program is implemented. In short, executives have concluded the program is not adding value but want us to prove it.

Smart CLOs are taking the initiative to develop this capability before it’s requested. They are controlling the agenda and the timeline, providing the executives with a healthy dose of accountability, routinely and consistently.

Lack of investment: Let’s face it: We have not invested enough in measurement and evaluation processes. To determine an appropriate level of investment, estimate the cost of measurement and evaluation as a percentage of the learning and development budget. This includes any expenditure for staff and resources for collecting data (measurement) and for using that data to make adjustments and improvements (evaluation). Most organizations are at 1 percent or less.

Annually, we benchmark with organizations using a comprehensive measurement and evaluation system. These best-practice organizations spend between 3 and 5 percent of the budget on measurement and evaluation. Even that investment is low compared to other processes. For example, consider a manufacturing plant and its total annual operating budget. (We could use any entity in this example, such as a hospital, government agency or call center.) Now, think about the cost of measurement and evaluation throughout the plant: the cost of collecting all types of data and of evaluating that data, including productivity monitoring, cost accounting, quality monitoring and time tracking.

Many full-time employees do nothing but collect data. Others are using the data in meaningful ways to make decisions and improvements. Depending on the particular operating entity, the cost of measurement and evaluation as a percent of the operating budget would usually be between 15 and 20 percent. For learning and development, it’s less than 1 percent. Why is that? Perhaps we haven’t made the proper case for higher investment to the management team.
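The comparison above is simple percentage arithmetic. As a sketch, with entirely hypothetical budget figures, the gap looks like this:

```python
# Illustrative comparison of measurement-and-evaluation spend as a
# percentage of budget. All figures below are hypothetical examples,
# not data from any actual organization.

def spend_percent(eval_spend: float, total_budget: float) -> float:
    """Measurement-and-evaluation spend as a percent of the total budget."""
    return eval_spend / total_budget * 100

# Hypothetical L&D function: $2M budget, $18K spent on evaluation.
ld_pct = spend_percent(18_000, 2_000_000)

# Hypothetical operating plant: $50M budget, $8.5M on measurement
# (productivity monitoring, cost accounting, quality monitoring, etc.).
plant_pct = spend_percent(8_500_000, 50_000_000)

print(f"L&D evaluation spend:   {ld_pct:.1f}% of budget")   # under 1 percent
print(f"Plant measurement spend: {plant_pct:.1f}% of budget")  # in the 15-20 percent range
```

The point of the arithmetic is only to make the gap concrete: the same organization routinely spends an order of magnitude more, proportionally, measuring its operations than measuring its learning.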

To make the case, show executives the benchmarking data of best practices, and show that investment in measurement and evaluation for learning falls far short of other processes in the organization.

Then, fund measurement and evaluation gradually with the success the process generates. For example, the first impact or ROI study will show how a program can be improved, or delivered at less cost or in less time. In either case, evaluating the program added value. With results in hand, it’s a great opportunity to make the case for additional measurement and evaluation funds.

It also helps to allocate resources either part time or full time by shifting assignments. In most learning and development functions, staffing adjustments are possible, perhaps making evaluation a part-time duty for several people.

Another method is to rearrange priorities. The use of impact and ROI analysis helps determine which programs deliver the most value. Usually, there are some projects and programs that aren’t well-connected to the business, and their viability comes into question. Sometimes it’s helpful to evaluate those programs to pinpoint the value they’re adding, and if indeed they cannot deliver the value, they can be discontinued. Perhaps those resources can then be diverted to the measurement and evaluation issues.

Think About Evaluation Early
The time to think about evaluating a particular program at the business level is at the time of conception. Unfortunately, by habit, practice or teaching, we don’t think about evaluation early enough. For years, learning and development professionals used the ADDIE model — analyze the need, design the solution, develop the program, implement the program and evaluate the program. Unfortunately, this model causes us to think about evaluation after the program has been implemented. For most projects, this is too late.

By thinking of evaluation much earlier, we ensure the solution is connected to clearly defined business measures. Objectives are developed at multiple levels, including application and impact, to provide the proper focus throughout the program. Expectations are created for participants and others to clearly see why the program is being offered and their role in its success. Data collection is built into the program to make it more palatable to the participant. Planning for data collection is accomplished early, making the process much more efficient while assigning more responsibilities to others.

With this in place, it not only makes evaluation easier but makes the program more results based. This early attention to business evaluation makes for a program with more focus, more results and more value. This is a simple, but fundamental, shift in our thinking. We must address evaluation early and often.

You Asked For It
The learning and development team is a support group, supporting the needs of the organization by delivering learning and development to satisfy those needs. When an executive requests a leadership development program or an operating manager requires technical training, the programs usually are offered with the anticipation of results. After all, it’s requested by someone who should understand the requirements and needs, and the program is designed to meet those needs. Why should we evaluate these programs? Isn’t it assumed the value will be there? Why waste resources trying to prove value when the requestor clearly sees the value?

As logical as that argument seems, unfortunately, the logic breaks down. Executives often request a learning solution when they see a problem. If something is not working in the organization, those executives assume employees don’t have the knowledge or skills. Yet research continues to show that when a process is dysfunctional or ineffective, the most appropriate solution is often a non-learning solution.

We have a tendency to blame the managers for this dilemma: They don’t understand what they’re doing. But we should blame ourselves. After all, we bill ourselves as learning solution providers. Our message, in effect, is: “When you have determined you need a learning solution, come to us; otherwise, go somewhere else.”

Unfortunately, managers are not experienced in the analysis techniques needed to determine the cause of a problem and the appropriate solution. The managers are not the bad guys, although they may appear to be; we have forced them into this process. We will have to change their behavior, but we must do it subtly and gradually, teaching them to ensure each request is appropriate.

Lack of Preparation
Unfortunately, learning and development professionals often don’t possess the skills and knowledge needed to implement evaluation that demonstrates business impact. Some have had the luxury of a degree in learning and development, instructional technology or human resource development. But the curriculum usually focuses little on measurement and evaluation — usually one course on evaluation and maybe one on needs assessment. So most learning and development team members have not had formal education in this area.

Many learning and development professionals transfer from other areas. When these individuals enter the field, they have almost no preparation, training or insight into evaluation. Preparation to address these issues is minimal at best. The problem is exacerbated by the sheer number of books offered to these unprepared practitioners.

Imagine facing 50 books on the evaluation of learning and development and trying to figure out how to express business impact. Fortunately, there are workshops and certifications to help with this issue. It is now possible for a team to develop capability internally.

Change Is Needed
The CLO is in a critical role with respect to business impact evaluation. We’ve yet to see a successful evaluation system implementation without the support, commitment and involvement of the CLO. We’ve seen some enthusiastic learning and development team members try, but eventually fail without CLO support.

Never wait for executives to request higher levels of evaluation data, whether it’s impact, ROI or anything else. Take a proactive stand. Also, don’t let the top executive team design your learning and development scorecard. They don’t know how to do it, and the odds are they will ask for something that’s almost impossible to deliver. Instead, take something to them to review.

Move quickly to make changes and take action in this area. Don’t put it off to next quarter or next year. Here are a few actions to take now:

Take assessment of where you are now with the results-based approach.

Be prepared to invest more in learning and development. Invest in technology, or at least use the technology at hand, to assist with the evaluation challenge.

Change practices and policies to address evaluation early in the process and often, building it into the process.

Focus on objectives and expand them beyond the learning objectives. Require application and impact objectives for the vast majority of programs.

Take a fresh look at a learning scorecard.

Build a measurement culture. Ask questions. Require data. Have people thinking about results, accountability, measures, metrics and analytics — but not to an extreme. Make accountability a routine part of conversations, expectations and, ultimately, the reward structure.

Conduct a few impact studies and maybe an occasional ROI for programs that are substantial, strategic, expensive and high-profile. They attract attention from the senior team and often require higher levels of accountability.

Get your executives more involved. Executive involvement helps: It keeps the focus on results and accountability.

Never miss an opportunity to speak to an executive about the success of programs in their areas. After all, their teams delivered the great data, and these successes are critical for them.

Start providing information about successes to the appropriate executives. Don’t promise too much, but deliver on what you do promise. Make it routine. Head off any request to show the value of your entire function; if that request comes, it may already be too late.