Evaluation as a Strategic Tool

Why is evaluation important to you, the learning executive? This article will attempt to answer this question from the point of view of someone who directs the training efforts for the entire enterprise.

Well-constructed evaluations can be useful in generating support for budget expenditures, continuing and modifying existing programs, and measuring the impact specific training programs have on the organization's business goals. To do this successfully, training executives must understand the strategic role played by an effective evaluation program. They must also have a working knowledge of what constitutes good evaluation practice so they can understand, challenge and guide those who create and implement it.

To support the training executive in this critical role, this article will discuss the strategic importance of evaluation, describe Kirkpatrick’s four levels of evaluation and define the components of an effective evaluation for each Kirkpatrick level.

With this practical knowledge, the training executive should be armed with the essentials of a good evaluation. This knowledge is necessary in order to direct staff in developing an effective and comprehensive evaluation program that meets the short-term and long-term business needs of the organization by monitoring the appropriate metrics with the appropriate evaluation.

Evaluating Strategic and Tactical Learning
Broadly speaking, organizational learning falls into two categories: strategic and tactical. Strategic learning starts with the organization’s goals and its desired outcomes. Increased market share, growth of new vertical markets and new product innovations are examples of the goals addressed by strategic learning. On the other hand, tactical learning is a response to a particular performance problem or regulation. Examples of tactical learning goals are decreasing customer complaints, achieving targeted competencies for a predetermined set of criteria such as OSHA or Sarbanes-Oxley regulations, or meeting certification requirements like the A+ certification from the Computing Technology Industry Association (CompTIA) or the Microsoft Certified Systems Engineer (MCSE). Both types of learning require appropriate evaluations to determine if their goals have been met.

Because all organizations exist in a dynamic environment, their business goals and regulatory requirements are continually changing. A good training program will reflect these changing needs for new knowledge, skills and attitudes whether they are strategic or tactical. Concurrently, a good evaluation program will tell you to what extent these needs have been met for both types of learning by providing you with data for continuous improvement.

For any training intervention to be valued, it needs to include a reasonable demonstration to its stakeholders that it has achieved its goals. This is the basis for the creation of an evaluation strategy. Strategy is defined as "a plan of action intended to accomplish a specific goal or goals." This means that evaluation cannot come as an afterthought, but must be planned when the goals of the training are first being defined. In addition, the evaluation must be appropriate to the goals it seeks to measure. A goal of achieving a specific level of customer satisfaction calls for a different evaluation strategy than a goal of determining the extent to which a group of learners increased specific knowledge or skills. Understanding how to match the right evaluation to the desired goal is key to generating results that are valid and supportable.

In his book "Evaluating Training Programs: The Four Levels," Donald Kirkpatrick describes four levels of evaluation:
Reaction: How do the participants feel about the program? This is sometimes called “Customer Satisfaction.”
Learning: To what extent did the participants increase knowledge, improve skills and/or change attitudes?
Behavior: To what extent did participants’ job behavior change?
Results: What final results occurred? This could be quantity, quality, safety, sales, costs and profits, or return on investment (ROI).

To guide your staff in deciding which levels to apply to an evaluation task, it is important to point out two common misunderstandings about the levels. The first is that each level represents an assessment method. Because numerous methods can fall under each level, it is best to view the levels as a categorization scheme, which was their original purpose. The second is that the levels represent an ascending hierarchy of value, with Level 4 better than Level 3 and Level 1 the least valuable of the lot. This, too, is incorrect: the appropriate level depends on what you are trying to measure.
Understanding the levels and these misconceptions will greatly assist you in guiding your staff toward building a strong foundation for your assessment program and its data. Good evaluation is about making the right choices against well-stated objectives and crafting a strategy that will lead to defensible results.

Level 1: Reaction
If you want to measure the opinions, attitudes or feelings of others, you should use this category of evaluation. It is sometimes called the “customer satisfaction” level because it relies primarily on the self-report of the consumer of the training intervention and his or her level of satisfaction. Pejoratively, it is sometimes called a “smile sheet,” which should not prejudice its use if all you want is the learner’s opinion.

In the realm of blended learning, Level 1 is especially useful in determining the learners’ reaction to Web-page design and layout, usability issues, level of interaction and interface design, and in assessing the capability of an online instructor or mentor. The best-designed blended program in the world is not worth much if the learners are confused by the interface, think that the graphics are unimpressive or feel that the online instructor is not knowledgeable. Especially in this case, the customer’s opinion matters.
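To make this concrete, here is a minimal sketch of how Level 1 survey responses might be summarized. It assumes a five-point rating scale, and the questions and scores are hypothetical rather than drawn from any particular instrument:

```python
# Minimal sketch: summarizing Level 1 (reaction) survey data.
# Assumes a five-point scale (1 = strongly disagree, 5 = strongly agree);
# the questions and responses below are hypothetical.
from statistics import mean

responses = {
    "The interface was easy to navigate": [4, 5, 3, 4, 5],
    "The online instructor was knowledgeable": [5, 4, 4, 5, 3],
    "The graphics supported the content": [3, 2, 4, 3, 3],
}

for question, scores in responses.items():
    avg = mean(scores)
    pct_favorable = sum(s >= 4 for s in scores) / len(scores) * 100
    print(f"{question}: mean {avg:.1f}, {pct_favorable:.0f}% favorable")
```

Reporting a "percent favorable" figure alongside the mean gives stakeholders a quick read on satisfaction without overstating the precision of self-report data.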

Level 2: Learning
After a training intervention, if you want to know whether the learner has changed an attitude, improved knowledge or increased a skill, use this category of evaluation. When any of these characteristics has changed for the better, it is reasonable to conclude that learning has taken place.

To measure the extent to which the learning has taken place, make sure that you have a previous measurement of the learner’s attitudes, knowledge and skills relative to the specific objectives of the training—a pre-assessment. If the training presents information that is entirely new to the learner, a pre-assessment is not necessary because there is no prior learning to measure. After the training, the learner is retested against the objectives to determine what the gain is.

For example, your organization is launching a new version of Excel, and you have purchased an online training course on the new version. Prior to the training, you give your learners a pre-assessment to get a baseline score of their knowledge of Excel functions. They then take the training. After the training, you administer a post-assessment to see if there has been a gain in their knowledge of additional Excel functions. Did their post-assessment scores increase over their pre-assessment scores, and by how much? If at all possible, use a control group that has not received the Excel training to validate the effectiveness of the training over mere daily usage.
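As an illustration of the arithmetic, the following sketch compares mean gains for a trained group against a control group. All scores are hypothetical, and the simple difference of mean gains stands in for whatever statistical test your evaluation staff would actually apply:

```python
# Minimal sketch: comparing pre/post assessment gains for a trained group
# against a control group. All scores are hypothetical percentages.
from statistics import mean

trained_pre  = [55, 60, 48, 62, 58]
trained_post = [78, 82, 70, 85, 80]

control_pre  = [57, 59, 50, 61, 56]
control_post = [60, 61, 52, 63, 58]   # daily usage only, no training

trained_gain = mean(trained_post) - mean(trained_pre)
control_gain = mean(control_post) - mean(control_pre)

print(f"Trained group mean gain: {trained_gain:.1f} points")
print(f"Control group mean gain: {control_gain:.1f} points")
print(f"Gain beyond daily usage: {trained_gain - control_gain:.1f} points")
```

The control group's gain estimates how much improvement would have happened anyway, so only the difference between the two gains should be credited to the training.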

Level 3: Behavior
This level measures the transfer of learning to the learner’s job. It is one thing to measure changes in attitude, improved knowledge and increases in skills in a post-assessment. However, these positive changes must also transfer to the learner’s job in the form of new behaviors.

Because the learners may not immediately be confronted with the opportunity to display their new knowledge, skills or attitudes, behaviors should not be evaluated immediately after the training. If the training has been about the principles of project management, the best time to observe any training impact is when the employee is given responsibility for managing a project.

Whenever you can, have your evaluation capture the behaviors from as many points of view as possible. To obtain this comprehensive perspective, an observation protocol called a 360-degree evaluation is sometimes used. A 360-degree evaluation usually draws on observations from the employee, the manager, subordinates (with caution) and others who know the employee's behaviors. The key is to develop a methodology for observing and quantifying the behavior.
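As a minimal sketch of what "quantifying the behavior" might look like, the following example averages 1-to-5 ratings of observable project-management behaviors across rater roles; the behaviors, roles and ratings are all hypothetical:

```python
# Minimal sketch: quantifying observed behaviors in a 360-degree evaluation.
# The behaviors, rater roles and 1-5 ratings below are hypothetical.
from statistics import mean

ratings = {
    "Creates a project plan before starting work": {
        "self": 5, "manager": 4, "peer": 4, "subordinate": 3,
    },
    "Tracks milestones against the schedule": {
        "self": 4, "manager": 3, "peer": 3, "subordinate": 3,
    },
}

for behavior, by_rater in ratings.items():
    overall = mean(by_rater.values())
    # Compare the self-rating against everyone else's observations.
    others = mean(v for role, v in by_rater.items() if role != "self")
    print(f"{behavior}: overall {overall:.1f}, others-only {others:.1f}")
```

Separating the self-rating from the other raters' scores is one simple way to spot behaviors the employee believes have transferred but that observers have not yet seen on the job.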

Level 4: Results
This level evaluates the final results of a training intervention. This is probably the most difficult level to evaluate because there are so many other factors that could have contributed to the results.

To overcome some of this difficulty, you may want to “call your shots before you make them,” as in the game of pool. If the training is intended to increase sales, margin or quality, or to reduce expenses, customer complaints or calls to the help desk, make sure that you make it clear to your stakeholders before the training that these are the metrics you will be examining after the training.

While this is a difficult level to evaluate, there has recently been a great deal of effort to develop methodologies for demonstrating results. Jack Phillips has done a significant amount of work on return-on-investment (ROI) studies for training. Also of note is Kaplan and Norton's work on the balanced scorecard concept, which links broad business measures to training and other corporate initiatives. If the reader is interested in these and other similar attempts to demonstrate results, a list of resources is available on the CLO Web site at www.clomedia.com/roiresources.
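For readers who want the underlying arithmetic, the core ROI calculation popularized by Phillips divides net program benefits by program costs. The dollar figures in this sketch are hypothetical:

```python
# Minimal sketch of the basic training ROI calculation:
#   ROI (%) = (net program benefits / program costs) * 100
# All dollar figures below are hypothetical.

program_costs    = 80_000    # design, delivery, learner time, evaluation
program_benefits = 200_000   # monetized gains attributed to the training

net_benefits = program_benefits - program_costs
roi_percent  = net_benefits / program_costs * 100

bcr = program_benefits / program_costs   # benefit-cost ratio, also common
print(f"ROI: {roi_percent:.0f}%  (benefit-cost ratio {bcr:.1f}:1)")
```

The hard part, of course, is not the division but isolating and monetizing the benefits that can fairly be attributed to the training rather than to the other factors noted above.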

Conclusion
Once you choose the appropriate level of evaluation, you can strengthen your efforts by following the guidelines outlined in Figure 1. Many of these guidelines have already been mentioned in the descriptions of the levels, while others have not. As a guide, use this table with your training staff to ensure that their evaluations are as sound as possible.

Good evaluation leads to metrics that can support the value of your training program. Knowledge of the levels of assessment and common misconceptions, and application of the guidelines for each level, can make you a better consumer and producer of evaluation as you direct your staff in the creation of an effective evaluation program that generates supportable metrics. These metrics can demonstrate to your stakeholders how well your training efforts have addressed the tactical learning needs of the organization, as well as how your strategic learning initiatives have impacted the organization's broader business goals. What more can a training executive ask for?

James L’Allier, Ph.D., is chief learning officer and vice president, research and development, for NETg Thomson Learning. Donald L. Kirkpatrick, Ph.D., is professor emeritus at the University of Wisconsin. He has written many supervisory/management inventories and training and management books, including “Evaluating Training Programs: The Four Levels.” For more information, write to jlallier@clomedia.com.