You’re in a meeting with the chief executive of your organization, and you’re asked why you’ve budgeted $100,000 for a new training curriculum for the customer service team.
Which of these responses are you more comfortable giving?
A. “We’re getting feedback from the team that our current customer service training is not so good. My gut tells me it’s time to revamp the training and reintroduce it using some cool new technology that we want to try out.”
B. “We’re getting feedback from the team that our current customer service training is outdated. The ways that reps interact with our customers have changed, and the team feels the current training doesn’t reflect the changes. Our plan is to bring the training current and, since managers think that the reps spend too much time out of the office for training already, convert it to self-paced content and put it online so reps can take it at any time, right at their desks.”
C. “The key metrics we monitor for customer service training are validating important feedback from the team. They’re telling us that the ways reps interact with our customers have changed and the training needs to reflect the new model. A steady decline in first-time call resolution supports that feedback. Managers are also concerned that reps spend too much time out of the office for training. A steady growth in case backlogs supports that theory as well. The $100,000 investment will bring the curriculum current and incorporate a blend of self-paced and virtual instruction. Bringing those metrics back on target is expected to increase our customer satisfaction index by 10 percent. In the past, each percentage-point increase in customer satisfaction has meant an average of $250,000 in incremental sales revenue.”
I can’t say that Response A has never been given before, but it’s easy to imagine the look on the chief executive’s face: a scowl, one eyebrow raised. Response B is more acceptable, but it still lacks any proof that the feedback is valid, and therefore any real justification for the investment.
That leaves Response C. But is it realistic that you’ve got the hard data ready to back up the feedback that’s guiding your investment decision? It is if you’re thinking metrics before the feedback ever arrives.
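To make the stakes in Response C concrete, the arithmetic behind it is simple. The sketch below is purely illustrative: it assumes the 10 percent satisfaction gain translates into 10 index points and that the historical $250,000-per-point figure holds.

```python
# Rough arithmetic behind Response C (illustrative assumptions, not real data).
# Assumes the 10 percent gain equals 10 index points and that each point is
# worth the historical average of $250,000 in incremental sales revenue.
investment = 100_000            # proposed curriculum budget
satisfaction_gain_points = 10   # expected customer satisfaction increase
revenue_per_point = 250_000     # historical incremental sales per point

expected_revenue = satisfaction_gain_points * revenue_per_point  # $2,500,000
simple_roi = (expected_revenue - investment) / investment        # return per dollar spent

print(f"Expected incremental revenue: ${expected_revenue:,}")
print(f"Simple ROI multiple: {simple_roi:.1f}x")
```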
The informal feedback loop for improving learning has been in practice for years, since long before the proverbial smile sheet was developed. There was a time when it was all the training department had to go on. Today, anecdotal assessment of learning initiatives remains both necessary and valuable, and learning organizations still need to do it.
When Is Anecdotal Evidence Appropriate?
To keep the respect of stakeholders, CLOs need hard numbers to justify their ongoing initiatives, particularly when their actions involve spending real dollars. If that money comes from another unit’s budget and that unit is willing to allocate its funds to close the feedback loop, then your professional credibility is less on the line. But you’ll do your stakeholders a favor by asking for proof in addition to their off-the-cuff opinions. The proof you’re looking for doesn’t have to come from a full-blown return on investment (ROI) analysis. But when you receive feedback about a learning intervention that will require an investment to address, it is smart to work with your stakeholder to quantify the financial and other costs of the options for making the changes the feedback indicates.
Words alone can’t prove the efficiency or effectiveness of what learning does. Words don’t equal impact; numbers do. But words act as an index, pointing you toward quantifiable data that either demonstrates impact or justifies the actions needed to improve. To get ahead of the feedback, you need to know what success looks like in terms of the metrics and targets for any given initiative.
Sketch it out in terms of a scorecard. What metrics will determine our success? Where are they now, and where do they need to be? What actions need to be taken if the metrics go up, go down or remain unchanged? Your stakeholders should know, in terms of measurable gains, what a successful training experience is expected to bring. If not, your role is to help them define the criteria that you’ll measure together to determine the success of your partnership.
Think in terms of what learning can do to enable successful business outcomes. With customer service training, a scorecard might include measures such as first-time call resolution, total time to resolve service calls, time to competence for new hires and so on. Quantify what success looks like for each measure, and decide what actions to take when the learning program is and isn’t meeting expectations. Using data from your customer service tracking tool or other data repository, you and your stakeholders can correlate the learning enablement measures with the actual business measures to see how a learning intervention is affecting performance and proceed immediately with any needed corrective actions.
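As an illustration of what that might look like in practice, here is a minimal sketch. The metric names, baselines, targets and monthly figures are hypothetical, and the simple correlation step stands in for whatever analysis your own tracking tool or data repository supports.

```python
import statistics

# Hypothetical scorecard entries: each metric has a baseline, a target and an
# agreed action to take if it moves the wrong way (all values illustrative).
scorecard = [
    {"metric": "first_time_call_resolution_pct", "baseline": 68, "target": 80,
     "if_off_target": "review call-handling modules and refresh job aids"},
    {"metric": "avg_days_to_competence_new_hire", "baseline": 45, "target": 30,
     "if_off_target": "shorten onboarding path and add coached practice"},
]

# Illustrative monthly readings: a learning enablement measure alongside a
# business measure pulled from a customer service tracking tool.
training_completion_rate = [0.55, 0.62, 0.70, 0.78, 0.85, 0.91]
first_time_resolution    = [0.64, 0.66, 0.69, 0.72, 0.74, 0.77]

# A simple Pearson correlation; a strong positive value supports the claim
# that the learning intervention is moving the business measure.
r = statistics.correlation(training_completion_rate, first_time_resolution)
print(f"Correlation between completion and first-time resolution: {r:.2f}")

for entry in scorecard:
    print(f"{entry['metric']}: baseline {entry['baseline']}, target {entry['target']}")
```

Correlation alone doesn’t prove causation, but tracked against agreed targets it gives you and your stakeholders a shared, quantified basis for deciding whether corrective action is needed.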
Balancing Metrics With Anecdotal Feedback
Questionmark CEO Eric Shepherd, who is a leading authority on assessment software, said, “Metrics and statistics are essential, but anecdotal information can often complement them. As with so many things in life, it depends on the situation. When you are measuring opinions, anecdotal evidence is extremely valuable. But if you are measuring knowledge and skills, it can in some cases be less helpful and possibly distracting. Collecting open-ended responses for feedback surveys and training evaluations makes them more actionable when you see an issue with the numerical representation of people’s thoughts and feelings.”
Never forget that learning is a service organization dedicated to the success of the business units it serves. So, if line management says they didn’t like your training after it’s been rolled out, that is not only very valuable feedback, it demands a response showing management that you understand their concerns and what you’re going to do differently next time to get it right. But you also need to monitor the success metrics. Don’t reverse course on anecdotal feedback alone; wait to see whether the success metrics you defined also turn in the wrong direction.
There needs to be a balance between the two, where one can validate the other; in that case, one plus one equals three. Sometimes measurement just isn’t possible or available, but anecdotal feedback is. In those cases, your best bet is to compile the feedback and shape it into a story. Then share that story with your stakeholders to get agreement on what the overall feedback really was and what you’re going to do about it. Real stories, with or without metrics, can be a powerful tool for gaining agreement with stakeholders while demonstrating your focus on your internal customers.
Varieties of Anecdotal Evidence
Anecdotal feedback is also helpful in other ways. It provides the opportunity to make quick, low-cost corrections that can resolve minor issues with new learning programs. Many organizations place a premium on anonymity when collecting feedback electronically, claiming it increases the genuineness of responses. Although some organizations capture various demographics from learners, others remain reluctant to collect any demographics that might make it possible to identify who is responding, regardless of the value those demographics have in assessing and improving impact. The across-the-board value of anonymous evaluations is debatable. Learning needs accountability if it wants to be taken seriously by the rest of the business.
Some of the most valuable feedback comes right off the cuff, face to face, and there’s nothing anonymous about that. Put yourself in front of a group of senior managers or executives for a learning opportunity and you’re almost guaranteed to get unsolicited feedback. Front-line workers or new hires are a little more reserved and not as willing to simply voice their opinions. But when asked, most are more than willing to contribute feedback, partly as a way to gain recognition, particularly those who really “get it” with regard to the learning content. With anonymous feedback, it becomes difficult to investigate further when you don’t know who is providing it.
So, how do you get the same candid feedback using online survey tools? Surveys based on a five-point Likert scale are good for producing basic aggregation and trending across broad curricula, demographics and the like. Adding more open-ended questions helps draw out feedback from the audience that can lead to improving the learning experience. But electronic forms in general tend to weaken the personal connection between learner and learning organization. And it becomes even harder to connect the dots on anecdotal feedback when surveys are outsourced to external organizations.
Written feedback is different from spoken feedback, and it often loses context in the translation. Just think about how an e-mail “flame” gets started. A person writes what they think is a reasonable and clear message, the recipient takes it completely out of context, steams over it, writes a heated response and fires it back across the bow of the original author. And back and forth it goes.
In the old days, organizations could achieve a 95 percent response rate on feedback surveys (i.e., Kirkpatrick’s Level 1) by handing out hard copies at the end of class, then collecting and scanning the results into their tracking system. Today, response rates to online end-of-course surveys usually hover between 30 percent and 40 percent, and many organizations just aren’t that concerned about it. Some practitioners state that they really don’t look at the survey results for mature courses anyway. Yet, feedback surveys are necessary early on in the life of a program. Admittedly, once a program is mature and running smoothly for a number of years, they do become less valuable.
Is All Feedback Good Feedback?
You might say that some feedback is better than no feedback, but is all feedback good? Not necessarily. It takes virtually no ability to simply criticize, although at times it seems as if some people have earned an advanced degree in it. There will almost always be some complainers, and if your stakeholders agree that these people are playing their usual role, the complainers’ comments can be acknowledged but politely set aside. Feedback can also be biased, reflecting an opinion shaped by some unique scenario that is the exception, not the rule.
Anecdotal feedback as a whole is unscientific. But just as you aggregate Likert results and look for trends, you can look for trends and patterns in your anecdotal feedback. If multiple employees are giving the same feedback, it’s time to investigate the issue. If you’re receiving a lot of one-off comments about a particular program, it’s likely that the content’s message isn’t coming across clearly. And too often people identify a problem without offering a solution.
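As a minimal sketch of that kind of pattern check, assuming the open-ended comments have already been exported from your survey tool, counting recurring themes can separate a genuine trend from one-off gripes. The comments and keyword mapping below are purely illustrative.

```python
from collections import Counter

# Illustrative open-ended comments exported from an end-of-course survey.
comments = [
    "The pricing module felt outdated",
    "Loved the role plays, but the pricing content is outdated",
    "Too much time away from the queue for this course",
    "Pricing examples don't match the new catalog",
]

# Simple keyword-to-theme mapping; in practice you might tag comments by hand
# or use text analytics, but the principle is the same: look for repeats.
themes = {
    "outdated content": ["outdated", "old"],
    "pricing material": ["pricing"],
    "time away from work": ["time away", "out of the office"],
}

counts = Counter()
for comment in comments:
    lowered = comment.lower()
    for theme, keywords in themes.items():
        if any(keyword in lowered for keyword in keywords):
            counts[theme] += 1

# Themes mentioned by multiple respondents are the ones worth investigating.
for theme, mentions in counts.most_common():
    print(f"{theme}: {mentions} mention(s)")
```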
The manner in which you ask for feedback can help with complaints that arrive without solutions. Asking “What do you suggest as a better way to do this?” or “How would you do it differently?” forces people to think beyond their complaint to possible solutions to the issue they perceive. Some proposed solutions could be just what’s needed. At any rate, asking people for their ideas requires them to think more carefully about an issue, which usually gives them a fuller appreciation of it and of why it was handled the way it was in the first place.
Be Proactive, Not Reactive
Going back to our opening scenario, it’s easier to validate feedback with hard data when you have a measurement strategy in place. Your measurement strategy identifies your key measures of success for the learning organization and for specific stakeholder initiatives.
Anecdotal feedback is a crucial part of improving learning and driving impact. But be prepared to trace where it came from, validate it and act on it. A well-planned and well-executed measurement strategy, one that incorporates anecdotal evidence, could save you some embarrassing moments as a leader and earn you a permanent seat at the table.