“Tell me the change I will see after this intervention.” Stakeholders often pose this question to the learning and development team. When we justify “change” in behavioral terms, stating that change depends on the “will” of the individual, we tend to lose ground with stakeholders. I believe this experience will resonate with many.
Organizations adopt the stages of learning effectiveness based on the intervention, its purpose and overarching execution challenges.

In this article, I will share examples of learning interventions and the steps we took at our organization to identify each stage of effectiveness. I will also share insights on the successful implementation that led to clear indicators and measurement of learning effectiveness.
Context of the business
The organization, going through a transformation, was expanding its product portfolio, including increasingly technologically advanced products. Over the past two years, it had conducted a flurry of learning sessions and programs to upskill its salesforce. However, the end impact on sales was minimal. Therefore, it was imperative to look at the why of it all.
What was the intervention all about? In line with the above context, an intervention was curated to conduct an audit of the as-is state in the field. The objective was to understand internal and external challenges to the implementation of the learning sessions. For this purpose, we needed unbiased sales professionals with an eye for detail to spend time with the sales team, observe their modus operandi in the field and assess what was working and what was not. It was decided that the results of this ‘On-field View’ would help us decide the next steps for the coming months.
How was Level 1 of effectiveness understood and measured? We understood that the larger organizational environment was suffering from a “trust deficit” regarding learning and development since they had not seen impressive results in the recent past.
The key measure of success for us at Level 1 (Reaction) was twofold. Firstly, agreement from the participants to accept outsiders into their work lives for a period of three months. Secondly, from a business result perspective, we needed transparent and authentic findings from the ground up.
What made it work? We did not follow Level 1 in the traditional sense, looking only at the learner/participant, but rather wove a key aspect of business results into Stage 1 itself. So, we measured the beliefs of the participants rather than how the program made them feel. The key question was: Do you believe this program will help you? We shared findings with the business at an individual, team and system level, highlighting gaps. We captured satisfaction scores at Level 1 itself.
The next steps
Once the findings were accepted, the business and L&D team decided to formulate a real-time feedback model. Based on the past years’ feedback, the perception was that learning programs did not consider the “here and now” situation, which made application difficult. Therefore, the auditors now donned the hats of on-field coaches, walking with the sales team in the field. They were already well-established in the system, and therefore, giving feedback and helping participants build action plans became easier.
How were Levels 2 and 3 of effectiveness understood and measured? For us, the assimilation of learning and the resulting behavior change were viewed in parallel. The reason was a strong hierarchical bent of mind and deep historical relationships that existed between managers and their respective team members. Therefore, we had to be considerate toward this sensitivity.
What made it work? As we assessed how the feedback given by the on-field coaches was being assimilated, we highlighted the progress of implementation to the managers on a periodic basis (every two weeks).
This approach helped us in a variety of ways: First, to the business, it was a way of being included, closely, at every step. Second, they became responsible for the change as well, at times even removing roadblocks for us. And finally, it helped us address several interpersonal and intrapersonal issues that participants faced. Therefore, the gamut of effectiveness became much larger than originally anticipated. Once the period of three months was over, we had all the input (field observations), feedback, and action plans implemented by the participants. Now it was time to see the output (Level 4) and outcome (Level 5).
How was Level 4 of effectiveness understood and measured? One of the key challenges we overcame was intra-team friction. To ensure a continuous positive change in performance, it was critical to cement the changes we had influenced within teams (through a two-way feedback process). For this purpose, the baton was passed on to the manager for business continuity. The teams benefited from light-touch meetings between the participants and coaches, which served as reminders of actions to be continued. In separate conversations, the managers added their insights and helped participants overcome external environmental challenges. And yes, we saw more than a 10 percent positive change in two key metrics that participants were responsible for as KRAs (key result areas).
What made it work? Continuing from where the intervention left off: Participants kept implementing their action plans in an environment where they did not have the everyday support of the manager or coach. This step ensured that confidence levels improved significantly. Additionally, shifting the baton to the manager sent a much-needed message to the team: The manager believed in the change they had made and wanted to support them in the learning journey.
How was Level 5 of effectiveness understood and measured? In our context, the end outcome from a process standpoint depends on many other teams to drive closure. Most of these teams were not part of our intervention, and therefore, if business impact were to be positive (increase in revenue), it would require a much deeper implementation of the learnings across the value chain, consistently.
Therefore, we adopted a complete no-touch model and let the participants influence the value chain without any intervention/feedback. Over the next two to three months, we saw a consistent increase in revenue (more than 30 percent) for the cohort.
What made it work? We did not put any performance pressure on the participants. Rather, we focused on the process, the input we had control over. We also recognized factors that were not necessarily part of the initial brief and made adjustments from time to time: intra-team relationships, managerial maturity and attitudes toward learning programs.
Main findings from an effectiveness paradigm for L&D
For every intervention, keeping the end goal in black and white may not be possible. Evolve as you go along. Before starting the intervention, be aware of the organizational environment, which may impact or impede progress.
It’s also important to remember that business involvement is key for sustained support and implementation. Additionally, frequent communication and perspective sharing are critical to maintaining authenticity.
Finally, Levels 4 and 5 require more time than you may originally anticipate, so keep a buffer in your plan of action.