Data in the flow of training: Suggestions for L&D teams of one

You don’t need to be a scientist in order to collect meaningful data that assesses your learning program’s impact.

It’s hard to assess the impact of your learning and development initiatives. It’s even harder when you’re a team of one. But whether you’re a small and mighty team running one or two programs, or a full department serving thousands of learners, measuring the impact of learning remains one of the most important steps that the majority of organizations continue to miss. 

I am no stranger to this misstep. As a team of one at a midsize nonprofit, I don’t always have the capacity to do a pre-assessment, immediate post-assessment, 30-day observation for behavior change, and business impact study (to say nothing of that other level of analysis to which we all aspire: looking at you, ROI).

As the program manager, content creator, facilitator, and technical producer, I’m lucky if I have time to incorporate a smile sheet. All of this, of course, ignores the difficulty of getting enough responses for a representative sample size. Plus, there’s the issue of encouraging folks to complete such a survey in the first place.

This challenge got me thinking: How can I meaningfully measure the impact of learning in a way that not only provides information that helps me be effective but also strengthens the alignment between our learning programs and other organizational priorities? In my former life as a first-year college writing instructor, I would often do pre- and post-diagnostic assessments with my students to determine how to pace the curriculum. I didn’t call them that, however. Instead, the assessments were built into early, low-stakes assignments intended to get my classes into the habit of writing.

For them, it was just a journal or a classroom free-write; for me, it was a gold mine of information about reading ability, mastery of grammar and sentence mechanics, and levels of engagement. Those of us working in adult learning use these kinds of diagnostics during sessions as well, often as a way to generate and sustain engagement, especially in virtual spaces. Rarely, however, do we harness them for the wealth of information they provide. By gathering metrics in the flow of learning, we can generate benchmarks where there were none before and assess a program’s impact against those benchmarks, while simultaneously identifying needs of the organization that might otherwise go undetected.

So what does data collection in the flow of training look like? In some instances, it’s a poll question. During the first session in a new curriculum for our monthly supervisor training, I asked supervisors to select which characteristic of supervising they struggled with the most: supervising as a duty, as a practice, or as a balance.

This understanding of supervising a team, by the way, comes from The Management Center, whose resources I recommend. Overwhelmingly, they selected balance. As a diagnostic question, this poll gave me insight into which skills would be most helpful to focus on in the coming months: delegation and expectation setting/outcomes-based management were already at the top of the list, and this result confirmed that plan. As such, I could adjust the curriculum content for each session to emphasize these skills. The answer also provided insight into an area both HR and senior leaders care about: organizational climate.

Unlike culture, organizational climate shifts more often, as it’s a marker of staff perceptions and attitudes at a point in time. If balance is something your management team is struggling to maintain, then it’s also a good time to start assessing things like engagement, job satisfaction, and morale. In other words, it’s a data point L&D professionals can share with internal stakeholders, thereby strengthening those relationships and leveraging our consultant skills. 

What are other ways you can collect data in the flow of training? The opportunities are nearly endless. Another option I like is using tools such as Mentimeter to generate word clouds or other visuals for collective brainstorming. These kinds of responses provide a quick, immediate diagnostic not only of prior knowledge of the content but also of perceptions and attitudes toward the topic at hand. Things like word choice, analogy and symbolism—basically everything you were asked to pay attention to in your high school writing courses—are data just as valuable as the percentage of people who responded that they thought the trainer was knowledgeable on the topic.

While your creativity really is one of the few limits to gathering learning data in this way, the other major consideration is which tool you use to collect the information. Being able to collect and analyze the data after the session matters, especially for follow-up assessments. Virtual-first tools are ideal and work for both remote and in-person sessions. As with all things, the medium will determine what you can and cannot do, and in some instances it can influence the message.

As you consider what you want to understand about your learning audience, make sure to identify a tool that supports that purpose. Polls, for instance, are ideal for knowledge-check questions, where you want to see a percentage change in understanding. Word clouds are also useful for comprehension (i.e., do learners know the terminology?), whereas you might want to use something like a text submission option for questions about how to behave in a scenario or how to execute a process.
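If it helps to picture that percentage change once the session is over, here is a minimal sketch in Python. It assumes, hypothetically, that your polling tool lets you export responses as a CSV with a "response" column and that you ask the same knowledge-check question at two points in time; the file names and answer key below are made up for illustration, not pulled from any particular tool.

```python
import csv

CORRECT_ANSWER = "set clear expectations"  # hypothetical answer key for one poll question

def percent_correct(path: str) -> float:
    """Share of responses in an exported poll CSV that match the answer key."""
    with open(path, newline="", encoding="utf-8") as f:
        rows = list(csv.DictReader(f))
    if not rows:
        return 0.0
    correct = sum(1 for row in rows if row["response"].strip().lower() == CORRECT_ANSWER)
    return correct / len(rows) * 100

baseline = percent_correct("poll_january.csv")   # hypothetical export from the first session
follow_up = percent_correct("poll_june.csv")     # hypothetical export from the six-month mark

print(f"Baseline: {baseline:.0f}% correct")
print(f"Six months later: {follow_up:.0f}% correct")
print(f"Change: {follow_up - baseline:+.0f} percentage points")
```

Even a back-of-the-envelope comparison like this gives you a benchmark and a follow-up number you can put in front of stakeholders.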

Whatever your innovation of choice, it’s crucial to plan a milestone for reassessment. This is where impact will be measured. The data should be collected in the exact same way and, ideally, from the exact same learners. If you only host optional training, asking the exact same learners the same question may be impossible. And while that data will not be a perfect one-to-one reassessment, it can still be useful for identifying a new subset of learners, for seeing how far the training content has spread across the organization, or even, for those attendees you can cross-reference with the prior attendance list, for determining whether the earlier learning did indeed stick.
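For that cross-referencing step, a small sketch along the same lines, again with hypothetical file names: given the attendance list from the original session and the list of follow-up respondents (one email address per line), basic set operations tell you who is a returning learner, who is new, and who dropped off.

```python
def load_emails(path: str) -> set[str]:
    """One email address per line, lowercased, blank lines ignored."""
    with open(path, encoding="utf-8") as f:
        return {line.strip().lower() for line in f if line.strip()}

first_session = load_emails("session1_attendance.txt")   # hypothetical export
follow_up = load_emails("followup_respondents.txt")      # hypothetical export

returning = follow_up & first_session     # answered both times: a true re-assessment group
newcomers = follow_up - first_session     # only at follow-up: how far the content spread
no_follow_up = first_session - follow_up  # attended the first session but didn't respond again

print(f"Returning learners: {len(returning)}")
print(f"New learners: {len(newcomers)}")
print(f"Original attendees without a follow-up response: {len(no_follow_up)}")
```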

Our supervisor session has steady recurring attendance, although it is technically optional. Nevertheless, the plan is to reassess six months out from that first session to see how the first half of the year’s programming has affected their sense of balance. In this second round, I will also ask which skills or strategies covered in the supervisor sessions they applied that helped them achieve a greater sense of balance. Adding this second question will also allow me to measure behavior change in the absence of direct observation. And of course, I will ask again at the end of this year’s programming. Mapping out these milestones during the design stage ensures you have both timely feedback for adjusting course if necessary and the metrics stakeholders want for proving the value of the investment in training.

As a small team, you owe it to yourself to harness opportunities that both prove impact and help you operate proactively rather than reactively. Although there are countless resources and voices sharing excellent examples of longitudinal studies and positive ROI, the reality is that for most of us in L&D, the resources aren’t there to carry out these bigger data analyses. And for those who would argue that artificial intelligence can help lift some of that load, the other hard truth is that not all L&D professionals work at an organization where AI has been adopted, or is even permitted. Moreover, becoming proficient with AI tools requires time that smaller teams don’t always have to spare. That doesn’t mean the tools and opportunities aren’t there. It does mean, however, that creatively and strategically harnessing the tools and opportunities you do have is key.

You don’t need to be a scientist in order to collect meaningful data that assesses your learning program’s impact. And you certainly don’t need to be one to add value to your organization. Take it from this former English teacher: All you need is a little creativity, curiosity and strategic planning to run a data-informed L&D department.