Making the Case for Adaptive Learning

There needs to be a better way for instructional designers to build learning so that valuable data is consistent and available. Adaptive learning may be it.


With so many tools and products available to corporate learning professionals, identifying the ones that are most practical for an organization can be challenging. Budget considerations are always a concern, but learning leaders also should consider how learning programs affect the bottom line.

Does the program decisively demonstrate how the organization is reducing cost? Conversely, can it show that learning initiatives drive revenue? Every other department can show the cause and effect of its work through statistical data. So how does that happen in learning and development?

For more than 25 years, the statistical data used to prove a program’s value has come from reporting standards based on AICC or SCORM outputs. But what does that sort of reporting reveal about behavioral change, return on investment or whether learning even took place?

Looking at how long it took someone to complete training, how many attempts they made, or what score they achieved reveals nothing about learning quality. When everyone goes through the exact same slides or questions, current reporting can only capture a small portion of data about those learners. It’s the differences rather than the similarities between learners that offer the deepest insights.

Even comparing information pre- and post-course misses the most valuable data: What is the learner’s understanding during training? Assessment should occur at every slide, scene, question and piece of information an organization puts forward, based on how the learner is, or is not, using it.

So how can a case be made for deeper measurement with the tools learning professionals have available? Before going any further, a mention of the Experience API, more commonly referred to as Tin Can or xAPI, is appropriate. The xAPI was created to store learning experiences and make them portable. It provides the ability to run content offline and on mobile devices, as well as to capture learning achievements completed outside the digital world. With xAPI statements, capturing more data than previous standards like AICC and SCORM allowed is certainly possible, but the reality is that xAPI was never created to be a learning measurement tool. Because it lacks a common vocabulary, instructional designers have to decide for themselves which statements their courses will emit.
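
To make that concrete, an xAPI statement is essentially a JSON record of who did what to what. Here is a minimal sketch in Python; the verb ID is a standard ADL verb, while the learner, activity ID and result values are hypothetical placeholders:

```python
import json

# A minimal xAPI statement: who (actor) did what (verb) to what (object).
# The learner and activity ID are hypothetical placeholders.
statement = {
    "actor": {
        "name": "Alex Learner",
        "mbox": "mailto:alex.learner@example.com",
    },
    "verb": {
        # "answered" is a standard ADL verb; designers are free to pick others.
        "id": "http://adlnet.gov/expapi/verbs/answered",
        "display": {"en-US": "answered"},
    },
    "object": {
        "id": "http://example.com/courses/safety-101/question/3",
        "definition": {"name": {"en-US": "Question 3: Lockout procedure"}},
    },
    "result": {"success": True, "duration": "PT45S"},  # ISO 8601 duration
}

print(json.dumps(statement, indent=2))  # ready to send to a learning record store
```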

For example, an instructional designer may use 20, 30 or even more statements to better assess a learner’s experience in a course, and all of that is great. The problem is that a colleague may capture similar statements with a completely different intention, use slightly different language, or use fewer, or more, statements. Designers join or leave the team, and statements created in the past no longer line up with those created in the present; there is no consistent view of how to leverage that data in the future.
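
A sketch of that divergence, with hypothetical activity IDs: both records below are valid xAPI descriptions of the same wrong answer, yet a query written for one will silently miss the other:

```python
# Two designers describe the same event: a learner answering question 3 wrong.
# Neither is invalid xAPI; they are simply incompatible for later analysis.

designer_a = {
    "verb": {"id": "http://adlnet.gov/expapi/verbs/answered"},
    "object": {"id": "http://example.com/safety-101/q3"},
    "result": {"success": False},
}

designer_b = {
    "verb": {"id": "http://adlnet.gov/expapi/verbs/failed"},
    "object": {"id": "http://example.com/safety-101/question-three"},
    "result": {"response": "option B"},  # no success flag recorded at all
}

# A report that filters on result.success == False finds A's data and misses B's.
```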

The result is a lack of continuity. The language will not be consistent, nor will it provide enough detail to generate the credible insights business strategies require. So while xAPI is a much better assessment tool than previous reporting standards, learning professionals need to think bigger. Higher quality learning starts with challenging current thinking about how instructional design is approached and content is built.

Today, many organizations hire big data experts, computer scientists and statistical mathematicians because competition is steadily shrinking margins, and the ability to quantify business practices is essential. While this may seem intimidating or outside the learning leader’s realm, it is actually a great opportunity.

So how does it get done? Consider advocating for adaptive learning. Adaptive learning adjusts content according to each learner’s needs, driven by their responses to questions, interactions, tasks and experiences. Historically, learners run through a course in lockstep: everyone generally sees the same slides, responds to the same questions and moves forward on the same path if they respond correctly. If they respond incorrectly, they may return to the previous slide with one fewer answer option, or to an earlier slide to review information so they can respond appropriately to the same question next time.
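
Reduced to code, that historical branching amounts to a handful of fixed rules. A minimal sketch, with hypothetical slide IDs:

```python
# Hypothetical course map: a linear forward path plus remediation slides.
FORWARD_PATH = {"slide-4": "slide-5", "slide-5": "slide-6"}
REVIEW_SLIDE = {"slide-4": "slide-3", "slide-5": "slide-4"}

def next_slide(current: str, answer_correct: bool, attempts: int) -> str:
    """Pick the learner's next slide under the traditional branching rules."""
    if answer_correct:
        return FORWARD_PATH[current]       # everyone advances down the same path
    if attempts == 1:
        return current + "-fewer-options"  # retry with one distractor removed
    return REVIEW_SLIDE[current]           # send the learner back to review

print(next_slide("slide-4", answer_correct=False, attempts=1))
# -> slide-4-fewer-options
```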

With this approach, organizations miss many opportunities to understand the learner’s comprehension, learning gaps and overall approach to the content. Learning leaders also miss opportunities to understand how well the course is designed, what improvements can be made and whether there are alternative training methods that may be more appropriate.

Imagine if every learner’s path were determined by the consequences and repercussions of their actions. Imagine a course dynamically changing to better understand how a learning objective is perceived based on the learner’s response to it. What if an instructional designer could link a competency framework to each course to gain insights into the effect learning is having on HR objectives?
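
One way to picture that competency link is a simple mapping from course items to framework entries, so every response becomes evidence for or against a competency. A minimal sketch; the question IDs and competency names are hypothetical:

```python
# Hypothetical mapping of course questions to an HR competency framework.
COMPETENCY_MAP = {
    "safety-101/q3": "hazard-identification",
    "safety-101/q7": "incident-reporting",
}

evidence: dict[str, list[bool]] = {}

def record_response(question_id: str, correct: bool) -> None:
    """Attach each response to the competency it provides evidence for."""
    competency = COMPETENCY_MAP.get(question_id)
    if competency is not None:
        evidence.setdefault(competency, []).append(correct)

record_response("safety-101/q3", correct=False)
record_response("safety-101/q7", correct=True)
print(evidence)  # {'hazard-identification': [False], 'incident-reporting': [True]}
```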

Or, consider working with subject matter experts to assign statistical values to specific behaviors expected from learners in a particular course. With this approach, organizations discover variant outcomes they would never see without the ability to measure them. Think about the value of providing different feedback options based on each individual’s response and subsequent pathway. Allowing learners to search for options that may not initially appear to them, or providing the opportunity for an open-ended response, will improve course quality and increase engagement.
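
As an illustration of the statistical-values idea, subject matter experts might assign a weight to each observable behavior and roll those weights up into a score. A rough sketch; the behaviors and weights are invented for the example:

```python
# Hypothetical weights a subject matter expert might assign to behaviors
# observed during a course; positive values are desirable, negative are not.
BEHAVIOR_WEIGHTS = {
    "searched_policy_manual": 2.0,
    "asked_for_hint": -0.5,
    "answered_first_try": 1.0,
    "skipped_scenario": -2.0,
}

def behavior_score(observed: list[str]) -> float:
    """Sum the SME-assigned weights for each behavior a learner exhibited."""
    return sum(BEHAVIOR_WEIGHTS.get(b, 0.0) for b in observed)

print(behavior_score(["searched_policy_manual", "asked_for_hint"]))  # 1.5
```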

From an assessment perspective, having a learner select one of three or four options is inferior to having learners search for answers on their own. Further, stopping a learner in their tracks at the moment of difficulty within a course and asking why they responded a certain way will yield valuable data for the organization. Dynamically adapting coursework to fit the learner’s needs will immediately identify professional development opportunities based on the knowledge gaps learners demonstrate.
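
The pause-and-probe idea could look something like the following; the question ID, response and storage step are all hypothetical:

```python
def on_incorrect_answer(question_id: str, learner_response: str,
                        rationale: str) -> None:
    """Capture the learner's own explanation at the moment of difficulty."""
    # In a live course, `rationale` would come from an open-ended prompt
    # shown the instant the learner answers incorrectly.
    record = {
        "question": question_id,
        "response": learner_response,
        "rationale": rationale,
    }
    print(record)  # stand-in for writing to a learning record store

on_incorrect_answer(
    "safety-101/q3",
    learner_response="option B",
    rationale="I thought the lockout step came after the inspection.",
)
```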

A learning architect will be able to identify different organizational challenges based on detailed measurements gleaned from each learner, group and department, and from the organization as a whole. An adaptive approach will even allow an organization to take information from other technology it may be using, such as HRIS or CRM systems, and incorporate it into the learning architecture. The trends and patterns identified will help the learning function save costs and drive revenue in ways few other departments can match.

The opportunity for learning leaders to become executive-level power players starts with data, metrics and the analytics they can subsequently provide to the organization. Deep learning measurement concretely demonstrates why learners do what they do, and the ability to capture this sort of information will greatly empower learning departments within the corporate hierarchy.

Zachary J. Konopka is vice president of Skilitics, a digital learning company. To comment, email editor@CLOmedia.com.