‘Black Mirror’ or better? The role of AI in the future of learning and development

The pros of artificial intelligence in learning (speed, customized content and targeted recommendations) outweigh the cons. However, learning leaders need to truly understand the technology, the risks and the critical approaches for mitigating those risks.

The hit TV anthology ‘Black Mirror’ has captivated viewers with speculative tales of how emerging technologies like artificial intelligence, machine learning and intelligent automation could go horribly awry. It makes for great television, but do similar futures await learning leaders who are looking to strategically leverage these technologies?

AI-powered digital technologies are steadily becoming part of our daily lives. Amazon’s Alexa, Apple’s Siri, Google’s Assistant and Microsoft’s Cortana are accessible in many of our homes and through our digital devices. These and other AI agents are evolving, growing more capable of completing tasks traditionally handled by humans.

AI-driven technologies have the potential to enhance our lives as both learners and workers. Researchers and developers are continuously improving them to mimic human behaviors; they can learn, problem-solve and process language. Although they are becoming stronger imitators of humans, they still lack essential human traits such as wisdom, insight, humor and empathy. These are the traits that will ensure humans remain a critical part of the workforce for the foreseeable future.

How can AI be used in learning?

Businesses are using AI agents to engage customers, rapidly create content, analyze transactions and detect fraud. As organizations continue to transform their businesses with machine learning, we should consider how AI can transform learning and performance. Some of these tools have already entered the space; others have not but are available in adjacent industries. These are just a handful:

Personalized learning recommendations: Social platforms such as Facebook, Twitter and Instagram have long used AI to personalize the news and advertisements a user sees based on their preferences and interests, drawing on a wide range of variables to predict what the reader will find engaging. Learning experience platforms like Degreed and EdCast use comparable technology to drive their recommendation engines, proposing content that learners are likely to find relevant based on their demonstrated preferences.
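
To illustrate the underlying idea, here is a minimal content-based recommender sketch in Python: it builds a tag profile from a learner’s history and ranks catalog items by cosine similarity. This is not how any particular vendor’s engine works, and every title and tag below is hypothetical.

```python
from collections import Counter
from math import sqrt

def cosine(a: Counter, b: Counter) -> float:
    """Cosine similarity between two sparse tag-count vectors."""
    dot = sum(a[t] * b[t] for t in a.keys() & b.keys())
    norm = sqrt(sum(v * v for v in a.values())) * sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

def recommend(history: list[list[str]], catalog: dict[str, list[str]], top_n: int = 3):
    """Rank catalog items by similarity to the tags of content the learner chose before."""
    profile = Counter(tag for item in history for tag in item)
    scored = [(title, cosine(profile, Counter(tags))) for title, tags in catalog.items()]
    return sorted(scored, key=lambda pair: pair[1], reverse=True)[:top_n]

# Hypothetical viewing history and catalog:
history = [["python", "data", "pandas"], ["sql", "data"]]
catalog = {
    "Intro to Tableau": ["data", "visualization"],
    "Advanced Excel": ["spreadsheets", "formulas"],
    "Data Pipelines 101": ["python", "data", "sql"],
}
print(recommend(history, catalog))  # "Data Pipelines 101" scores highest
```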

Building a personal performance network: One of the most critical assets a new hire can build is a personal network within the organization to help them assimilate, grow and perform in their role. An AI agent could, for example, help a new employee navigate a large population to strategically identify a coach, a subject matter expert, a career mentor or an experienced peer who can help them learn more about their role and the organization. That, in turn, leads to better business outcomes, whether the work is serving customers or delivering a product. A tool such as Starmind can help a company analyze and identify talent across an organization.
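
For illustration only, here is a toy sketch of how such matching might work, assuming a skills directory is available. It ranks colleagues by the overlap between their skills and a new hire’s learning goals; the names, skills and simple Jaccard-overlap scoring are assumptions, not how Starmind operates.

```python
def jaccard(a: set, b: set) -> float:
    """Overlap between two skill sets, from 0 (none) to 1 (identical)."""
    return len(a & b) / len(a | b) if a | b else 0.0

def suggest_connections(goals: set[str], directory: dict[str, set[str]], top_n: int = 2):
    """Rank colleagues by how closely their skills match a new hire's learning goals."""
    ranked = sorted(directory.items(), key=lambda kv: jaccard(goals, kv[1]), reverse=True)
    return [(name, round(jaccard(goals, skills), 2)) for name, skills in ranked[:top_n]]

# Hypothetical directory; a real tool would draw on HR and collaboration data.
directory = {
    "Priya": {"onboarding", "sales process", "crm"},
    "Marcus": {"product design", "prototyping"},
    "Elena": {"crm", "customer service", "coaching"},
}
print(suggest_connections({"crm", "customer service"}, directory))
```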

On-demand 24/7 learning assistance: Many companies have used AI bots to provide front-line digital customer engagement for years, and they have become quite effective at mimicking humans. The use of bots in learning is increasing, but the practice is still emerging and not yet widely adopted. Imagine an advanced AI-assisted learning bot that could not only reason and interact to support a learner with immediate requests, but also anticipate and provide resources across each of the moments of learning need.
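
A bare-bones sketch of the retrieval idea behind such a bot follows, with a hypothetical answer bank and simple keyword overlap standing in for the intent models or large language models a production bot would actually use.

```python
import re

# Hypothetical answer bank; entirely invented for illustration.
FAQ = {
    "how do i reset my password": "Open the portal, choose 'Forgot password' and follow the prompts.",
    "where is the expense policy": "The expense policy lives on the finance intranet page.",
    "how do i book travel": "Use the travel tool linked from the HR homepage.",
}

def tokens(text: str) -> set[str]:
    """Lowercase a question and split it into word tokens."""
    return set(re.findall(r"[a-z']+", text.lower()))

def answer(question: str) -> str:
    """Return the stored answer whose wording best overlaps the learner's question."""
    q = tokens(question)
    best, score = None, 0.0
    for known, reply in FAQ.items():
        overlap = len(q & tokens(known)) / len(tokens(known))
        if overlap > score:
            best, score = reply, overlap
    return best if score >= 0.5 else "Let me route that to a human colleague."

print(answer("How do I reset my password?"))
```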

An example: During a live Georgia Tech lecture class in 2016, a professor told his students that his teaching assistant, Jill Watson, would be available online to answer their questions throughout the semester-long course. Near the end of the class, the instructor revealed that Jill the TA was, in fact, a virtual AI bot. The students were surprised; many had assumed she was human, and she was so responsive and helpful that some felt they had developed a connection with her.

Productivity, networking and wellness-driven learning recommendations: With the rollout of Microsoft’s MyAnalytics in Office 365, many organizations that use the Microsoft suite now have access to this functionality. By tracking a worker’s activity through their calendar, email, collaboration sites, chat and web conferencing, an organization can begin to apply learning recommendations with surgical precision at the point of performance.

For example, an AI agent could identify that it’s been 90 days since the last performance review with one of a manager’s direct reports. The tool could send a reminder, help set a calendar meeting, pull notes from the last performance review, share best-practice videos to help them prepare and nudge the manager to record the completed performance review. Finally, a feedback bot could analyze a recording of the manager’s practice session — searching for triggers, microaggressions or other areas for improvement.
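
The scheduling piece of that scenario is straightforward to sketch. The 90-day cadence, team data and field names below are assumptions for illustration, not the behavior of any specific product.

```python
from datetime import date, timedelta
from typing import Optional

REVIEW_CADENCE = timedelta(days=90)  # assumed policy interval

def reviews_due(last_reviews: dict[str, date], today: Optional[date] = None) -> list[str]:
    """Direct reports whose most recent review is 90 or more days old."""
    today = today or date.today()
    return [name for name, last in last_reviews.items() if today - last >= REVIEW_CADENCE]

# Hypothetical data that a real agent would pull from the HR system:
team = {"Sam": date(2021, 1, 5), "Noor": date(2021, 3, 20)}
for name in reviews_due(team, today=date(2021, 4, 10)):
    print(f"Reminder: schedule a review with {name}; the last one was {team[name]}.")
```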

Adaptive learning to optimize/shorten the learning journey: This is a growing application in the learning space. Tools like Area9’s Rhapsode platform can streamline a learner’s path to performance, help uncover unconscious incompetence and reinforce the learning to offset the forgetting curve.
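
The forgetting-curve logic such tools build on can be illustrated with the classic Ebbinghaus model (this is the textbook formula, not Area9’s proprietary algorithm): predicted recall decays roughly as e^(-t/S), where t is time elapsed and S is a memory "stability" parameter, so reinforcement can be scheduled before recall crosses a threshold. The stability value below is an assumed placeholder.

```python
from math import exp, log

def retention(days_elapsed: float, stability: float) -> float:
    """Ebbinghaus-style forgetting curve: predicted recall probability."""
    return exp(-days_elapsed / stability)

def next_review_in(stability: float, threshold: float = 0.7) -> float:
    """Days until predicted recall falls to the threshold: solve e^(-t/S) = threshold."""
    return -stability * log(threshold)

# A learner with an assumed stability of 5 days for a new concept:
print(round(retention(3, stability=5), 2))    # recall probability after 3 days (~0.55)
print(round(next_review_in(stability=5), 1))  # schedule reinforcement before ~1.8 days
```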

Improved accessibility: AI can help learning designers and developers create more accessible, seamless learning journeys from the point of design rather than as an afterthought. Features such as text-to-speech and speech recognition, Microsoft’s Seeing AI app and YouTube’s automatic transcripts are examples of how learning content can be made more accessible.
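
One small example of building accessibility in at the point of design: a check that flags images missing alt text before content ships. This is a generic sketch using only Python’s standard library, not a feature of any particular authoring tool.

```python
from html.parser import HTMLParser

class AltTextChecker(HTMLParser):
    """Flag <img> tags missing alt text so the gap is caught at design time."""
    def __init__(self):
        super().__init__()
        self.missing: list[str] = []

    def handle_starttag(self, tag, attrs):
        if tag == "img":
            attrs = dict(attrs)
            if not attrs.get("alt"):
                self.missing.append(attrs.get("src", "<unknown source>"))

checker = AltTextChecker()
checker.feed('<p>Step 1</p><img src="pump-diagram.png"><img src="icon.png" alt="settings icon">')
print("Images missing alt text:", checker.missing)  # ['pump-diagram.png']
```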

Sourcing, analyzing and generating draft content: An AI-assisted learning-development tool can search a variety of internal or external sources to find content relevant to a particular learning or performance outcome. Digital marketers and online publishers have been using AI to generate simple stories for years; odds are you have read an online article or blog post written by a bot without realizing it. In the learning space, tools such as Emplay and IBM’s Watson can support this. For example, say a designer wants to create a quick microlearning on how a vacuum pump works. The designer could engage an AI bot to crawl internal or external networks for potential resources, including videos and images. The AI agent then analyzes them, aligning pieces to specific learning outcomes, prioritizing resources for relevance and tagging them by modality. Ultimately, this frees the designer to focus on learner-centric design and delivery.
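
Here is a toy sketch of the triage step in that workflow, assuming the crawl has already returned candidate files. Keyword overlap stands in for the semantic analysis a tool like Watson would perform, and the file names and descriptions are invented.

```python
import re

# Map file extensions to delivery modality (assumed taxonomy).
MODALITY = {".mp4": "video", ".png": "image", ".pdf": "document", ".html": "article"}

def relevance(outcome: str, description: str) -> float:
    """Fraction of the learning outcome's keywords found in a resource description."""
    want = set(re.findall(r"[a-z]+", outcome.lower()))
    have = set(re.findall(r"[a-z]+", description.lower()))
    return len(want & have) / len(want) if want else 0.0

def triage(outcome: str, resources: list[tuple[str, str]]):
    """Rank crawled resources by relevance and tag each with its modality."""
    rows = []
    for path, description in resources:
        ext = path[path.rfind("."):]
        rows.append((relevance(outcome, description), MODALITY.get(ext, "other"), path))
    return sorted(rows, reverse=True)

found = [
    ("pump_basics.mp4", "How a vacuum pump works, stage by stage"),
    ("site_safety.pdf", "General site safety checklist"),
    ("pump_cutaway.png", "Cutaway diagram of a vacuum pump"),
]
for score, kind, path in triage("how a vacuum pump works", found):
    print(f"{score:.2f}  {kind:8}  {path}")
```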

What are the ethical and practical issues we should consider?

As you can see, there are many potential benefits to adopting AI in the learning space. Before investing, however, it is important to explore the risks and practical issues of deploying AI across the enterprise. The following are some of the key issues as they relate to learning.

Employee/learner privacy: AI and the algorithms it is built on require data about learners. A learner’s preferences, behaviors, productivity and collaborators can all supply data used to personalize learning and performance-support systems. However, there are serious questions about how this data is collected, stored, analyzed and applied. The potential exists for an organization to misuse productivity or collaboration data in decisions about promoting, rewarding or firing an employee.

This becomes more significant when you include the ability to capture and search the audio from a recorded meeting. Almost instantaneously, you have a searchable record of everything said in a meeting, which is something not everyone is ready for.

Employee/learner security: The more information an AI agent has about individual workers, the more that tool can be used to target, exploit or harm a specific person. For example, with access to the right data set, including training certifications, a bad actor could leverage an AI-driven talent analytics tool to strategically identify a specific human with a critical skill set and a personal vulnerability, such as extremely high debt.

Inaccurate, outdated or intentionally false information (deepfakes): This risk comes from two directions. The first is inbound data: content recommended by AI tools can be compromised when inaccurate data sets are ingested and surfaced to learners as recommended information. These recommendations carry all the earmarks of authentic, expert information, and it is difficult for novices to tell the difference. The second is AI agents built specifically to generate false information at scale in order to influence the behavior of a population. One example is the challenge of sorting through scientific, evidence-based data collected from publicly accessible sites during the pandemic: aggregator bots have had a difficult time differentiating scientifically validated information from intentional propaganda about the efficacy of masks and social distancing.

Algorithmic bias: Outputs are only as good as the data they are based on. AI built by inherently biased humans can encode those biases in its algorithms, and the data used to train an algorithm may itself contain biases; the algorithm can amplify both. For example, Amazon built an experimental AI tool to streamline résumé analysis. The tool reviewed large batches of job applicants’ résumés to identify top talent.

The tool rated each résumé on a scale from one to five stars. As Amazon began working with the system, the team realized it was not rating candidates in gender-neutral ways. The training data set of résumés was predominantly male, mirroring the industry trend at the time, when most people in the role were men. Essentially, the tool taught itself that male candidates were preferable. It was never used to evaluate real candidates and was abandoned.
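
A team evaluating such a tool could start with simple checks on its outputs. The sketch below applies the EEOC’s well-known “four-fifths” heuristic to fabricated one-to-five-star scores; a real audit would go much deeper.

```python
from statistics import mean

def selection_rates(scores: dict[str, list[int]], cutoff: int = 4) -> dict[str, float]:
    """Share of each group scoring at or above the advancement cutoff."""
    return {g: mean(1 if s >= cutoff else 0 for s in vals) for g, vals in scores.items()}

def four_fifths_check(scores: dict[str, list[int]]) -> bool:
    """EEOC 'four-fifths' heuristic: flag if any group's rate is under 80% of the highest."""
    rates = selection_rates(scores)
    top = max(rates.values())
    return all(rate >= 0.8 * top for rate in rates.values())

# Fabricated star ratings for illustration only:
scores = {"group_a": [5, 4, 4, 5, 3], "group_b": [3, 2, 4, 2, 3]}
print(selection_rates(scores))   # {'group_a': 0.8, 'group_b': 0.2}
print(four_fifths_check(scores)) # False -> investigate for bias
```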

What you can do to mitigate the risks

Ensure privacy and security: Balancing the need for data to fuel the AI engine against the learner’s right to privacy is a challenging topic, but it is one that must be addressed. Although recent legislation has attempted to outline key components, the approaches continue to evolve. The following principles are a good starting point for the learning context; a sketch of how they might translate into platform settings follows the list:

  • Transparency: Clearly identify when a learner is interacting with an AI system that is collecting data about them.
  • Right to the information being collected: Learners have an established need and right to access their data.
  • Opting out: Learners can opt out of the system’s data collection.
  • Built-in limits: The purpose and scope of data collection are limited by design.
  • Automatic deletion: A learner’s data can be deleted upon request.
  • Security: Any data collected must always be secure.
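
To make these principles concrete, here is a hypothetical sketch of how they might translate into default settings for a learning platform. Every field name and value is illustrative, not drawn from any real product or regulation.

```python
from dataclasses import dataclass

@dataclass
class LearnerDataPolicy:
    """Hypothetical privacy-by-design defaults for a learning platform."""
    disclose_ai_interaction: bool = True  # transparency: tell learners when a bot collects data
    allow_data_export: bool = True        # right to access the information being collected
    opt_out_available: bool = True        # learner can decline data collection entirely
    permitted_fields: tuple = ("course_progress", "quiz_scores")  # built-in limits on scope
    retention_days: int = 365             # data is deleted automatically after this window
    delete_on_request: bool = True        # honor learner deletion requests
    encrypt_at_rest: bool = True          # security for anything that is collected

policy = LearnerDataPolicy()
print(policy.permitted_fields, policy.retention_days)
```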

Remove, flag or tag inaccurate, outdated or intentionally fake information (deepfakes): Build in mechanisms to identify and eliminate inaccurate or misleading content. Facebook and Twitter implemented this strategy during the last U.S. presidential election.

Eliminate algorithmic bias: Continuously evaluate AI outputs for evidence of bias, and continually retrain the algorithm as the environment and data set change. Working with a diverse team and a robust, bias-free data set will reduce the risk of bias.

Educate the humans: As users and consumers of AI outputs, learners must understand how their actions shape the recommendations they receive, evaluate the quality of the information they are given, and know when and how to break out of the algorithm (e.g., when the need to innovate arises).

In the end, in the learning space, the pros of AI (speed, customized content and targeted recommendations) outweigh the cons. However, to avoid becoming the inspiration for a future “Black Mirror” episode, learning leaders need to truly understand the technology, the risks and the critical approaches for mitigating those risks.