IPsoft’s Senior Cognitive Trainer Brian Kuchta explores the similarities between training humans and AI.
As IPsoft’s Senior Cognitive Trainer, I spend most days teaching humans how to use Amelia, our cognitive Artificial Intelligence (AI) agent. When I’m not focused on training humans, my time is spent developing a better understanding of how to train Amelia and her evolving learning capabilities.
At first glance, these two scenarios may seem too different to bear any similarity to one another, and perhaps to someone who has only done the former or the latter, this may be true. But for those who have experience teaching both, it is easy to draw interesting parallels between how we train humans and how we train AI.
If humans are viewed as biological machines, we are born with a level of intelligence that is quite astonishing. The cornucopia of senses that we as humans possess from birth — sight, hearing, smell, taste, and touch — is unparalleled and provides the “training data” needed for us to learn over time from newborn to adult. For the machine, these innate sensory mechanisms still only exist in the minds of science fiction writers (although we are well on our way to developing these capabilities with AI). Machines are largely a container — filled with combinations of machine learning algorithms and other mechanized learning capabilities, but still a machine at heart.
It should be no wonder that training a machine to understand very specific information related to a specific industry or business would take some effort.
Sure, there are technologies that can tell you when the next movie is playing at your local movie theater, but those technologies have been trained to do that. They aren’t just magically searching the vast knowledge of all things online to figure out what you are asking for and locating it. They have been trained to know that if someone asks for a movie time, they should look in a specific place for that information given what has been said (or the machine’s understanding of what has been said).
Today, the machine does this incredibly quickly, but that speed and accuracy don’t typically happen on their own. They are based on the level and quality of training the technology received, as well as any additional learning that occurs should the machine fail. That additional learning is frequently ongoing and requires human intervention to say, “Hey, machine, you got that wrong. Let’s not make that mistake again in the future.” Machines do not typically know that they are wrong at the point they are wrong, unless they are told so or there is a negative consequence to being wrong that helps them recognize the mistake.
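That correction loop can be sketched in a few lines of code. The class below is a toy, not IPsoft’s implementation: it matches an utterance against its (tiny, hypothetical) set of training examples, and only “learns it was wrong” when a human explicitly flags the mistake and supplies the right label.

```python
class TrainableResponder:
    """Toy illustration of learning from explicit human correction."""

    def __init__(self):
        # Hypothetical training data: utterance -> intent label.
        self.examples = {"when is the movie playing": "movie_times"}

    def classify(self, utterance):
        # Naive matching: pick the known example sharing the most words.
        words = set(utterance.lower().split())
        best, best_overlap = "unknown", 0
        for example, intent in self.examples.items():
            overlap = len(words & set(example.split()))
            if overlap > best_overlap:
                best, best_overlap = intent, overlap
        return best

    def correct(self, utterance, right_intent):
        # Human feedback: "you got that wrong, don't repeat the mistake."
        self.examples[utterance.lower()] = right_intent


bot = TrainableResponder()
print(bot.classify("show schedule please"))  # "unknown" -- no training match
bot.correct("show schedule please", "movie_times")
print(bot.classify("show schedule please"))  # "movie_times" -- corrected going forward
```

Notice that nothing inside `classify` can detect its own failure; the improvement comes entirely from the `correct` call, which is the “Hey, machine, you got that wrong” moment described above.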
From my view, that sounds a lot like what we humans experience from birth through adulthood: we live life, make mistakes, have teachers correct our errors and eventually become full-fledged adults who, of course, never make any mistakes. (Yes, that was a touch of sarcasm, and yes, pretty much every machine I’m aware of wouldn’t fully understand this. Some humans wouldn’t read it that way either.)
However, the machine isn’t given 18-plus years to experience all of that learning, and it doesn’t innately have all of those sensory (and pain) receptors to learn from true life experience beyond what it is taught through training.
So what is the connection between human and AI learning? To understand this, let’s look at how I train a human to do something new.
Generally, I might provide insight into the purpose of the learning (at least for adult learners). Then I break down what will be learned into pieces, and perhaps demonstrate both what the task is and how to do it. Then I let the person try whatever it is they are learning and provide feedback, particularly if they do something incorrectly or inefficiently.
For example, a newly graduated young adult joins a customer service team. I explain why we answer a customer in a certain way from a high level (e.g., the vision and mission of the company). I show them how to answer an inquiry (i.e., the proper way to answer, the technology used to answer, what clarifying information is needed to answer, where to find the correct answer to the inquiry, etc.). Finally, I let the learner practice and provide feedback should they make a mistake.
And how do I train AI (in my case, Amelia)? I take the use case that the AI will be accomplishing and break what will be learned into pieces — what types of things the end user will say that should launch this specific use case, how the AI should respond, what clarifying information is needed to answer, where to find the correct answer to the inquiry (perhaps via a direct integration with another system or database), etc. After providing this information to the AI in a format that the AI can understand, I let the learning happen. Then I “practice” with the AI (i.e., test) to make sure that it responds (e.g., to a specific use case for a new customer request, with the specific process or processes for responding to questions, etc.) with the results I expect. If not, I provide feedback or modify the training to correct whatever the AI is doing incorrectly or inefficiently.
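The training steps just described — trigger phrases, clarifying questions, and an answer lookup — can be sketched as a small use-case definition. Everything here is hypothetical and illustrative (the names, the structure, and the hard-coded answers are mine, not Amelia’s actual training format); the lookup dictionary stands in for what would really be an integration with another system or database.

```python
# A hypothetical use case broken into the pieces described above.
USE_CASE = {
    "name": "branch_hours",
    "triggers": ["what are your hours", "when are you open"],  # launching phrases
    "clarify": ["city"],                 # info required before answering
    "answers": {"boston": "9am-5pm", "austin": "8am-6pm"},  # stand-in for a database
}

def respond(utterance, known_info):
    # 1. Does the utterance launch this use case?
    if not any(t in utterance.lower() for t in USE_CASE["triggers"]):
        return "Sorry, I haven't been trained on that."
    # 2. Ask for any missing clarifying information.
    for slot in USE_CASE["clarify"]:
        if slot not in known_info:
            return f"Which {slot} are you asking about?"
    # 3. Look up the answer (stand-in for a system/database integration).
    return USE_CASE["answers"].get(known_info["city"].lower(), "Let me check on that.")

# "Practicing" with the AI, i.e. testing that responses match expectations:
print(respond("When are you open?", {}))                  # asks for the city
print(respond("When are you open?", {"city": "Boston"}))  # returns the hours
```

The “practice” step at the bottom is exactly the testing described above: you run the utterances you expect end users to say and check that each response matches what the training was supposed to produce.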
While there are differences in the approach to learning and/or the format of the learning content needed to train each, there are also interesting similarities in training humans and AI. And based on these similarities, it is important to realize that AI isn’t magic, just like someone memorizing all of the countries in the world alphabetically isn’t magic. It takes thoughtful, thorough training to accomplish.
Even with out-of-the-box solutions, there will likely need to be some training, or perhaps retraining, should you wish to have the AI speak specifically to your industry or company. It is absolutely possible to have an AI trained to accomplish very general tasks (see the movie time example above), but now try to take that a step further to an AI understanding your specific product or service offering. If specific product names or company procedures matter and fall outside an industry norm, it can be challenging for this to work right out of the box, as the AI needs to be trained on your specific information.
Audience consideration is another important factor. The reason that movie times are a slightly easier task than, say, obtaining a small business loan is that the audience for movie times, and the language that audience uses, is very general. The audience for a small business loan is unique, and if your organization only provides small business loans for specific types of businesses, then that would also need to be specified as part of the AI’s training.
The whole point is that AI needs to be taught if you want to accomplish something more than a general process, and it would behoove anyone who is considering working with AI in this way to keep three things in mind:
- AI is still a machine with limited innate sensory perception. To train AI, you must provide quality data and feedback so that the AI can learn and mature for your specific purposes
- Think about how someone trains an adult learner to use a new technology (and if you don’t know what that means, find a trainer and ask them). Then consider a similar approach when you train your AI. One of the biggest benefits will be that your AI will never leave you, can happily work 24/7 without complaint, and will never need to take a vacation day to attend her sister’s wedding
- While AI can be trained using information similar to what could be used to train a human, the learning format and method used by the AI will likely vary considerably. You’ll still need to test the AI’s understanding of the provided training, let the AI practice tasks to ensure that expectations are being met, and provide feedback in a format the AI can understand.