What is “deep context switching” in a chatbot? Consider the following transcript between a user and a healthcare chatbot designed to check up on an elderly patient on a daily basis (names and other details have been modified).
“B” denotes the bot and “U” denotes the user (Frank).
1B: Good morning. Is this Mr. Frank Bates?
1U: Yes. That’s me.
2B: This is my daily call, to make sure all is well and you had a good night’s sleep.
2U: In fact I couldn’t fall asleep last night.
3B: Sorry to hear that, Frank. Can you tell me more? Did you experience pain? Or disturbing thoughts?
3U: No, the neighbors had a loud party until very late.
4B: Anything else you’d like to share with me on the medical front? Anything I should tell your Doctor?
4U: Not really. I’m just lonely.
5B: You must be getting out less now with the virus and all. I hope you are keeping safe. I can remind you of some Covid-related precautions if you’re interested.
5U: No need. I’m just bored.
6B: How about a short amusing quiz?
6U: Why not
7B: How do you fit 4 elephants in a refrigerator?
7U: wait, I do have a message for my doctor.
8B: No problem Frank. What’s the message?
8U: Please change my Lovastatin to Lipitor.
9B: I’ll make sure he gets the message. So, how do you fit 4 elephants in a refrigerator?
9U: No idea.
WHEN SWITCHING CONTEXT – IDENTIFY THE CRUCIAL POINT
The chatbot in question is composed of several Conversational Components, covering the various contexts that may arise in a convo. The Entry Component issued the first 4 responses, covering the topic of the user’s sleep quality (needless to say, this component covers many more options around possible inputs about the user’s sleep).
The user’s 4th input was received after the Entry component had finished its task and gathered the info about Frank’s quality of sleep (and the reasons for it). The “I’m just lonely”, followed by “I’m just bored”, invoked a quiz component designed to entertain the user, who indeed agreed to participate.
However, at this point, something happened: instead of answering the first quiz question, the user said “wait, I do have a message for my doctor”. This is a crucial point in Conversational AI: a certain topic is being discussed, and in its midst, before the topic is exhausted, the user produces an input that brings up another topic.
Switching contexts in the middle happens when the active context is interrupted in favor of a higher-priority one. After the new context is switched to and exhausted, a decision must be made: should the previous context, paused in the middle, be resumed or dropped altogether?
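To make the pause/resume mechanics concrete, here is a minimal Python sketch of that bookkeeping. The names (`Context`, `ContextStack`, `interrupt`, `exhaust`) are mine, not taken from any particular framework:

```python
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class Context:
    name: str              # e.g. "quiz" or "doctor_message"
    turns_paused: int = 0  # how many turns this context has sat interrupted

class ContextStack:
    """Minimal pause/resume bookkeeping: the active context sits on top."""

    def __init__(self) -> None:
        self._stack: List[Context] = []

    def interrupt(self, new_context: Context) -> None:
        # The current context is paused in favor of a higher-priority one.
        self._stack.append(new_context)

    def exhaust(self, resume_previous: bool) -> Optional[Context]:
        # The interrupting context is done.
        self._stack.pop()
        if not self._stack:
            return None
        if resume_previous:
            return self._stack[-1]  # pick up the paused context again
        self._stack.pop()           # or drop it altogether
        return None
```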
3 ways of breaking out of context
While talking to a chatbot, the user can deliberately break out of context in 3 main ways. He can produce a “Goodbye” intent (like saying “bye” or “see you later”), or he can produce a “Topic exit” intent (like saying “enough about this” or “let’s change the subject”). The 3rd possibility is less prevalent: the user brings up a context which the bot’s content does cover, but which is not the one currently active. I like to call these inputs “strong keywords”, or “strong intents”: inputs that invoke important topics even at the cost of breaking the current topic of discussion. A toy sketch of all three appears below.
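In this sketch, keyword matching stands in for real intent detection, and the phrase lists and component names are illustrative only; a production bot would use a trained NLU model:

```python
GOODBYE_PHRASES = ("bye", "see you later")
TOPIC_EXIT_PHRASES = ("enough about this", "let's change the subject")
STRONG_INTENTS = {
    "message for my doctor": "doctor_message",
    "i want to kill myself": "crisis_support",
}

def classify_breakout(user_input: str):
    """Return (breakout_type, target_component) for a user input."""
    text = user_input.lower()
    if any(p in text for p in GOODBYE_PHRASES):
        return "goodbye", None
    if any(p in text for p in TOPIC_EXIT_PHRASES):
        return "topic_exit", None
    for phrase, component in STRONG_INTENTS.items():
        if phrase in text:
            return "strong_intent", component
    return "stay", None  # no breakout: remain in the current context
```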
Returning to our healthcare example, the current context at turn 7 was the quiz, but the user said: “wait, I do have a message for my doctor”. For a healthcare chatbot, this is clearly a “strong intent”, and it matters. Therefore, strong intents are recognized even before the current component (the quiz) has a chance to continue the flow. Control is now handed over to another component, one that handles messages to the medical staff. Once that new context is exhausted, control returns to the quiz component.
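A minimal sketch of such a dispatch step, assuming hypothetical `Component` objects and a registry of strong intents (this is not the actual implementation of the bot above):

```python
class Component:
    """Illustrative stand-in for a conversational component."""
    def __init__(self, name: str) -> None:
        self.name = name

    def start(self, user_input: str) -> str:
        return f"[{self.name}] taking over after: {user_input!r}"

    def respond(self, user_input: str) -> str:
        return f"[{self.name}] continuing its flow after: {user_input!r}"

# Map each strong-intent trigger to the component that should take over.
STRONG_INTENTS = {"message for my doctor": "doctor_message"}

def dispatch(user_input: str, active: Component, registry: dict):
    # Strong intents are checked *before* the active component (the quiz)
    # gets a chance to continue its own flow.
    text = user_input.lower()
    for phrase, name in STRONG_INTENTS.items():
        if phrase in text:
            interrupting = registry[name]
            return interrupting.start(user_input), interrupting
    return active.respond(user_input), active

# Usage: the quiz is active when the strong intent arrives.
quiz = Component("quiz")
registry = {"doctor_message": Component("doctor_message")}
reply, active = dispatch("wait, I do have a message for my doctor",
                         quiz, registry)
```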
What’s important for AnnA?
AnnA, a companionship bot promoted by Jason Gilbert, is composed almost entirely of a large multitude of components, all standing by, waiting to be invoked by a user’s input. Since users often do not suggest discussion topics themselves, AnnA has a long list of conversation topics she brings up spontaneously. Yet AnnA considers some of these topics more important than others: conversation topics which warrant breaking out of anything else and focusing on. For example, one of them is a component which handles depressed users with suicidal tendencies. Phrases like “I want to kill myself” are a strong intent that will invoke this component at any point of the convo.
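One simple way to model “more important than others” is a numeric priority per topic. This is my own sketch, not AnnA’s actual mechanism; the topic names and numbers are made up:

```python
# Priority-tagged topics, assuming a simple numeric scheme (higher wins).
TOPIC_PRIORITY = {
    "small_talk": 1,
    "movies": 1,
    "crisis_support": 10,  # "I want to kill myself" interrupts anything
}

def should_interrupt(active_topic: str, incoming_topic: str) -> bool:
    # An incoming topic breaks out of the current one only if it matters more.
    return TOPIC_PRIORITY[incoming_topic] > TOPIC_PRIORITY[active_topic]
```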
Strong intents are system-wide. Hence, they must be part of any chatbot built on any kind of componentized framework with multiple competing contexts. Fine-tuning complex deep context switching involves decisions like when to drop a previous context instead of returning to it (often based on the length of the distraction), and whether to continue the previous context from where it was interrupted or to start it over.
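Such a decision can be as simple as a threshold on the length of the distraction. A hedged sketch, with illustrative thresholds:

```python
def resume_policy(turns_interrupted: int) -> str:
    """Toy policy for a paused context, keyed on how long the
    distraction lasted. The thresholds are illustrative, not prescriptive."""
    if turns_interrupted <= 2:
        return "resume"   # short detour: pick up exactly where we left off
    if turns_interrupted <= 6:
        return "restart"  # user likely lost the thread: start the topic over
    return "drop"         # too much has happened: abandon the paused context
```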
Very few chatbots on the market can perform deep context switching in an elegant way. Typically, you would need to finish whatever you’re discussing with the bot, and only then, from the root of the conversation, bring up the new topic.