The world was stressful enough even before Covid-19 came and turned it upside down. When issues like work stress, anxiety, workplace relationships, and inappropriate behavior fill our lives, we need help. Mental health chatbots are one of the ways we can cope with this reality.
Aishwarya Kamath is the conversation design team lead at Wysa, arguably one of the world’s leading mental health chatbots. Used by more than 3 million people, it also helps psychiatrists serve their patients 24/7. In the latest episode of Taking Turns, she joins us to discuss the ins and outs of her work and experience.
How did you become a conversation designer?
That’s a very interesting question. When I started, I was doing the equivalent of a CPA – I’m a commerce graduate. What the U.S. calls a CPA, we call a CA here. So I was trying to navigate those waters. Initially, I trained for two months with a CA, trying to learn the job. Two months down the line, I realized, oh my god, this is not for me. There are so many numbers, so much documentation, and so much to do day to day that I felt I couldn’t handle it.
Then I moved on to my second-best bet – I really thought I could do well in writing. So I started looking for internships, stumbled on Wysa, and joined as a content writer. That role completely evolved into something else, because the work we did day to day was what AI trainers do. Slowly, AI training evolved into what is now called conversation design. So it’s been an amazing journey, being there and seeing Wysa grow, and growing in that direction myself.
What’s the bot or the project that you’re most proud of?
When I joined Wysa, it could already listen across a lot of different domains. However, it was mostly passive in nature. By “passive”, I mean it recognized those domains but didn’t respond to them directly. We’ve since worked on moving from passive listening to active listening, where Wysa makes you feel heard – it makes you feel like we get what you’re saying.
For instance, a user talks about a relationship and Wysa is able to pick that up. It says, “Hey, I know relationships are complicated… hang in there, it gets better.” Acknowledging the user through that active listening is, I think, my proudest project at Wysa so far.
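To make the passive-versus-active distinction concrete, here is a minimal, hypothetical sketch in Python. A production system like Wysa would use a trained classifier rather than keyword lists, and these topics and replies are our own illustrations, not Wysa’s actual model – but the shape of the behavior (detect a domain, then acknowledge it) is the same.

```python
# Illustrative sketch of active listening: detect a topic in the user's
# message and acknowledge it before continuing. The keyword lists and
# replies below are invented for illustration, not Wysa's real data.
ACKNOWLEDGMENTS = {
    "relationships": "I know relationships are complicated. Hang in there, it gets better.",
    "sleep": "Sleep trouble is rough. I'm glad you told me.",
    "work": "Work stress can pile up fast. Thanks for sharing that with me.",
}

KEYWORDS = {
    "relationships": ["relationship", "partner", "breakup"],
    "sleep": ["sleep", "insomnia", "tired"],
    "work": ["work", "boss", "deadline"],
}

def active_listen(message: str) -> str | None:
    """Return an acknowledgment if a known topic is detected, else None."""
    text = message.lower()
    for topic, words in KEYWORDS.items():
        if any(word in text for word in words):
            return ACKNOWLEDGMENTS[topic]
    return None  # passive path: topic not recognized, keep listening

print(active_listen("My relationship has been really hard lately"))
```

A passive system would stop at detection; the active version returns the acknowledgment so the user feels heard before the conversation moves on.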
Who are the users of mental health chatbots and what are the main topics they’re talking about?
So far, at a population level, people talk a lot about relationships – conflicts, arguments, navigating that thin line between the personal and the professional, work stress, or managing relationships between colleagues. Sleep issues are also right up there. Those are a few things users generally talk about on our platform.
What else can you tell us about Wysa – the company, the users, how many users you have?
For those who don’t know, Wysa is a mental health chatbot. It helps you through your emotional health journey and makes sure you’re taking care of yourself. It’s someone you can reach out to at 4:00am when you’re in that negative thought spiral. You can talk it down and put yourself to a calm sleep instead of lying there anxious about what tomorrow will be like, right? Wysa helps you get there and does that for you.
So far, we’ve had more than 3.5 million users across iOS and Google Play, with massive amounts of good ratings – 4.8+ on both Google Play and the App Store. That validation helps us on a day-to-day basis.
Availability is one of the biggest advantages of chatbots. But what is the most important thing when you build mental health chatbots?
I tell my team of conversation designers at Wysa about the three E’s of conversation design. First, making sure it’s Effective – users get to where they want to be, and once they get there, the conversation is actually helpful.
The second is Easy – keeping the language very simple, nothing above 6th-grade language, no complicated words. Just making sure everybody understands what’s going on.
The third is making sure it’s Empathetic. That’s the core of what Wysa is – non-judgmental empathy. You really put yourself in the shoes of a user and walk that journey with them. That’s something we hold onto.
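The “Easy” principle is something a team can actually check mechanically. Here is a small sketch using the open-source textstat library to flag draft bot copy that reads above a 6th-grade level – our choice of tool for illustration; the interview doesn’t say what tooling Wysa uses.

```python
# Sketch: flag bot copy that reads above a 6th-grade level.
# textstat is a third-party package (pip install textstat); using it
# here is an assumption, not a tool mentioned in the interview.
import textstat

drafts = [
    "I know relationships are complicated. Hang in there, it gets better.",
    "Interpersonal dynamics frequently necessitate multifaceted negotiation strategies.",
]

for draft in drafts:
    grade = textstat.flesch_kincaid_grade(draft)  # approximate U.S. grade level
    status = "ok" if grade <= 6 else "too complex"
    print(f"grade {grade:.1f} ({status}): {draft}")
```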
So when we talk about empathy, which is so crucial in your field, how do you get a bot to hold an empathetic conversation?
Day to day, the process for writing any conversation starts with a needs assessment. First, we go through core data to see what things are like for a user. Then we walk the journey of what we’re talking about and translate that into the bot. While writing, we run a conversational test, which is like Wizard of Oz.
Generally you’d get on a call and try the conversation out with a couple of colleagues. We also make sure that at least three people on our team feel the conversation is good to go to the development stage, even from the script phase. So we make sure the empathy is there, approved by three of our people.
How often do you review the logs and change the way the bot speaks based on them?
That’s an interesting process, because every time we make a change, we go in and check whether that change has made an impact – whether it has turned out as expected.
After testing that hypothesis, we come back: ‘Oh, this is not working, so let’s try something else. Let’s make sure it actually impacts the user the way we want – that it’s helpful the way we designed it to be.’
In my opinion, it’s part of any product development. That’s why I say that when you’re designing a conversation, you’re not just writing that conversation. You’re also going back to the data and making changes – making sure that what you deliver to the end user is really helpful.
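As a rough illustration of the before/after check described here, the sketch below compares a simple outcome metric on conversation logs around the date a copy change shipped. The file name, column names, and metric are all hypothetical placeholders, not Wysa’s actual schema or process.

```python
# A minimal sketch of a before/after check on conversation logs.
# The log schema (file name, columns) is hypothetical, not Wysa's.
import pandas as pd

# Assume one row per session, with a boolean outcome column.
logs = pd.read_csv("conversation_logs.csv", parse_dates=["timestamp"])
change_date = pd.Timestamp("2022-03-01")  # when the new copy shipped

before = logs[logs["timestamp"] < change_date]
after = logs[logs["timestamp"] >= change_date]

# Compare the share of sessions that reached the end of the flow.
for name, df in [("before", before), ("after", after)]:
    completion = df["reached_end_of_flow"].mean() * 100
    print(f"{name}: {completion:.1f}% of {len(df)} sessions completed the flow")
```

If the “after” number doesn’t move as hoped, that’s the signal to go back and try something else, as described above.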
In your opinion, can mental health chatbots be a substitute for a human psychologist, or are they just an extension?
Absolutely not! This is a common question everybody asks about Wysa: ‘Are you trying to replace human therapists?’ I think it goes hand in hand. As a therapist, you can’t be there for somebody 24/7. But as a chatbot, you can tell the user: try this when you’re feeling really anxious. When you’re in that moment and can’t get hold of a therapist, here’s something you can do. So it’s an aid that works alongside a therapist – like a coach that talks to you.
What’s the most interesting or funny thing that happened to you as a conversation designer?
Actually, funny… I don’t know, but I think when I started, it was really scary in a way to look at actual user data.
I mean, we don’t know who said what, but at the population level we do know what users are talking about, right?
Initially, when I joined as an AI trainer, we were working on some of the risky models, like SOS or abuse. Back then we had daily standups where we’d say ‘today I worked on this or that’. In one of those standups, someone said ‘today I worked on sexual abuse’, and that came out really wrong. In context, we were actually refining that model, making sure those conversations escalate users to helplines and so on. But said out loud in the standup, it sounded so wrong.
It’s also one of those things that gives meaning to what we do here. The purpose, in the end, is to solve for accessibility, and Wysa allows us to keep that accessibility open to all users. So I think there’s still a gap between what people expect this work to be and what it really is.
What tips can you give to aspiring conversation designers – people interested in this field? And what are the core skills you use in your day-to-day job as a conversation designer?
First of all, when starting out as a conversation designer, don’t be afraid to get your hands dirty and wear multiple hats. You can do a little bit of tech, a little bit of designing the conversation, and a little bit of storyboarding for the conversation.
What I’ve seen in my new recruits is that they feel hesitant to dig into data, do the analysis themselves, or do a little bit of tech. But doing so gives you an understanding of how those teams can work together better. So don’t be afraid to venture into those roles. It’s fun!
Secondly, the core skills. In our teams, we use something we call storyboarding, where you write down the scripts, walk the users’ path with them, and write for each use case. The other bit is data analysis, at least qualitative if not quantitative – reading through data and understanding what the different patterns look like. What are you writing for? Who are these users, and what are they saying?
Finally – Excel, of course, helps you do that. I used to be extremely scared of numbers until I discovered the whole journey you can take with Excel. It really makes your life easy – I can tell you that.
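The same kind of qualitative pattern-reading can be done in a spreadsheet or in a few lines of code. Here is a small sketch in Python (pandas) that tallies how often a handful of themes appear in user messages – the file name, column, and theme list are hypothetical, chosen only to mirror the topics mentioned earlier in the interview.

```python
# Sketch of qualitative pattern-reading: count how often a few
# hand-picked themes come up in user messages. The file, column,
# and themes below are illustrative placeholders, not Wysa's data.
from collections import Counter

import pandas as pd

messages = pd.read_csv("user_messages.csv")["text"].dropna().str.lower()

themes = ["relationship", "work", "sleep", "anxious", "family"]
counts = Counter()
for text in messages:
    for theme in themes:
        if theme in text:
            counts[theme] += 1

for theme, n in counts.most_common():
    print(f"{theme}: {n} messages")
```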
Give me one thought or forecast of yours about the future of conversational AI, or of mental health chatbots.
The world is changing so rapidly, especially now that we’ve moved to remote work, so I won’t go too futuristic. My prediction, at least for the near future, is that conversation design will take us off screen, and that off-screen adoption will spread to other platforms. Voice is already there; AR and things like that are one area I see coming.
Especially for mental health chatbots and this industry – I see preventative mental health becoming bigger as we go forward and as the stigma around mental health slowly gets scraped off. Preventative mental health care is definitely going to go big, and AI is going to play a huge role in that as well.