How Roo, Planned Parenthood’s Chatbot, Reflects a Societal Change

Taking Turns with Ambreen Molitor

Published by:

Eran Soroka

Sometimes, a societal change can be seen when millions storm the streets, sign petitions, or push for a constitutional amendment. And sometimes – as in the case of Roo – it can be seen in a very different place: the logs of a chatbot.

Since its launch at the beginning of 2019, Roo – the groundbreaking sexual health chatbot of the Planned Parenthood Federation of America – has had millions of conversations with teens from America and beyond, helping educate them about important aspects of growing up: how to start a relationship, how to deal with changes in the body, and how to behave respectfully toward a partner. Ambreen Molitor, Senior Director of Product, Digital Products Colab at Planned Parenthood Federation of America, joined us for a special episode of Taking Turns, as part of our 6th Meetup, which was dedicated to healthcare.

How much input did you have on Roo’s conversation design?

One thing to note is that our AI is actually powered by third-party software. But as many folks who maintain AI know, it’s not something you build once, put to bed, and it works by itself. There’s a lot of maintenance that goes into it. There’s a lot of work training the software to keep answering questions that stay relevant and change over time with research, but also figuring out how we present that information to you, right?

There are two elements to how we train the bot. One is the way and manner in which people ask questions. How we respond is informed by conversations we’re having in a separate product of ours called Chat/Text, which allows educators to have one-on-one conversations through text or a widget on our webpage. There, a user can ask a trained educator all kinds of questions. Those conversations help build the manner, the tone, and the types of questions the AI will know, in terms of understanding sentiment, creating that sentiment analysis, and proactively understanding where a question is going.
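The loop Molitor describes – educator conversations feeding a model that learns the manner and types of questions – can be sketched in miniature. This is a hypothetical Python illustration, not Roo’s actual third-party system: the example questions, intent labels, and simple word-overlap scoring are all invented for the example.

```python
from collections import Counter, defaultdict

# Invented training pairs standing in for logged educator conversations.
TRAINING_DATA = [
    ("how do i ask someone out", "relationships"),
    ("is it normal for my body to change", "puberty"),
    ("what is consent", "consent"),
    ("my voice is changing is that normal", "puberty"),
]

def tokenize(text):
    return text.lower().split()

def train(pairs):
    # Count how often each word appears under each intent label.
    model = defaultdict(Counter)
    for question, intent in pairs:
        model[intent].update(tokenize(question))
    return model

def classify(model, question):
    # Score each intent by word overlap with the incoming question
    # and return the best-matching one.
    words = tokenize(question)
    scores = {intent: sum(counts[w] for w in words)
              for intent, counts in model.items()}
    return max(scores, key=scores.get)

model = train(TRAINING_DATA)
print(classify(model, "is my body changing normally"))  # prints "puberty"
```

A production system would use a trained NLP model rather than word counts, but the shape is the same: real one-on-one conversations become labeled examples that teach the bot how questions are actually phrased.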

The internet is easier to access in the USA than healthcare, so there’s an opportunity in the equitability that AI can bring to our community that we haven’t been able to see in a very long time.


The second format is that we actually look at the conversations almost daily. We have a team that reviews the conversations and all the false positives – the questions that were answered inaccurately. We also look at questions we know were answered correctly and gut-check that they’re still relevant and accurate, or whether any medical research needs to be updated.
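The daily review described here – separating answers the team trusts from likely false positives – is commonly implemented as a confidence-threshold triage queue. The sketch below is a hypothetical Python illustration; the field names, threshold value, and log format are assumptions, not Roo’s internals.

```python
# Hypothetical threshold below which a bot reply is routed to a human reviewer.
REVIEW_THRESHOLD = 0.8

def triage(conversations, threshold=REVIEW_THRESHOLD):
    """Split logged exchanges into auto-approved and needs-human-review."""
    approved, needs_review = [], []
    for convo in conversations:
        if convo["confidence"] < threshold:
            needs_review.append(convo)   # possible false positive
        else:
            approved.append(convo)       # still spot-checked periodically
    return approved, needs_review

# Invented log entries for illustration.
logs = [
    {"question": "how does birth control work", "confidence": 0.95},
    {"question": "wat is an iud???", "confidence": 0.42},
]
approved, needs_review = triage(logs)
print(len(needs_review))  # prints 1
```

Note that even the high-confidence bucket gets periodic spot checks, matching the “gut-check” step Molitor describes for answers that were correct but may have gone stale.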

“Roo has a welcoming, inviting way of responding”

So that is part of the constant work behind the high satisfaction scores we get from users when they engage with the bot. There’s another part, which I think helped garner so much love from users and from the industry. If you’ve engaged with Roo, you’ve seen that we’ve stripped out all the UI/UX elements you normally see in a text-messaging or bot experience. We’ve skinned the front end completely, so it’s a lot more delightful, and there’s animation. There are a lot of ways we trigger Roo’s personality to come out, through the design but also in the way that Roo talks and answers questions. It’s a very welcoming, inviting way of responding – rather than just a binary, medically accurate answer.

And so, making sure the presentation of a text exchange feels a little like a conversation with someone they can trust, or enjoy talking to, helps facilitate that. So again, although the AI is powered by a third-party service, the maintenance and accuracy checking are all internal. And the presentation, which accounts for so much of the love we get for it? We built that on our own as well.


Originally, it started from observing a change in how the United States was thinking about sex education. When Roo first started, 29 states mandated sex education – and only 13 of those 29 required that information to be medically accurate. At the same time, we observed that a lot of sex education curricula continued to follow a heteronormative format, which, you know, is not reflective of how society is today. We wanted to respond to that in a meaningful manner.

We were also observing some behavioral changes among our younger demographic. About two and a half years ago, we noticed a huge spike in traffic to the teen section of our website, with a lot of people going in and asking questions. We were also observing through academic research that 84% of teens were finding sexual health information online. Actually, I think that’s a healthy habit – being proactive and learning.

The nuance there is that there’s a lot of misinformation on the internet as well. So we made sure we could address both of those issues in a positive light: not changing how people consume or look for that information, but getting them as close as possible to information that is both medically accurate and welcoming.


So, the combination of looking at the changing sex education policies in the United States, and the behavioral change we were seeing in how teens obtained information, was what provoked the idea of Roo. And how we figured out that an AI-powered chatbot experience was a particularly useful format for it was by going a level deeper in observing teen behavior.

Roo, a demonstration

We did a lot of focus groups. We had academic research, and we looked at teen users in junior high schools in the US. We were also monitoring their habits and familiarizing ourselves not only with how they obtain that information, but with how they communicate in general. We found that teens – even myself, and I’m not a teen – open our cell phones 75-95 times a day. And the majority of that time is actually spent in some sort of one-to-one messaging.

We found that really thought-provoking. The research we were seeing is that people are communicating and conversing in a new way. There’s an assumption that everyone is always on social media, going through feeds, consuming and participating. That’s actually not the reality of what we were observing, especially with teens. They find themselves more expressive and more communicative when it’s one-to-one.

The importance of staying anonymous

We were also hearing from teens that when they’re obtaining sexual health information, privacy and anonymity are really important to them. I specifically remember a teenager who mentioned that they Google stuff, but they’re very cognizant of the fact that Google can cookie that – and so can anyone using the same laptop or computer. Then their parents can find out; tech companies can find out. It was important for them to know that nobody would be able to surface their questions. With Roo, you can ask a question, and what’s great about it is that once you close it, there’s no thread. That’s different from a text-messaging format, where you have the thread history.
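The “no thread” behavior – a conversation that exists only while the widget is open – can be sketched as a session object that keeps messages purely in memory and wipes them on close. This is a minimal hypothetical Python illustration; the class and method names are invented and are not Roo’s actual implementation.

```python
class EphemeralSession:
    """A chat session whose history vanishes when the widget is closed."""

    def __init__(self):
        self._messages = []  # held in memory only, never written to disk

    def exchange(self, question, answer):
        # Record one question/answer turn for the life of the session.
        self._messages.append((question, answer))
        return answer

    def close(self):
        # Closing the widget wipes the thread entirely,
        # so nothing remains for anyone else to surface.
        self._messages.clear()

    @property
    def history(self):
        return list(self._messages)

session = EphemeralSession()
session.exchange("can anyone see this later?", "No – this chat leaves no thread.")
session.close()
print(session.history)  # prints []
```

The design choice is the point: by never persisting the thread, there is nothing for a shared-computer user, a parent, or a cookie to recover after the session ends.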

So, all of those things combined essentially got us to where we are today. Which is: let’s build a solution that helps users – teens specifically – feel safe to ask questions; do it in the format where they’re already looking for information; make that information medically accurate and inclusive; and let someone access it at any given moment. Hitting all of those bullet points got us, essentially, to the birth of Roo.



We were always surprised by how proactive teens are about what they want to do. Going into the first years, our hypothesis for Roo was that all the questions would be binary. But when users come to ask us what they should do or what they want to do, that’s actually a value-judgment question, not a binary one. There are no right or wrong answers. It’s a matter of letting them know, first of all, that the question is great, that we respect whatever they decide, and that it’s normal to have the flexibility to decide on their own.

Would you consider making Roo even more conversational, maybe even a voice skill?

Absolutely. We’re looking at ways to make Roo more conversational in the upcoming months, so we can help folks get to the heart of, say, which birth control method is right for them. There are a lot of decision-making questions where we hit our limitation once we reach the 280-character limit. If a dialogue happens, we can start answering questions in a more personalized, even more customized fashion.

What is it like winning a Webby award for a chatbot?

So humbling. So humbling.

💬 Previously on Taking Turns 💬
Michelle Zhou: “Humans talking to machines are brutally honest”
Mary Tomasso: “Don’t just write a conversation – speak it”
Michelle Parayil: “Bad copy can ruin a customer’s day”
Henry Ginsburg: “Want to get in? Grab a pen and start writing”
Kent Morita: “In the right context, humor can be very effective”
Breakup, Pokemon and YASS!: Greg Bennett talks convo design
Hillary Black: “Chatbots are like Social Media on its early days”
Every Word Matters: Language lessons with Maaike Coppens
Thorben Stemann: “Users asked my bot for her picture”
Emiel Langeberg: “Voice Tech can be also a research tool”
Rebecca Evanhoe: “Context is the most important thing for voice”
Lauren Golembiewski: “Voice can help you learn a new instrument”

We’ve stripped Roo of all the “normal” UI/UX elements, so it’s more delightful, and there are a lot of ways that we trigger a personality for Roo to come out.


Where do you see Healthcare and AI 3-5 years from now?

We’re optimists, and I see AI becoming a tool that empowers users to educate themselves about their bodies. They’ll be able to self-diagnose, or better understand the symptoms they’re having, so they can get to a trained clinician or an educator faster – rather than reaching a state where something has already changed or happened and the education happens reactively.

AI is a great component that allows people to intelligently and quickly get answers to their questions. Rather than working through a whole matrix in their head – I need to answer this question; should I book an appointment? Do I have the time to do it? Do I need to block out this time? – there’s so much preparation that goes into that, versus just going to Roo and asking the question. There’s optimism about that accessibility factor, and the quickness and agility of getting that question answered.

The (positive) impact of 2020

Also, if there’s anything we’ve learned in 2020, it’s that technology has made things a lot more accessible. Prior to all of this innovation we’re starting to see in the telehealth arena, if you wanted to get care, the norm was to physically go to a clinic, block that time off, and get that service. Now, it opens up opportunities for anyone who needs sexual and reproductive health care services, and they’re able to access that care quickly without it burdening their day-to-day routine.

The internet is easier to access in the USA than healthcare. So I think there’s an opportunity there in the equitability that AI can bring to our community – an opportunity that we haven’t been able to see in a very long time. And again, I think we’re starting to see the fruits of that labor in 2020. Technology has made care a lot more accessible in many ways.

Next week, we’ll have episode 4 of Coming To Terms with AI! In the meantime – subscribe to our YouTube channel | Join our Discord community | Sign up for our newsletter | Follow us on Facebook, LinkedIn, Instagram, or Twitter