How did chatbots break into the mainstream? With Phil D. Hall

Taking Turns with Phil D. Hall

Published by:

Eran Soroka

The history of chatbots and conversational AI dates back to the 1950s and Alan Turing, then to the likes of Eliza by Joseph Weizenbaum in 1966 and ALICE by Richard Wallace in 1995, which paved the way for the chatbots we see today. But why did the technology lag behind for so many years, even during the internet era? To talk about the history of chatbots, we needed to find somebody who really saw it all.

Luckily, that’s when we found Phil D. Hall, owner of Elzware. Having built his first bot almost 40 years ago, Phil is always up for a good chat: this was the longest episode of Taking Turns to date, and we could have talked for another hour or so. So let’s take a look back at the history of chatbots. Sit back, and enjoy the ride.

How did you get to conversational AI?

Really, that’s a long time ago. My first interest in conversational AI was in 1982, when my father came home with a borrowed ZX Spectrum. It wasn’t a laptop, but a very early home computer with rubberized keys, able to save to a C90 cassette tape. Back then, being 19, I had no formal training in computer science. I don’t think it really existed in those days, or if it did, it was very much a higher-education thing.

So I decided to build something on the ZX Spectrum, and I built a very basic chatbot. It asked how I was. If I said that I was well, it went down one set of conversations; alternatively, if I said I was bad, it went down a different set of conversations. I had it only for a few months, and then it went back, as it was a loan. After that, there was quite a long gap until I got into professional chatbot development in 2002. So I’ve been running Elzware since 2002.

Now, I’d been building chatbots since I was working at a systems integration company in the late 1990s. There, at Logica, I was taught all the software development stuff. However, the main reason I’m involved in chatbot development is that I met one in a bar in a virtual world in 1997-98. While studying anthropology, I was doing ethnography in a virtual world rather than going to Costa Rica, trying to work my way through this sort of D&D world.

The meeting with the chatbot was really terrible. So I sort of kept that in mind when the opportunity came to move into the business. And I did.

That’s a unique way of getting into things. What’s the bot or project that you’re most proud of?

Over the years, there’s been an awful lot of fun systems we’ve built, as well as tedious customer-service-y systems. When Elzware was set up in 2002, it was on the back of work we had been doing for a Dutch post bank. There we did a customer services bot that worked with the contact center. Back then, the systems were called Email Response Management Systems (ERMS). They were processing emails, then converting that into a structured, routed system which worked across the call center.

Potentially, the system that has made me most proud is the Teach bot system, for English reading and writing. It ran around 2008-2011, and we were using proprietary software, not the open source software we use nowadays. It gave kids in secondary school in the UK the ability to ask any question about anything at any time. Then they were able to distill that information down, so teachers or parents could see the kids’ performance. Unfortunately, the financial crisis and a big governmental restructuring of education took us out.

However, the one that had the most impact is the “I am Echoborg” show. In a nutshell, it’s a piece that generates discussion about people’s attitudes towards AI, with chatbots and conversational systems as sort of the front end of that. There’s so much going on with people being driven by algorithms: Facebook, YouTube, or literally being driven by somebody in an Uber who doesn’t really know where they are or where they’re going. So we’re living in interesting times.


Talking about the history of chatbots, I’m taking you back a bit, because you said that there was a gap for you too between starting out and going professional. Other veterans call it the “Dark Ages”: after the early chatbots of the ’60s, ’70s and early ’80s, and until ALICE in ’95, nothing happened for years. Can you elaborate on why it took so much time for the technology to catch up?

In my opinion, the drivers for AI before the “winter” were very much about mapping and working out academic constructions of what is or isn’t conscious, then working out how to code those. So we ended up with notions like perceptrons and some really major arguments between academic and higher education institutions internationally. There was also work done in the ’50s around people’s attitudes to automation, as in ‘robots were going to take over everybody’s jobs and nobody would have to work for a living’. Clearly, we’ve realized that that was all hyperbole, an early hype cycle.

So there’s the space between when Weizenbaum created Eliza and the professional organizations. Some of them still exist: the company that I work with, “Brightwear”, was doing email response work 20-25 years ago. Also, there were West Coast American companies like Credit Virtual. Really veteran players still working in this marketplace right now, bringing in expertise, working with their own proprietary software.

Blame social media

Actually, regarding the history of chatbots, I think the Dark Ages was mainly about delivery mechanisms. The terms chatbot and conversational AI, or even AI more broadly, are so fluffy, contested and full of marketing hype that it’s almost impossible to define what a good quality chatbot is.

However, good quality chatbots that were being built 20 years ago were actually knocked off course. Partly it was financial blips, where budgets disappeared along with the large fintech companies. But my perspective is that it was taken out by social media. When the big social media companies came in, the people at the top of those companies, who aren’t necessarily the most technically adroit or really understand what’s going on in the world, were like: oh, it’s easy, we’ll just answer a question on social media, it will always be true, and that will be the end of it. Then this morning, I noticed that Twitter is now going to validate some bots on Twitter as being more or less acceptable in the world…

So the Dark Ages, for me, happened because there was no business reason for chatbots and conversational AI. Where we are right now is we’ve got big data as an entity, as a thing. Therefore, people are looking to use it, and part of that is to do it in chatbots. Whether that’s the right methodology or not is a different question. Overall, the Dark Ages was actually about academic funding being pulled and focus shifting elsewhere. Now, here we are, full circle, where academic funding underneath ML and deep learning is just off the scale, and people coming out of these data science courses are being hoovered up by startups.


Meeting a chatbot in a virtual world is definitely one of the best stories I’ve heard. However, of the things you’ve created since, what is the most awkward or funny thing that happened to you with your chatbots?

First, in the early days of the “I am Echoborg” system, we were doing testing for it. There, we use a human being as an avatar, so the human being repeats the words of an AI. It’s sort of like the Milgram experiments of the early 1960s: you have a person in a white coat, and they tell somebody who’s come into the experimentation chamber to put an electric shock through somebody who’s passed out, because they’re not answering correctly. It’s built in that kind of environment.

It started off 6-8 minutes long in Berlin at The Emotional Machine conference in 2016, and it’s grown since. Our next show is going to be for an international conference, where there will be heads of countries taking the interview seat, which we’re hugely excited about. Back in early testing, when we had multiple levels of AI control of humans going on, we were doing some testing locally in Bristol, which is around the corner here, and the system started talking to itself.

So there is a structure underneath the system, which is quite flexible. It’s not built against algorithmic constructs, so it’s not a black box. Rather, it’s a glass box, or maybe a gray box, a hybrid construction, which isn’t particularly fashionable right now. As we were going through this test, the system started talking to itself, and myself and Rik Lander, who’s my colleague in the development of this, were both racking our brains about what had happened. Then we realized that part of the anchoring we were using to create the unique user ID was an IP address, and the network we were on was dynamically reallocating IP addresses on every single connection.
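As an illustrative sketch only (not Elzware’s actual code, and the class and method names are hypothetical), here is the shape of the bug: if sessions are keyed on IP address, a network that dynamically reallocates addresses will merge two different participants into one conversation.

```python
# Minimal illustration: sessions keyed on IP address.
# When the network reassigns an IP, a new participant silently
# inherits the previous participant's conversation state.

class SessionStore:
    def __init__(self):
        self.sessions = {}  # ip -> conversation history

    def handle(self, ip, utterance):
        # setdefault means an already-seen IP reuses the old history
        history = self.sessions.setdefault(ip, [])
        history.append(utterance)
        return history

store = SessionStore()
store.handle("10.0.0.7", "Hello from participant A")
# The network now dynamically reassigns 10.0.0.7 to participant B...
history = store.handle("10.0.0.7", "Hello from participant B")
# ...and B's turn lands in A's session: the system sees one speaker.
print(history)  # ['Hello from participant A', 'Hello from participant B']
```

Anchoring the user ID to something stable (a generated session token rather than the connection’s IP) avoids the collision.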

I set fire to the train

However, the next story takes you right back into the pre-history of chatbots. Here it’s important to stop for a moment, because there’s so much hype, and there are so many people who are passionate about trying to build good quality chatbots nowadays. For me, there’s a bit of a blind spot: really good quality chatbots were already being built many years ago.

Back around 2009, a system was built for a UK train company. It was driving people around their website, with an avatar, and it responded in real time. It wasn’t built on Flash or anything like that. Just a cool thing. Then, as also happens now, some people would be rude or offensive to the bot. So in discussion with the marketing department, we agreed that if somebody was rude, the bot would force them to apologize, yeah?

One time, somebody contacted this chatbot to get some information. When they asked “are you a chatbot”, the system came back and said, “Yes, I’m a chatbot”. Then the person completely forgot that, argued with the chatbot and got offended. Afterwards, this person made a complaint to the head of the rail network. When it came through to me, as I was the head of operations, it didn’t seem reasonable. So I went in, found the log, found out what had happened, wrote a nice email back and explained.

Then, the angry person went from ‘that’s the worst thing I’ve ever seen in the world’ to ‘that’s the cleverest thing I’ve ever seen in the world. I honestly thought it was a human being’.

What’s the most important thing for a good chatbot, a good quality chatbot?

For me, from the bids that I get – because I do test some of the chatbots coming out of the likes of Dialogflow and TensorFlow – there are three things that really need to be handled.

First of all, error capture. The systems are not being built with a normal full-lifecycle software development methodology: testing, quality assurance, regression testing harnesses. If the methodology is built on shoveling data in, then ticking a box and moving on when what comes out the other end looks appropriate, you’re handicapped in your ability to do formal unit testing, use case testing, etc.

Also, I chatted to two or three bots recently, from well known, well invested companies. Immediately I heard complete drops in the conversation being ignored, and unusual variables appearing out of nowhere. It’s not like people were trying to get GPT-3 to dance, which is going to be an interesting revision of what happened with Tay, potentially. So error handling in conversation, and proper testing, are probably most important for me. The system we work with has full regression testing harnesses which run against the entire system, soup to nuts. Since we’ve been doing it for a long time, you’d expect our system to be sophisticated.
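The kind of regression harness Phil describes can be sketched in a few lines. This is a hypothetical toy, not Elzware’s system: a trivial rule-based bot, a fixed transcript of expected turns, and a replay check that fails loudly whenever any reply changes.

```python
# Toy rule-based bot plus a replay-style regression harness.
# Principle: fixed inputs, asserted outputs, run on every change,
# including an error-capture case for unknown input.

RULES = {
    "hello": "Hi there! How can I help?",
    "are you a chatbot": "Yes, I'm a chatbot.",
}
FALLBACK = "Sorry, I didn't catch that. Could you rephrase?"

def reply(utterance):
    # Normalize, then fall back gracefully on anything unrecognized.
    return RULES.get(utterance.strip().lower(), FALLBACK)

# Regression transcript: (user turn, expected bot turn) pairs.
TRANSCRIPT = [
    ("Hello", "Hi there! How can I help?"),
    ("Are you a chatbot", "Yes, I'm a chatbot."),
    ("asdfgh", FALLBACK),  # error capture: unknowns must hit the fallback
]

def run_regression(transcript):
    # Return every (input, expected, actual) triple that mismatches.
    return [(u, e, reply(u)) for u, e in transcript if reply(u) != e]

assert run_regression(TRANSCRIPT) == []  # empty list: behaviour unchanged
```

A real harness would replay whole multi-turn conversations against the full system rather than a single stateless function, but the discipline is the same.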

Overall, the main thing that the chatbots of today need, or maybe the marketeers need, is a little humility. When people say “There’s a revolution in HR”, “There’s a revolution in this, a revolution in that” – hold on. Actually, most of the systems are very simple, IVR-level simple. They’re interactive bot responses with not much more than buttons. So let’s just take a deep breath.

What’s in a bot?

Recently, I watched a pitch by Amelia-IPsoft, where they were talking about the future. They talked about something like ‘the growing divide between basic chatbots and conversational AI agents’. Now, all these words really mean nothing. Because if you go back to Erwin van Lun, a real visionary fellow who was around for many years before deciding to step away, there were hundreds of different terms identifying what was or wasn’t a chatbot or conversational AI.

For me, what we need right now is an acceptance of simpler terms, not the Rasa levels, because I think they’re hype: a foundational level-1 chatbot, or an interactive bot response. For an Echoborg-level conversation, we need different terminology. Getting there can be difficult.


What else does Elzware do?

So Elzware has always been a pure play. We’ve always built conversational interfaces, and we’ve fought for the market to be taken seriously. We’ve also done quite a lot of work in avatars and voice recognition. Last week I gave a talk in Birmingham; they have a hub there, at Aston University. I talked about avatars and meta-humans from the ’80s up to right now.

What Elzware does is three things. First, we build hybrid conversational AI systems. If people have realized that what they want is not possible with certain simpler methodologies, they come to us and we look to build it. Second, we do consultancy work: we help people see through the smoke and mirrors of RFIs and pitch documents. Simply put, we help people get the right companies in the building. Third, we train people. We quite like to build a system and then train people with our technology, which is in part about design.

However, if you and I were going to start again from scratch and have a similarly interesting and diverse conversation, it would probably be about what kinds of structures are in design. 20 years ago, Elzware used to employ a lot of people, relatively speaking. Right now it doesn’t; it’s an association, it’s very lean. Back then, we used to have content people and technical people. So there was a technical developer and a content developer, and the two had a formal social/scientific relationship, but that’s all up for grabs. Yeah, Elzware builds, we consult, we train, we have a lot of fun. We do crazy-ass stuff.

If somebody else wants to do crazy-ass stuff with chatbots – and today it seems like a booming profession – where would you recommend those people to start?

First, you could open up a box with a system which allows you total visibility and control. Something like Pandorabots, with Lauren Kunze and Steve Worswick, who are doing well. In their system you can see exactly what you’re doing, and they’re building really good and interesting relationships into front ends with rapport. Also, they’re taking on the big companies like Facebook and BlenderBot, and good for them. I think that’s a good method going forward. The aforementioned ALICE system, the heart of Pandorabots, has been taught at least in the UK as part of FE (Further Education) and higher education. So that’s a good place to start.

Then the system which we use, ChatScript, is a level more technical. It gives you scalpel-level control, but I wouldn’t recommend it for the faint of heart. If somebody is interested in how to really create something which is more software than statistics, that would be a good line. Also, new players are coming on, and I think your company is an example here, Eran. You’re able to do something which is relatively low code or no code, where you can think about dialogue, about where social science and behavioral integrations can fit together, but also see whether what you’re doing could be considered ethical or not. Giving people a quite straightforward view of how things fit together opens up the discussion.


And as long as people appreciate that reinventing the wheel isn’t necessarily going to be helpful, we just need to take a deep breath and work out where we’ve come from to see where we’re going. Then we should be in good shape.

After witnessing the history of chatbots, give me one forecast, where do you see it going?

Well, the area that I’m enjoying working in right now is multi-language stuff. For example, South Africa has 11 official languages; across the African continent, there are thousands. Even accents across the UK: the difference between somebody speaking harsh Glaswegian and somebody speaking flat Estuary English, and the ability for those to be recognized effectively.

Going forward, there’s an interesting line about trying to democratize and open up the voice part of the equation. There are organizations like the Open Voice Network, part of the Linux Foundation; I volunteer and am involved in the media and entertainment part of that. Because I think if we had a position where people could all have access to good quality healthcare and education information in whatever language – and this wasn’t just a bubble built on young male American voices and data constructs – potentially we’ve got something which harks back to the early days of the internet.

Mostly, I’m interested now in the application of functional linguistics to the analysis and understanding of user types. I don’t think people even know what all the tools are yet, Eran. I think people just need to go: let’s go back, let’s see what we started with, and bring that up to today. And let’s get some proper standards in place. Let’s make sure that people who turn up and promise the world, when really they’re just faking it, are moved out of the way.