Hundreds of questions were submitted by attendees of Botmock’s AMA about becoming a conversation designer or developer. Five industry experts with backgrounds in design, engineering, and product development answered as many as we could in a live roundtable discussion. If you would like to watch the full recording, you can do so here.

The following CUI experts joined us to answer your questions:

Grace Hughes, Content Design @ Accenture's Innovation Hub - The Dock, Dublin

Sachit Mishra, Product Manager, Google Assistant

Gabrielle Moskey, Senior Voice Experience Designer @ a national insurance company

Matthew Portillo, Conversational Experience Design, IPsoft Amelia

Brielle Nickoloff, Product Manager, Botmock (panel moderator)

In Part III, we covered the following topics:

  • Should I let users know that they are chatting with a bot instead of a human?
  • What is the current trend? Do people prefer talking to a machine/system that sounds more like a human, or more like a machine?

Should I let users know that they are chatting with a bot instead of a human?

We conducted research on this ethical question, interviewing different groups of people: internal employees from Accenture, external experts from academia, and other external industry experts. We also interviewed teams within Accenture, on the ground, who were creating conversational interfaces. What we found, overwhelmingly, was a consensus that we should be transparent about the system's identity. There is an ethical goal to be upfront about the fact that the user is speaking to a machine, not a human, and to be clear about the capabilities of the system. However, there is tension between creating a delightful and meaningful user experience and this call for transparency. That's when we get into the nitty-gritty of conversational design that can be difficult to resolve. But our view is that we should be transparent and upfront about letting the user know that they're speaking to a bot.

We’ve run experiments in the past where it was ambiguous whether the user was talking to a human or a bot, and when users found out it was a bot, they felt betrayed and got upset. So this is not only an ethical and moral issue, it’s also a user experience issue. One other thing we found: when we are very upfront and the bot says something like, ‘Oh, I’m a virtual assistant,’ or just says flat out, ‘Hey, I’m a chatbot,’ people are more likely to write short sentences the bot can actually handle, versus long paragraphs that are nearly impossible for it to decipher. So we’ve found that letting the user know they’re talking to a bot is important from an ethical standpoint and also from a pure usability standpoint.

There’s a big element of user trust here. One case study from Google comes to mind. There was a product we called Duplex that was demoed a few years ago at Google I/O. In the demo, it set up a hair appointment on a user’s behalf by calling the salon and imitating a real human to set the time and date with the person on the other side. Then it got back to the user and said, ‘Ok, you have an appointment at this time.’ It was mind-blowing, showing that we’d raised the Turing test bar by imitating a human to the maximum extent possible. But it brought up a lot of ethical questions, especially for a proactive call to someone in the service industry who doesn’t realize it’s a bot. That experience was mostly for demo and testing purposes, but the position that’s been taken now is that we always let the user know they’re talking to some sort of automated system, and we’re very clear about that. And that does have some impact on the way the user responds to it. Users do prefer talking to something human-like; they’re less irritated than they would be talking to something that actually sounds robotic. If the user feels that the system is intelligent enough and that they can achieve what they need to achieve without going to an agent, you will get more out of the user. Similar to what Gabrielle said, the way the user talks will change if they know it’s a machine versus a human.

What is the current trend? Do people prefer talking to a machine/system that sounds more like a human, or more like a machine?

This is not a case of one size fits all. It depends on context, the use case, and the type of interaction that’s actually needed. There’s a lot of debate around this right now, and it came through during the research we conducted. It’s important to understand when people need a more empathetic, relationship-building exchange versus a simply transactional conversational exchange. There are some instances where there’s a distinct advantage to speaking to a machine rather than a human. There are plenty of examples out there, including chatbots that help people with mental health issues; in these cases, the user often feels more comfortable talking to a machine than to a human. There will always be cases where someone wants to speak directly to a human to get something done, but there are other times when it may be best to use a machine.

This is a challenge right now because you may be coming up against cultural barriers. Beyond that, sometimes people are a little afraid of what they don’t know. If they’ve been burned by chat interfaces in the past, if they’ve found that bots sometimes don’t solve their problem, then that works against you too. There’s no easy answer right now; at this point, it just takes time and it takes success to build up the experience. As it delivers for the user, they’ll find that it can be advantageous and can beat the experience with a human. If you’re calling the wireless company and saying, “Hey, what’s my bill next month,” the advantage of a computer is that it can look that up pretty fast, faster than a human agent could navigate, type, and click through their own admin UI. So we need to take the time to exploit the advantages that chat interfaces and automation give us, while being really careful to design good experiences. Again, souring users at the beginning is going to hurt your usability and adoption. Short answer: it takes time (to get users used to conversational UIs).

If we remove ourselves from the design needs and think about the actual modes of interaction that users are engaging with, we’ve got smart speakers sitting in our kitchens, we’ve got chatbots [embedded in GUIs], we’ve got IVR on a telephone. Anyone speaking to a smart speaker knows they’re not going to get a human (at least at this point in time), but anyone picking up the phone to talk to someone at a company is definitely expecting to be able to talk to a human. Almost no one picks up the phone to access a bot. If we keep ourselves grounded in that mindset and ask ourselves, “What exactly is the user’s expectation coming into this?” right out of the gate, it can help inform our initial design decisions and set them up for success. Same with a chatbot: if you let the user know right off the bat, “Hey, I’m a bot,” then there’s no confusion about what they’re getting. If they see a little chat button at the bottom of their screen and they think, “Perfect, I can chat with a human,” that’s almost a similar experience to the phone situation, but most users of a graphical user interface at this point are conditioned to know that they may or may not get a human right away. That’s not really the case on the telephone, and it’s pretty much never the case when a user is accessing a voice assistant on a smart speaker or smartphone.

Grace made a great point about mental health cases where it may be advantageous for a user to speak with a chatbot versus a real human, specifically with text, not voice, because when you’re talking about private things you probably want to do that, well, privately [over text-based chat].
As far as the telecom company case, it’d be great to dive into the actual feedback about what customers didn’t like, because there are two layers there: the speech layer (how real the machine actually sounds) and the intelligence layer (whether the system actually gets the request done; did it actually send the telecom person out to my house). I’ll give an example of a case where I think it worked really well for me. I was on an e-commerce app recently trying to make a return, and I fully expected to have to go through a phone call with an agent. I was gearing up to talk to another human being; I was like, ‘Ok, I gotta get my order number ready, gotta be friendly to this other person now.’ So I open the app, I go to the order, and within the app itself, there’s a way to start a return. When you start the return, it opens a chat interface. Crucially, that chat interface already knows what you’re trying to do. It is basically optimized for that 80% use case. And it uses suggestion chips to fill in what you might want to do, how you want to get the refund, where you want to drop it off, etc. So all I had to do was tap through this chat interface, without talking to a single human being, and it facilitated my entire return. I think this is a great example of the flexibility of chat and the immediacy and convenience of in-app context, where the order number and context are already clear. This all came together to create an extremely delightful experience; it’s an example of where chat worked far better than calling in.

The Botmock team is excited that more of our community members are involved than ever before. There will be more panelists and attendees from diverse backgrounds contributing to these events in the future, so keep an eye out!

Further resources mentioned by panelists:

Content Design, Sarah Richards

Talk: The Science of Conversation, Elisabeth Stokoe

This article is an expansion of the third topic covered in Botmock’s first conversation design and development AMA session. You can also read recaps of the other two sections, Becoming a conversation designer or developer and Measuring the success of conversational UIs.