You asked, they answered!
Hundreds of thoughtful questions were submitted by attendees of Botmock’s AMA [ask me anything] session about becoming a conversation designer or developer. Five industry experts with backgrounds in conversation design, development, and product management answered your questions in a live roundtable discussion. If you would like to watch the full recording, you can do so here, and if you would like a recap of the session, read on. 🤓
The response to this event was overwhelmingly positive, so more AMA sessions are in the works. Keep up to date about upcoming ones by following Botmock on Twitter or LinkedIn. We couldn’t have done this without your participation!
The following CUI experts joined us to answer your questions:
Sachit Mishra, Product Manager, Google Assistant
Gabrielle Moskey, Senior Voice Experience Designer at a national insurance company
We covered the following topics:
#1 The path toward becoming a conversation designer
#2 Measuring the success of your conversational experience
#3 The technical elements of conversation design
Whether you are just starting to learn about CUIs like chatbots, voice experiences, or IVR systems, or you have been creating them for years, we hope you’ll find value in the answers from our experts. If you have further questions or anything to add, please comment!
Topic #1: The path toward becoming a conversation designer
– What role can we as designers play in the creation of conversational user interfaces (CUIs)?
– Do I need to learn Python to get into this field?
As a content designer, my goal is to provide people with the info they need at the right time and in the most efficient way. When we design for voice, we have to think about the natural flow of conversation and make sure that the information holds universal meaning regardless of any user’s educational background, culture, and conversational style.
We think conversation is simple because it’s so natural and intuitive; we do it all day, every day, some of us more than others. But conversation is actually rich, nuanced, and complex, so it shouldn’t be surprising that translating natural dialogue into a programmatic interface is a genuinely difficult thing to do.
From the perspective of an engineer or product manager, I can attest to how much guidance we crave from our design team. This is especially useful when designers are able to offer constructive design recommendations based on real data.
Our chief role as designers is to advocate for usability and adoption; this is our north star.
In that regard, we work from a unique position in which we always advocate for the user experience. As conversational designers we have to make sure that the users can intuitively and efficiently find the information they are looking for.
Designing a VUI versus a GUI (graphical user interface) requires very different processes, but many conversational experiences will require some combination of both. Whenever you are interacting with a voice interface like Alexa or Google Assistant, there may still be a visual component to that interaction [like cards that pop up on a user’s phone, a link that gets sent to their phone, images that show up on smart speakers that have screens, etc.]. Graphic designers have a big role to play in the VUI design process, and their main objective is to make sure that the visual affordances successfully support the invisible, verbal exchanges.
For a deeper dive into each panelist’s response on topic 1, go here.
Topic #2: Measuring the success of your conversational experience
– How do you evaluate and critique conversational interfaces?
– How do you make design decisions transparent and arguable? What have you found helpful in facilitating a design stand up for your interfaces where everyone can see the decisions you’ve made and offer feedback?
– What are the main metrics for evaluating the performance of a conversational AI?
– How do you design and develop for failure?
Retention is super important when thinking through metrics. It’s something that’s really hard to achieve with chatbots and voice experiences, as compared to modalities like mobile apps and websites. We measure things like the number of times a user comes back to an experience over the course of 7 days, or 28 days. We consider week-to-week retention — are they coming back week over week? The metrics you want to use actually depend a lot on the use case, since not every type of metric applies to all use cases, but these are generally ones my team cares a lot about. Another one I always consider is a no-match or fallback rate. During any given conversation, a user might say something that wasn’t accounted for and we call that a no-match or fallback. We want to know how often that’s happening in any given conversation.
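The no-match (fallback) rate described above can be computed directly from conversation logs. Here is a minimal sketch; the log format and field names are assumptions for illustration, not any particular platform’s schema:

```python
# Sketch: per-conversation fallback (no-match) rate from hypothetical
# conversation logs. Each turn records who spoke and, for user turns,
# which intent the NLU matched ("fallback" means no match).

def fallback_rate(turns):
    """Fraction of user turns that triggered the fallback intent."""
    user_turns = [t for t in turns if t["speaker"] == "user"]
    if not user_turns:
        return 0.0
    fallbacks = sum(1 for t in user_turns if t.get("intent") == "fallback")
    return fallbacks / len(user_turns)

conversation = [
    {"speaker": "user", "intent": "check_balance"},
    {"speaker": "bot"},
    {"speaker": "user", "intent": "fallback"},  # bot didn't understand
    {"speaker": "bot"},
    {"speaker": "user", "intent": "check_balance"},
]

print(fallback_rate(conversation))  # 1 fallback out of 3 user turns
```

Tracked over many conversations, a rising fallback rate is often the earliest signal that users are phrasing requests the design didn’t anticipate.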
I’ll describe some of the metrics we use while we’re actually designing for any given use case. At IPsoft, we tend to make the broad assumption that users aren’t necessarily excited to speak with a chatbot, and they don’t find it novel that a computer wants to speak to them. Users want to get to a resolution in the most efficient way possible. We look at how much time it takes for a user to complete a specific task, and how many turns of conversation it actually takes. If a user has to go through a really lengthy conversational process to complete something that they could likely do somewhere else, this usually frustrates them and it’s likely not going to be a good experience.
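“Time to complete a task” and “turns of conversation” can be operationalized with a simple log walk. The event names and structure below are hypothetical, used only to illustrate the measurement:

```python
# Sketch: counting user turns and elapsed seconds from the first user
# turn until the task is completed. Event schema is an assumption.

from datetime import datetime

def turns_and_time_to_completion(events):
    """Return (user_turns, seconds) up to task completion,
    or None if the task was never completed."""
    turns = 0
    start = None
    for e in events:
        if e["type"] == "user_turn":
            turns += 1
            if start is None:
                start = e["time"]
        elif e["type"] == "task_completed" and start is not None:
            return turns, (e["time"] - start).total_seconds()
    return None

events = [
    {"type": "user_turn", "time": datetime(2021, 1, 1, 12, 0, 0)},
    {"type": "bot_turn", "time": datetime(2021, 1, 1, 12, 0, 5)},
    {"type": "user_turn", "time": datetime(2021, 1, 1, 12, 0, 20)},
    {"type": "task_completed", "time": datetime(2021, 1, 1, 12, 0, 25)},
]

print(turns_and_time_to_completion(events))  # (2, 25.0)
```

Comparing these numbers against how long the same task takes in another channel (a web form, say) makes the “could they do this faster elsewhere?” question concrete.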
It is critical to incorporate error handling strategies and make sure they’re conducive to a good user experience. To make a system error-ready, we’re not going to just respond, “Sorry, I don’t know.” Instead, we can actually do something helpful, either by asking the user to rephrase, or by offering a disambiguation (“Oh, I think you meant this; is this what you wanted?”) when we have medium confidence in what we think they said.
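That tiered strategy, acting when confident, confirming at medium confidence, and asking for a rephrase otherwise, can be sketched with simple thresholds. The cutoff values here are illustrative assumptions, not figures from any specific NLU system:

```python
# Sketch: confidence-tiered error handling for a matched intent.
# Thresholds are illustrative; real systems tune them from data.

HIGH, MEDIUM = 0.8, 0.5  # assumed confidence cutoffs

def respond(intent, confidence):
    if confidence >= HIGH:
        # High confidence: proceed with the matched intent.
        return f"Handling intent: {intent}"
    if confidence >= MEDIUM:
        # Medium confidence: confirm before acting (disambiguation).
        return f"Oh, I think you meant '{intent}'; is that what you wanted?"
    # Low confidence: ask for a rephrase instead of a dead-end apology.
    return "Sorry, I didn't catch that. Could you rephrase?"

print(respond("check_balance", 0.92))
print(respond("check_balance", 0.60))
print(respond("check_balance", 0.30))
```

The key design choice is that the lowest tier still moves the conversation forward rather than ending it with an apology.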
For a deeper dive into each panelist’s response on topic 2, go here.
Topic #3: The technical elements of conversation design
– Should I let users know that they are chatting with a bot instead of a human?
– Can I route conversations to a separate NLP system like Dialogflow?
– What is the current trend? Do people prefer talking to a machine/system that sounds more like a human, or more like a machine?
– What UXR methods do you use in the context of conversation design and development?
We’ve run experiments in the past where it was ambiguous whether the user was talking to a human or a bot, and when the user found out it was a bot, they felt betrayed and got upset. So, we’ve found that letting the user know they’re talking to a bot is important from an ethical standpoint and also from a pure usability standpoint.
There are some instances where there’s a distinct advantage to speaking to a machine versus a human. There are plenty of examples out there, including chatbots that help people with mental health issues. In these cases the user often feels more comfortable talking to a machine than to a human.
The position we’ve taken is that we always let the user know they’re talking to some sort of automated system, and we’re very clear about that. That said, users do prefer talking to something human-like: they’re less irritated than they would be talking to something that sounds robotic. If the user feels the system is intelligent enough that they can achieve what they need without going to an agent, you will be able to deliver a better experience for them.
Sometimes people are a little afraid of what they don’t know. If they’ve been burned by chat interfaces in the past, if they’ve found that bots sometimes don’t solve their problem, then that works against you too. There’s no easy answer right now; at this point, it just takes time and it takes success to build up the experience. As it delivers for the user, they’ll find that it can be advantageous and can even beat the experience with a human.
Anyone speaking to a smart speaker knows they’re not going to get a human (at least at this point in time), but anyone picking up the phone to talk to someone at a company is definitely expecting to be able to talk to a human. Almost no one picks up the phone to access a bot. If we keep ourselves grounded in that mindset and ask, right out of the gate, “What exactly is the user’s expectation coming into this?,” it can help inform our initial design decisions and set them up for success.
For a deeper dive into each panelist’s response on topic 3, go here.
The Botmock team is excited that more of our community members are involved than ever before. There will be more panelists and attendees from diverse backgrounds contributing to these events in the future, so keep an eye out!