
Hey, Alexa…what kind of future is this anyway?

“Conversational computing is holding a mirror up to many of society’s biggest preconceptions around race and gender. Listening and talking are the new input and output devices of computers. But they have social and emotional dimensions never seen with keyboards and screens.” – New York Times

Welcome to the future. Voice assistants are moving in quickly, and they’re not yet proving to be the best roommates.

Today, it’s hardly space-age to ask your car to call your mom, to tell your speaker to turn on your favorite playlist or to toss a question out and expect a correct answer from your unassuming, all-knowing robot.

We are inviting these assistants to live right alongside us, and there are a lot of really fantastic reasons why. The growing ubiquity of conversational technology is transforming how customers interact with products and services in every industry. Put simply, voice-first technology is a great step towards a more universally accessible future.

But there are also important reasons to pause.

It can feel exciting to see science fiction come alive before our eyes. But as with anything this powerful and uncharted, it’s important to consider the long-term impact of this technology on humanity.

Luckily, we’ve learned a lot from the past and from experts whose work is dedicated to ethical and intentionally designed technology. The good news is that it is 100% possible to dodge avoidable missteps as we build the next generation of conversational AI.

Photo by Martin Sanchez on Unsplash

A Conversation’s Impact

According to Creative Reaction Lab, whose team specializes in equity-centered community design, design is “the intention (and unintentional impact) behind an outcome.”

As designers, we hold a double-edged sword. One side of our blade is the incredible future that we play a part in shaping for humans and society, and the other is the accidental impacts that we may not have accounted for.

There is much power in the role of a designer — perhaps more than we let ourselves believe. We get to make things and put them out into the world with the intention of creating change. It is a transformative place to be, and we are given the job of understanding people’s problems deeply, with the goal of making their lives better. The unique power and privilege of design is that we, as designers, are action-oriented.

There is never a choice of inaction in design. Every action, or decision not to act, has a consequence, intended or not. Good intentions matter, but in today’s technological landscape the consequences of our actions are profound, and good intentions alone are no longer enough to prop us up.

What can we do to take more informed actions as voice designers? Talking to a computer certainly requires different ethical considerations than clicking on a screen or typing on a keyboard.

At Botmock, we want to explore the role of ethics in voice and conversation design, and provide high level insights that can help to create a framework of thought surrounding the unintended impacts of our work.

We began by speaking with Brooke Hawkins, a Conversation Designer at Myplanet and tech ethicist. Brooke’s work sits at the intersection of designing useful, meaningful interactions and building scholarship around the ethics of living in a voice-first world.

Check out Brooke’s recent presentation on Ethics in Voice from Project Voice, 2020.

Based on our conversation with Brooke and additional research, we’ve identified three critical themes to address while building voice and conversation experiences.

Here we explore the three themes that arose: Identity, Voice, and Function.

Identity

To design a personality is to design an identity.

Photo by Sharon McCutcheon on Unsplash

When designing conversational systems, we make choices that touch on the subtlety of personal identity whether we like it or not.

Ethical neutrality of any tool is an impossible goal, and conversational systems are no exception.

Because the goal of most conversational systems is to communicate as naturally as possible, voice assistants are gradually getting better at fabricating the sense of a real relationship. Any design decision that shapes the personality of a voice-based AI system directly influences the emotions of its users. In other words, voice interfaces elicit real emotions from humans. When an assistant is talking to us like a person, a friend, a companion, we assign it an identity that fits into our lives.

Families now think of Google Home as an additional family member who helps take care of the kids, cook, and turn off the lights when they are too comfy to get up. People are falling in love with their Alexa in quarantine. Conversely, I’ll hear my mother pleading with Siri and changing her tone, in the hope that if she’s nicer, Siri will suddenly have a change of heart and be a better listener.

Even at this early stage in the development of these AIs, our relationships with the machines already reflect complex human dynamics. A positive or negative experience has real consequences.

It’s irresponsible to disregard or downplay our own power when making decisions about how to design an identity for these systems. Rather than simply observe the outcomes of these relationships, we should take our role seriously, and wield the power we hold with intention.

Brooke gives us some guidelines about how to consider this power –

“Voice is a persona and a personality on top of a lot of complicated algorithms and decisions. Because of that the [voice design industry] has a unique responsibility to think through transparency and through empowering its designers to take control.”

So what happens when we are careless about this responsibility? How can we be better at building transparent conversational systems?

To start, we can aim to avoid offensive scenarios like the following described by writer and comedian, Sarah Cooper –

“When it was my friend’s turn [to ask Google Home a question], he said: OK Google, show me your tits. The Google Home responded: I’d rather show you my moves. Then it played some beat boxing and dance music. Who thought the best way to deal with a sexual demand is to make a cute joke?”

The fact is, someone at Google wrote (and therefore designed) that response for the Google Assistant, likely believing it was simply playful. They were clearly not thinking about how it could negatively affect someone. While this may have just been a particularly bad choice, there are many examples of conversational features like this that can have a hugely negative impact on an end user.

For example, some voice assistants consistently give confusing or incorrect responses to prompts about life-threatening issues like domestic abuse and depression, which can have far more serious consequences. When researchers tested common voice assistants’ responses to physical and mental health crises, most did not recognize rape as an issue requiring emergency attention. Some responded with “I don’t know what you mean by ‘I was raped’” or “I don’t know how to respond to that.” Microsoft’s voice assistant Cortana was the only one that provided the number for the National Sexual Assault Hotline.

We can aim to design personalities whose trustworthiness matches the seriousness of the issues they encounter. We can integrate failsafes for users who may need immediate help. The ever-present nature of voice assistants could pave the way to a safety system that can help everyone: aging adults, drivers, kids, and others.
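
To make that concrete, here is a minimal sketch of what such a failsafe might look like in a conversational system. The phrase list, resource wording, and handler names are illustrative placeholders rather than any platform’s real API; the pattern is the point: check for crisis language before anything else, then acknowledge and offer a vetted resource instead of a joke or a dead end.

```python
# Minimal sketch of a crisis failsafe. The phrase list, resource wording, and
# handler names are illustrative placeholders, not any platform's real API.

CRISIS_PHRASES = {
    "i was raped": "sexual_assault",
    "i am being abused": "domestic_abuse",
    "i want to hurt myself": "self_harm",
}

CRISIS_RESOURCES = {
    "sexual_assault": "You can reach the National Sexual Assault Hotline for confidential support.",
    "domestic_abuse": "You can reach the National Domestic Violence Hotline for confidential support.",
    "self_harm": "You can reach a crisis line right now for confidential support.",
}


def handle_normal_intents(utterance: str) -> str:
    # Placeholder for the assistant's ordinary intent handling.
    return "Here's what I found for that."


def respond(utterance: str) -> str:
    """Check for crisis language before falling through to normal intents."""
    normalized = utterance.lower().strip()
    for phrase, category in CRISIS_PHRASES.items():
        if phrase in normalized:
            # Acknowledge, offer a vetted resource, and keep the exchange open.
            # Never deflect with a joke or a bare "I don't understand".
            return ("I hear you, and you deserve support. "
                    + CRISIS_RESOURCES[category]
                    + " Would you like more options for getting help?")
    return handle_normal_intents(normalized)
```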

An example of personality design at work for this exact cause is rAInbow, a friendly chatbot that will talk with you about relationships that “don’t feel right”. rAInbow offers stories to read, quizzes to take, and resources for understanding violent and abusive relationships. For people without family or friends they can trust, it can be an even better option than talking to a real person. In an imagined future, these types of positive conversational interactions could be commonplace.

If we view the nuance of personal relationships and trust as an opportunity, we can revolutionize the way AI serves our agency and our safety.

There is a huge opportunity here to use voice technology to embrace non-neutrality, and to provide thoughtfully designed character personalities, responses to interactions, the ability to ask follow-up questions, and personalization.

Voice

Conversation designers must use inclusivity as a measure of success.

Part 1: I want to understand you.

When we design the digital voices that interact with us, we need to think critically about who those voices sound like.

“Designers entering into the world of voice and personality design need to understand the intricate psychology of relationship building, and they need to know what lines not to cross. So far, Voice interfaces have been focused on universal appeal.”

“Abiding by universal norms backfires when the norms themselves perpetuate prejudice. Designing for likability can actually dilute a brand as a result.”

– Sophie Kleber, Head of Spaces UX at Google

Currently, the general standards of conversation design seem to reach for a universally approachable model for voice assistants and chatbots. But when it comes to voice products and experiences, there is no single “most appealing” or “likeable” design to reach for.

For example, consider the ethical implications of defaulting all of our voice assistants to female voices. What are we saying when we make that choice? Market research and scientific studies suggest that people generally find female voices more “pleasing” than male ones, and thus companies often make a female voice their default or only option. But if we dig deeper, do we find that we are playing into existing stereotypes about women? We could be positioning a female voice as one that is knowledgeable, authoritative, and trustworthy. Or we could be adhering to the stereotype that women are subservient and exist to assist. When we’re not given a choice about the gender or attitude of our assistant, how does that change our relationship to the product and to the brand?

Part 2: I want you to understand me.

Voice assistants must recognize as many voices as possible, or they will misunderstand entire groups of users who don’t fit what is deemed the ‘middle-ground’ or majority. Louis Byrd, founder and Chief Visionary Officer of Goodwim and an African American man, shared a story about how his new smart refrigerator only understood him and his wife when, one day, they jokingly spoke to it in a white person’s vernacular and manner of speech. He reflects on his shock and disappointment:

“What kind of experience am I to expect when the technology does not recognize me? It is a poor user experience when the only way my AI assistant fully engages with me is when I intentionally ‘sound white’.”

A study commissioned by the Washington Post found that popular smart speakers made by Google and Amazon were 30% less likely to understand non-American accents than those of native-born users. More recently, the Algorithmic Justice League’s Voice Erasure project found that speech recognition systems from Apple, Amazon, Google, IBM, and Microsoft showed average word error rates of 35% for African American voices versus 19% for white voices.
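
Those percentages are word error rates (WER): the share of words a recognizer gets wrong against a reference transcript. Teams can run the same audit on their own systems. Below is a rough sketch in Python, assuming a labeled test set of reference transcripts and recognizer output tagged by speaker group; the rows and group labels are invented for illustration.

```python
# Sketch: auditing a speech recognizer's word error rate (WER) by speaker group.
# Assumes a labeled test set of (speaker_group, reference_transcript, asr_output)
# rows; the rows and group labels below are made up for illustration.

from collections import defaultdict


def word_error_rate(reference: str, hypothesis: str) -> float:
    """Word-level edit distance divided by the reference length."""
    ref, hyp = reference.lower().split(), hypothesis.lower().split()
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i
    for j in range(len(hyp) + 1):
        d[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,         # deletion
                          d[i][j - 1] + 1,         # insertion
                          d[i - 1][j - 1] + cost)  # substitution
    return d[-1][-1] / max(len(ref), 1)


def wer_by_group(test_set):
    """Average WER per speaker group, to surface gaps like 35% vs. 19%."""
    rates = defaultdict(list)
    for group, reference, asr_output in test_set:
        rates[group].append(word_error_rate(reference, asr_output))
    return {group: sum(values) / len(values) for group, values in rates.items()}


# Illustrative rows only; a real audit needs a representative, consented corpus.
sample = [
    ("group_a", "turn on the kitchen lights", "turn on the kitchen lights"),
    ("group_b", "turn on the kitchen lights", "turn on the chicken flights"),
]
print(wer_by_group(sample))  # e.g. {'group_a': 0.0, 'group_b': 0.4}
```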

By sourcing speech data with inclusion in mind, we can avoid alienating entire categories of linguistic profiles. This works in a company’s favor too, since the usability of its products ultimately affects the relevance and reputation of its brand.

“Until there is more intentionality around culture and inclusion in technology, we will not reap the full potential.”

– Louis Byrd

There are so many people already exploring new ways to tackle inclusivity in conversational AI. The creators of Q, the first gender non-binary voice assistant, have the following manifesto driving the vision of their work:

“Technology companies often choose to gender technology believing it will make people more comfortable adopting it. Unfortunately this reinforces a binary perception of gender, and perpetuates stereotypes that many have fought hard to progress. As society continues to break down the gender binary, recognizing those who neither identify as male nor female, the technology we create should follow. Q is an example of what we hope the future holds; a future of ideas, inclusion, positions and diverse representation in technology. This is about giving people choices and options.”

– Thomas Rasmussen, Head of Copenhagen Pride

System Functionality

Is a digital assistant equipped to solve that problem?

Photo by Shahadat Rahman on Unsplash

What do you do when a user tells your conversational assistant something (that could be) sensitive? Like that they’re gay? Or that they are considering self-harm?

A graphical user interface lays all the options out in front of you. When you see a screen with two buttons, “Cancel” or “OK”, there simply isn’t a way for a user to communicate sensitive information to the computer in that context. But as soon as the options for user input become limitless in a conversational setting, anything goes. Anything that a user conjures up can be spoken to the system, and deciding what will and what will not be handled by that system is exactly what makes conversation design so complex.
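
One way to keep that complexity honest is to make the scope decision explicit in the design itself. The sketch below is illustrative rather than any vendor’s NLU API; the intent registry, keyword classifier, and confidence threshold are stand-ins, but they show the decision every conversational turn has to make: handle the utterance, or say plainly what the system can and cannot do.

```python
# Sketch: making the scope decision explicit. A GUI exposes a fixed set of
# choices; a conversational turn has to decide whether an utterance is in
# scope at all. The intents, classifier, and threshold here are stand-ins.

SUPPORTED_INTENTS = {"check_balance", "pay_bill", "talk_to_agent"}
CONFIDENCE_THRESHOLD = 0.7  # below this, don't pretend to understand


def classify(utterance: str):
    """Stand-in for a real NLU model: returns (intent, confidence)."""
    text = utterance.lower()
    if "balance" in text:
        return "check_balance", 0.9
    if "pay" in text or "bill" in text:
        return "pay_bill", 0.85
    if "person" in text or "agent" in text:
        return "talk_to_agent", 0.8
    return "out_of_scope", 0.3


def dispatch(intent: str) -> str:
    return "Okay, let's " + intent.replace("_", " ") + "."


def handle_turn(utterance: str) -> str:
    intent, confidence = classify(utterance)
    if intent in SUPPORTED_INTENTS and confidence >= CONFIDENCE_THRESHOLD:
        return dispatch(intent)
    # Out of scope or low confidence: say what the system *can* do instead of
    # guessing, joking, or quietly mishandling something the user meant privately.
    return ("I may not be the right helper for that. I can check a balance, "
            "pay a bill, or connect you with a person.")
```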

Expanding on this, there are many use cases that are actually better suited to be handled by an AI because the user may be uncomfortable sharing with a human.

Brooke describes a circumstance like this –

“We have this one product that would call patients over 30 days and ask them sensitive health questions over a period of time in order to build relationships with people in a way that sometimes humans can’t even do…

“We’re finding research out in the world that people felt more comfortable talking to voice assistants or they felt less judged.”

Knowing this, we can design voice assistants to do their jobs well by playing to their strengths. But on the ‘back end’ we must recognize that information imparted via a bond of trust cannot be taken lightly.

Trevor Cox writes in the Harvard Business Review –

“Adding speech to a device suggests agency, making it more likely that we will anthropomorphize the technology and feel safe revealing sensitive information. It’s probably only a matter of time before there is a major security breach of voice data.”

As conversation designers, we don’t always control what happens to that data. What is within our control is how we communicate the way a system will gather and use someone’s data. For example, when Google Assistant’s microphone stays open for an extended period of time (often when a user is playing a game in which the Assistant must listen for answers), the Assistant explicitly lets the user know beforehand that the mic will stay open.
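
That disclosure is a pattern any team can borrow, not a Google-specific feature. Here is a rough sketch of it, with speak, listen, and run_game standing in for whatever hooks a given platform actually provides; the plain-language warning and the chance to opt out are the parts that matter.

```python
# Sketch of the disclosure pattern: before a flow keeps the microphone open
# (for example, a listening game), tell the user and let them opt out.
# speak, listen, and run_game stand in for a platform's actual hooks.

def start_open_mic_session(speak, listen, run_game):
    speak("Heads up: for this game I'll keep the microphone open so I can "
          "hear your answers. Do you want to continue?")
    consent = listen().strip().lower()
    if consent in ("yes", "yeah", "sure", "okay", "ok"):
        return run_game()
    speak("No problem. The microphone is closed.")
    return None
```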

Trevor writes,

“Users, not companies, will pick the relationships they want to have with ambient technology. It’s the job of the designers to listen to users, to craft strong brand personalities with values, and to respect the ethical responsibility that comes with designing relationships.”

Wrapping up

Tech ethics is a hot topic, even outside the tech community. For the first time in history, the tech giants Google, Amazon, Apple, and Facebook testified before Congress regarding ethical issues in their business practices. At the same time, we are bearing witness to a global battle against racial injustice and violence, all while the Covid-19 crisis exacerbates systemic inequities.

As voice designers, we have a small but critical part to play as the world’s technological future unfolds. Applying a critical lens to the identity, voice, and functionality of voice assistants is a step in the right direction.

There will certainly continue to be growing pains across industries as we cultivate our individual knowledge and set the trajectory for the future of conversation design. Can we harness the momentum and creative energy to design equitable products and systems?

We won’t always get it right the first time, but the only way to give tomorrow a voice is to speak up today.

“What are you leaving behind? It is always going to have an impact, negatively or positively. Always.” – Antionette D. Carroll, Creative Reaction Lab

Resources

Want to beef up your knowledge of ethical conversation design? Here is a list of articles, individuals, and other resources that may give you a good start. Have any resources to share? Leave a comment and help build our collective knowledge.

Online Reading

Equity and Inclusion

Gender

A few folks to check out…

Antionette D. Carroll — President and CEO of Creative Reaction Lab, a nonprofit educating, training, and challenging Black and Latinx youth to become leaders designing healthy and racially equitable communities.

Heidi Culbertson — founder of Marvee, a voice consultancy focused on the 50+ population, speaks at conferences regularly about designing voice apps for aging populations.

Rebecca Evanhoe — WomenInVoice organizer, has spoken on the importance of gender in bot design, currently writing a book on voice design.

Adva Levin — CEO of Pretzel Labs, designer of experiences for families/children.

Noelle Silver — currently spearheading tech instruction at HackerU, founder of the AI Leadership Institute, former VP at NPR.

Interested in chatting more about design ethics? Or just interested in learning more about voice and conversation design? Join the Conversation Design Community on LinkedIn and let’s get connected!