3 examples of context-aware voice design

This morning around 8:45, I told Google Assistant, “Set an alarm for 9.” It responded,

“Alarm for 9 A.M. — Set.”

Fifteen minutes later, the jingle played and the time showed up on the screen. I said, “Hey Google, stop.”

“By the way, you can also just say ‘stop’ without having to start with ‘hey’, followed by ‘Google’.”

Because I’m a voice nerd, I wanted to try to get Google Assistant to say this tooltip again.

#1 Even though I didn’t specify AM or PM, Google Assistant assumed that I meant 9 AM

This feature might be hard-coded (something like: IF the time of day is morning, THEN assume the user means a morning alarm). And/or it might rely on statistics to make the same call (say, in the past, 99% of users who set an alarm for 9 at 8:45 AM without specifying AM or PM wanted it at 9 AM, not 9 PM).
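
Nobody outside Google can see how this actually works, but a minimal sketch of the hard-coded version of that rule, assuming a made-up `resolve_alarm_hour` helper and a “pick the next occurrence of that hour” heuristic, might look like this:

```python
from datetime import datetime, timedelta

def resolve_alarm_hour(spoken_hour: int, now: datetime) -> datetime:
    """Guess AM vs. PM for an hour-only request like 'set an alarm for 9'.

    Assumed heuristic (not Google's actual logic): pick the next time that
    hour occurs, so asking for '9' at 8:45 AM resolves to 9:00 AM today,
    while the same request at 9:30 AM would resolve to 9:00 PM.
    """
    base = now.replace(hour=spoken_hour % 12, minute=0, second=0, microsecond=0)
    for candidate in (base, base + timedelta(hours=12), base + timedelta(hours=24)):
        if candidate > now:
            return candidate
    return base + timedelta(hours=24)  # never reached; keeps the return type total

print(resolve_alarm_hour(9, datetime(2021, 6, 1, 8, 45)))  # -> 2021-06-01 09:00 (9 AM)
print(resolve_alarm_hour(9, datetime(2021, 6, 1, 9, 30)))  # -> 2021-06-01 21:00 (9 PM)
```

The statistical version would swap that rule for whatever AM/PM choice most users historically meant at that time of day, but the output contract is the same: a single best guess rather than a clarifying question.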

#2 Since Google Assistant made this assumption, it *implicitly* asked me to confirm it

Notice how Google Assistant didn’t explicitly confirm by asking, “Did you want that alarm to be set for 9 AM or 9 PM?” Instead, it stated its assumption out loud while acting on it, which left me room to interrupt and correct it if the guess had been wrong.
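
In conversation-design terms, that’s the implicit-confirmation pattern: state the assumption while acting on it, and only fall back to an explicit question when the assistant isn’t confident. Here’s a rough sketch of that decision, with an arbitrary confidence threshold and a hypothetical `confirmation_prompt` helper (not Google’s code):

```python
def confirmation_prompt(hour: int, meridiem: str, confidence: float) -> str:
    """Choose a confirmation style for an inferred slot value (illustrative only)."""
    if confidence >= 0.8:  # threshold is an arbitrary assumption
        # Implicit: act on the guess, but say it out loud so the user
        # can jump in with a correction if the guess was wrong.
        return f"Alarm for {hour} {meridiem} - set."
    # Explicit: don't act yet; ask the user to resolve the ambiguity first.
    return f"Did you want that alarm set for {hour} AM or {hour} PM?"

print(confirmation_prompt(9, "AM", confidence=0.95))  # "Alarm for 9 AM - set."
print(confirmation_prompt(9, "AM", confidence=0.40))  # asks the explicit question
```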

#3 Google Assistant adjusted its answer using data about who was talking to it

The most obvious context-aware part of this interaction was that Google Assistant didn’t bother me a second time about the tooltip. It recognized my voice, knew that it had already told me, and didn’t repeat itself. But it also knew that it hadn’t yet told my partner about this tip, so the tip still got triggered when he asked.
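
One way to picture that behaviour is a per-speaker “already heard this tip” flag, keyed on the recognized voice rather than on the device. This is purely a guess at the mechanics, with made-up names:

```python
# Tips each recognized speaker has already heard (voice_match_id -> tip ids).
seen_tips: dict[str, set[str]] = {}

def maybe_append_tip(voice_match_id: str, tip_id: str, tip_text: str, response: str) -> str:
    heard = seen_tips.setdefault(voice_match_id, set())
    if tip_id in heard:
        return response                 # this speaker has heard it; don't repeat it
    heard.add(tip_id)
    return f"{response} {tip_text}"     # first time for this speaker; tack the tip on

tip = "By the way, you can also just say 'stop'."
print(maybe_append_tip("author", "stop_hotword", tip, "Okay."))   # tip included
print(maybe_append_tip("author", "stop_hotword", tip, "Okay."))   # tip suppressed
print(maybe_append_tip("partner", "stop_hotword", tip, "Okay."))  # partner still gets it
```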