How AI is honed to appeal to humans

What it takes to get people to like robots

Washington Post

So, how do AI companies get their bots to appeal to humans?

At a recent meeting of Microsoft Cortana's six-person writing team - which includes a poet, a novelist, a playwright and a former TV writer - the group debated how to answer political questions.

To field increasingly common questions about whether Cortana is a fan of Hillary Clinton, for instance, or Donald Trump, the team dug into the back story to find an answer that felt "authentic."

The response they developed reflects Cortana's standing as a "citizen of the Internet," aware of both good and bad information about the candidates, said Deborah Harrison, senior writer for Cortana and a movie review blogger on the side. So Cortana says that all politicians are heroes and villains. She declines to say she favours a specific candidate.

The group, which meets every morning at Microsoft’s offices in Redmond, Washington, also brainstorms Cortana’s responses to new issues. Some members who are shaping Cortana's personality for European and Canadian markets dial in. (Cortana is available in Spanish, Portuguese, French, Japanese, Italian, German, limited Chinese, and British and Indian English.)

When the team was preparing to launch a feature that has Cortana sifting through emails and suggesting people to meet, members debated whether a reminder - "You said you wanted to meet with" so-and-so - sounded pushy.

They considered whether Star Wars jokes were appropriate or too cultish. And they talked about how to shut down vulgar comments and respond to a tendency, among some users, to goad Cortana into repeating sexual comments.

Across the Microsoft campus, a different group apparently didn't heed that lesson: Tay, a chat bot Microsoft recently released on Twitter, was shut down within a week after it started talking like a Nazi. The bot was parroting comments made on the Internet.

"We will do everything possible to limit technical exploits but also know we cannot fully predict all possible human interactive misuses without learning from mistakes," Microsoft said on its blog after it took Tay down.

Such incidents reflect a fundamental challenge in building AI - negotiating the virtual assistant’s relationship to human beings. Writers and designers said the trickiest question they wrestle with is how human the bot can - and should - sound.

Should the virtual assistant be purely functional or should it aspire to connect emotionally with the user?

Human beings have higher expectations of software that is personalised than they do of automation software that can perform similar tasks, said Mark Stephen Meadows, founder and president of San Francisco-based Botanic.io.

Cortana and Siri digital assistants being used to look up the length of the Golden Gate Bridge in San Francisco. Photo: Associated Press

When software can talk and is given a persona, it opens up a Pandora’s box of human behaviour. People don't go out of their way to trick or use vulgarities with other types of software, he pointed out.

Moreover, studies of human-robot interaction have described a phenomenon known as the "uncanny valley," in which attempts to make robots seem humanlike can inspire unease or revulsion instead of empathy.

For that reason, many developers of artificial intelligence make a point of adding a weird element to their avatar designs - such as an asymmetrical face or an odd joke - something that signals that the virtual assistant isn't human and doesn't want to be, Meadows said.

At the same time, the imperfections are meant to be endearing. A robot without such flaws could seem cold and alienating.

If you ask Cortana if she is human, she says no, and then she adds a meant-to-be endearing joke: "But I have the deepest respect for humans. You invented calculus - and milkshakes."

Not aspiring to be human - and having a sense of humour about it - are attributes that can have the added benefit of making users more forgiving of a virtual assistant's limitations and mistakes, said Cathy Pearl, director of user experience at Sense.ly.

And as anyone who has used a virtual assistant knows, they make a lot of them. The technology is still young, and its capacity to handle situations is restricted by the limited information it has been exposed to. Cleverly showing that a bot recognises that it doesn't know something is one of the most challenging aspects of writing for AI, she said.
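As a rough illustration of the pattern Pearl describes, here is a minimal, hypothetical sketch - a bot that answers only the handful of requests it recognises and otherwise falls back to a varied, in-character admission that it doesn't know. None of the intent names or replies below come from a real assistant; they are invented for this example.

import random

# Hypothetical illustration only - not Microsoft's or anyone else's code.
# A writer-authored table maps the few intents the bot does understand
# to scripted answers; anything unmatched falls through to a graceful,
# personality-consistent "I don't know".
KNOWN_INTENTS = {
    "weather": "It's 18 degrees and cloudy in Hong Kong right now.",
    "time": "It's 9:41 in the morning.",
}

# Several phrasings, so the admission of ignorance doesn't feel canned.
FALLBACKS = [
    "I'm still learning, and I don't have an answer for that yet.",
    "That one's beyond me for now. Try asking about the weather or the time.",
    "I don't know, but I'd rather admit that than guess.",
]

def reply(user_text: str) -> str:
    """Return a scripted answer, or a varied fallback for unknown requests."""
    lowered = user_text.lower()
    for intent, answer in KNOWN_INTENTS.items():
        if intent in lowered:
            return answer
    return random.choice(FALLBACKS)

if __name__ == "__main__":
    print(reply("What's the weather like?"))
    print(reply("Who will win the election?"))  # unknown: triggers a fallback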

Another difficult question concerns gender. While Cortana is "crystal clear" that she is not human, the team gave her a female voice because early users expressed a preference for a female virtual assistant, Harrison said.

Siri and Google Now also have female voices by default, though you can configure them to be male. Amazon's Alexa can only be female (the company's chief executive, Jeffrey Bezos, owns The Washington Post), while IBM's Watson has a male voice.

Nonetheless, Cortana’s developers were worried enough about playing into female stereotypes that they made a point of avoiding them. The design, for instance, limits the number of times she apologises, and she avoids self-deprecating remarks.

And despite the voice, the team insists that Cortana isn't a woman. "She knows that gender is a biological construct, and since she's not biological, she has no gender," Harrison said. "She's proud of her AI-ness."

 
