The Robot Spoke—and Said, “I love you” (Part 4) — An Article by Dr. Patrice Caire

This is the fourth of a five-part series “The Robot Spoke” by Dr. Patrice Caire, AI & Social Robotics Consultant.

In the previous articles in this series (The Robot Spoke–But what did it say?, The Robot Spoke–How machines got the hang of it, The Robot Spoke–And it sounded smarter than ever), we saw how supercomputers managed to beat humans at chess—and even at Jeopardy! We also saw how the Internet of Things embedded minicomputers everywhere, allowing us to use our voices for everything from turning on TVs to microwaving popcorn.

But computers taking orders from us is a far cry from computers conversing with us. So, let’s look at how researchers have been scattering rose petals along the garden path, setting the scene for us to be more intimate with robots.

Would it Kill You to Say, “I love you”?

Chances are your robot isn’t deliberately withholding affection from you. After all, we can all agree that love is more of a human skill than a robot thing. Love simply isn’t included in the robot’s toolbox—not yet, at least.

Think about it: people can perform all kinds of complex tasks with apparent ease. For example, recognizing your boss’s face, understanding a colleague’s motivation for changing jobs, and picking up a stapler all involve intricate physical and psychological feats that people somehow manage.

Robots and computers are another story, however. Robots are in their element when doing calculus and figuring out when Hurricane Suzie will hit West Palm, for example. These tasks are comparatively difficult for humans, given that they involve extraordinary computational skills. For robots, reasoning requires minimal computational resources; it’s perception and mobility skills that are so tough for them.

Dr. Hans Moravec, of “Moravec’s Paradox” fame, in 2016. Photo: WSJ.com

This human-machine contradiction was first observed in the 1980s; several computer scientists and engineers framed it as Moravec’s Paradox, an homage to roboticist and futurist Dr. Hans Moravec, who explains the phenomenon in terms of human evolution. In his 1988 book, Mind Children, he writes that encoded in the human brain is a billion years (!) of sensory experience. That’s a billion years of getting to know “the nature of the world and how to survive in it….” In contrast to that deep store of sensory experience, our ability to process abstract thought “is a new trick, perhaps less than 100 thousand years old. We have not yet mastered it,” according to Moravec.

So, for us humans, the older the skill, the more time natural selection has had to improve our design—and the more difficult it is to reverse-engineer ourselves when building robots. That said, a certain amount of reverse-engineering has already taken place: robots can play chess and solve other computational problems without even breaking a sweat. What remains mystifying for robots, however, is anything associated with perception. Attention. Visualization. Motor skills. Social skills. And the pièce de résistance: emotions!

And there you have it: the breathtakingly complex problem of how to get our robots to profess their love for us.

Do I Look Happy Right Now?

The chemistry between us and our machines cannot be denied. (Aren’t we in love with our phones? Why else sleep next to them at night?)

But as the sex therapists always tell us, chemistry alone is not enough to sustain a relationship long-term; romantic relationships need more if they are to progress beyond that first sultry look. And alas, today’s robots simply aren’t wired for love.

Dr. Rosalind Picard demonstrating her wearable technology for understanding human emotions (2007). Photo: Rick Friedman/Getty Images

The quest for artificial emotional intelligence emerged in 1995, when MIT Media Lab professor Dr. Rosalind Picard published a seminal paper titled “Affective Computing.” Dr. Picard pretty much invented affective computing—the gargantuan task of getting machines to compute emotions.

Dr. Picard’s research began in the years following a widely cited 1971 study led by UCLA emeritus psychology professor Dr. Albert Mehrabian. According to Mehrabian, human communications adhere to a “7%-38%-55% Rule.” Translation: purportedly, when you speak to me, a mere 7% of the signals I receive are based on the content of your message. Another 38% relate to your tone of voice. And, crucially, a whopping 55% come from your facial expressions and gestures.
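To make the arithmetic of that rule concrete, here is a toy sketch in Python. Treat it as a back-of-the-envelope illustration of Mehrabian’s percentages only; the weights and the blended_impression function are my own placeholders, not a formula that any real affective-computing system uses.

```python
# Toy illustration of Mehrabian's "7%-38%-55% Rule" as a weighted blend.
# This is only a sketch of the claim above, not a model used by any real system.

MEHRABIAN_WEIGHTS = {"words": 0.07, "tone": 0.38, "face_and_gesture": 0.55}

def blended_impression(channel_scores: dict) -> float:
    """Combine per-channel 'positivity' scores (0..1) using the 7-38-55 split."""
    return sum(MEHRABIAN_WEIGHTS[ch] * score for ch, score in channel_scores.items())

# A sarcastic "Great job!": positive words, but a flat tone and an eye-roll.
impression = blended_impression({"words": 0.9, "tone": 0.3, "face_and_gesture": 0.1})
print(round(impression, 3))  # 0.232 -- the non-verbal channels dominate
```

Run it and the sarcastic “Great job!” scores low overall, because, per the rule, the non-verbal channels carry most of the weight.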

Among those impressed by Mehrabian’s findings was Dr. Rana El Kaliouby, a former grad student of Dr. Picard’s. Along with Dr. Picard, Dr. El Kaliouby has emerged as a leader in the field of affective computing.

Pssst: Your Robot Knows You’re a Bad Driver

In pursuit of the emotionally intelligent robot, Dr. El Kaliouby and Dr. Picard in 2009 co-founded Affectiva, a startup that specializes in AI systems that process human emotions. The results have been staggering: Affectiva created the world’s leading lab for studying the mental states of people inside cars. This lab takes the form of a simulated car interior, rigged with cameras, sensors, and actuators (switches, for example) to provide real-time data on everybody’s mental state.

Screenshot of Affectiva’s emotion-tracking AI system (2018). Photo: Affectiva

Here’s how Affectiva’s affective-computing system works: computer-vision algorithms use the data collected to identify the key facial features of the people in the car. Then, deep-learning algorithms analyze and classify these facial expressions. Next, the system matches each facial expression to the corresponding emotion. Finally, in another feat of computer engineering, Affectiva’s speech technology analyzes sound bites, enabling the computer system to recognize each human utterance—in the context of typical car conversation.
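For the technically curious, here is a minimal Python sketch of that kind of pipeline. Every function and class in it (detect_faces, classify_expression, and so on) is a made-up stand-in; this is not Affectiva’s code or API, just the general shape of the steps described above.

```python
# Toy sketch of an emotion-recognition pipeline like the one described above.
# Every name here is a hypothetical placeholder standing in for the real
# computer-vision, deep-learning, and speech components.

from dataclasses import dataclass

# Step 3's lookup: map a classified facial expression to an emotion label.
EXPRESSION_TO_EMOTION = {
    "smile": "joy",
    "brow_furrow": "frustration",
    "eye_closure": "drowsiness",
}

@dataclass
class FaceReading:
    face_id: int
    expression: str
    emotion: str

def detect_faces(frame):
    """Stand-in for a computer-vision face detector (step 1)."""
    return [(0, "face_crop_driver"), (1, "face_crop_passenger")]

def classify_expression(face_crop):
    """Stand-in for a deep-learning expression classifier (step 2)."""
    return "smile" if "driver" in face_crop else "brow_furrow"

def analyze_frame(frame):
    """Vision half of the pipeline: faces -> expressions -> emotions (steps 1-3)."""
    readings = []
    for face_id, crop in detect_faces(frame):
        expression = classify_expression(crop)
        emotion = EXPRESSION_TO_EMOTION.get(expression, "neutral")
        readings.append(FaceReading(face_id, expression, emotion))
    return readings

def analyze_utterance(audio_clip):
    """Stand-in for the speech half (step 4): recognize an utterance in a car-talk context."""
    return {"text": "turn up the radio", "tone": "calm"}

if __name__ == "__main__":
    print(analyze_frame("camera_frame_001"))
    print(analyze_utterance("mic_clip_001"))
```

The point is simply the flow: detect faces, classify expressions, map expressions to emotions, and analyze speech in context.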

Indeed, once we understand the mood of people in cars, we can get robots—and other machines—to interact with us in ways that are more sophisticated, more personalized, and more intimate.

When Love Is Not Enough

Are you still waiting for your electronic device to say, “I love you”? I am.

So, when’s the magic gonna happen—which of your many devices is most likely to profess its love for you?

Speech-processing aficionado Dr. Tatsuya Kawahara is among the most qualified people to answer this question. According to Dr. Kawahara, a robot will profess its love for you…and that robot’s name is ERICA.

Backstory: Dr. Kawahara, a professor of Informatics at Kyoto University, collaborated with Osaka University’s robotics lab. Dr. Kawahara has long understood the technical demands of robots engaging in pillow talk, explaining that any emotional robot “must be able to extract the correct topic or keyword and then generate a coherent response which can stimulate further conversation, by using a variety of responses.”
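A toy example makes the idea concrete. The sketch below, in Python, extracts a keyword-based topic from an utterance and picks a varied follow-up question. It is my own simplified illustration of the principle Dr. Kawahara describes, not ERICA’s actual dialogue engine.

```python
# Toy illustration of keyword-driven response generation: pull a topic out of
# what the user said, then pick a varied response that keeps the conversation
# going. A deliberately simple sketch, not a real dialogue system.

import random

TOPIC_KEYWORDS = {
    "work": ["job", "boss", "office", "deadline"],
    "travel": ["trip", "flight", "vacation", "hotel"],
    "feelings": ["love", "happy", "sad", "lonely"],
}

RESPONSES = {
    "work": ["How long have you been in that job?", "What do you like most about your work?"],
    "travel": ["Where did you go most recently?", "What was the best part of the trip?"],
    "feelings": ["Tell me more about how that feels.", "What do you think brought that on?"],
    None: ["I see. Could you tell me a little more?"],  # fallback when no topic is found
}

def extract_topic(utterance: str):
    """Return the first topic whose keywords appear in the utterance, or None."""
    words = utterance.lower().split()
    for topic, keywords in TOPIC_KEYWORDS.items():
        if any(keyword in words for keyword in keywords):
            return topic
    return None

def respond(utterance: str) -> str:
    """Generate a follow-up that stays on the detected topic."""
    topic = extract_topic(utterance)
    return random.choice(RESPONSES[topic])

print(respond("My boss gave me an impossible deadline"))  # a work-related follow-up
```

Real systems replace the hand-written keyword lists with statistical language understanding, but the basic loop of extracting a topic and then stimulating further conversation is the idea Dr. Kawahara describes.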

ERICA—a humanoid robot—demonstrating its interviewing skills at the 2018 International Conference on Intelligent Robots and Systems, in Madrid. Photo: Gent/AFP

In 2015, Dr. Kawahara and his collaborators unveiled ERICA, an autonomous humanoid robot designed to look, act, and interact like a human. ERICA can blurt, “I love you” any day of the week—albeit in a computer-synthesized voice. ERICA can even slip you smoldering glances and bat the old eyelashes.

With its smooth-looking skin and big brown eyes, ERICA attracted immediate attention. A hyper-realistic replica of a young White woman, ERICA has a life-size body. Most of the time, ERICA remains in a sitting position; its main moving part is its head.

ERICA is a triumph of multi-modal interaction. For example, speaking, gesturing, and smiling are well within its repertoire. Dr. Kawahara, who is responsible for much of ERICA’s expressive multimodal functioning, has created a robot with surprisingly lifelike abilities: ERICA nods, blinks, and indeed bats its eyelashes, all at appropriate times in the conversation.

ERICA’s geminoid achievements are the culmination of Dr. Kawahara’s two decades of theoretical and empirical research in multi-modal conversation and speech processing. Chatting with ERICA is akin to interacting with a trained psychotherapist—ERICA’s active-listening skills are that good. These prodigious communication skills have been tested in various social contexts, from greeting guests at a hotel reception desk to speed-dating. By 2017, ERICA had performed so convincingly that it was recruited to star in the 70-million-dollar Hollywood sci-fi film “b”: ERICA is the world’s first fully autonomous, artificially intelligent actor.

Does that mean ERICA can say “I love you”—and mean it? Not by a long shot. As journalist Sarah Bahr notes in a 2020 New York Times piece, ERICA has a synthesized British voice with a “slight metallic tone that sounds like she’s speaking into a pipe.”

Sexy or not sexy? Say it with me: not sexy.

Robots Approach their Destiny

We have our answer now: robots aren’t likely to genuinely profess their love for us anytime soon.

Where does that leave us? Giving up on love, and turning to more sensible ways of integrating robotic speech-recognition into our daily lives. Which brings us to Part 5 of this series: “The Robot Spoke—and Said, ‘Please Pass the Screwdriver.’”

See you again in April!
