People Think: Fears and Emotion AI

Screenshot from the movie Ex Machina (2015).

Technology has changed the world. It keeps evolving quickly and steadily, making our lives more convenient. Indeed, technology is everywhere. Smartphone apps are getting better and more sophisticated. We put smart devices in our homes to make life easier and more carefree. They have, in a way, become our companions: the more time they spend with us, the more they learn about us and our emotional state. Yet we cannot truly trust them. Sometimes we don’t trust them at all, suspecting they might want to take over our lives.

This week we asked our followers on Twitter and Facebook what concerns them most when it comes to teaching computers to recognize emotions: Emotion AI. With approximately 650 respondents, the results were intriguing. Below we address each of the concerns raised and discuss how justified they are.

Fear #1: Emotion AI manipulates our emotions

People think:

Technology penetrates deeper into our lives every day. If it can understand our emotions and behavior, it will be used to manipulate us into buying things we don’t need.

Why it’s OK:

Technology has become widely available, and we have grown addicted to it. When somebody pays attention to us on social media, we instantly feel happier. Nothing makes us more upbeat than seeing that a friend liked our photo on Instagram; it boosts our confidence. That is the bright side. On the downside, just remember how you felt when you somehow ended up in a room full of cameras: the anxiety came on fast, and that is normal, since nobody really likes being watched. Or think of the uneasiness we feel when breaking news arrives on our devices. All of this clearly influences our emotions.

Nevertheless, the industry is trying to address this problem. Companies understand that our interaction with technology goes beyond its functions and is becoming more and more intimate. So they are building new algorithms meant to make our relationship with our devices deeper and more trusting, so that we view technology not as an enemy but as a new friend that tries to understand us better.

For example, personal assistants are becoming increasingly popular. Imagine one that adapted to its owner’s emotional state, lending a helping hand when needed and offering cheer on a difficult day. Such technology shapes the atmosphere around us, but it should not force us into anything.

Fear #2: Emotion AI recognizes only 6 basic emotions and can’t truly understand us

People think:

Emotion AI is only at the beginning of its journey. Most algorithms work with just six basic emotions (happiness, fear, sadness, disgust, surprise, and anger), because that is what they are trained to recognize. They know how these emotions are expressed and which features accompany them, yet they cannot grasp context or trace how a particular emotion came to be. To a machine, a human is just numerical data to be processed, which is why it perceives us one-sidedly.

Why it’s OK:

Most prominent scientists and labs agree that this problem exists, and companies struggle with the fact that algorithms cannot interpret data on a deeper level. At Neurodata Lab we take a multimodal approach: the analysis covers not only facial expressions but also body movements, voice, and even heart rate, which makes the results more accurate. Of course, this does not fully solve the problem of context; there is still a lot to do. We also have to figure out how to handle hidden and faked emotions, but we are getting there. Future technologies will be able to understand context and advance the way emotion recognition is done.
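To make the idea of combining modalities more concrete, here is a deliberately simplified sketch in Python. It is not Neurodata Lab’s actual pipeline: the per-modality scores and weights below are hypothetical stand-ins for the outputs of separately trained face, body, voice, and heart-rate models, fused by a simple weighted average.

```python
# A simplified, hypothetical illustration of late fusion across modalities.
# The per-modality scores would normally come from separate trained models.

EMOTIONS = ["happiness", "fear", "sadness", "disgust", "surprise", "anger"]

# Illustrative per-modality confidence scores for one analysis window.
modality_scores = {
    "face":       [0.55, 0.05, 0.10, 0.05, 0.15, 0.10],
    "body":       [0.40, 0.10, 0.15, 0.05, 0.20, 0.10],
    "voice":      [0.30, 0.05, 0.35, 0.05, 0.15, 0.10],
    "heart_rate": [0.25, 0.20, 0.20, 0.10, 0.15, 0.10],
}

# Hypothetical reliability weights; in practice they would be learned from data.
weights = {"face": 0.4, "body": 0.2, "voice": 0.3, "heart_rate": 0.1}

def fuse(scores, weights):
    """Weighted late fusion: average the per-modality distributions."""
    fused = [0.0] * len(EMOTIONS)
    for modality, probs in scores.items():
        w = weights[modality]
        for i, p in enumerate(probs):
            fused[i] += w * p
    total = sum(fused)
    return [p / total for p in fused]  # re-normalize to a distribution

fused = fuse(modality_scores, weights)
best = max(range(len(EMOTIONS)), key=lambda i: fused[i])
print(f"Predicted emotion: {EMOTIONS[best]} (p={fused[best]:.2f})")
```

The point of the sketch is simply that evidence from several channels is pooled before a decision is made, so a weak or ambiguous facial signal can be compensated by voice or physiology.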

Fear #3: They collect our data, but how will they use it?

People think:

Humans are easy to manipulate, especially in a world run by technology, and that is why companies gather so much data. They know when we wake up and go to bed, how we feel, and what we think about the Stranger Things trailer. They know where we eat and how we live, our interests and hobbies, our relatives and schedules. It is a new level of intimacy and a cultural shift. But where all of this data goes and who may access it raises questions that are difficult to answer.

Why it’s OK:

We have already said that affective computing, like almost every machine learning field, thrives on data. Data is essential for training AI systems: the more of it is available, the more both the systems and we benefit.

In June 2019, a group of researchers, including Lisa Feldman Barrett, reviewed the AI landscape and suggested recommendations on how to make sure these technologies are safe for users, for example, by initiating policies that prevent privacy abuse. First of all, companies must make it clear why they gather personal information, and they must anonymize this data. It is important that people understand what the research is for and where the information will end up: whether it goes toward building a better product, developing healthcare apps that could help, for instance, in treating depression, or tracking the physical state of residents of retirement homes, and so on.
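As a purely illustrative aside (not taken from the review mentioned above and not an actual Neurodata Lab tool), here is one minimal way personal identifiers could be pseudonymized before emotion data is analyzed; all field names and the salt value are hypothetical.

```python
# A minimal, hypothetical sketch of pseudonymizing records before analysis.
# Real anonymization (k-anonymity, differential privacy, etc.) is far more involved.
import hashlib

SALT = "replace-with-a-secret-salt"  # stored separately from the data

def pseudonymize(record: dict) -> dict:
    """Replace the direct identifier with a salted hash and drop personal fields."""
    token = hashlib.sha256((SALT + record["email"]).encode()).hexdigest()[:16]
    return {
        "user_token": token,                      # stable pseudonym
        "emotion_label": record["emotion_label"],
        "timestamp": record["timestamp"],
        # name, email, and location are intentionally not copied over
    }

sample = {
    "email": "jane.doe@example.com",
    "name": "Jane Doe",
    "location": "Berlin",
    "emotion_label": "happiness",
    "timestamp": "2019-06-30T12:00:00Z",
}
print(pseudonymize(sample))
```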

Done this way, data collection is not just a marketing move but something genuinely useful to people. Without this research, companies would not be able to make products more audience-centered or introduce solutions that address real customer problems.

Fear #4: Machines cannot understand real emotions

People think:

AI can’t be empathetic. It cannot understand the core of our emotions or grasp why we feel the way we feel. Nothing comes from nothing, and you can’t teach empathy from scratch. There is a fairly popular opinion that AI is a kind of metaphor for a psychopath: it cannot feel anything itself, but it tracks your emotions, analyzes them, and then tries to copy them in order to imitate successful communication.

Why it’s OK:

Well, to be honest, it’s all up to us. The more information we give an AI, the more it learns and the better it learns. This simple idea can, in fact, resolve a whole lot of issues, for example, help fight racial bias in facial emotion recognition technology.

Moreover, AI should be ethical. We cannot create a new kind of intelligence without setting rules for it. This April the European Commission published its ethics guidelines for trustworthy AI, which can help make human-machine interaction sound and safe for everyone.

What you need to keep in mind about Emotion AI

Each of these concerns is justified. There are real problems behind them, and it is actually good that people can recognize and point them out, as our survey showed.

Twitter poll, 611 respondents (https://twitter.com/NeurodataLab/status/1143152718548258816). X axis: responses; Y axis: percentage of respondents.

Most of the 611 respondents on Twitter were worried that the emotions in the algorithms are synthetic. People have a hard time trusting machines whose emotions were planted from the outside, with no way of knowing how they will use them. Is it really for the good of humanity, or are the technologies just waiting to attack us? It is reasonable to ask these questions: for the last 30 years, humans versus machines has been one of pop culture’s favorite plots. No wonder we are suspicious.

We are still learning how to teach AI properly, and there is work to do before it can not only recognize emotions but understand them. We need to teach it to work with context and to account for the fact that emotions are not static but linked to one another. More importantly, we need to remember that any AI is created by humans. We distrust the machines because we distrust people, and without trust we cannot coexist peacefully.

Facebook poll, 34 respondents (https://facebook.com/neurodatalab/). X axis: responses; Y axis: percentage of respondents.

However, there is good news. On Facebook we ran another poll, this time asking people whether they believed Emotion AI could change the world for the better. Although there were not many respondents, 65% of them said it could, which is pretty reassuring.


References

It Isn’t Emotional AI. It’s Psychopathic AI (2018), Jonathan Cook, Medium

Tech’s dangerous race to control our emotions (2019), Simon Chandler, The Daily Dot

The Dawn of Robot Surveillance: AI, Video Analytics, and Privacy (2019), Jay Stanley, ACLU

Emotion-reading tech fails the racial bias test (2019), Lauren Rhue, The Conversation

Ethics guidelines for trustworthy AI (2019), European Commission


***

Author: Elizaveta Zaitseva, SMM Manager at Neurodata Lab.


You are welcome to comment on this article in our blog on Medium.