Machines are getting freakishly good at recognizing human emotions

Originally published in Digital Trends.

“People are generating a lot of non-verbal and physiological data at any given moment,” said George Pliev, Founder and Managing Partner at Neurodata Lab, one of the companies whose algorithms were tested in the facial recognition study. “Apart from facial expressions, there are also voice, speech, body movements, heart rate, and respiration rate. The multimodal approach holds that behavioral data should be extracted from different channels and analyzed simultaneously.

“The data coming from one channel will verify and balance the data received from the other ones. For example, when facial information is for some reason unavailable, we can analyze the vocal intonations or look at the gestures.”
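
To make the fallback idea concrete, here is a minimal sketch of late fusion across emotion channels, written in plain Python. Everything in it is invented for illustration: the channel names, the reliability weights, and the score values are assumptions, not Neurodata Lab's actual pipeline. The point is only that when one channel (here, the face) is unavailable, the reliability-weighted average is carried by the remaining channels.

```python
from dataclasses import dataclass
from typing import Optional

# Toy emotion labels; a real system would use its own taxonomy.
EMOTIONS = ("happy", "sad", "angry", "neutral")

@dataclass
class ChannelReading:
    """Per-channel emotion scores plus a reliability weight in [0, 1]."""
    scores: dict          # emotion -> probability-like score
    reliability: float    # e.g. low when the face is occluded

def fuse(channels: dict) -> Optional[dict]:
    """Late fusion: reliability-weighted average over available channels.

    Channels whose reading is None (e.g. face not visible) are skipped,
    so the remaining modalities carry the estimate on their own.
    """
    total_weight = 0.0
    fused = {e: 0.0 for e in EMOTIONS}
    for name, reading in channels.items():
        if reading is None or reading.reliability <= 0:
            continue  # channel unavailable: fall back to the others
        for e in EMOTIONS:
            fused[e] += reading.reliability * reading.scores.get(e, 0.0)
        total_weight += reading.reliability
    if total_weight == 0:
        return None  # no usable channel this frame
    return {e: s / total_weight for e, s in fused.items()}

# Face is occluded here, so voice and gesture drive the estimate.
readings = {
    "face": None,
    "voice": ChannelReading(
        {"happy": 0.1, "sad": 0.6, "angry": 0.1, "neutral": 0.2}, 0.9),
    "gesture": ChannelReading(
        {"happy": 0.2, "sad": 0.5, "angry": 0.1, "neutral": 0.2}, 0.5),
}

fused = fuse(readings)
print(max(fused, key=fused.get))  # -> "sad"
```

Running the example prints "sad": with the face channel absent, the vocal and gestural scores alone determine the fused estimate, which is the cross-channel balancing Pliev describes.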

Read the full article at the link.