Affective computing: can machines achieve emotional intelligence?

Earlier this year, admittedly a little late to the party, I got hooked on ‘Humans’: a TV series in which robots (or ‘synths’) achieve consciousness, only to struggle to find a place in a world that does not want them.

Sam Vincent and Jonathan Brackley’s 2015 series is both uncanny and thought-provoking, and it throws up some uncomfortable questions. Consciousness, self-awareness, emotional intelligence and empathy are uniquely human attributes, after all. But what if a machine were to succeed in emulating these characteristics? Does that make it human? What should its legal status be? Could it enter into loving relationships, or act as a caregiver to someone else?

A fully autonomous, conscious, decision-making humanoid bot is, at the moment, the stuff of fiction. And yet ‘affective computing’ – the capability to recognize, respond to and even emulate human emotions – is present in some of today’s machines. So to what extent can machines exhibit emotional intelligence? And how are these capabilities being put to use?

The beginnings of affective computing

The term ‘affective computing’ describes the ability of machines to detect, interpret and predict human emotional responses. It was coined in the 1990s by Rosalind Picard, a computer scientist at MIT and co-founder of Affectiva – an emotion measurement tech company that spun out of MIT’s explorative Media Lab in 2009.

Picard was interested in creating software that could recognize human emotions based on facial expressions. To that end, she spent some time as a test subject herself – measuring physical indicators of her various emotional states. These physical giveaways included things like muscle tension, heart rate, pupil dilation and the contraction of various facial muscles, all of which provided valuable data on how her body expressed what she was feeling.

She found that our bodies display consistent patterns in response to emotions. These patterns could be logged, analysed, and eventually used as raw data with which to teach a wearable device to recognize them when they occur in someone else. With enough data, and by applying machine learning techniques, a wearable device can pick up tiny facial contortions from its user with some accuracy and examine them to gauge emotional engagement.
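Under the hood, that pattern-recognition step can be as simple as a supervised classifier. The sketch below is purely illustrative – the feature names, readings and emotion labels are all invented – but it shows the general shape of teaching a model to map physiological signals to emotional states.

# Illustrative sketch only: train a classifier to map physiological readings
# to emotion labels, in the spirit of the pattern-recognition approach
# described above. The features, data and labels here are all invented.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(42)

# Pretend each row is one time window of wearable readings:
# [heart_rate_bpm, skin_conductance, cheek_emg, brow_emg, pupil_dilation]
X = rng.normal(size=(600, 5))
# Hypothetical ground-truth labels gathered during data collection
y = rng.choice(["calm", "amused", "stressed"], size=600)

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)

model = RandomForestClassifier(n_estimators=100, random_state=0)
model.fit(X_train, y_train)

print("held-out accuracy:", model.score(X_test, y_test))
# On random noise this hovers around chance; with real, consistent
# physiological patterns the classifier has something to learn from.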

Affective computing in action

Emotion is a powerful tool and a key ingredient of human perception, in that it helps us separate the important stuff from the irrelevant. Something that provokes a powerful emotional response is unlikely to be disregarded – it lodges in our memory and feeds into our decision-making. Witness the powerful appeal of UK department store John Lewis’ famous Christmas adverts, for instance.

Small wonder, then, that tapping into human emotion is something of a holy grail for marketers and the entertainment industry. It would certainly help boost your viewing figures if you knew which of your sitcom characters were the funniest, for example, or which moments from the latest Pixar flick were the most tear-jerking. Disney are famously measuring audience reactions in a similar way, with the help of a new algorithm known as ‘factorized variational autoencoders’ (FVAEs). The FVAEs aim to predict which bits of Toy Story 5 audience members will find the funniest, and presumably that information will help churn out other side-splitters in the future.
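Disney’s real FVAE model is a good deal more sophisticated, but the core factorization idea can be sketched in a few lines: treat audience reactions as a partially observed viewers-by-scenes matrix, learn a small latent vector for each viewer and each scene, and predict the reactions that were never observed. Everything below – the synthetic data, the dimensions, the training loop – is invented for illustration and is not Disney’s method.

# Rough illustration of the factorization idea behind models like FVAEs:
# represent each viewer and each scene with a small latent vector, and
# predict a viewer's reaction to a scene from their dot product.
import numpy as np

rng = np.random.default_rng(0)
n_viewers, n_scenes, k = 50, 40, 4

# Synthetic "ground truth" smile intensities with low-rank structure
true_viewers = rng.normal(size=(n_viewers, k))
true_scenes = rng.normal(size=(n_scenes, k))
reactions = true_viewers @ true_scenes.T

# Only a subset of (viewer, scene) reactions is actually observed
observed = rng.random((n_viewers, n_scenes)) < 0.3

V = rng.normal(scale=0.1, size=(n_viewers, k))   # viewer factors
S = rng.normal(scale=0.1, size=(n_scenes, k))    # scene factors
lr = 0.01

for step in range(2000):
    pred = V @ S.T
    err = np.where(observed, pred - reactions, 0.0)
    V -= lr * err @ S          # gradient step on viewer factors
    S -= lr * err.T @ V        # gradient step on scene factors

# Predict the reactions the model never saw
pred = V @ S.T
mse_unseen = np.mean((pred - reactions)[~observed] ** 2)
print("MSE on unobserved (viewer, scene) pairs:", round(mse_unseen, 4))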

Compassionate computing

Entertainment aside, affective computing could have other positive use cases too. In the field of healthcare, for example, it can be transformative, especially for those with medical conditions such as facial palsy or paralysis that compromise their ability to make facial expressions.

Emteq, well-known developers of emotion-sensing technology, have been working on a new piece of equipment that could help people with conditions like these. Their sensor-embedded headset offers a non-intrusive means of measuring the tiny electrical signals emanating from facial muscles, allowing the wearer to operate devices remotely with a small facial gesture. There are other possibilities too: readings from the sensors in the goggles can be translated into real-time expressions depicted on a 3D cartoon face. Such a tool is useful in helping people with facial palsy learn to isolate and exercise individual muscle groups.
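To make the first of those ideas concrete, here is a rough, hypothetical sketch of gesture-based device control – none of it is Emteq’s actual code or API. It simply watches a stream of readings from one facial-muscle channel and fires a command when a sustained, deliberate activation is detected.

# Hypothetical sketch of gesture-based device control. It watches one
# facial-muscle channel and triggers a command when a deliberate,
# sustained activation is detected.
from collections import deque

def detect_gesture(samples, threshold=0.6, min_samples=5):
    """Return True if the last `min_samples` readings all exceed `threshold`."""
    recent = list(samples)[-min_samples:]
    return len(recent) == min_samples and all(s > threshold for s in recent)

def send_command(command):
    # Placeholder for whatever the real device integration would be
    print(f"-> sending '{command}' to paired device")

window = deque(maxlen=50)

# Simulated stream of normalized readings from a cheek sensor:
# quiet at first, then a deliberate sustained contraction.
stream = [0.1, 0.2, 0.15, 0.1, 0.7, 0.8, 0.75, 0.9, 0.85, 0.2]

for reading in stream:
    window.append(reading)
    if detect_gesture(window):
        send_command("toggle_lights")
        window.clear()  # avoid re-triggering on the same gesture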

Of course, sensor data on its own isn’t enough. Because there’s a good deal of cross-over in the muscles of the face, Emteq’s device needs to understand which muscles to read and which to ignore. It is here that two IoT stalwarts come into play: machine learning and artificial intelligence.
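How might software learn which muscles to read and which to ignore? One simple, generic approach (not Emteq’s own, which is proprietary) is to train a supervised model on several channels at once and inspect which channels it actually leans on. The channels, data and labels below are made up for illustration.

# Illustrative sketch: when several facial muscles activate together,
# a simple supervised model can learn which channels actually
# distinguish the gesture of interest from everything else.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
n = 400

# Four hypothetical muscle channels; only channel 0 truly drives the
# "deliberate brow raise" label, but channel 1 is correlated with it.
ch0 = rng.normal(size=n)
ch1 = 0.8 * ch0 + 0.2 * rng.normal(size=n)   # cross-over with channel 0
ch2 = rng.normal(size=n)
ch3 = rng.normal(size=n)
X = np.column_stack([ch0, ch1, ch2, ch3])
y = (ch0 > 0.5).astype(int)                  # gesture present or not

clf = LogisticRegression().fit(X, y)
for i, w in enumerate(clf.coef_[0]):
    print(f"channel {i}: weight {w:+.2f}")
# The learned weights indicate which channels the model relies on --
# a crude stand-in for "which muscles to read and which to ignore".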

Behind the scenes: cognitive analytics

Machine learning and AI belong to a set of cognitive capabilities that are to affective computing what life experience is to us. Machine learning, for instance, is the process of feeding a machine huge quantities of information, so that it can compare new data against what it has already seen in order to interpret what it is looking at. If you can teach a machine what a ‘smile’ looks like – that the teeth might be exposed, or the eyes crinkle – the machine can recognize various permutations of this expression by comparing them to what it knows about smiling.
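As a minimal sketch of that idea, the toy classifier below is ‘taught’ what a smile looks like from two invented features – how far the mouth corners lift and how much the eyes crinkle – and is then asked about a face it has never seen. A real system would extract such features from images rather than hand-typed numbers.

# Minimal sketch of the "teach a machine what a smile looks like" idea.
# The two features and all the numbers are invented for illustration.
from sklearn.tree import DecisionTreeClassifier

# Each row: [mouth_corner_lift, eye_crinkle], values normalized 0..1
examples = [
    [0.9, 0.8], [0.8, 0.6], [0.7, 0.9], [0.85, 0.7],   # smiling faces
    [0.1, 0.2], [0.2, 0.1], [0.15, 0.3], [0.05, 0.1],  # neutral faces
]
labels = ["smile", "smile", "smile", "smile",
          "neutral", "neutral", "neutral", "neutral"]

model = DecisionTreeClassifier().fit(examples, labels)

# A new face the model has never seen: a subtle, closed-mouth smile
print(model.predict([[0.65, 0.75]])[0])   # expected output: 'smile'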

IBM has done a lot of work in this field, as anyone who’s familiar with the supercomputer Watson will know. At InterConnect 2016, IBM announced three new Watson APIs: Tone Analyzer, Emotion Analysis, and Visual Recognition. Each of these three capabilities can be fed into different solutions that need to interpret and recognize human emotion.

These three tools can determine the emotional content of text, images or video because they have been trained to recognize patterns in our speech, writing and facial expressions. Vast data sets have been fed into Watson so that it can draw on them to detect, or even predict, human emotional responses. By comparing a video of a human’s reactions with the contextual data it has ingested, for example, Watson can accurately pinpoint emotions within content of various types.
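For text, a service like Tone Analyzer is typically consumed over REST. The sketch below shows the general shape of such a call only; the endpoint URL, version date and basic-auth credentials are assumptions from around the time of writing, so check IBM’s current documentation rather than treating them as gospel.

# Hedged sketch of calling a text-emotion service such as Watson's
# Tone Analyzer over REST. The endpoint, version date and credential
# scheme below are assumptions -- consult IBM's documentation for the
# real values before using anything like this.
import requests

WATSON_URL = "https://gateway.watsonplatform.net/tone-analyzer/api/v3/tone"  # assumed
USERNAME = "your-service-username"   # placeholder credentials
PASSWORD = "your-service-password"

def analyse_tone(text):
    response = requests.post(
        WATSON_URL,
        params={"version": "2016-05-19"},   # assumed API version date
        auth=(USERNAME, PASSWORD),
        json={"text": text},
    )
    response.raise_for_status()
    return response.json()

if __name__ == "__main__":
    result = analyse_tone("I can't believe how wonderful this is!")
    # The response is expected to contain tone categories and scores
    # (e.g. joy, anger); the exact schema depends on the API version.
    print(result)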

These machine learning and AI capabilities form the backbone of affective computing. Without them, sensors detecting facial muscle movement could only tell us that we’ve raised our eyebrows. With them, computers can recognize surprise, doubt, delight and a thousand other expressions – and perhaps even help us understand ourselves a little better.

Learn more

To find out more about affective computing and the cognitive capabilities that feed into it, you might enjoy exploring these resources:
