Earlier this year, admittedly a little late to the party, I got hooked on ‘Humans’: a TV series in which robots (or ‘synths’) achieve consciousness, only to struggle to find a place in a world that does not want them.
Sam Vincent and Jonathan Brackley’s 2015 series is both uncanny and thought-provoking, and it throws up some uncomfortable questions. Consciousness, self-awareness, emotional intelligence and empathy are uniquely human attributes, after all. But what if a machine were to succeed in emulating these characteristics? Does that make it human? What should its legal status be? Could it enter into loving relationships, or act as a caregiver to someone else?
A fully autonomous, conscious, decision-making humanoid bot is, at the moment, the stuff of fiction. And yet, ‘affective computing’, or the capability to recognize, respond to and even emulate human emotions is present in some of today’s machines. So to what extent can machines exhibit emotional intelligence? And how are these capabilities being put to use?
The beginnings of affective computing
The term ‘affective computing’ describes the ability of machines to detect, interpret and predict human emotional responses. It was coined in the mid-1990s by Rosalind Picard, a computer scientist at MIT and co-founder of Affectiva – an emotion-measurement tech company that spun out of MIT’s Media Lab in 2009.
Picard was interested in creating software that could recognize human emotions based on facial expressions. To that end, she spent some time as a test subject herself – measuring physical indicators of her various emotional states. The physical giveaways include things like muscle tension, heart rate, pupil dilation and the contraction of various facial muscles, all of which provided valuable data about how her body expressed what she was feeling.
She found that our bodies display consistent patterns in response to emotions. These could be logged, analysed, and eventually used as raw data with which to teach a wearable device to recognize those patterns when they occur in someone else. With enough data, and by applying machine learning techniques, a wearable device could, with some accuracy, pick up tiny facial contortions from its user and examine these to determine emotional engagement.
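The pattern-matching idea at the heart of this approach can be sketched very simply. The snippet below trains a toy nearest-centroid classifier on made-up physiological readings (heart rate, muscle tension, pupil dilation) – the labels and numbers are purely illustrative, not real measurements or Picard's actual method:

```python
import math

# Toy physiological readings: (heart_rate_bpm, muscle_tension, pupil_dilation_mm).
# All values are invented for illustration only.
TRAINING = {
    "calm":     [(62, 0.20, 3.1), (65, 0.25, 3.0), (60, 0.18, 3.2)],
    "stressed": [(95, 0.80, 4.5), (101, 0.90, 4.8), (92, 0.75, 4.4)],
}

def centroid(samples):
    """Average each feature across a list of readings."""
    n = len(samples)
    return tuple(sum(s[i] for s in samples) / n for i in range(len(samples[0])))

# One prototype pattern per emotional state, learned from the logged data.
CENTROIDS = {label: centroid(samples) for label, samples in TRAINING.items()}

def classify(reading):
    """Assign the label whose learned pattern is nearest (Euclidean distance)."""
    return min(CENTROIDS, key=lambda label: math.dist(reading, CENTROIDS[label]))
```

A new reading such as `classify((98, 0.85, 4.6))` is matched against the learned prototypes and lands on `"stressed"`. Real systems use far richer features and models, but the principle – compare incoming signals to patterns learned from labelled data – is the same.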
Affective computing in action
Emotion is a powerful tool and a key ingredient of human perception, in that it helps us separate the important stuff from the irrelevant. Something that provokes a powerful emotional response is unlikely to be disregarded – it lodges in our memory and feeds into our decision-making. Witness the powerful appeal of UK department store John Lewis’ famous Christmas adverts, for instance.
Small wonder, then, that tapping into human emotion is something of a holy grail for marketers and the entertainment industry. It could certainly help boost your viewing figures if you knew which of your sitcom characters were the funniest, for example, or which moments from the latest Pixar flick were the most tear-jerking. Disney are famously measuring audience reactions in a similar way, with the help of a new algorithm known as ‘factorized variational autoencoders’ (FVAEs). The FVAEs aim to predict which bits of Toy Story 5 audience members will find the funniest, and presumably use that information to churn out other side-splitters in the future.
Entertainment aside, affective computing could have other positive use cases too. In the field of healthcare, for example, it can be transformative, especially for people with medical conditions such as facial palsy or paralysis that compromise their ability to make facial expressions.
Emteq, well-known developers of emotion-sensing technology, have been working on a new piece of equipment that could help people with conditions like these. Their sensor-embedded headset offers a non-intrusive means of measuring the tiny electrical signals emanating from facial muscles, allowing the wearer to operate devices remotely with a small facial gesture. There are other possibilities too – by taking information from the sensors in the headset, it can translate this information into real-time expressions depicted on a 3D cartoon. Such a tool is useful in teaching people with facial palsy to isolate and exercise individual muscle groups.
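The idea of turning muscle signals into device commands can be sketched as a simple thresholding step. The channel names, thresholds and command mapping below are illustrative assumptions, not Emteq's actual design:

```python
# Minimum normalized signal level at which each facial-muscle channel "fires".
# Channels and values are hypothetical, for illustration only.
THRESHOLDS = {
    "cheek_raise": 0.6,  # e.g. activity over the cheek muscles
    "brow_raise": 0.5,   # e.g. activity over the forehead muscles
}

# Which device command each gesture triggers (also hypothetical).
COMMANDS = {
    "cheek_raise": "select",
    "brow_raise": "scroll_up",
}

def detect_gesture(sample):
    """Given one frame of {channel: signal_level}, return the command for the
    strongest channel that crosses its threshold, or None if nothing fires."""
    fired = {ch: level for ch, level in sample.items()
             if level >= THRESHOLDS.get(ch, float("inf"))}
    if not fired:
        return None
    strongest = max(fired, key=fired.get)
    return COMMANDS[strongest]
```

So a strong cheek-raise frame like `{"cheek_raise": 0.7, "brow_raise": 0.2}` would map to `"select"`, while weak readings produce no command. As the article notes next, real devices need more than fixed thresholds, because facial muscle signals overlap.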
Of course, sensor data on its own isn’t enough. Because there’s a good deal of crossover between the muscles of the face, Emteq’s device needs to understand which muscles to read and which to ignore. It is here that two IoT stalwarts come into play: machine learning and artificial intelligence.
Behind the scenes: cognitive analytics
Machine learning and AI belong to a set of cognitive capabilities that are to affective computing what life experience is to us. Machine learning, for instance, is the process of feeding a machine huge quantities of information, so that it can compare new data against that body of knowledge in order to interpret what it is seeing. If you can teach a machine what a ‘smile’ looks like – that the teeth might be exposed, or the eyes crinkle – the machine can recognize various permutations of that expression by comparing them to what it knows about smiling.
IBM has done a lot of work in this field, as anyone who’s familiar with the supercomputer Watson will know. At InterConnect 2016, IBM announced three new Watson APIs: Tone Analyzer, Emotion Analysis, and Visual Recognition. Each of these three capabilities can be fed into different solutions that need to interpret and recognize human emotion.
These three tools can determine the emotional content of text, images or video because they have been trained to recognize patterns in our speech, writing and facial expressions. Vast data sets have been fed into Watson so that it can draw on this data to detect or even predict human perception. By comparing a video of a human’s reactions with the contextual data it has ingested, for example, Watson can accurately pinpoint emotions within content of various types.
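To give a feel for what working with such a service looks like, the sketch below parses a response shaped like Tone Analyzer's documented JSON format and picks out the dominant tone. The response structure is assumed from the service's public v3 format, and the scores are made up for illustration – this is not a live API call:

```python
import json

# A mock response in the shape Tone Analyzer returns (scores invented).
SAMPLE_RESPONSE = json.dumps({
    "document_tone": {
        "tones": [
            {"tone_id": "joy", "score": 0.82, "tone_name": "Joy"},
            {"tone_id": "confident", "score": 0.61, "tone_name": "Confident"},
        ]
    }
})

def dominant_tone(response_text, min_score=0.5):
    """Return the highest-scoring tone above a confidence floor, or None."""
    tones = json.loads(response_text)["document_tone"]["tones"]
    candidates = [t for t in tones if t["score"] >= min_score]
    if not candidates:
        return None
    return max(candidates, key=lambda t: t["score"])["tone_id"]
```

Here `dominant_tone(SAMPLE_RESPONSE)` yields `"joy"`. A real application would obtain the response by sending text to the service's REST endpoint with its API credentials, then apply this kind of post-processing to decide how to react.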
These machine learning and AI capabilities are the backbone of affective computing. Without them, sensors detecting facial muscle movement could only tell us that we’ve raised our eyebrows. With them, computers can recognize surprise, doubt, delight and a thousand other expressions – and perhaps even help us understand ourselves a little better.