Cars that see: when your car “gets” you

“What’s on your mind?” is the question we ask our colleagues and friends when we notice that they’re deep in thought. We humans naturally read each other’s thoughts using body language and facial expressions. That allows us to help each other by sharing common concerns and working together to solve problems. What if your car could read your facial expression? Could it help you and your fellow drivers be safer?

The IBM Ireland Innovation Exchange is working with partners, including Vicomtech-IK4 and Honda Research Institute Europe, on the VI-DAS project. With funding from the European Union’s Horizon 2020 programme, they’re exploring how computer vision, in-vehicle edge computing and vehicle-to-vehicle communications can all help both drivers and automated vehicles better understand risky road conditions.

How your car “sees”

VI-DAS aims to bring a 720-degree view to support the handover of the driving task between a human driver and an automated driving system. The 720-degree concept is based on using computer vision to sense the outside world 360 degrees around the car. For example, it can sense pedestrians, bicyclists, other cars and road signs. It also encompasses a 360-degree view of the car interior, focusing mainly on the driver and potential distractions.

Computer vision technology has reached a stage where detailed driving behavior can be extracted from images that capture your facial expressions and body language as you drive. These can be analyzed in real time using machine learning, allowing your car to understand things like where on the road you’re looking or whether you’re checking your side mirrors. It can even detect improper phone usage and then evaluate immediate risks in the driving situation. When this technology is combined with vehicle-to-vehicle communications, the system can alert you and your car when drivers around you are distracted and not paying adequate attention to driving conditions.
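
To make the idea concrete, here’s a minimal sketch of the kind of rule-based attention check such a system might run on each gaze sample. The GazeSample fields and the angle thresholds are illustrative assumptions, not values from the VI-DAS project; a production system would use learned models rather than fixed rules.

```python
# A minimal sketch of a rule-based driver-attention check.
# Field names and thresholds are illustrative assumptions only.
from dataclasses import dataclass

@dataclass
class GazeSample:
    yaw_deg: float       # horizontal gaze angle, 0 = straight ahead
    pitch_deg: float     # vertical gaze angle, 0 = horizon
    phone_detected: bool # hypothetical output of a phone-use detector

def classify_attention(sample: GazeSample) -> str:
    """Map one gaze sample to a coarse attention state."""
    if sample.phone_detected:
        return "distracted: phone use"
    if abs(sample.yaw_deg) > 60:   # looking far to the side, e.g. a mirror check
        return "mirror or shoulder check"
    if sample.pitch_deg < -25:     # looking down, e.g. at the center console
        return "eyes off road"
    return "eyes on road"

print(classify_attention(GazeSample(yaw_deg=5.0, pitch_deg=-30.0, phone_detected=False)))
# -> eyes off road
```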

Understanding driver behavior

Knowing a driver’s cognitive awareness in real time is a requirement for the safe transfer of control between human drivers and higher-level automated driving systems. When transferring control back to the driver, the vehicle assistant will need to verify that the driver’s hands are on the steering wheel. It must also confirm that the driver is looking at the road ahead and is aware of the immediate road risks.
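
A readiness check of this kind can be expressed as a simple gate over the monitored signals. The sketch below is a hypothetical illustration: the signal names and the all-signals-positive rule are assumptions, not the actual VI-DAS handover logic.

```python
# A sketch of the readiness gate a vehicle assistant might run before
# handing control back to the driver. Signal names are assumptions.
from dataclasses import dataclass

@dataclass
class DriverState:
    hands_on_wheel: bool
    gaze_on_road: bool
    acknowledged_risks: bool  # e.g. driver confirmed an HMI risk prompt

def ready_for_handover(state: DriverState) -> bool:
    """Only transfer control when every readiness signal is positive."""
    return state.hands_on_wheel and state.gaze_on_road and state.acknowledged_risks

state = DriverState(hands_on_wheel=True, gaze_on_road=True, acknowledged_risks=False)
print(ready_for_handover(state))  # -> False: keep automated control engaged
```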

Vicomtech-IK4 is applying computer vision to capture cues such as the driver’s eye movements, gaze direction and emotional state. These cues can then be fused with vehicle driving performance to construct a comprehensive profile of a driver’s behavior. Honda Research Institute Europe is analyzing driving situations from the driver’s perspective to estimate situational risk factors and evaluate the best behavior options available to both driver and vehicle for alleviating risks in specific situations.
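
As a rough illustration of what fusing interior cues with vehicle performance might look like, the toy function below combines a few indicators into a single normalized risk score. The features, weights and saturation points are invented for the example; a real system would learn them from data.

```python
# A toy fusion of interior cues and vehicle performance into one risk
# score in [0, 1]. Features, weights and limits are assumptions.
def risk_score(eyes_off_road_s: float, lane_deviation_m: float,
               harsh_brakes_per_km: float) -> float:
    """Weighted combination of normalized risk indicators, clipped to [0, 1]."""
    score = (0.5 * min(eyes_off_road_s / 2.0, 1.0)     # 2 s off-road saturates
             + 0.3 * min(lane_deviation_m / 0.5, 1.0)  # 0.5 m deviation saturates
             + 0.2 * min(harsh_brakes_per_km / 3.0, 1.0))
    return min(score, 1.0)

print(round(risk_score(eyes_off_road_s=1.5, lane_deviation_m=0.2,
                       harsh_brakes_per_km=1.0), 2))  # -> 0.56
```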

IBM is exploring how all the information gathered about driver behavior and risk awareness can be correlated with exterior environmental information, such as road conditions and similar data gathered from other cars on the road, to provide drivers with a detailed history of their behavior. This information can be combined and analyzed through cloud services to uncover insights for drivers. Ultimately, monitoring the operation of vehicles at scale requires a solution like IBM IoT for Automotive, which offers pre-packaged services to track and score driver behavior, as well as contextual mapping that provides real-time road conditions.
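
For a sense of how such a behavior summary might be shipped to a cloud backend, here’s a minimal sketch that posts a small telemetry payload over HTTPS. The endpoint URL, payload schema and identifiers are placeholders for illustration; this does not reflect the actual IBM IoT for Automotive API.

```python
# A minimal sketch of uploading a driver-behavior summary to a cloud
# service. The endpoint and payload schema are hypothetical placeholders.
import requests

payload = {
    "vehicle_id": "demo-vehicle-001",   # hypothetical identifier
    "risk_score": 0.42,
    "eyes_on_road_pct": 91.5,
    "segment_id": "road-segment-123",   # hypothetical map segment
}

resp = requests.post(
    "https://telemetry.example.com/v1/driver-behavior",  # placeholder endpoint
    json=payload,
    timeout=5,
)
resp.raise_for_status()  # fail loudly if the upload was rejected
```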

What this means for tomorrow’s drivers

All this data can change the driving experience. For example, a system based on eye-gaze detection could reveal that you’re not noticing the advanced driver-assistance system (ADAS) alerts delivered to you via various human-machine interfaces (HMIs) while driving. Gathering such insights from a larger population allows OEMs to improve the HMIs by understanding which drivers aren’t benefiting from them. And facial expression detection could provide important insights about specific locations that have proved difficult for a driver to maneuver, such as a particular highway merge or lane change that produces high stress levels and emotional changes. Once the system learns a driver’s routines, it can offer insights into how other, less stressed drivers have managed that same situation.
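
One way to surface such gaps is to check whether the driver’s gaze reached the HMI around the time each alert fired. In the hypothetical sketch below, the timestamps, event format and the two-second matching window are all assumptions made for illustration.

```python
# A sketch of flagging ADAS alerts the driver likely never saw, by
# matching alert times against HMI glances. Data and window are invented.
alert_times = [12.0, 47.5, 103.2]       # seconds when ADAS alerts were shown
gaze_on_hmi_times = [12.8, 104.0]       # seconds when gaze hit the HMI region

def missed_alerts(alerts, gazes, window_s=2.0):
    """Return alerts with no HMI glance within window_s seconds."""
    return [t for t in alerts
            if not any(abs(g - t) <= window_s for g in gazes)]

print(missed_alerts(alert_times, gaze_on_hmi_times))  # -> [47.5]
```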

Creating a complete driving picture

There also is an opportunity to develop a collective driving assistant that combines all individual driver safety insights, gathered from you and others, with contextual information such as weather or accidents. This data can then be correlated with the detailed road network map to identify patterns of anomalous driver behavior experienced by many drivers. Or it could highlight issues with the road layout itself, such as misplaced traffic signs.
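
A simple version of that pattern-finding step might count anonymized anomaly events per road segment and flag the segments many drivers struggle with. The segment IDs and the reporting threshold below are invented for illustration.

```python
# A toy aggregation of anonymized anomaly events by road segment.
# Segment IDs, event labels and the threshold are assumptions.
from collections import Counter

events = [
    ("segment-A12", "high_stress"), ("segment-A12", "hard_brake"),
    ("segment-B07", "high_stress"), ("segment-A12", "high_stress"),
]

counts = Counter(segment for segment, _ in events)
hotspots = [seg for seg, n in counts.items() if n >= 3]  # assumed threshold
print(hotspots)  # -> ['segment-A12']
```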

This would allow individual drivers to learn how their behavior compares to others’, so you could improve your cognitive awareness when similar road scenarios occur. And not just drivers: safety authorities reviewing anonymized samples of such driver behavior insights could learn where and why critical situations or accidents occur. They could learn whether these are caused by poorly designed roads, and even whether a new billboard distracted drivers to the point that they didn’t notice a specific traffic sign.

IBM is exploring how information collaboratively gathered from vehicles can be transferred to the cloud using Watson IoT. There, it can be visualized to show actual driving scenarios in which all drivers’ gazes are shown on a map. Presenting this information from multiple cars can help determine appropriate speed limits and inform road planning, and it also helps drivers identify gaps in their own contextual awareness.

The research and activities described in this article include work that IBM Ireland and partners are carrying out as part of the VI-DAS project. More on VI-DAS and the activities of project partners can be found on the VI-DAS project website.

This project aligns with IBM’s broader initiative demonstrating that monitoring driver behavior and understanding drivers can lead to innovative safety solutions. Find out more at our “Cars that Care” site.

And for more details on IBM IoT for Automotive, visit our IBM Marketplace site.

About the authors:

Dr. Cristian Olariu is a Research Engineer at IBM’s Innovation Exchange in Dublin, Ireland. There, his main research focus is on wireless access technologies for time-critical applications, with an emphasis on automotive scenarios. He has a proven track record in wireless networking, cellular network architectures, software-defined networks and service provisioning for time-critical applications.


Gary Thompson is a Solution Architect and Technical Manager on IBM’s Innovation Exchange team in Dublin, Ireland. He is currently working on automotive projects that focus on V2X communications and cloud infrastructures to support ADAS development. Gary has a technical interest in data management and information modeling methods that enable better understanding, application and governance among all stakeholders in the connected car ecosystem.


