A job title typically offers a modest description of a profession: accountant, attorney, engineer, scientist. Sometimes, though, a title is spontaneously bestowed on someone who brings a unique point of view to her profession. Which is how Maryam Ashoori, a computer scientist and designer by training, was named IBM’s “cool things czar.”
Maryam joined IBM Research’s Cognitive Environments Lab in May 2014. The lab had cameras, screens, and microphones everywhere. But Maryam felt the environment lacked a critical element – “how do we make this space fun?” Using her computer science, engineering, design, and art skills, she started investigating how to make an already-interactive room truly immersive. Her effort to influence the lab’s art and design caught the attention of Brian Gaucher, the lab’s director at the time.
Profile of a czar
Name: Maryam Ashoori
Occupation: computer scientist and designer at IBM Research
Education: PhD, Systems Design Engineering from the University of Waterloo
Patents: 12 filed
Favorite work of art: Picasso’s Three Musicians
“Brian sent a note out to our organization asking about what kind of ‘cool-factor’ could be added to the lab … and he pointed to some of my ideas, referring to me as the ‘cool things czar,’” Maryam said.
That email not only conferred a new title upon Maryam, but also afforded her a summer intern to help turn her ideas into reality.
“I hired Kevin Jih, an undergrad studying computer science at the University of California-Santa Barbara, to help make the lab more visual, inspiring, and interactive. For example, we developed a cognitive service to help a team brainstorm creative ideas. So, a person could ask the room to ‘inspire me with fractals’ and the room would not just fill every screen with images of fractals, but also adjust the lighting of the room according to the predominant colors appearing throughout the images.
“[IBM Research Vice President] Dario Gil also wanted to connect the space to external services. So, Kevin and I connected the room to The New York Times trends. Users could connect to The Times with hand gestures – mapping stories to different screens, even zooming in on images, or moving the information onto other connected devices,” Maryam said.
In this Q&A Maryam talks about the lab, recent projects, and how she stays cool.
Inspiration mode – the room’s ambient color is changed to reflect the dominant color of the retrieved images.
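The ambient-lighting step described above boils down to dominant-color extraction. Here is a minimal sketch in Python, assuming the retrieved images’ pixels are available as RGB tuples (for example via Pillow’s `Image.getdata()`); the channel-bucketing scheme is an illustrative choice, not the lab’s actual implementation:

```python
from collections import Counter

def dominant_color(pixels, bucket=64):
    """Return the most common coarse RGB color among (r, g, b) pixels.

    Each channel is quantized into bucket-wide bins so that
    near-identical shades count as one color; the bin center is returned.
    """
    def quantize(c):
        return (c // bucket) * bucket + bucket // 2

    counts = Counter(tuple(quantize(c) for c in px) for px in pixels)
    return counts.most_common(1)[0][0]

# A mostly-orange fractal with a few blue pixels:
pixels = [(250, 130, 20)] * 90 + [(30, 40, 200)] * 10
print(dominant_color(pixels))  # → (224, 160, 32)
```

The resulting color could then be sent to the room’s lighting controller, however that system exposes its API.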
What is the big picture goal of the lab and its cognitive objects?
Maryam Ashoori: My grand vision of cognitive computing is that in the future, instead of having robots everywhere, cognitive services will be blended into our everyday objects in an invisible way. I’ve been augmenting the objects that we love and use every day with sensors and actuators, and then empowering them using our Watson cognitive services. For example, our team put pressure sensors on a chair to monitor Parkinson’s patients’ disease progression. Simple data like the time it takes for someone to get out of a chair can be valuable to a healthcare provider.
What does an object in a cognitive environment offer over wearables?
MA: Every day we interact with objects around us – chairs, lamps, desks – without really thinking about that interaction. The way you sit in a chair is a function of the form of that chair. But the moment you sit in the chair, it can also play the role of a wearable and start collecting biometric data, such as your heart rate and respiration.
The advantage of gathering biometrics using a chair, like the one we’re testing for Parkinson’s patients, versus a wearable, is that a chair requires no preparation. A person can just sit in the chair and immediately have their biometrics captured and analyzed. A wearable, such as a smart shirt, requires people to change their clothes, put on a device, or plug the device into something else, which may be cumbersome or conflict with what the person is already wearing.
How do you build a cognitive chair, and where are the sensors?
Office chair with sensors and smart fabric.
MA: When designing the cognitive chair, we recognized that it could carry a wide variety of sensors to measure not only the personal state of the occupant, but the environmental state as well. We instrumented a basic office chair with custom-built pressure sensors and temperature sensors in the seat, along with a microphone on the back; and we used a small computing device running a machine-learning algorithm to detect posture and movement.
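Before reaching for a learned model like the one the lab uses, the intuition behind posture detection can be sketched with simple rules over the seat’s pressure readings. The four-sensor layout, thresholds, and labels below are illustrative assumptions, not the lab’s actual setup:

```python
def classify_posture(front_left, front_right, back_left, back_right,
                     empty_threshold=5.0):
    """Rough posture label from four seat pressure readings (arbitrary units).

    A real system would train a classifier on labeled sensor data; this
    rule-based version only illustrates the kinds of features involved.
    """
    total = front_left + front_right + back_left + back_right
    if total < empty_threshold:
        return "empty"
    front = front_left + front_right
    back = back_left + back_right
    if front > 1.5 * back:
        return "leaning forward"   # weight shifted toward the seat edge
    left = front_left + back_left
    right = front_right + back_right
    if abs(left - right) > 0.4 * total:
        return "leaning sideways"
    return "seated upright"

print(classify_posture(10, 10, 12, 12))  # → seated upright
print(classify_posture(20, 20, 5, 5))   # → leaning forward
```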
How else can a chair like this help a Parkinson’s patient?
MA: We want to monitor Parkinson’s patients over time. According to the Parkinson’s Disease Foundation, it affects about 1 million people in the U.S., and doctors diagnose as many as 60,000 new cases each year. It’s a progressive condition: the symptoms worsen over time, and new ones may appear as well. It’s also difficult to estimate how quickly Parkinson’s will progress in each person it afflicts. Our goal is to understand how a cognitive chair can passively measure activities such as how long it takes a patient to rise from a chair, how their weight distribution shifts as they rise, and how much pressure they put on the arm rests to support themselves. This kind of data may be an indicator of the disease’s progress.
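The sit-to-stand measurement Maryam describes can be sketched as a threshold crossing on the chair’s total pressure signal: the rise begins when pressure starts dropping from the seated baseline and ends when the seat is nearly unloaded. The sampling format and the 90%/5% fractions below are illustrative assumptions:

```python
def sit_to_stand_seconds(samples, baseline, start_frac=0.9, end_frac=0.05):
    """Estimate sit-to-stand duration from (seconds, total_pressure) samples.

    The rise is taken to begin when pressure first falls below
    start_frac * baseline and to end when it falls below
    end_frac * baseline. Returns None if no complete rise is seen.
    """
    start = end = None
    for t, p in samples:
        if start is None and p < start_frac * baseline:
            start = t
        if start is not None and p < end_frac * baseline:
            end = t
            break
    if start is None or end is None:
        return None
    return end - start

samples = [(0.0, 100), (0.5, 98), (1.0, 80), (1.5, 40), (2.0, 10), (2.5, 2)]
print(sit_to_stand_seconds(samples, baseline=100))  # → 1.5
```

A slower rise, or heavier reliance on the armrest sensors during it, would be the kind of trend a clinician could track over months.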
What other cognitive object projects are you working on?
MA: I’ve explored a number of different form factors for cognitive objects. For example, I made a cognitive dress that measures pulse and skin conductivity to determine if the wearer is feeling stressed. It also connects to Twitter to show the emotional state of people in real time, based on how frequently they use the hashtags #love and #hate.
Cognitive dress that understands sentiment from tweets.
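The dress’s Twitter-driven display reduces to mapping the balance of #love and #hate counts to a color. The linear red/green blend below is an illustrative choice, not the project’s actual mapping:

```python
def mood_color(love_count, hate_count):
    """Map #love vs. #hate tweet counts to an RGB color for the dress."""
    total = love_count + hate_count
    if total == 0:
        return (128, 128, 128)            # neutral gray when no signal
    love_ratio = love_count / total
    red = round(255 * (1 - love_ratio))   # more #hate → redder
    green = round(255 * love_ratio)       # more #love → greener
    return (red, green, 0)

print(mood_color(30, 10))  # → (64, 191, 0)
```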
I’ve also made a cognitive lamp that changes color based on the state of our lab. While the speech recognition engine is listening to a user, the lamp glows green to show that the room is listening. Before the lamp, it was hard to tell whether the cognitive services in the room were operating or not.
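The lamp’s behavior amounts to a lookup from room state to color. In this minimal sketch, only the green “listening” state comes from the article; the other states and colors are assumptions:

```python
LAMP_COLORS = {
    "listening": (0, 255, 0),     # green while speech recognition is active
    "thinking": (0, 0, 255),      # assumed: blue while a request is processed
    "idle": (255, 255, 255),      # assumed: plain white otherwise
}

def lamp_color(room_state):
    """Color to show for the current room state, defaulting to idle."""
    return LAMP_COLORS.get(room_state, LAMP_COLORS["idle"])

print(lamp_color("listening"))  # → (0, 255, 0)
```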
Does someone in a room like this activate these experiences, or are they automatic?
MA: In our current implementation, these ambient features activate through voice interaction with a cognitive agent (called Celia). The agent changes the environmental state based on the request of the user. In the future, we expect the agent to automatically trigger different environmental states by sensing the needs and moods of the occupants.
For example, we can measure the vocal stress and ambient loudness of room occupants to infer a tense situation, and defuse it by manipulating the room state using our Zen Garden visualization, which dims the lights, plays calming music, and shows meditative imagery on the walls.
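One simple way to realize such a loudness trigger is to compute the RMS level of incoming audio frames and fire when several consecutive frames are loud. The threshold, frame count, and audio format here are illustrative assumptions, not the lab’s actual signal pipeline:

```python
import math

def rms(samples):
    """Root-mean-square level of one audio frame (floats in [-1, 1])."""
    return math.sqrt(sum(s * s for s in samples) / len(samples))

def should_trigger_zen(frames, loudness_threshold=0.6, min_frames=3):
    """Trigger the calming Zen Garden state when several consecutive
    frames exceed the loudness threshold (reduces false alarms from
    one-off noises like a dropped object)."""
    streak = 0
    for frame in frames:
        streak = streak + 1 if rms(frame) > loudness_threshold else 0
        if streak >= min_frames:
            return True
    return False

quiet = [[0.1, -0.1, 0.05]] * 5
shouting = [[0.9, -0.8, 0.85]] * 4
print(should_trigger_zen(quiet), should_trigger_zen(shouting))  # → False True
```

A production version would also fold in vocal-stress features, but the consecutive-frame debouncing pattern carries over unchanged.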
What’s your background, and how do you stay ‘cool’?
MA: I’m part computer scientist, part user experience analyst, and part interface artist – and I love exploring the intersection of these fields. After earning a PhD in Systems Design Engineering, I joined the Ontario College of Art and Design for an Art and Design Studio program. That was the starting point of my exploration into wearable media as the next generation of interactive technologies.
To stay ‘cool,’ I study how people interact with their environments and try to create technologies that seamlessly integrate into them – always asking questions such as who am I designing for? How will they use my design? What factors make it desirable to use?
I don’t envision a future as depicted in sci-fi movies, with robots and futuristic screens all over the place. Rather, I imagine that intelligence will be invisibly embedded into the existing objects we already own. This lets us maintain a sentimental relationship with our objects while also transforming them into something more powerful.