December 7, 2017 | Written by: Carrie Kirby
There are two reasons that Michael Ludden, director of product at IBM Watson Developers Labs & AR/VR Labs, starts his presentations with a video of Star Trek actors playing the virtual reality game Star Trek Bridge Crew.
First, it’s a perfect example of how his labs are melding Watson’s artificial intelligence capabilities with the burgeoning field of VR. The game uses his team’s Virtual Reality Sandbox to allow players to issue verbal commands to the starship computer and non-player characters. Second, he’s just a huge Trekkie, practically jumping up and down when LeVar Burton and Jeri Ryan appear on screen.
Ludden brings the same excitement to conversations about his vision for the future of VR, a vision he is working to realize with IBM’s development team. He shared with us four elements of his perspective on the future of VR:
1. Voice will be ubiquitous, but not exclusive
The Star Trek game shows how much sense it makes to use voice commands during VR experiences, whether it’s a game, a training program, or a creative work session. Ludden’s team combined several IBM technologies to design the VR Sandbox, a ready-made, customizable interactive speech interface. The VR Sandbox pairs Watson’s speech recognition capabilities with the microphones on VR headsets to allow users to issue commands naturally, without prescribed language or wake words (trigger words used to “wake up” an AI). In the team’s VR Sandbox proof of concept, the user can create a wide variety of objects just by asking for them, such as “give me a tiny blue dragon” or “I want a large black box.”
Ludden envisions VR users issuing voice commands for requests that are currently carried out by selecting from menus or performing repetitive movements that could get tiresome, such as changing tools in a surgeon’s training or an artist’s 3D sculpting session. At the same time, actions that feel more natural as movements would stay that way. “You wouldn’t use voice to do the sculpture, you would use your hands,” explains Ludden.
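The object-creation commands described above can be sketched as a small intent parser that maps a transcribed utterance onto spawn parameters. This is an illustrative sketch only, not IBM's VR Sandbox code; the vocabularies, class names, and parsing rules here are all invented for the example, and a real system would get the transcript from a speech-to-text service.

```python
# Hypothetical sketch: turning a transcribed voice request like
# "give me a tiny blue dragon" into object-spawn parameters.
# All names and vocabularies are illustrative, not IBM's API.
import re
from dataclasses import dataclass
from typing import Optional

SIZES = {"tiny", "small", "large", "huge"}
COLORS = {"red", "blue", "black", "green", "white"}

@dataclass
class SpawnCommand:
    obj: str
    size: Optional[str] = None
    color: Optional[str] = None

def parse_spawn(transcript: str) -> Optional[SpawnCommand]:
    """Extract size/color/object from phrases like
    'give me a tiny blue dragon' or 'I want a large black box'."""
    words = re.findall(r"[a-z]+", transcript.lower())
    if not words:
        return None
    size = next((w for w in words if w in SIZES), None)
    color = next((w for w in words if w in COLORS), None)
    # In these simple phrases, the final word is the object noun.
    return SpawnCommand(obj=words[-1], size=size, color=color)

print(parse_spawn("Give me a tiny blue dragon"))
# SpawnCommand(obj='dragon', size='tiny', color='blue')
```

The point of the sketch is the division of labor Ludden describes: the speech service handles free-form language, so the application layer only needs to pull out the attributes it cares about.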
2. The end of skeuomorphism
VR and AR can achieve great things by imitating “real” reality, such as recreating an airplane down to the last seat cushion for a training video or a publicity shoot. Yet, VR will really get its wings when creative minds break past the strictures of imitating reality, and accept the limitless nature of virtual worlds, Ludden asserts.
He likens the progression he expects in VR to the one we’ve seen on iPhones: from “skeuomorphic” design, where items and environments are made to look and feel like their real-world counterparts, to more symbolic designs that offer flexibility no physical object could.
Take the notepad feature on early iPhones. “The notepad looks like a physical one, it’s yellow, it has lines, when you type, it writes in cursive. It’s all to make you feel comfortable. [But later], Apple and Google began to take advantage of the unique properties of that platform. The notepad is white and you don’t have to have pagination, I can write on one page forever because it isn’t a physical piece of paper,” Ludden says.
3. AI and VR join forces
At the Virtual Reality Strategy Conference in San Francisco, where Ludden spoke, the imperative to combine AI and VR came up in nearly every session. Ludden’s labs have already demonstrated one form of that meld in the VR Sandbox, which incorporates two Watson machine learning services, Speech to Text and Conversation, the latter of which also powers chatbots. Ludden has high expectations for the VR Sandbox’s future in enabling developers to create VR experiences with which people can communicate verbally.
Beyond VR Sandbox, Ludden thinks that entertainment will still be a major front for VR and AI to work in concert, since, for example, there are so many opportunities to create truly intelligent NPCs (non-player characters) in video games. He invites the listener to imagine playing Pac-Man where the ghosts don’t repeat predictable patterns, but try new strategies and tricks to catch Pac-Man. “We and many other companies are trying to think of ways to create algorithms and algorithm systems that are easy to consume for game developers that would deliver the benefits of machine learning,” Ludden says.
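The Pac-Man thought experiment can be made concrete with a toy sketch: a ghost that, instead of looping a fixed patrol, counts which way the player habitually flees and biases its next move toward that habit. This is an invented illustration of the idea, not anyone's shipping game AI.

```python
# Illustrative sketch: an NPC that adapts rather than repeating a
# fixed pattern. It tracks the player's observed escape directions
# and prefers to move toward the player's most frequent habit.
import random
from collections import Counter

class AdaptiveGhost:
    def __init__(self) -> None:
        self.flee_history = Counter()  # direction -> times observed

    def observe_escape(self, direction: str) -> None:
        """Record which way the player fled after an encounter."""
        self.flee_history[direction] += 1

    def choose_move(self, options: list[str]) -> str:
        # Prefer the player's most common escape direction among the
        # legal moves; fall back to a random move with no data yet.
        for direction, _ in self.flee_history.most_common():
            if direction in options:
                return direction
        return random.choice(options)

ghost = AdaptiveGhost()
for d in ["left", "left", "up", "left"]:
    ghost.observe_escape(d)
print(ghost.choose_move(["left", "right"]))  # "left": the learned habit
```

A frequency counter is the simplest possible "learning" policy; the machine-learning systems Ludden describes would replace it with a trained model, but the structure, observe the player and condition behavior on those observations, is the same.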
The potential doesn’t stop at entertainment. “AI is going to be used for VR and AR to do a lot of things around personalization and custom tailoring of systems. Like education: Systems can adapt on the fly to a user’s needs. If they’ve been staring for a long time or they make aggressive gestures in VR and it seems like they’re getting frustrated, something could be seamlessly transitioned into another method of displaying information” to help them get unstuck, Ludden suggests.
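The adaptive-tutoring idea Ludden sketches can be expressed as a simple rule: watch engagement signals and, when frustration seems likely, switch to a different way of presenting the material. The thresholds, signal names, and display modes below are invented for illustration; a real system would learn these from data.

```python
# Hypothetical sketch of frustration-driven adaptation in a VR
# tutoring system. Signals and thresholds are illustrative only.
from dataclasses import dataclass

@dataclass
class Signals:
    stare_seconds: float      # time fixated on a single element
    aggressive_gestures: int  # abrupt controller motions detected

def next_mode(current: str, s: Signals) -> str:
    """Escalate text -> diagram -> guided_demo as frustration rises."""
    frustrated = s.stare_seconds > 30 or s.aggressive_gestures >= 3
    if not frustrated:
        return current
    order = ["text", "diagram", "guided_demo"]
    i = order.index(current) if current in order else 0
    return order[min(i + 1, len(order) - 1)]

print(next_mode("text", Signals(stare_seconds=45, aggressive_gestures=0)))
# "diagram": a long stare triggered a switch to a visual explanation
```

The "seamless transition" Ludden describes is exactly this escalation step, ideally applied without the learner ever asking for help.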
4. The next generation will have its own definition of reality
Just as today’s youth are digital natives, kids being born today will likely grow up taking virtual characters and environments for granted. How will that mold their understanding of the nature of reality? Ludden believes the effect will be profound.
“We have hard and fast mental rules about what’s real and what’s not: If it’s physical and I can touch it, smell it, it’s real,” Ludden says. “The next generation will have a more fluid, pliable view of what reality is.” Today’s young children may come to view intelligent assistants or companions, whether within a VR context or in everyday life, as nearly as real as their parents and teachers.