How Augmented Reality is Moving from Gaming to the Office


Enterprises have long shown interest in mobile app use cases that combine Artificial Intelligence and Augmented Reality, but with very few tangible or practical implementations. Until now.

Simultaneous recent changes to mobile device hardware, cloud technology and cognitive APIs mean that these use cases can now be realized at a fraction of the cost and effort that would have been required just a few years ago. Augmented reality is breaking out of consumer gaming and furniture-placement apps to be at the forefront of transformational employee apps across industries.

So, what can these apps do? The short answer is: whatever you can imagine. But typically the functions fall into five key areas: visual recognition, cognitive diagnosis, augmented assistance, cognitive assistance, and learning. Here’s how they break out:

Visual Recognition. This enables you to point your device at something and have the device tell you what it is, with a certain degree of confidence. For an engineer servicing machines that can have many thousands of components, each with multiple versions, this is revolutionary. (This is a real use case for Coca-Cola engineers servicing vending machines.)
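To make that concrete, here is a minimal sketch of what on-device recognition looks like with Apple's Vision framework and a Core ML model. The PartsClassifier model is a hypothetical stand-in for a classifier trained on machine components; the rest is the standard Vision API.

```swift
import CoreML
import Vision

// Minimal sketch: classify a photo of a component with a bundled Core ML
// model. "PartsClassifier" is a hypothetical model trained on machine parts.
func identifyComponent(in image: CGImage) throws {
    let model = try VNCoreMLModel(for: PartsClassifier().model)

    let request = VNCoreMLRequest(model: model) { request, _ in
        guard let results = request.results as? [VNClassificationObservation],
              let top = results.first else { return }
        // Each observation carries a label and a confidence score.
        print("This looks like a \(top.identifier) (\(Int(top.confidence * 100))% confidence)")
    }

    try VNImageRequestHandler(cgImage: image, options: [:]).perform([request])
}
```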

Cognitive Diagnosis. This is where the user and AI work together. For example, your device may tell you that a capacitor appears discolored. You then tell the device that the unit is not powering on, and from this combined evidence the device suggests what the problem is, again with a degree of confidence.
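A real diagnosis engine would be a trained model, but a naive sketch shows the idea of combining the two evidence streams. The fault table and scoring below are purely illustrative assumptions, not how Watson actually does it.

```swift
// Illustrative only: a naive way to combine visual evidence with
// user-reported symptoms into a ranked diagnosis with confidence scores.
struct Evidence {
    let visualFindings: Set<String>   // e.g. from image classification
    let reportedSymptoms: Set<String> // e.g. from the technician
}

struct Fault {
    let name: String
    let indicators: Set<String>
}

let knownFaults = [
    Fault(name: "failed capacitor",
          indicators: ["discolored capacitor", "unit not powering on"]),
    Fault(name: "blown fuse",
          indicators: ["unit not powering on", "burn smell"]),
]

func diagnose(_ evidence: Evidence) -> [(fault: String, confidence: Double)] {
    let observed = evidence.visualFindings.union(evidence.reportedSymptoms)
    return knownFaults
        .map { fault in
            let matches = fault.indicators.intersection(observed).count
            return (fault: fault.name,
                    confidence: Double(matches) / Double(fault.indicators.count))
        }
        .sorted { $0.confidence > $1.confidence }
}

let ranked = diagnose(Evidence(
    visualFindings: ["discolored capacitor"],
    reportedSymptoms: ["unit not powering on"]))
// ranked.first is ("failed capacitor", 1.0)
```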

Augmented Assistance. This is when your device doesn’t just tell you how to fix the problem, it shows you. It gives you instructions with augmented reality labels or highlighted components as you look through your screen at the failed circuit, and these stay in place even as you move around the item. And if you need both hands free, you can simply place the device next to you and view the instructions on a digital twin of your circuit.
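ARKit is what makes those labels stay in place. A minimal sketch, assuming an ARSCNView already running a world-tracking session: an ARAnchor pins a fixed real-world position, and a text node is rendered there. (A production app would attach nodes via the ARSCNViewDelegate callbacks; this is simplified.)

```swift
import ARKit
import SceneKit

// Sketch: pin an instruction label to a real-world position so it stays
// put as the user walks around the machine.
func pinLabel(_ text: String, at worldTransform: simd_float4x4,
              in sceneView: ARSCNView) {
    // An ARAnchor marks a fixed position and orientation in the real world,
    // which ARKit keeps tracking as the user moves.
    let anchor = ARAnchor(transform: worldTransform)
    sceneView.session.add(anchor: anchor)

    // Attach simple SceneKit text geometry at the anchor's position.
    let textGeometry = SCNText(string: text, extrusionDepth: 0.5)
    let node = SCNNode(geometry: textGeometry)
    node.scale = SCNVector3(0.002, 0.002, 0.002) // SCNText units are large
    node.simdTransform = worldTransform
    sceneView.scene.rootNode.addChildNode(node)
}
```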

Cognitive Assistance. This functionality allows you to talk (or type) your way through the issue in natural language, with natural responses.
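The conversation layer could be backed by any dialogue service. The protocol below is a hypothetical stand-in, not a real SDK; it just shows the shape of a turn-by-turn exchange.

```swift
// Hypothetical sketch of a conversational turn. `AssistantClient` and its
// send(_:completion:) method stand in for whatever conversation service is
// used (e.g. Watson Assistant) and are not a real API.
protocol AssistantClient {
    // Sends one user utterance and returns the assistant's reply.
    func send(_ utterance: String, completion: @escaping (String) -> Void)
}

func troubleshoot(with assistant: AssistantClient) {
    assistant.send("The unit is not powering on.") { reply in
        // e.g. "Is the capacitor next to the power inlet discolored?"
        print(reply)
        assistant.send("Yes, it looks brown around the edges.") { reply in
            // e.g. "That capacitor has likely failed. Here's how to replace it."
            print(reply)
        }
    }
}
```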

Learning. This is critical for these types of apps. They essentially understand what just happened and use the images, actions and results to update their expert knowledge of this type of situation for future cases.
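In practice this usually means sending the confirmed outcome of a repair back to wherever the model is trained. A hedged sketch, with an assumed endpoint and payload shape:

```swift
import Foundation

// Hypothetical sketch: report the confirmed outcome of a repair so the
// model can be retrained. The endpoint URL and payload are assumptions.
struct RepairOutcome: Codable {
    let imageID: String      // photo taken during diagnosis
    let diagnosis: String    // what the app suggested
    let confirmedFix: String // what actually fixed the machine
}

func reportOutcome(_ outcome: RepairOutcome) throws {
    var request = URLRequest(url: URL(string: "https://example.com/model/feedback")!)
    request.httpMethod = "POST"
    request.setValue("application/json", forHTTPHeaderField: "Content-Type")
    request.httpBody = try JSONEncoder().encode(outcome)
    URLSession.shared.dataTask(with: request).resume()
}
```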

So, what’s changed? Devices, for one thing. Today’s smartphones are equipped with two cameras on the back: one standard lens and one telephoto. This enables depth sensing, so yes, your device can see in 3D. Cognitive technology has also advanced. Everything from speech processing to complex algorithms to visual recognition has evolved rapidly. Many services are now available via easy-to-consume APIs, and the models are easier and faster to train, with less data.
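For the curious, this is roughly how an app asks the dual rear camera for depth data using AVFoundation (iOS 11 and later); error handling and configuration details are simplified.

```swift
import AVFoundation

// Sketch: configure the dual rear camera to deliver depth data alongside
// video. Assumes a device that actually has a dual camera.
func makeDepthSession() -> AVCaptureSession? {
    guard let device = AVCaptureDevice.default(.builtInDualCamera,
                                               for: .video, position: .back),
          let input = try? AVCaptureDeviceInput(device: device) else {
        return nil // no dual camera on this device
    }

    let session = AVCaptureSession()
    session.sessionPreset = .photo
    guard session.canAddInput(input) else { return nil }
    session.addInput(input)

    // The disparity between the two lenses is what yields per-pixel depth.
    let depthOutput = AVCaptureDepthDataOutput()
    guard session.canAddOutput(depthOutput) else { return nil }
    session.addOutput(depthOutput)

    return session
}
```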

In addition to these advances, augmented reality frameworks like Apple’s ARKit have brought this technology out of its niche and into the realm of mainstream developers. (The next version, in iOS 12, allows multiple users to view the same virtual objects, which is great for teams working through an issue together.)
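Under the hood, those shared sessions work by capturing and transmitting an ARWorldMap. A rough sketch follows; the transport layer (such as MultipeerConnectivity) is omitted.

```swift
import ARKit

// Sketch of ARKit 2's shared sessions (iOS 12): capture the current world
// map so a teammate's device can join the same AR scene.
func shareWorldMap(from session: ARSession,
                   send: @escaping (Data) -> Void) {
    session.getCurrentWorldMap { worldMap, error in
        guard let map = worldMap else {
            print("World map not available yet: \(String(describing: error))")
            return
        }
        if let data = try? NSKeyedArchiver.archivedData(withRootObject: map,
                                                        requiringSecureCoding: true) {
            // The receiver assigns the decoded map to
            // ARWorldTrackingConfiguration.initialWorldMap before running its session.
            send(data)
        }
    }
}
```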

With all of these advances, user expectations have changed. So many AI technologies have crept into our daily lives that they no longer seem alien in the workplace. Siri and Alexa have accustomed us to talking to our devices, Pokémon Go and other games have introduced us to AR, and we prefer the speed of a chatbot to waiting in a helpdesk queue.

IBM Watson with Apple’s Core ML

Until now there have been two options for artificial intelligence in mobile apps: extensive cloud-based models that require the device to be online, or more limited on-device capability (“is there a hotdog in this picture?”). IBM and Apple have removed this restriction, allowing complex Watson models to be built and then run natively on the device in offline mode, with learning updates taking place when the device reconnects. This removes one of the final barriers to enterprise AI/AR.
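In code, running such a model offline looks much like the earlier Vision example, except the model is loaded from a file that was downloaded while online rather than bundled with the app. The file location and the sync step below are assumptions for illustration, not the actual Watson SDK API.

```swift
import CoreML
import Vision

// Sketch: run a locally stored, Watson-trained visual recognition model
// with no network connection. Assumes the model was previously downloaded
// and compiled to a .mlmodelc directory on disk.
func classifyOffline(image: CGImage, modelURL: URL) throws {
    let mlModel = try MLModel(contentsOf: modelURL) // compiled model on disk
    let visionModel = try VNCoreMLModel(for: mlModel)

    let request = VNCoreMLRequest(model: visionModel) { request, _ in
        let results = request.results as? [VNClassificationObservation] ?? []
        for result in results.prefix(3) {
            print("\(result.identifier): \(result.confidence)")
        }
    }
    try VNImageRequestHandler(cgImage: image, options: [:]).perform([request])
}

// When connectivity returns, a hypothetical sync step could fetch a newer
// model version trained with the latest field feedback, e.g.:
// modelStore.downloadLatestModel(named: "vending-machine-parts")
```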

In future articles I will explore some of the exciting and unexpected applications of these new use cases to some key industries and job roles.

A version of this story originally appeared on the IBM THINK blog in July.
