August 30, 2018 | Written by: Tony Storr
Categorized: AI | Mobile Computing
If you’re a company that provides physical products or services, then you probably have engineers out in the field inspecting, servicing and maintaining those products. The role of the so-called field engineer is by definition highly mobile, and apps are making a huge difference to these workers.
Early apps consolidated workflow, logistics and job-related information. This led to paper reduction and more productive time in the field, but little or no function was provided for the actual task itself.
The new breed of field engineer apps embraces the convergence of maturing Artificial Intelligence, Augmented Reality and leaps in the technological capability of mobile devices.
Field engineers can now augment their own senses, their reading of the situation and their judgment of the best action to take with the senses and opinion offered by their device; human and device work together as a team.
These senses of human and device are complementary. Point the camera at a circuit and the device can tell you which version of each component is installed, flag anything that looks like an issue and give a confidence level. It may also notice things beyond the visible spectrum, or even use thermal imaging (currently via a peripheral, but it will be in mainstream smartphones before too long).
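As a minimal sketch of that "flag it with a confidence level" step, the logic below turns raw classifier scores into an inspection verdict. The function and label names are hypothetical, for illustration only, not from any specific product:

```python
# Illustrative sketch: turning an image classifier's scores into an
# inspection verdict with a confidence level. All names here are
# hypothetical, not from any real inspection app or API.

def inspect_component(scores, threshold=0.8):
    """Pick the most likely label and decide whether to trust it.

    scores: dict mapping a label (e.g. a component revision or fault
    type) to a probability from the classifier.
    Returns (label, confidence, needs_human_review).
    """
    label = max(scores, key=scores.get)
    confidence = scores[label]
    # Below the threshold, defer to the engineer's own judgment.
    return label, confidence, confidence < threshold

# Example: the classifier is fairly sure this is a rev-B board.
label, conf, review = inspect_component(
    {"rev_a": 0.05, "rev_b": 0.91, "corrosion": 0.04})
```

The point of the threshold is exactly the teamwork described above: when the device is unsure, the result is handed back to the human rather than presented as fact.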
And it’s not all visual. Put the device on the floor of the elevator you are servicing and let it listen to the sound of the mechanism, feel the vibrations, take readings with the accelerometer and triangulate to determine exactly where it is.
As the human, you may smell burning, touch things or provide additional information that is beyond the context or abilities of the device. And you discuss the situation and next steps in natural language. This is key: you talk as you would with a colleague throughout. Expressing frustration, relief or satisfaction actually helps the AI engine learn.
Rectification of the issue may then use augmented reality to guide you through the fix, overlaid on-screen while you look at the component, or placed alongside it as a digital twin.
Whatever the result, the AI learns: it adds to the visual, auditory and other knowledge that expands the catalog of encountered situations, and it also learns from the steps in the dialogue and the forms of communication that led to the successful or unsuccessful result.
So is it that easy? Largely, yes, and such use cases can be implemented at a fraction of the cost required a few years ago.
The technology is all established, and many of our clients now have us developing such apps. And with advances like the combination of IBM Watson with Apple’s Core ML, such models can work even offline. The only real issue is a drop in speech recognition and conversation quality in a noisy work environment (common to many field engineer roles).
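The offline pattern mentioned here usually comes down to routing: use the cloud service when the network is reachable, fall back to a model bundled on the device otherwise. A hypothetical sketch of that routing decision, with illustrative names only and no real Watson or Core ML API calls:

```python
# Hypothetical sketch of the offline-fallback pattern: prefer the
# cloud model when connected, fall back to the on-device model when
# not. Function and parameter names are illustrative assumptions.

def classify(image, online, cloud_model, local_model):
    """Route inference to the cloud service when reachable,
    otherwise to the bundled on-device model."""
    model = cloud_model if online else local_model
    return model(image)

# Usage: in the field with no signal, the local model still answers.
result = classify("frame.jpg", online=False,
                  cloud_model=lambda img: "cloud:" + img,
                  local_model=lambda img: "local:" + img)
```

The design choice here is that the app degrades gracefully: the engineer keeps working with the on-device model, and results can be reconciled with the cloud service when connectivity returns.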
However, the benefit of natural chat combined with having both hands free is so compelling that basic solutions with headsets and microphones are often the simplest means of achieving high-quality results.
From the business perspective, such apps achieve a compelling return on investment, primarily through increases in the first-time fix rate, with additional benefits of reduced training costs and improved employee and customer satisfaction.
This leads to a natural progression: if some of the same technology is re-used in an enterprise’s consumer app, a new opportunity arises.
Obviously, we wouldn’t expect passengers to start inspecting railroad tracks, but allowing consumers to use these features to diagnose a simple problem with their dishwasher will lead to much greater self-service, or provide additional information to the app and the field engineer if one is required.
Enjoy your next conversation with your phone about your refrigerator.