Anyline teaches smartphones to read with deep learning technology

3 minute read | July 21, 2020


Think of workplaces around the world, whether they’re public or private, indoors or outdoors, where people stay in place or move around. One thing they all have in common is characters—specifically the alphanumeric kind. Digitization is becoming an ever-present part of life at work, but for a surprisingly large part of the workforce, manually viewing and recording short strings of analog data is a core part of their everyday tasks.

Factory workers do it for the parts they are assembling. Customs workers do it for traveler ID cards. Transportation officials do it for license plates. And auto insurance adjusters do it for vehicle identification numbers (VINs). It’s a routine task that’s increasingly performed using mobile devices such as cell phones or tablets.

But when these tasks are performed manually, human errors, most commonly in transcription, are routinely made. Even a single mistyped character can lead to big and potentially costly problems.

Teaching smartphones to read

The founders of Anyline recognized that smartphones, for all their advances, still lacked the ability to read characters in their physical, analog form the way people can. What they envisioned was a way to use recent advances in deep learning to teach smartphones to read the characters put in front of them. In effect, Anyline saw how smartphones, augmented by AI algorithms, could provide a real-time bridge between the analog and digital worlds.

In addition to the core AI technology, the real key to success was designing a solution that met the European Union's General Data Protection Regulation (GDPR) rules on consumer data protection and privacy, specifically those governing the transfer of personal data outside the EU. It meant, quite simply, that AI processing couldn't be executed in the cloud. It had to happen at the edge, on the smartphone itself.

Our answer, both innovative and economical, was to build a platform that performs AI training in the cloud while keeping execution of the trained models on the smartphone. It's a big deal, and here's how it works.

Using IBM Deep Learning as a Service, running on IBM Cloud as part of the IBM Watson Studio offering, our developers run the training data through a set of algorithms known as neural networks. By experimenting with the weights underlying these algorithms, they can make the neural networks, which are loosely modeled on the human brain, better at recognizing patterns.
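
To make that training step more concrete, here is a minimal sketch of the kind of model a developer might fit in the cloud for character recognition. The framework (TensorFlow/Keras), the network shape, and the digit-only label set are assumptions made for illustration; the article does not describe Anyline's actual models or training data.

```python
# Illustrative sketch only: a small convolutional network for classifying
# single character crops (here assumed to be digits 0-9). Anyline's real
# models and training pipeline are not public.
import tensorflow as tf

NUM_CLASSES = 10               # assumption: digits 0-9, as on a meter dial
IMG_HEIGHT, IMG_WIDTH = 32, 32 # assumption: small grayscale character crops

def build_character_model():
    """A compact CNN that maps a grayscale character crop to a class label."""
    return tf.keras.Sequential([
        tf.keras.layers.Input(shape=(IMG_HEIGHT, IMG_WIDTH, 1)),
        tf.keras.layers.Conv2D(32, 3, activation="relu"),
        tf.keras.layers.MaxPooling2D(),
        tf.keras.layers.Conv2D(64, 3, activation="relu"),
        tf.keras.layers.MaxPooling2D(),
        tf.keras.layers.Flatten(),
        tf.keras.layers.Dense(128, activation="relu"),
        tf.keras.layers.Dense(NUM_CLASSES, activation="softmax"),
    ])

model = build_character_model()
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])

# In the cloud, training would run against a large labeled dataset of
# character crops; `train_images` and `train_labels` stand in for that data.
# model.fit(train_images, train_labels, epochs=10, validation_split=0.1)
```

Adjusting the weights during training is exactly what `model.fit` automates: each pass over the labeled images nudges the weights so the network gets better at telling the character classes apart.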

Deep learning makes smartphones smarter

Once these optimized neural networks are deployed on a smartphone, it becomes a lightning-fast, highly accurate tool for capturing character sequences from the video stream its camera records. As a matter of strategy, we design our solutions to be flexible, so that they can easily be adapted to an almost infinite number of industry use cases.
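
The article does not say which mobile runtime Anyline uses, but as one common illustration of how a cloud-trained network ends up executing on a phone, a trained Keras model can be converted to a compact on-device format such as TensorFlow Lite:

```python
# Illustrative sketch only: packaging a trained model for on-device execution.
# TensorFlow Lite is used here as a generic example of a mobile runtime, not
# as a statement about Anyline's actual stack.
import tensorflow as tf

# `model` is the trained character-recognition network from the previous sketch.
converter = tf.lite.TFLiteConverter.from_keras_model(model)
converter.optimizations = [tf.lite.Optimize.DEFAULT]  # shrink the model for mobile
tflite_model = converter.convert()

with open("character_model.tflite", "wb") as f:
    f.write(tflite_model)

# The resulting file is bundled with the mobile app, so every video frame is
# processed locally on the device and no image data has to leave the phone,
# which is what keeps the GDPR constraint satisfied.
```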

Among our notable users is co.met GmbH, a provider of meter reading services to utilities, which is using our mobile Optical Character Recognition (OCR) solution to record meter readings in a single step. In addition to saving time by eliminating the manual typing of each meter reading, the solution prevents transmission and reading errors and stores proof images for later checks.
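
As a purely hypothetical illustration of that one-step flow, the sketch below shows what handling a scan result might look like. The class, field, and function names are invented for this example and are not Anyline's or co.met's actual API.

```python
# Hypothetical sketch of post-processing a meter scan: accept the reading,
# keep a proof image, and fall back to manual confirmation when the model
# is unsure. None of these names come from Anyline's real SDK.
from dataclasses import dataclass

@dataclass
class MeterScanResult:
    reading: str            # recognized digit sequence, e.g. "004721.3"
    confidence: float       # model confidence for the whole sequence
    proof_image_path: str   # cropped frame stored as proof for later checks

def record_reading(result: MeterScanResult, min_confidence: float = 0.9) -> None:
    """Store a confident scan automatically, or ask the worker to confirm it."""
    if result.confidence >= min_confidence:
        print(f"Stored reading {result.reading}, proof at {result.proof_image_path}")
    else:
        print(f"Low confidence ({result.confidence:.2f}); please confirm {result.reading}")

record_reading(MeterScanResult("004721.3", 0.97, "proofs/meter_001.jpg"))
```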

Our decision to run the deep learning training on IBM Cloud has also had important strategic benefits, foremost among them the ability to rapidly scale our AI processing capacity as we grow. It has also enabled us to bring new solutions to market far more quickly than before.

Going forward, we’re also looking at providing our customers with the ability to train their neural networks themselves, and we see the IBM Cloud Kubernetes Service—and the support for containerization it provides—as an integral part of that plan.

Learn more about how Anyline is using AI to create a new generation of OCR solutions by watching the video interview with Andreas Greilhuber: