Integrating AI into everyday enterprise and consumer applications is steadily becoming the new normal. In a mobile-first world, the number of users accessing AI through apps on their devices is rapidly growing. Very often, their experience is hindered by inconsistencies in the quality or availability of network connectivity.
Consider three distinct examples:
1. Fixing issues with Visual Diagnosis
User: A person in the field equipped with an iPhone trying to visually diagnose a problem.
Usage: This could be applied in various forms of diagnosis, from issues in home appliances to jet engines; from faulty wiring to metal pipe rust; from error codes in machines to electronic component damage.
2. Recommendations based on Visual analysis
User: A person using an iPhone or iPad to examine an unfamiliar item and receiving additional information and recommendations.
Usage: This could be used for recommendations – from places or scenes for travel assistance to new products in a retail store; from identifying unknown or complex machine parts to classifying plants or food.
3. Visual triggers for a business process
User: An end user whose visual input acts as a trigger for a downstream business process.
Usage: Business processes range from creating a work order for repair to updating a shopping cart for purchase; from quality-control intervention to safety procedures in manufacturing; from initiating an insurance claim to collaborative analysis among medical experts.
These are a diverse set of scenarios with a common thread running through them: the need for low-latency, rich insights for a human or a downstream process.
Imagine a scenario where a user is trying to access results while on-the-go (changing network speeds), or in hard-to-reach places (manufacturing plants, buildings, store interiors, remote areas, etc.). This impacts the user’s ability to do the job, which can have a domino effect on business processes and the bottom line for companies.
The most compelling way to empower that user combines relevant AI insights, at the time of need, without the user having to worry about network connectivity issues.
To set this in motion, you need:
1. A technique to balance immediate insights, available irrespective of connectivity, against richer insights from the cloud, allowing the user to focus on the task at hand.
2. Collaborative methods and tools for users, developers and/or data scientists to build solutions in a way that allows them to focus on the higher end of the solution spectrum.
3. An approach with associated technology that enables a process of rapid iteration to keep up with constantly changing data and other surrounding factors.
Components of this solution are being successfully used by enterprises and consumers across industries and geographies.
Watson services on the IBM Cloud provide rich and relevant insights from a variety of public and enterprise data sources to applications. IBM’s approach to data and privacy with Watson ensures that client data and insights are not shared with IBM or third parties, and that client data does not contribute to training a centralized knowledge graph.
Apple Core ML is a foundational machine learning (ML) framework that lets you integrate ML models into your app. Core ML delivers optimized performance for Apple products with minimal memory footprint and battery consumption impact. User privacy is protected as data is stored locally and encrypted by default.
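To make the on-device side concrete, here is a minimal sketch of classifying an image with Core ML via the Vision framework. `FaultClassifier` is a hypothetical model class (Xcode generates one from any bundled `.mlmodel` file); substitute your own model.

```swift
import CoreML
import Vision
import UIKit

/// Classify an image entirely on-device with a bundled Core ML model.
/// Works offline: no data leaves the device.
/// NOTE: `FaultClassifier` is a placeholder for your Xcode-generated model class.
func classify(_ image: UIImage, completion: @escaping (String?) -> Void) {
    guard
        let cgImage = image.cgImage,
        let coreMLModel = try? FaultClassifier(configuration: MLModelConfiguration()).model,
        let visionModel = try? VNCoreMLModel(for: coreMLModel)
    else {
        completion(nil)
        return
    }

    // Vision handles scaling and color conversion to match the model's input.
    let request = VNCoreMLRequest(model: visionModel) { request, _ in
        let top = (request.results as? [VNClassificationObservation])?.first
        completion(top?.identifier)
    }

    let handler = VNImageRequestHandler(cgImage: cgImage, options: [:])
    try? handler.perform([request])
}
```

Because inference runs locally, this path keeps working when the network does not, which is exactly the gap the scenarios above describe.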
What’s new – Bringing it together with a seamless experience
Available today, Watson Visual Recognition Service for Core ML combines enterprise-grade IBM Watson AI with Apple’s Core ML to take the next step in the evolution of mobile and AI.
These are key aspects of what is now available to the ecosystem of users and developers. Read more about the partnership here.
1. The Watson SDK provides low-latency, offline processing for custom Visual Recognition models using Core ML, combined with the rich insights from Watson services on the cloud.
2. Watson Studio provides a low-code, end-to-end collaborative environment that enables developers to quickly and easily catalog, classify, provision, and train their data and models.
3. Developer assets and best practices including Code Patterns for developers to get started, starter kits to quickly build iOS apps that combine these Watson services with other components, and code examples to get started now.
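The offline/online tradeoff in point 1 can be sketched with the Watson Swift SDK: update the local Core ML model when connectivity allows, then classify on-device regardless of network state. This is a sketch based on the Watson Swift SDK of this era; the exact initializer and method signatures may differ from your SDK version, and the API key and classifier ID are placeholders.

```swift
import UIKit
import VisualRecognitionV3  // Watson Swift SDK module

// Placeholder credentials; use your own service credentials.
let visualRecognition = VisualRecognition(version: "2018-03-19",
                                          apiKey: "your-api-key")

func refreshAndClassify(_ image: UIImage) {
    // Best-effort: pull the latest trained model from Watson when online.
    // If this fails (no connectivity), the previously cached model is used.
    visualRecognition.updateLocalModel(classifierID: "your-classifier-id") { _, error in
        if let error = error {
            print("Model refresh skipped: \(error)")
        }

        // Classification runs on-device via Core ML, no network required.
        visualRecognition.classifyWithLocalModel(image: image,
                                                 classifierIDs: ["your-classifier-id"],
                                                 threshold: 0.5) { response, error in
            guard let classes = response?.images.first?.classifiers.first?.classes else {
                print("Classification failed: \(String(describing: error))")
                return
            }
            for result in classes {
                print("\(result.className): \(result.score ?? 0)")
            }
        }
    }
}
```

The design point is that the same classifier ID drives both paths: the cloud trains and versions the model, while the device caches and executes it, so the app degrades gracefully instead of failing when connectivity drops.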
These offerings are the first step towards mitigating challenges for users, developers, and enterprises. Companies have already started building enhancements to applications that leverage Watson Visual Recognition Service for Core ML.
For the developer, this drives a paradigm of building once and deploying to heterogeneous endpoints. For the user, this translates to broader expertise and higher productivity. For the enterprise, this is an inevitable step toward mobile and AI revolutionizing how we work.
Watch the demo to see how this all comes together for part and issue identification on Arduino boards, representative of any component.