A look at different types of models in Edge computing and the need for a model management system.
We start by looking at models. Model has become an overused term, especially with the advent of Artificial Intelligence (AI), analytics, and data science. In AI, model-based reasoning refers to an inference method used in expert systems based on a model. A data model is an abstract model that organizes elements of data and standardizes how they relate to one another. Analytical models are mathematical models that have a closed-form solution. Then there are predictive models and adaptive models…you get the picture.
Here are a few simple definitions that should help clear things up:
- Artificial Intelligence is the broad discipline of creating intelligent machines.
- Machine Learning (ML) refers to systems that can learn from experience.
- Deep Learning (DL) refers to systems that learn from experience on large data sets using programmable neural networks to make more accurate predictions without help from humans.
These three concepts are commonly represented as layers, as shown in Figure 1. (This blog post goes deeper into the differences between them.)
Finally, a Machine Learning (ML) model is a mathematical model that generates predictions by finding or extracting patterns from the input data.
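To make this definition concrete, here is a deliberately tiny example of a "model" in JavaScript (the language used in the image-recognition example later in this post). It extracts a linear pattern from a handful of input points and uses the learned parameters to predict; this is a toy sketch, not production ML code.

```javascript
// Toy illustration of an ML "model": fit y = a*x + b to data by
// least squares, then use the learned parameters to make predictions.
function fitLine(xs, ys) {
  const n = xs.length;
  const meanX = xs.reduce((s, v) => s + v, 0) / n;
  const meanY = ys.reduce((s, v) => s + v, 0) / n;
  let num = 0, den = 0;
  for (let i = 0; i < n; i++) {
    num += (xs[i] - meanX) * (ys[i] - meanY);
    den += (xs[i] - meanX) ** 2;
  }
  const a = num / den;
  const b = meanY - a * meanX;
  // The "model" is just the learned parameters plus a predict function.
  return { a, b, predict: (x) => a * x + b };
}

const model = fitLine([1, 2, 3, 4], [2, 4, 6, 8]); // underlying pattern: y = 2x
```

Real ML models learn far richer patterns, but the shape is the same: parameters extracted from data, packaged behind a prediction interface.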
Models in Edge computing and the need for a model management system (MMS)
An ML model is improved and kept updated through a cycle of continuous re-training and deployment. The enhanced models are deployed back out onto the Edge devices. In this context, Edge refers to the far edge devices which were described in one of the previous blogs in this series, entitled “Architecting at the Edge.”
But we also hear about intelligence at the Edge, specifically Edge AI. Isn't that what drives autonomous vehicles and one-armed robots in manufacturing? Yes, Edge AI is the term used to describe AI algorithms processed locally on a hardware device, such as an Edge node. Edge AI allows for operations like data creation, decision-making, and taking action in real time.
Now that we have these models, how do the models get deployed to an Edge endpoint? An Edge solution could entail hundreds of devices with different flavors and different versions of these ML models. How are these models managed and who manages them?
ML models contain not only code but also other metadata and objects. This means that the lifecycle of an ML model is different from the lifecycle of an AI algorithm, which is mostly code. Managing these models manually, while possible, is not recommended and does not scale well.
Hopefully we have made the case for a model management system (MMS) in an Edge solution. That is precisely what the IBM Edge Application Manager (IEAM) offers—an MMS that asynchronously updates machine learning models running on Edge nodes. These dynamic updates can happen continuously as the model evolves.
More on the model management system (MMS)
IBM Edge Application Manager (IEAM) deploys machine learning (ML) models. In an Edge solution, there could be many models created and deployed. These models need to be managed, and that's where the model management system (MMS) comes into play.
The MMS can be used to deploy, manage, and synchronize models across the Edge tiers. It facilitates the storage, delivery, and security of the models, data, and metadata packages needed by Edge and cloud services. Edge nodes can send and receive models and metadata to and from the hybrid cloud.
There are several components that make up the MMS (look for more details in the related links). At a high level, there is a management service designed to simplify the synchronization of cognitive applications between the hybrid cloud and the Edge devices. That service has two components—one running in the hybrid cloud (Cloud Sync Service - CSS) and the other on the remote node (Edge Sync Service - ESS).
Thankfully, developers won’t have to deal with these internal services since IEAM provides APIs that developers and administrators can use to interact with the MMS.
MMS and DevOps
From a user experience perspective, a model deployer would describe the model by giving it a name and providing a type classification. The next step would be to assign a deployment policy to the model. If you remember, policies were described in the blog entitled “Policies at the Edge.”
Within the MMS, the combination of a model description (metadata) and the model itself is called an object. The following notation best defines those objects:
Object :: Metadata + Data.
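As an illustration, MMS object metadata is expressed as JSON. The field names below follow Open Horizon's sync-service metadata conventions, but the values are invented for this sketch; consult the MMS documentation for the full schema:

```json
{
  "objectID": "animal-detector",
  "objectType": "ai.model",
  "destinationOrgID": "myorg",
  "version": "1.0.0",
  "description": "Image classification model for the park camera example"
}
```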
The cloud component delivers objects (ML/DL models + metadata) to specific nodes or groups of nodes within an IEAM topology or organization. Once those Edge AI objects are delivered, an application programming interface (API) is available to retrieve the object (including the models and metadata) from the Edge node using the Edge component of the management service. This lifecycle is explained in the next section.
When data scientists and cognitive services developers create AI artifacts, they can use any AI modeling tool. It is worth pointing out that MMS integrates well with IBM Watson Studio and the intelligent services running on the Edge nodes. ML/DL models built by data scientists or software developers can be published directly to the MMS, making them immediately available to the Edge nodes.
The IBM Edge Application Manager, which is based on Open Horizon, provides a command-line interface (CLI) that facilitates the administration of ML/DL model objects. Each MMS command is prefixed with `hzn mms`. For example, `hzn mms status` displays the status of the MMS; sample output of the status command is shown below.

Figure 3 depicts the MMS command syntax. To get help on the syntax, type `hzn mms list --help`:
The model management system (MMS) enables a true separation of concerns: users can manage the lifecycle of the objects on their Edge nodes remotely and independently of code updates, securely sending any object to and from the Edge clusters. Figure 4 shows the actors and their corresponding tasks within the model lifecycle. The high-level steps are as follows:
- Create a model
- Deploy the model
- Run inference on device
- Update the model
- Publish the model to the MMS
- Observe changes to inference results on the model
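The publish/detect portion of these steps can be sketched in miniature. The JavaScript below is an in-memory stand-in for the flow, not the real CSS/ESS interfaces (which are reached through the `hzn` CLI or the MMS APIs); the object IDs and versions are made up for illustration.

```javascript
// In-memory stand-in for the MMS publish/detect cycle.
// A deployer publishes versioned objects; an edge node compares the
// published version against its local copy to decide whether to update.
const mmsStore = new Map(); // objectID -> { version, data }

function publish(objectID, version, data) {
  mmsStore.set(objectID, { version, data });
}

function checkForUpdate(objectID, localVersion) {
  const obj = mmsStore.get(objectID);
  if (!obj || obj.version === localVersion) return null; // nothing new to fetch
  return obj; // in IEAM, the node would now download and deploy this object
}

publish('animal-detector', '1.0.0', 'weights-v1');
```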
An example: Machine learning and image recognition
The MMS lifecycle and the roles of the different actors are best illustrated by walking through an example.
Let's say a user has a camera-based recognition system and wishes to deploy an ML application to identify animals in a nearby park. The camera, in this case, represents a far edge device; the animals could just as easily be people entering a secure location, items on a store shelf, or cars going through a toll booth.
For this example, let’s start with an image of several animals:
The data scientist creates an ML model to detect and classify animals. TensorFlow.js is used in this simplified example for clarity. Additionally, the software developer creates a metadata file that will be used to publish any model updates to the MMS for distribution to Edge nodes. The metadata file includes the ID, type, and destination details needed to publish the model.
When the initial image is loaded, its analysis is displayed as follows:
Next, the software developer packages the ML model and publishes it to the IEAM hub as a model object, with a policy that describes where to deploy the model.
After the user has registered an image recognition device, the model detects objects with low precision when more than one object is present in the picture (see output above).
The data scientist updates the ML model to a more accurate one (COCO-SSD in this simple example) and notifies the developer of the changes.
The software developer publishes an updated MMS object.
Once the model update has been published to the MMS, the image analysis service detects the update, downloads the updated model to the device, and initializes it without any downtime of the service.
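A minimal sketch of what "without any downtime" means on the device: the service keeps answering inference requests from the current model while the replacement loads, then swaps a single reference. The `loadModel` helper below is a hypothetical stand-in for the real framework loader (for example, a TensorFlow.js model load):

```javascript
// Zero-downtime model swap sketch: requests are always served by
// whichever model `currentModel` points at; an update loads in the
// background and replaces the reference only once it is ready.
let currentModel = { version: 1, detect: (img) => `v1 result for ${img}` };

async function loadModel(version) {
  // Hypothetical stand-in for an asynchronous framework load
  // (e.g., fetching and initializing new weights).
  return { version, detect: (img) => `v${version} result for ${img}` };
}

async function onModelUpdate(newVersion) {
  const candidate = await loadModel(newVersion); // old model keeps serving
  currentModel = candidate;                      // single reference swap
}

function infer(img) {
  return currentModel.detect(img);
}
```

In a real service, the swap point is also where any warm-up or validation of the candidate model would happen before it replaces the live one.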
The updated model allows the device to analyze the image of the animals with a higher probability score, as shown:
The assumption here is that the end user is satisfied with the results of the image classification and detection algorithm. The process has minimal impact on the software developer's existing cognitive application published to the IEAM hub.
As noted earlier, the process uses the model management system to send an ML model update to an Edge node. The GitHub link below describes how Edge nodes can detect the arrival of a new version of the model and then deploy it to the Edge node. Please note that we use TensorFlow.js, a free and open-source software library, to perform image detection and classification.
Get started with IBM Edge Application Manager
In this article, we've shown that whether it is data science, machine learning, or artificial intelligence, the atomic unit is the model, and trained models are very useful for classifying objects.
The IBM Edge Application Manager model management system is essential in the creation and management of such ML models because it enables dynamic updates to models without incurring downtime of the services running the AI algorithm.
Please make sure to check out all the installments in this series of blog posts on Edge computing:
- Part 1: “Cloud at the Edge”
- Part 2: "Rounding out the Edges"
- Part 3: "Architecting at the Edge"
- Part 4: "DevOps at the Edge"
- Part 5: "Policies at the Edge"
- Part 6: "Models Deployed at the Edge"
- Part 7: "Security at the Edge"
- Part 8: "Analytics at the Edge"
- Part 9: "5G at the Edge"
- Model management system overview
- Open Horizon: Edge synchronization service (MMS)
- Horizon Hello Model Management Service (MMS) Example Edge Service
- Edge computing architecture
- What is Edge Computing?
Thanks to David Booz for reviewing the article and Joe Pearson for providing his perspective.