A look at different types of models in edge computing and the need for a model management system.

We start by looking at models. Model has become an overused term, especially with the advent of artificial intelligence (AI), analytics, and data science. In AI, model-based reasoning refers to an inference method used in expert systems based on a model. A data model is an abstract model that organizes elements of data and standardizes how they relate to one another. Analytical models are mathematical models that have a closed-form solution. Then there are predictive models and adaptive models…you get the picture.

Here are a few simple definitions that should help clear things up:

  • Artificial intelligence (AI) is the broad discipline of creating intelligent machines.
  • Machine learning (ML) refers to systems that can learn from experience.
  • Deep learning (DL) refers to systems that learn from experience on large data sets using programmable neural networks to make more accurate predictions without help from humans.

These three concepts are commonly represented as layers, as shown in Figure 1. (This blog post goes deeper into the differences between them.)

Finally, a machine learning (ML) model is a mathematical model that generates predictions by finding or extracting patterns from the input data.

Figure 1: AI layers.

Models in edge computing and the need for a model management system (MMS)

In edge computing parlance, model loosely refers to a machine learning model that is created and trained in the cloud or in a data center and then deployed onto edge devices.

An ML model is improved and kept up to date through a cycle of continuous re-training and deployment, with the enhanced models deployed back out onto the edge devices. In this context, edge refers to the far edge devices that were described in one of the previous blog posts in this series, entitled “Architecting at the Edge.”

But we also hear about intelligence at the edge, specifically edge AI. Isn’t that what drives autonomous vehicles and one-armed robots in manufacturing? Yes, edge AI is the term used to describe AI algorithms that are processed locally on a hardware device like an edge node. Edge AI allows for operations like data creation, decision-making, and taking action in real time.

Now that we have these models, how do they get deployed to an edge endpoint? An edge solution could entail hundreds of devices of different flavors, each running different versions of these ML models. How are these models managed, and who manages them?

ML models contain not only code but also metadata and other objects. This means that the lifecycle of an ML model is different from the lifecycle of an AI algorithm, which is mostly code. Deploying and managing models manually, while possible, is not recommended and does not scale well.

Hopefully, we have made the case for a model management system (MMS) in an edge solution. That is precisely what the IBM Edge Application Manager (IEAM) offers: an MMS that asynchronously updates machine learning models running on edge nodes. Such updates are applied dynamically, every time the model evolves.

More on the model management system (MMS)

IBM Edge Application Manager (IEAM) deploys machine learning (ML) models. In an edge solution, there could be many models created and deployed. These models need to be managed, and that’s where the model management system (MMS) comes into play.

The MMS can be used to deploy, manage, and synchronize models across the edge tiers. It facilitates the storage, delivery, and security of models, data, and other metadata packages needed by edge and cloud services. Edge nodes can send and receive models and metadata to and from the hybrid cloud.

There are several components that make up the MMS (look for more details in the related links). At a high level, there is a management service designed to simplify the synchronization of cognitive applications between the hybrid cloud and the edge devices. That service has two components—one running in the hybrid cloud (Cloud Sync Service – CSS) and the other on the remote node (Edge Sync Service – ESS).

Thankfully, developers won’t have to deal with these internal services since IEAM provides APIs that developers and administrators can use to interact with the MMS:

Figure 2: Components in the model management system.

MMS and DevOps

From a user experience perspective, a model deployer would describe the model by giving it a name and providing a type classification. The next step would be to assign a deployment policy to the model. If you remember, policies were described in the blog entitled “Policies at the Edge.”

Within the MMS, the combination of a model description (metadata) and the model itself is called an object. The following notation best defines those objects: Object :: Metadata + Data.
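To make the notation concrete, here is a minimal sketch of what such an object could look like before it is published: a small JSON metadata file describing the model, plus the packaged model file as the data. The file names, object ID and type, organization, and destination values are hypothetical, and the fields only mirror the general shape of an Open Horizon object definition, so check the template printed by hzn mms object new for the exact schema in your IEAM version.

# Illustrative only: the metadata half of an MMS object
cat > my-model.meta.json <<'EOF'
{
  "objectID": "my-model",
  "objectType": "model",
  "destinationOrgID": "myorg",
  "destinationType": "camera-gateway",
  "destinationID": "",
  "version": "1.0.0",
  "description": "Example ML model object"
}
EOF
# An empty destinationID typically targets all nodes of the given destination type.

# The data half is simply the packaged model artifact, for example:
tar -czf my-model-1.0.0.tgz model.json weights.bin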

The cloud component delivers objects (ML/DL models + metadata) to specific nodes or groups of nodes within an IEAM topology or organization. Once those edge AI objects are delivered, an application programming interface (API) exposed by the edge component of the management service is available on the edge node to retrieve the object (including the models and metadata). This lifecycle is explained in the next section.

When data scientists and cognitive services developers create AI artifacts, they can use any AI modeling tool. It is worth pointing out that MMS integrates well with IBM Watson Studio and the intelligent services running on the edge nodes. ML/DL models built by data scientists or software developers can be published directly to the MMS, making them immediately available to the edge nodes.

The IBM Edge Application Manager provides a command-line interface (CLI), based on Open Horizon, that facilitates the administration of the ML/DL model objects. Each command is prefixed with hzn mms. For example, hzn mms status displays the status of the MMS. Sample output of the status command is shown below:

Figure 3 depicts the MMS command syntax. To get help on the syntax, type hzn mms list --help:

Figure 3: MMS command syntax.
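For orientation, the following sketch lists the hzn mms subcommands most commonly used to manage model objects. The object type, ID, and file names are the hypothetical ones from the earlier sketch, and flag spellings can differ between IEAM and Open Horizon releases, so treat the built-in --help output as the authority.

# Check that the MMS (Cloud Sync Service) is reachable and healthy
hzn mms status

# Print an empty object metadata template to start from
hzn mms object new

# Publish an object: the metadata file (-m) plus the model data file (-f)
hzn mms object publish -m my-model.meta.json -f my-model-1.0.0.tgz

# List published objects of a given type and ID, including delivery details
hzn mms object list -t model -i my-model -d

# Remove an object that is no longer needed
hzn mms object delete -t model -i my-model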

MMS lifecycle

The model management system (MMS) enables a true separation of concerns: users can manage the lifecycle of the objects on their edge nodes remotely and independently of code updates, securely sending any object to and from the edge nodes and clusters. Figure 4 shows the actors and their corresponding tasks within the model lifecycle. The high-level steps are as follows:

  • Create a model
  • Deploy the model
  • Run inference on device
  • Update the model
  • Publish the model to the MMS
  • Observe changes to inference results on the model

Figure 4: Actors and model development lifecycle.

An example: Machine learning and image recognition

The MMS lifecycle and the roles of the different actors are best illustrated by walking through an example.

Let’s say a user has a camera-based recognition system and wishes to deploy an ML application to identify animals in a nearby park. The camera, in this case, represents a far edge device, and the animals could just as easily be humans entering a secure location, items on a store shelf, or cars going through a toll booth.

For this example, let’s start with an image of several animals:

The data scientist creates an ML model to detect and classify animals; TensorFlow.js is used in this simplified example for clarity. Additionally, the software developer creates a metadata file that will be used to publish the model and any updates to the MMS for distribution to edge nodes. The metadata file includes the ID, type, and destination details needed to publish the model.

When the initial image is loaded, its analysis is displayed as follows:

Next, the software developer packages the ML model and publishes it on the IEAM hub as a model object with a policy that describes where to deploy the model.
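As a hedged sketch of what this publish step might look like, the metadata below targets nodes through a destination policy constraint rather than a fixed destination; the object names, constraint expression, and file names are hypothetical, and the exact destinationPolicy schema should be confirmed against the IEAM documentation for your release.

# Hypothetical metadata for the animal-detection model, targeting nodes by policy
cat > animal-model.meta.json <<'EOF'
{
  "objectID": "animal-detector",
  "objectType": "model",
  "destinationOrgID": "myorg",
  "version": "1.0.0",
  "description": "TensorFlow.js animal detection model",
  "destinationPolicy": {
    "properties": [],
    "constraints": [ "purpose == image-recognition" ]
  }
}
EOF

# Package the model and publish the object to the IEAM hub
tar -czf animal-model-1.0.0.tgz model.json weights.bin
hzn mms object publish -m animal-model.meta.json -f animal-model-1.0.0.tgz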

After the user has registered an image-recognition device, the model yields low-precision results when more than one object is present in the picture (see the output above).

The data scientist switches to a more reliable pre-trained model (COCO-SSD in this simple example) and notifies the developer of the changes.

The software developer publishes an updated MMS object.
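Concretely, this step can be as simple as republishing the object under the same ID and type so that nodes already holding the model receive the new data; a minimal sketch, reusing the hypothetical names from above and bumping the version field in the metadata for traceability:

# Repackage the updated (COCO-SSD based) model
tar -czf animal-model-2.0.0.tgz model.json weights.bin

# After bumping "version" to 2.0.0 in animal-model.meta.json, republish under the
# same objectID and objectType; the MMS distributes the new data to the target nodes.
hzn mms object publish -m animal-model.meta.json -f animal-model-2.0.0.tgz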

Once the model update has been published to the MMS, the image analysis service detects the update, downloads the updated model to the monitoring device, and initializes it without any downtime of the service.
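To verify that the update actually reached its destinations, an administrator can inspect the object from the hub; a minimal sketch, with the caveat that flag spellings may vary by release:

# Show the object's metadata and per-destination delivery status
hzn mms object list -t model -i animal-detector -d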

The updated model allows the device to analyze the image of the animals with a higher probability score, as shown:

The assumption here is that the end user is satisfied with the results of the image classification and detection algorithm. The process has minimal impact on the existing cognitive application that the software developer published to the IEAM hub.

As noted earlier, the process uses the model management system to send an ML model update to an edge node. The GitHub link below describes how edge nodes can detect the arrival of a new version of the model and deploy it. Please note that we use TensorFlow.js, a free and open-source software library, to perform image detection and classification.

Link to the demo that shows model building and inferencing.

Get started with IBM Edge Application Manager

In this article, we’ve shown that whether it is data science, machine learning, or artificial intelligence, the atomic unit is the model, and trained models are very useful for classifying objects.

The IBM Edge Application Manager model management system is essential in the creation and management of such ML models because it enables dynamic updates to models without incurring downtime of the services running the AI algorithm.

Related resources

Please make sure to check out all the installments in this series of blog posts on edge computing:

Thanks to David Booz for reviewing the article and Joe Pearson for providing his perspective.
