Development overview

The following content provides more detail on the software development practices and concepts for IBM Edge Computing for Devices.

Note: Before you review the detailed development information within this topic, review the introductory documentation and go through the provided developer examples. That documentation and those examples introduce the development practices and concepts for IBM Edge Computing for Devices development.

Introduction

IBM Edge Computing for Devices is built on the open source Horizon software. For more information about this software, see Open Horizon.

With IBM Edge Computing for Devices, you can develop any service containers that you want for your edge machines. You can then cryptographically sign and publish your code. Finally, you can specify policies within an IBM Edge Computing for Devices deployment pattern to govern any software installation, monitoring, and updating. After you complete these tasks, you can view the Horizon agents and Horizon agbots forming agreements to collaborate on managing the software lifecycle. These components then manage the software lifecycle details on your IBM Edge Computing for Devices edge nodes fully autonomously, based on the registered deployment pattern for each edge node.

The software development process within IBM Edge Computing for Devices is focused on maintaining system security and integrity, while greatly simplifying the effort that is required for active software management on your edge machines. You can build IBM Edge Computing for Devices publishing procedures into your continuous integration and deployment pipeline. When the distributed autonomous agents discover published changes in the software or a policy, such as within the IBM Edge Computing for Devices deployment pattern, the autonomous agents independently act. The agents collaborate to update the software or enforce your policies across your entire fleet of edge machines, wherever they are located.

The software development cycle that you use with IBM Edge Computing for Devices is illustrated in the following diagram:

Development lifecycle

Services and deployment patterns

IBM Edge Computing for Devices services are the building blocks of deployment patterns. Each service can contain one or more Docker containers. Each Docker container can in turn contain one or more long-running processes. These processes can be written in almost any programming language, and use any libraries or utilities. However, the processes must be developed for, and run in, the context of a Docker container. This flexibility means that there are almost no constraints on the code that IBM Edge Computing for Devices can manage for you. When a container runs, the container is constrained in a secure sandbox. This sandbox restricts access to hardware devices, some operating system services, the host file system, and the host edge machine networks. For information on sandbox constraints, see Sandbox.

The cpu2msghub example code consists of a Docker container that uses two other local edge services. These local edge services connect over local private Docker virtual networks by using HTTP REST APIs. These services are named cpu and gps. The agent deploys each service on a separate private network, along with each service that declared a dependency on it. One network is created for cpu2msghub and cpu, and another network is created for cpu2msghub and gps. If a fourth service in this deployment pattern also shares the cpu service, then another private network is created for just the cpu service and the fourth service. In IBM Edge Computing for Devices, this network strategy restricts each service to accessing only the services that it listed in requiredServices when it was published. The following diagram shows the cpu2msghub deployment pattern when the pattern runs on an edge node:

Services in a pattern

The two virtual networks enable the cpu2msghub service container to access the REST APIs that are provided by the cpu and gps service containers. These two containers manage access to operating system services and the hardware devices. Although REST APIs are used, there are many other possible forms of communication that you can use to enable your services to share data and control.

Often the most effective coding pattern for edge nodes involves deploying multiple small, independently configurable, and deployable services. For example, Internet of Things patterns often have low-level services that need access to the edge node hardware, such as sensors or actuators. These services provide shared access to this hardware for other services to use.

This pattern is useful when the hardware essentially requires exclusive access to provide a useful function. The low-level service can properly manage this access. The role of the cpu and gps service containers is similar in principle to that of the device driver software in the host's operating system, but at a higher level. Segmenting the code into independent small services, some specializing in low-level hardware access, enables a clear separation of concerns. Each component is free to evolve and be updated in the field independently. Third-party applications can also be securely deployed together along with your proprietary traditional embedded software stack by selectively allowing them access to particular hardware or other services.

For example, an industrial controller deployment pattern might be composed of a low-level service that monitors power usage sensors, and other low-level services that control the actuators powering the monitored devices. The deployment pattern might also have a top-level service container that consumes the sensor and actuator services. This top-level service can use them to alert operators or to automatically power down devices when anomalous power consumption readings are detected. The deployment pattern might also include a history service that records and archives sensor and actuator data, and possibly performs analysis on the data. Another useful component of such a deployment pattern might be a GPS location service.

Each individual service container can be independently updated with this design. Each individual service might also be reconfigured and composed into other useful deployment patterns without any code changes. If needed, a third-party analytics service can be added to the pattern. This third-party service can be given access to only a particular set of read-only APIs, which restricts the service from interacting with the actuators on the platform.

Alternatively, all of the tasks in this industrial controller example can be run within a single service container. This alternative is not usually the best approach since a collection of smaller independent and interconnected services usually makes software updates faster and more flexible. Collections of smaller services can also be more robust in the field. For more information about how to design your deployment patterns, see Best practices.

Sandbox

The sandbox in which deployment patterns run restricts access to the APIs that are provided by your service containers. Only the other services that explicitly state dependencies on your services are permitted access. Other processes on the host normally do not have access to these services. Similarly, other remote hosts do not normally have access to any of these services unless the service explicitly publishes a port to the host's external network interface.

The following diagram expands on the previous diagram to show the expected and allowed connections, such as those between services that have explicit dependencies. The diagram also shows disallowed connections, such as those from services that did not state a dependency. The disallowed connections are shown with a large X at the point where the connection is blocked.

Development tasks

Services that use other services

Edge services often use API interfaces that are provided by other edge services to acquire data from them, or to deliver control commands to them. These API interfaces are commonly HTTP REST APIs, like the ones provided by the low-level cpu and gps services in the cpu2msghub example. However, those interfaces can be anything that you want, such as shared memory, TCP, or UDP, and can be with or without encryption. Since these communications typically take place within a single edge node, with messages never leaving the host, encryption is often unnecessary.

As an alternative to REST APIs, you can use a publishing and subscribing interface, such as the interface that is provided by MQTT. When a service only provides data intermittently, a publishing and subscribing interface is usually simpler than repeatedly polling a REST API as the REST APIs can time out. For example, consider a service that monitors a hardware button, and provides an API for other services to detect whether a button press occurred. If a REST API is used, the caller cannot simply call the REST API and wait for a reply that would come when the button was pressed. If the button remained unpressed for too long, the REST API would time out. Instead, the API provider would need to respond promptly to avoid an error. The caller must repeatedly and frequently call the API to be sure not to miss a brief button press. A better solution is for the caller to subscribe to an appropriate topic on a publishing and subscribing service and block. Then, the caller can wait for something to be published, which might occur far in the future. The API provider can take care of monitoring the button hardware and then publish only the state changes to that topic, such as button pressed, or button released.
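To illustrate the difference, the following minimal Python sketch simulates the two styles with an in-process queue standing in for a pub/sub broker topic. No real MQTT broker or button hardware is involved; this only demonstrates why a blocking subscribe avoids the polling and timeout problems described above:

```python
import queue
import threading
import time

# An in-process queue stands in for a pub/sub topic, such as an MQTT topic.
button_events = queue.Queue()

def button_monitor():
    """Simulates the low-level service that watches the button hardware.
    It publishes only state changes, not a continuous stream of readings."""
    time.sleep(0.2)                      # the button stays unpressed for a while
    button_events.put("button pressed")  # then a press event is published

def subscriber():
    """The consuming service subscribes and blocks until an event arrives,
    instead of repeatedly polling a REST API and handling timeouts."""
    return button_events.get()           # blocks until something is published

threading.Thread(target=button_monitor, daemon=True).start()
print(subscriber())  # prints: button pressed
```

The subscriber never misses a brief press and never times out, because the publisher pushes the state change to it; with polling, the subscriber would need to call frequently enough to catch every press.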

MQTT is one of the more popular publishing and subscribing tools that you can use. You can deploy an MQTT broker as an edge service, and have your publisher and subscriber services require it. MQTT is also frequently used as a cloud service. The IBM Watson IoT Platform, for example, uses MQTT to communicate with IoT devices. For more information, see IBM Watson IoT Platform. Some of the Open Horizon project examples use MQTT. For more information, see Open Horizon examples.

Another popular publishing and subscribing tool is Apache Kafka, which is also frequently used as a cloud service. IBM Event Streams, which is used by the cpu2msghub and sdr2msghub examples to send data to the IBM Cloud, is also based on Kafka. For more information, see IBM Event Streams.

Any edge service container can provide or consume other local edge services on the same host, as well as edge services provided by nearby hosts on the local LAN. Containers might also communicate with centralized systems in a remote corporate or cloud provider data center. As a service author, you determine with whom and how your services communicate.

You might find it useful to review the cpu2msghub example again to see how the example code uses the other two local services. For instance, how the example code specifies dependencies on the two local services, declares and uses configuration variables, and communicates with Kafka. For more information, see cpu2msghub example.

Service definition

In every IBM Edge Computing for Devices project, you have a horizon/service.definition.json file. This file defines your edge service and serves two purposes. The first is to enable you to simulate the running of your service by the Horizon agent, similar to how it runs in production. This simulation is useful for working out any special deployment instructions that you might need, such as port bindings and hardware device access. The simulation is also useful for verifying communications between service containers on the Docker virtual private networks that the agent creates for you. The second purpose is to enable you to publish your service to the Horizon exchange. In most of the provided examples, the horizon/service.definition.json file is provided for you within the GitHub repository that you clone as part of the example. However, within the quickstart example, the hzn dev service new command creates this file for you.

Open the horizon/service.definition.json file that contains the Horizon metadata for one of the example service implementations, for example, the cpu2msghub.

Every service that is published in Horizon needs a url value that uniquely identifies it within your organization. This field is not actually a URL. Instead, the url field helps to form a globally unique identifier when combined with your organization name and the version and arch fields of a specific implementation. You can edit the horizon/service.definition.json file to provide appropriate values for url and version. For the version value, use a semantic versioning style value. Use the new values when you push, sign, and publish your service containers. Alternatively, you can edit the horizon/hzn.json file, and the tools substitute any variable values that are found there in place of any variable references used in the horizon/service.definition.json file.

The requiredServices section of the horizon/service.definition.json file itemizes any service dependencies, such as any other edge services that this container uses. The hzn dev dependency fetch tool makes it easy to add dependencies to this list, so you do not need to edit the list manually. Once dependencies are properly added, then whenever the agent runs the container, those other requiredServices are also automatically run, for example, when you use hzn dev service start or when you register a node with a deployment pattern that contains this service. For more information about required services, see cpu2msghub.

In the userInput section, you declare the configuration variables that your service can consume to configure itself for a particular deployment. You provide variable names, data types, and default values here, and you might also provide a human-readable description for each. When you use hzn dev service start, or when you register an edge node with a deployment pattern containing this service, you need to provide a userinput.json file to define values for these variables. For more information about the userInput configuration variables and userinput.json files, see cpu2msghub.
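A userinput.json file that sets such a variable might look similar to the following sketch. The organization, service url, and variable name shown here are illustrative, not taken from an actual example:

```json
{
  "services": [
    {
      "org": "myorg",
      "url": "com.myorg.cpu2msghub",
      "variables": {
        "SAMPLE_INTERVAL": "10"
      }
    }
  ]
}
```

Each entry in the services array supplies values for the variables that the named service declared in its userInput section.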

The horizon/service.definition.json file also contains a deployment section, toward the end of the file. The fields in this section name each Docker container image that implements your logical service. The name of each record in the services array is the name that other containers use to identify the container on the shared virtual private network. If this container provides a REST API for other containers to consume, the consuming container can access this REST API by using curl <name>/<your-rest-api-uri>. The image field for each name provides a pointer to the corresponding Docker container image, such as within Docker Hub or some private container registry. Other fields in the deployment section can change the way that the agent instructs Docker to run the container. For more information, see Horizon deployment strings.
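Putting these pieces together, a horizon/service.definition.json file for a service like cpu2msghub might look similar to the following sketch. The field values shown here, such as the organization, urls, image names, and versions, are illustrative assumptions rather than values from the actual example:

```json
{
  "org": "myorg",
  "label": "cpu2msghub",
  "description": "Sample service that publishes CPU data to a message hub",
  "url": "com.myorg.cpu2msghub",
  "version": "1.0.0",
  "arch": "amd64",
  "requiredServices": [
    { "org": "IBM", "url": "ibm.cpu", "version": "1.2.2", "arch": "amd64" },
    { "org": "IBM", "url": "ibm.gps", "version": "2.0.3", "arch": "amd64" }
  ],
  "userInput": [
    {
      "name": "SAMPLE_INTERVAL",
      "label": "Seconds between CPU samples",
      "type": "string",
      "defaultValue": "5"
    }
  ],
  "deployment": {
    "services": {
      "cpu2msghub": {
        "image": "myorg/cpu2msghub_amd64:1.0.0"
      }
    }
  }
}
```

In this sketch, the url, version, and arch fields identify the service; requiredServices declares the dependencies on the cpu and gps services; userInput declares one configuration variable; and the deployment section names the container, whose key (cpu2msghub) is the hostname that dependent containers use on the shared private network.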

Interacting with Horizon exchange

When you build and publish the example programs, you interact with the Horizon exchange to publish your services and deployment patterns. You also use the Horizon exchange to register your edge nodes to run a particular deployment pattern. The Horizon exchange acts as a repository for shared information, which enables you to indirectly communicate with other components of IBM Edge Computing for Devices. As a developer, you need to understand how to work with the Horizon exchange.

Development tasks

Within this diagram, only the developer cloud code, cloud services, and data paths to and from the cloud services, represent the development of cloud services.

Although IBM Edge Computing for Devices is most often used to manage the edge side of applications, such as the application that is shown in this diagram, IBM Edge Computing for Devices can also manage cloud software deployments. That is, you can use IBM Edge Computing for Devices to deploy Horizon software on cloud machines and to manage the lifecycle of your cloud services as just another type of edge machine. There are many alternative options for the cloud side, including Docker container services like the IBM Kubernetes Service, and serverless computing options like IBM Functions.

This diagram also shows the agents that must be running inside every edge node, and the agbots that must be configured for every deployment pattern, typically in the cloud or in a centralized corporate data center.

IBM Edge Computing for Devices developers generally use the hzn command to interact with the Horizon exchange. Specifically, the hzn exchange command is used for all interactions with the Horizon exchange. You can type hzn exchange --help to see all of the subcommands that can follow hzn exchange on the command line. Then, you can use hzn exchange <subcommand> --help to get more details on the <subcommand> of your choice.

Several hzn exchange subcommands are useful for interrogating the Horizon exchange, such as the list subcommands that show the services, patterns, and nodes in your organization.
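For example, assuming a configured hzn CLI with exchange credentials, commands like the following sketch interrogate the exchange. These are typical subcommands; run hzn exchange --help for the authoritative list in your release:

```shell
hzn exchange status          # check connectivity to the exchange
hzn exchange service list    # list the services published in your organization
hzn exchange pattern list    # list the deployment patterns in your organization
hzn exchange node list       # list the registered edge nodes
```

These commands require a provisioned Horizon environment, so they are illustrative rather than runnable in isolation.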

For even more fine-grained interaction with the Horizon exchange, you might want to directly use the underlying REST APIs it provides. To start, use the -v flag on the hzn exchange ... commands. When -v is passed, more detailed output is provided, including details of the REST API URIs that are used.

For more information, see IBM Edge Computing for Devices APIs.

Agents and agbots

Understanding the roles of the agents and agbots and exactly how they communicate is important. This knowledge can be helpful for diagnosing and fixing problems when something goes wrong. The following figure enhances the previous figure by showing the lines of communication between the agbots and the agents.

Development tasks

The agents and agbots never directly communicate with each other. The agent for each edge node must establish a mailbox for itself in the Horizon switchboard and create a node resource in the Horizon exchange. Then, when the agent wants to run a particular deployment pattern, it registers itself for that pattern in the Horizon exchange.

When that deployment pattern is originally published, an agbot is assigned to be responsible for that pattern. This agbot can be either an existing agbot or a new agbot. Redundant agbots can also be created for use in case an agbot instance fails.

The agbot for a pattern continuously polls the Horizon exchange to find edge nodes that register for the pattern. When a new edge node registers for the pattern that the agbot is assigned to, the agbot reaches out, through the Horizon switchboard, to the local agent on that edge node. At this point, all the agbot can know about the agent is its public key. The agbot does not know the IP address of the edge node, or anything else about the edge node other than the fact that it is registered for the specific deployment pattern. Through the Horizon switchboard, the agbot proposes to the agent that they collaborate to manage the software lifecycle of this deployment pattern on this edge node.

The agent for each edge node continuously polls the Horizon switchboard to see whether there are any messages in the mailbox. When the agent receives a proposal from an agbot, it evaluates this proposal based on the policies the edge node owner set when the edge node was configured and chooses whether to accept the proposal.

When a deployment pattern proposal is accepted, the agent proceeds to pull the appropriate service containers from the appropriate Docker registry, verify the service signatures, configure the service, and run the service.

All of the communications between the agents and agbots that go through the Horizon switchboard are encrypted by the two participants. Even though these messages are stored in the central Horizon switchboard, the Horizon switchboard is not able to decrypt and eavesdrop on those conversations.

Deploying service software updates

After you deploy your software to your fleet of edge nodes, you might want to update the code. Software updates can be accomplished with IBM Edge Computing for Devices. Normally, you do not need to do anything on the edge nodes to update the software that runs on the edge nodes. As soon as you sign and publish an update, the agbots and the agents that run on each edge node coordinate to deploy the latest version of your deployment pattern to every edge node that is registered for the updated deployment pattern. One of the key benefits of IBM Edge Computing for Devices is the ease with which it facilitates a software update pipeline all the way to your edge nodes.

The following diagram shows the upgrade flow for the software lifecycle within IBM Edge Computing for Devices.

Software updates

To release a new version of your software, you update the code, build and push new container images, sign and publish the new service version to the Horizon exchange, and update your deployment pattern to reference the new version.
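A typical release flow might be sketched with the following commands. The registry, image names, version tags, and the pattern file path are hypothetical, and your pipeline's exact commands and flags may differ by release:

```shell
# 1. Build and push the updated container image (hypothetical names and tags)
docker build -t myregistry/cpu2msghub_amd64:1.0.1 .
docker push myregistry/cpu2msghub_amd64:1.0.1

# 2. Bump the version in horizon/service.definition.json (or horizon/hzn.json),
#    then sign and publish the new service version to the Horizon exchange
hzn exchange service publish -f horizon/service.definition.json

# 3. Update the deployment pattern so that it references the new service version
hzn exchange pattern publish -f horizon/pattern.json
```

These commands require a provisioned Horizon environment and signing keys, so they are an illustrative sketch rather than a runnable script.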

The Horizon agbots quickly detect the deployment pattern changes. The agbots then reach out to each agent whose edge node is registered to run the deployment pattern. The agbot and the agent coordinate to download the new containers, stop and remove the old containers, and start the new containers.

This process results in every edge node that is registered to run the updated deployment pattern quickly running the new service container version, regardless of where the edge node is geographically located.

Updating the Horizon agent software

When you need to update your edge node host operating system, update the Horizon software on the edge node, or change individual configurations on the edge node, you must access the edge node directly.

For example, to update the Horizon software, run the following command:

apt update && apt install bluehorizon

To update your edge node operating system, refer to the product documentation for your operating system. The IBM Edge Computing for Devices documentation does not provide instructions for updating edge node operating systems.

What to do next

For more information about developing edge node code, review the following documentation:

Best practices

Review the important principles and best practices for IBM Edge Computing for Devices software development.

Using IBM Cloud Container Registry with Horizon

With IBM Edge Computing for Devices, you can optionally put service containers into the private and secure IBM Cloud Container Registry instead of the public Docker Hub. For instance, if you have a software image that includes assets that are not appropriate for a public registry, you can use a private Docker container registry like the IBM Cloud Container Registry.

IBM Edge Computing for Devices APIs

IBM Edge Computing for Devices provides RESTful APIs for enabling components to collaborate, and to enable your organization's developers and users to control the components.