Software-defined radio full-featured code

The following software-defined radio (SDR) example provides you with a full-featured example of an IBM Edge Computing Manager for Devices application. Use this example to help you learn how to create your own feature-rich edge computing applications.

Before you begin

Complete the prerequisite steps that are required for creating an edge service. For more information, see Preparing to create an edge service.

Specifically, set up your Horizon credentials. These credentials are used by the commands in this example. You also need to log in to your Docker registry and create your cryptographic signing keys, which are used for publishing.

About this example

This example contains edge node software that uses software-defined radio hardware. If the hardware is not available, the example code can simulate the hardware to provide you with the capability to test the application during development.

When the application is started, the software on the edge node that hosts the application receives radio signals. The edge node software completes some local analysis so that only a low volume of higher-value data needs to be sent to the cloud for further processing.

The example application also contains a cloud back-end implementation. This back end receives data from the edge nodes for further analysis with IBM Watson APIs. The back-end implementation can also present you with a web-based UI to display a map that shows your edge nodes and to display the data analysis.

Software-defined radio receives radio signals by using the digital circuitry in a computer CPU to handle work that would otherwise require a set of specialized analog circuitry. That analog circuitry is usually restricted by the breadth of radio spectrum it can receive. An analog radio receiver that is built to receive FM radio stations, for example, cannot receive radio signals from anywhere else on the radio spectrum. Software-defined radio can access large portions of the spectrum.

The code for this example is available within the Open Horizon GitHub examples repository. To view this repository, see Open Horizon GitHub examples.

Within this code, there are three primary components: the sdr low-level service, the sdr2evtstreams high-level service, and the cloud back end.

The following diagram shows the architecture for this software-defined radio example:

Example architecture

The following content provides details about the structure and use of the application components.

sdr low-level service

The lowest level of the software stack for this service includes the sdr service implementation. This service accesses local software-defined radio hardware by using the popular librtlsdr library and the derived rtl_fm and rtl_power utilities along with the rtl_rpcd daemon. For more information about the librtlsdr library, see librtlsdr.

Review the source code for this sdr service implementation within the Open Horizon GitHub repository. For more information, see sdr implementation.

Exclusive access

The sdr service directly controls the software-defined radio hardware to tune the hardware to a particular frequency to receive transmitted data, or to measure the signal strength across a specified spectrum. A typical workflow for the service can be to tune to a particular frequency to receive data from the station at that frequency. Then, the service can process the collected data.

If multiple higher-level services are accessing the software-defined radio simultaneously, you can encounter issues with tuning the frequency. As a result, the software-defined radio service is configured for exclusive access within the Horizon service definition. The service definition includes a configuration setting for controlling whether the access is shareable. The software-defined radio service is set as "sharable": "exclusive".

In contrast, a single read-only gps service instance can safely be used by any number of client services simultaneously, so this service is configured with the setting "sharable": "singleton". Another option, multiple, is a possible setting for the sharable field for a service like the gps service. This setting causes the Horizon agent on the edge node to create a separate instance of the gps service for each client service that wants to use the service.
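
For illustration, the following sketch shows how the sharable setting might appear in a Horizon service definition for the sdr service. Only the sharable line is the point here; the other fields are abbreviated placeholders rather than the exact definition that ships with the example code:

{
  "url": "<service URL>",
  "version": "<service version>",
  "arch": "<hardware architecture>",
  "sharable": "exclusive",
  ...
}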

Service implementation build

Tip: See Conventions used in this document for more information about command syntax.

Access to the sdr service comes through a REST API. Client services can ask for a list of strong FM station frequencies with a /freqs request. The client services can also ask for a block of audio data with a /audio/<frequency> request. For more information about the REST API, see REST API.
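
For example, assuming that the service is reachable under the host name sdr on port 5427 (the same values that appear in the docker exec examples later in this document), another service on the shared network might call these endpoints with commands similar to the following sketch. The frequency value is only a placeholder:

curl -s sdr:5427/freqs
curl -s sdr:5427/audio/98.5 > clip.raw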

The sdr service is implemented as a Go language program. The source files for the program consist of main.go and bbcfake/bbcfake.go. The latter implements a mock radio audio stream by pulling audio data from the BBC website. For more information about this site, see BBC. When the sdr service cannot find the software-defined radio hardware, or sufficiently strong radio signals, the service offers a frequency of 0.0. The service also provides the mock audio stream in place of radio data when that frequency is requested. This replacement stream is useful for testing and development; when you write reusable service implementations, consider providing simulated hardware sources in the same way.

To deploy the full stack of software that is required for the sdr service, the librtlsdr-related software must be built and installed, and the Go language code must be compiled. The set of compilers and other utilities that are required to create these two sets of software is large. However, it is best to keep the size of service containers small, if possible, for faster download and install times and for lower resource consumption on the edge nodes. It is a best practice to build libraries, utilities, and service programs separately. Then, you can include only their binary files in the production container image and not the source code or build tools that are used to create the binary files. The sdr service example code illustrates this best practice with Docker multi-stage builds. For more information about these builds, see Docker multi-stage builds.

Docker multi-stage builds

You can use Docker multi-stage builds to reduce the size of your containers. Docker files for this example are provided in the sdr service edge code implementation for several hardware architectures to illustrate targeting multiple hardware platforms from a single component. The Docker files all have the following form:

FROM ... as rtl_build
RUN ... <various build steps for the librtlsdr utilities and daemon>

FROM ... as go_build
RUN ... <build steps for the Go program>

FROM ...
...
COPY --from=go_build /bin/rtlsdrd /bin/rtlsdrd
COPY --from=rtl_build /usr/local/bin/rtl_rpcd /bin/rtl_rpcd
COPY --from=rtl_build /usr/local/bin/rtl_fm /bin/rtl_fm
COPY --from=rtl_build /usr/local/bin/rtl_power /bin/rtl_power
COPY --from=rtl_build /usr/local/lib/librtlsdr.so.0 /usr/local/lib/librtlsdr.so.0

WORKDIR /
CMD ["/bin/rtlsdrd"]

There are three FROM statements in this Dockerfile. The first two of these statements are used to build the required binary files. The first builds the librtlsdr utilities and the daemon. The second statement builds the Go program, which includes the REST API implementation. The only items from these two build stages that are included in the final container image are the items that are explicitly copied by the COPY --from instructions after the third FROM statement.

The final container includes rtl_rpcd, rtl_fm, rtl_power, and librtlsdr.so.0 from the first FROM build section, and rtlsdrd from the second FROM build section. The last FROM section begins from a small Linux distribution, such as the Alpine Linux distribution. This last FROM section adds a few utilities and the binary files. The build section then sets the container file system root as the current directory and sets the default command to be the Go program. The resulting container is much smaller because it contains this set of binary files, but not the source code and tools that are used to build those binary files.
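
If you want to see the effect on your own system, you can check the size of the final image by listing it after a build. The image name in this sketch is a placeholder for the name that your Makefile assigns:

docker images <dockerhub id>/<image name>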

These containers are built by using the sdr service implementation Makefile.

Makefile

The example Makefile for the sdr service defines the following main targets:

The target target/build_$(ARCH) builds the Docker container for the specified architecture by using Dockerfile.$(ARCH). It uses the name and version number that are defined in the Makefile. To use this target, you must specify the ARCH value as part of the target name. The build target is usually more convenient to use.

The build target builds the Docker container for the hardware architecture that is specified by the ARCH environment variable. It is equivalent to the target target/build_$(ARCH). If the ARCH variable is not set in your environment, then the SYSTEM_ARCH value that is computed in the Makefile is used.

The publish target pushes the built container into Docker Hub. You must be logged in to Docker Hub with the specified DOCKERHUB_ID.

The publish-service target publishes the built container into the Horizon exchange. You need to have the usual environment variables set for this target, including HZN_EXCHANGE_USER_AUTH, PRIVATE_KEY_FILE, and PUBLIC_KEY_FILE.

A clean target is also provided to stop and remove running containers, remove the container image, and clean up the Docker network.

The code within the Makefile that is run for each of these targets is usually just a single command. You can use the -n flag with the make command to have it show the commands that it would run, without running any of them. For example, make -n build shows the commands that run when you run the make build command.
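
Putting these targets together, a typical build-and-publish sequence might look similar to the following sketch. The variable names come from the target descriptions above; every value is a placeholder that you replace with your own settings:

export ARCH=amd64                                   # target hardware architecture
export DOCKERHUB_ID=<your Docker Hub ID>            # used to name and push the container image
export HZN_EXCHANGE_USER_AUTH=<your exchange credentials>
export PRIVATE_KEY_FILE=<path to your private signing key>
export PUBLIC_KEY_FILE=<path to your public signing key>

make build              # build the container for $ARCH
make publish            # push the container image to Docker Hub
make publish-service    # publish the service to the Horizon exchange
make clean              # stop containers and clean up when you are done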

Implementation choices

The sdr service example is written in the Go programming language. Go is a compiled language that can build self-contained programs, which helps to keep the code efficient and the size of the Docker container small. This efficiency and size reduction can be an important factor because many edge platforms are resource-constrained and might have limited network connectivity. Use a compiled language with multi-stage builds and a small Linux distribution for optimal deployments to edge platforms. Go is a systems programming language, similar to the C language, and provides several features that are designed to facilitate parallelism. Most of the Horizon infrastructure is written in Go. For more information about the programming language, see Go.

In contrast, if you choose to implement in an interpreted language, your Docker container needs to include the corresponding language interpreter, which increases the container size. Similarly, if you choose to build on a large Linux distribution, you can increase the container size by hundreds of megabytes. Increasing the container size can cause longer deployment times for container transmission and verification, and possibly less efficient execution speeds. If these results are not a concern, and if your edge platforms can handle larger code distributions, then you can use interpreted language implementations or larger Linux distributions. Often, library availability or other external considerations make tools like Python, Node.js, or Java and distributions like Ubuntu or Red Hat the best choice.

sdr2evtstreams high-level service

The sdr2evtstreams high-level service implementation uses both the sdr service REST API and the gps service REST API over the local private virtual Docker network. For more information about the sdr2evtstreams service, see sdr2evtstreams. For more information about the gps service, see gps.

The code for the sdr2evtstreams service is structured similarly to the reusable sdr service. There are Makefile and Dockerfile.* files for various architectures. This service is also implemented in Go.

Using other services

To use the APIs that are provided by the sdr and gps services, the sdr2evtstreams service addresses those services by name on their shared Docker virtual network. The services that are deployed together are configured with well-known network names to facilitate communication between them. You can experiment with this communication by using docker exec to run commands or open a shell in the context of the sdr2evtstreams service. With this tool, you can use these REST APIs yourself. First, use the docker ps command to get the container ID of the sdr2evtstreams service from the CONTAINER ID column. Then, pass that ID to docker exec -it along with the command that you want to run. For example, you can ping the sdr service with a command similar to the following command:

docker exec -it <container id> ping sdr

Similarly, you might run curl commands to interact directly with the REST APIs of the lower-level services from the context of the higher-level container. For example:

docker exec -it <container id> curl -s sdr:5427/freqs
...
docker exec -it <container id> curl -s gps:31779/v1/gps/location
...

Note: The commands that you run in the container context must be installed in that container context. When container size is minimized, some useful tools like curl or the bash shell are not included. Usually, you can rely on some tools, such as the Bourne shell (/bin/sh), being included. As a debugging technique, consider running the command docker exec -it <container id> /bin/sh to open a new shell within the container's context. Then, install software as needed within that container instance. For example, on the Alpine images that are used in this example, software package management is handled by apk, so from the Bourne shell you can install curl with a command similar to the following: apk add curl. To facilitate debugging, consider permanently adding utilities like curl and jq to your container image builds, even though the utilities cause a small increase in image size. Even the size cost of including bash can be worth it for future convenience. If these tools are preinstalled in your image, you do not need to add and install them whenever you need to access the container, such as for debugging.
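
Putting these debugging steps together, a session might look similar to the following sketch. The container ID is a placeholder that you copy from the docker ps output:

docker ps                                  # find the CONTAINER ID of the sdr2evtstreams container
docker exec -it <container id> /bin/sh     # open a Bourne shell inside that container
# inside the container (Alpine-based image in this example):
apk add curl jq                            # install tools that were left out to keep the image small
curl -s sdr:5427/freqs                     # call the sdr service REST API over the shared network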

Although REST APIs are a common way for services to communicate, there are other ways that the services can interact over IP on their shared virtual network. For example, the services can use direct TCP/IP streams, UDP datagrams, or higher-level messaging tools like MQTT or Kafka. These communication techniques can also be used to communicate with entities on other hosts, such as hosts in the cloud.

Off-host communication

This example application is structured in small pieces as a best practice. Having multiple small components in separate containers enables independent updates, one component at a time. In general, consider designing your individual services with one of two patterns:

  1. A local service pattern, similar to the example sdr and gps services. Local services typically access hardware like sensors or actuators, or access parts of the local host operating system. The local services provide an API that other services can use on their shared local private virtual network. As a best practice, do not make this type of service accessible remotely.

  2. A communicating service pattern, similar to the example sdr2evtstreams service. Typically, this type of service is only a consumer of local services' APIs. It interacts with these services, performs local analytics, and communicates off the host with remote machines, typically cloud hosts. Those remote machines can then consume data from the communicating service. The remote machines can also send instructions to that communicating service to have it act on the edge node. For example, the sdr2evtstreams service uses Kafka to communicate with the IBM Event Streams service. As a best practice, this type of service does not have direct privileged access to the local host hardware or the operating system. Instead, the communicating service accesses hardware or the operating system through a local service when it needs access.

The cloud side

For details about the cloud side of this example application, such as the detailed coverage of the IBM Cloud services that are used in this example, review the IBM Cloud documentation. For more information, see IBM Cloud.

The following diagram shows the software-defined radio service code architecture:

Example architecture

The edge code is shown in the diagram. The previous descriptions refer to the sdr service code, and the sdr2evtstreams service code. The sdr2evtstreams service receives data from the sdr service, and completes some local inference on the data to select the best stations for speech. Then, the sdr2evtstreams service uses Kafka to publish audio clips to the cloud by using IBM Event Streams.

The remaining software components are an action and a trigger component that are deployed by using IBM Functions, two IBM Watson services, an IBM database, and a web-based presentation system.

IBM Functions

IBM Functions orchestrates the cloud side of the example software-defined radio application. IBM Functions is based on OpenWhisk and enables serverless computing. Serverless computing means that code components can be deployed without any supporting infrastructure, such as an operating system or programming language runtime. By using IBM Functions, you can concentrate on your own code and leave the scaling, security, and ongoing maintenance of everything else to IBM. No hardware needs to be provisioned, and no VMs or containers are required.

Serverless code components are configured to trigger (run) in response to events. In this example, the triggering event is the arrival of a message in IBM Event Streams whenever an edge node publishes an audio clip. The example actions are triggered to ingest the data and act on it. They use the IBM Watson Speech to Text (STT) service to convert the incoming audio data into text. Then, that text is sent to the IBM Watson Natural Language Understanding (NLU) service to analyze the sentiment that is expressed toward each of the nouns it contains.

The IBM Functions action code for the example software-defined radio application is written in JavaScript. For more information, see IBM Functions action code.

IBM database

The IBM Functions action code concludes by storing the computed sentiment results into the IBM database. The web server and client software then present this data from the database to users' web browsers.

Web interface

The web user interface for the software-defined radio application allows users to browse the sentiment data, which is retrieved from the IBM database. This user interface also renders a map that shows the edge nodes that provided the data. The map is created with data from the IBM-provided gps service, which is used by the edge node code for the sdr2evtstreams service. The gps service can either interface with GPS hardware or receive location information from the device owner. In the absence of both GPS hardware and an owner-provided location, the gps service can estimate the edge node location by using the edge node IP address to find the geographic location. By using this service, the sdr2evtstreams service can provide location data to the cloud when it sends audio clips.

The software-defined radio application web UI code is written in Node.js, with React and GraphQL. For more information, see software-defined radio application web UI code.

Deploying the cloud side

The IBM Functions, IBM databases, and web UI code can be deployed in the IBM Cloud with a single command after you create a paid account in the IBM Cloud.

The deployment code is located in the examples/cloud/sdr/deploy/ibm directory of the repository. For more information, see deployment repository. This code consists of a README.md file with detailed instructions and a deploy.sh script that does the work. The directory also contains a Makefile as another interface into the deploy.sh script. Review the instructions to learn more about deploying your own cloud back end for the software-defined radio example.

Important: This deployment process creates paid services that incur charges on your IBM Cloud account.

Verifying cloud side components

The cloud-side components of the software-defined radio example can be independently checked and verified. You can use either the ibmcloud CLI or, in some cases, the IBM Cloud web interface. The following sections show you the steps to verify the cloud side of the software-defined radio example code after you deploy it by using the previous instructions.

The deployment script creates the service instances, the IBM Functions components, the database, and the web UI application on the cloud side. You can verify each of these components independently by completing the processes in the following sections.

IBM Cloud CLI tool

Begin by installing the IBM Cloud CLI tool. Instructions are available for installing the IBM Cloud CLI on various host hardware architectures and operating systems. For more information, see Download CLI.

Logging in to the IBM Cloud

Use the IBM Cloud CLI login command:

ibmcloud login

Online documentation is available within the ibmcloud command. Enter any partial command, and add --help to see a description of that command and a list of the additional options that are available.

ibmcloud service --help

Organization configuration

When you are logged in to IBM Cloud, configure the IBM Cloud CLI to work with your organization ID and with the space name that you want to use. The values must be the same organization and space that you used to deploy the software-defined radio back-end implementation:

ibmcloud target -o <org ID> -s <space name>

For example:

ibmcloud target -o someone@somewhere.com -s somespace

If you do not know your organization ID and space name, you can view the values within IBM Cloud. Use a web browser to log in to IBM Cloud with the following link: IBM Cloud. Then, find your organization and space. If you do not have an organization and space, you need to create them before you can continue.

Service instances

The deployment script includes variable definitions at the beginning of the file. These variables contain the names of the service instances that are created and the names of their corresponding service keys (credentials). For example, the following code snippet lists variable definitions:

# event streams/message hub configuration
MH_INSTANCE="${SERVICE_PREFIX}-es"
MH_INSTANCE_CREDS="${MH_INSTANCE}-credentials"
MH_SDR_TOPIC="sdr-audio"
MH_SDR_TOPIC_PARTIONS=2
...

# watson speech-to-text service
STT_INSTANCE="${SERVICE_PREFIX}-speech-to-text"
STT_INSTANCE_CREDS="${STT_INSTANCE}-credentials"
...

# watson natural language understanding
NLU_INSTANCE="${SERVICE_PREFIX}-natural-language-understanding"
NLU_INSTANCE_CREDS="${NLU_INSTANCE}-credentials"
...

# db
DB_INSTANCE="${SERVICE_PREFIX}-compose-for-postgresql"
DB_INSTANCE_CREDS="${DB_INSTANCE}-credentials"
...

# functions
FUNC_PACKAGE="${SERVICE_PREFIX}-message-hub-evnts"
FUNC_MH_FEED="Bluemix_${MH_INSTANCE}_${MH_INSTANCE_CREDS}/messageHubFeed"
FUNC_TRIGGER="${SERVICE_PREFIX}-message-received-trigger"
FUNC_ACTION="${SERVICE_PREFIX}-process-message"
FUNC_ACTION_CODE="../../data-processing/ibm-functions/actions/msgreceive.js"
FUNC_RULE="${SERVICE_PREFIX}-message-received-rule"

# ui
UI_SRC_PATH="../../ui/sdr-app"
UI_APP_NAME="${SERVICE_PREFIX}-sdr-poc-app"
UI_APP_ID_INSTANCE="${SERVICE_PREFIX}-app-id"
...

You can use these names with the ibmcloud command to verify your service instances. To begin, list your service instances by running the following command:

ibmcloud service list

For each listed service, you can see the name that you gave the service, the service type, and the payment plan that is attached to the service. You can also see any applications that are bound to the service endpoint, and the last operation that was completed on the service instance.

To examine any of the service instances that are named in that list, complete the following steps:

  1. Retrieve the service key with the following command, where <service_instance_name> is the name of the service instance:

     ibmcloud service keys "<service_instance_name>"
    

    For example, for a service that is named sdr-poc-es, which is the default name that the deployment script uses for the IBM Event Streams instance, you can use this command:

     ibmcloud service keys sdr-poc-es
    
  2. With the service key and name, use the key-show command to show the actual credentials. For example, for a key that is named sdr-poc-es-credentials, you can use this command:

     ibmcloud service key-show sdr-poc-es sdr-poc-es-credentials
    

    The output shows the JSON credentials that were created for the IBM Event Streams instance. This JSON includes the API key, admin URL, broker list, and more. You can use the admin URL to create and verify Kafka topics and more.

IBM Functions

In the ibmcloud service list output, you can also view the IBM Watson service instances that the IBM Functions action uses. This list can resemble the following output:

name                                       service                          plan                    bound apps              last operation
...
sdr-poc-natural-language-understanding     natural-language-understanding   standard                                        create succeeded
sdr-poc-speech-to-text                     speech_to_text                   standard                                        update succeeded

As with the previous example service, you can use the ibmcloud CLI to extract the credentials for these services. Use the service name and key name to help you find the credential information. However, for IBM Functions there are also some convenient specialized commands that you can use. For example, this command lists the IBM Functions packages, actions, triggers, and rules that you defined:

ibmcloud -q fn list

You can also use the following individual commands:

ibmcloud -q fn package list
ibmcloud -q fn action list
ibmcloud -q fn trigger list
ibmcloud -q fn rule list

The names that are returned from those queries can also be used with get commands to get detailed information about the particular named item or to interact with it. For example, you can use the following commands to retrieve information:

ibmcloud -q fn action get <action-name>
ibmcloud -q fn trigger get <trigger-name>

The action get command, for example, provides detailed information about the parameters that are expected by the action.

For debugging purposes, you can also use the CLI tool to manually invoke an action that you defined. To use the CLI tool to invoke an action, you must pass all of the required parameter values. For example, the following command shows how to invoke an action:

ibmcloud -q fn action invoke -r <action-name> --param-file <json-file>

Similarly, you can manually fire a trigger by passing the required parameter values in a command:

ibmcloud -q fn trigger fire <trigger-name> --param-file <json-file>

With the IBM Cloud web user interface, you can monitor your IBM Functions. Log in to the IBM Cloud web page with your account credentials and go to the IBM Functions Monitor page.

You can also view the logs from your IBM Functions actions and triggers in the monitor. To access the monitor, go to your IBM Functions dashboard and select Monitor in the sidebar.

IBM databases

The example code also creates a Postgres database in IBM databases. When you have database access, you can use this access to verify that the IBM Functions action that you previously checked is populating the database as expected. To use the database, complete the following steps:

  1. Log in to your IBM Cloud account to work with your database.

  2. Obtain the name of your database service. You can obtain the service name by using the same ibmcloud service list command that you used previously to obtain information for IBM Functions.

    When you run the ibmcloud service list command, look for the service that has the type compose-for-postgresql. Then, run the following command to show the detailed information for that service:

     ibmcloud service show <service name>
    

    The output from that command gives you the service dashboard URL where you can explore the IBM Cloud web user interface for your database.

  3. Next, retrieve the credentials for the database by running the following command:

     ibmcloud service keys <service name>
    
  4. Run the key-show command with the service name and key to show the access details:

     ibmcloud service key-show <service name> <service-key>
    

    The output from the key-show command contains the URI you need to connect to the database from the CLI tool. Near the end of the JSON output, find the field that is named uri. You need the value of this field for the psql commands that you need to run.

  5. Install the PostgreSQL client program, psql. If the program is not already installed, use a command similar to the following command to install it:

     apt-get install -y postgresql-client
    

With the psql command installed, you can run PostgreSQL commands on your database from your CLI tool by using the URI from the uri field. To build the command, edit the URI to remove compose from the end, and replace that string with sdr?sslmode=require. Then, create your Postgres query, and add it with the -c flag. The resulting command has the following form:

psql "<postgres URI, with compose removed>/sdr?sslmode=require" -c "<postgresql query>"

For example, if you have a URI from key-show of postgres://admin:VPMUSJCJHTTQXBBI@sl-us-south-1-portal.44.dblayer.com:15439/compose, and a query of select * from edgenodes, you can use a command similar to the following command:

psql "postgres://admin:VPMUSJCJHTTQXBBI@sl-us-south-1-portal.44.dblayer.com:15439/sdr?sslmode=require" -c "select * from edgenodes"

The preceding example command lists all of the edge nodes that sent data to your software-defined radio Kafka topic in IBM Event Streams. From that ingested data, your IBM Functions trigger runs your IBM Functions action. This action uses the IBM Watson STT and NLU services, and then pushes the results into the Postgres database. You can examine the database schema and run other queries to get any additional information that you want from the database.
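
For example, to explore the schema, you can list the tables with the psql \dt meta-command by using the same edited URI as in the previous command:

psql "postgres://admin:VPMUSJCJHTTQXBBI@sl-us-south-1-portal.44.dblayer.com:15439/sdr?sslmode=require" -c "\dt"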

This process is also how the web UI portion of this example application works. The web UI queries the database and renders the retrieved data in your browser to show the edge nodes on a map. The UI also provides tables of the sentiment analysis data.

Cloud Foundry application

You can verify that your web UI is working by going to your web UI URL in a web browser. To obtain the URL, complete the following steps:

  1. Log in to the IBM Cloud web UI and access your IBM Cloud application resources list. Open the following URL in a web browser to access this list: IBM Cloud application resources list.

  2. Scroll to the Cloud Foundry Applications section to see your application. Select your application to view the status page.

On the status page, there is a pane where you can access the web server logs. From this pane, you can also select Logs to view the logs in Kibana and select Monitoring to monitor the availability of your application. Click the link that is labeled Visit App URL. This link opens the web UI for your application.

Summary

By following this example, you learned about designing, building, testing, deploying, and monitoring a complex application with portions that are running on edge nodes and in the cloud. This example uses the IBM Edge Computing Manager for Devices and a small subset of the many services that are offered by the IBM Cloud.

What to do next