Installing the agent
Note: IBM Edge Application Manager (IEAM) agent installation requires cluster admin access on the edge cluster. Additionally, the jq command-line JSON processor must be installed prior to running the agent install script.
Begin by installing the IEAM agent on one of these types of Kubernetes edge clusters:
- Installing the agent on OCP Kubernetes edge cluster
- Installing the agent on K3s and MicroK8s edge clusters
- Installing the agent on other Kubernetes edge clusters
Then, deploy an edge service to your edge cluster: see Deploying services to your edge cluster.
If you need to remove the agent, see Removing agent from edge cluster.
Installing the agent on OCP Kubernetes edge cluster
This content describes how to install the IEAM agent on your OCP edge cluster. Follow these steps on a host that has admin access to your edge cluster:
1. Log in to your edge cluster as admin:

       oc login https://<api_endpoint_host>:<port> -u <admin_user> -p <admin_password> --insecure-skip-tls-verify=true

   Alternatively, you can log in with a token. For example, for IBM Cloud Managed OpenShift Platform (ROKS), select Copy Login Command from the drop-down menu under the user name on the OpenShift Console. The login command has the following format:

       oc login --token=<token> --server=https://<api_endpoint_host>:<port>
2. If you have not completed the steps in Creating your API key, do that now. This process creates an API key, locates some files, and gathers environment variable values that are needed when you set up edge nodes. Set the same environment variables for this edge cluster, and set HZN_NODE_ID to the node name of the edge cluster:

       export HZN_EXCHANGE_USER_AUTH=iamapikey:<api-key>
       export HZN_ORG_ID=<your-exchange-organization>
       export HZN_FSS_CSSURL=https://<management-hub-ingress>/edge-css/
       export HZN_NODE_ID=<edge-cluster-node-name>
       export HZN_EXCHANGE_URL=https://<management-hub-ingress>/edge-exchange/v1
       export HZN_AGBOT_URL=<edge-agreement-bot-url>
       export HZN_FDO_SVC_URL=<edge-fdo-service-url>
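Before continuing, it can help to confirm that every required variable is actually set, since a missing one only surfaces partway through the install. The following sketch is a hypothetical helper (`check_hzn_env` is not part of IEAM); it only checks the variables listed above.

```shell
# Hypothetical helper (not part of IEAM): report any required Horizon
# variables that are still unset before agent-install.sh is run.
check_hzn_env() {
    local missing=0 var
    for var in HZN_EXCHANGE_USER_AUTH HZN_ORG_ID HZN_FSS_CSSURL \
               HZN_NODE_ID HZN_EXCHANGE_URL HZN_AGBOT_URL HZN_FDO_SVC_URL; do
        # ${!var} is bash indirect expansion: the value of the variable
        # whose name is stored in $var
        if [ -z "${!var}" ]; then
            echo "missing: $var"
            missing=1
        fi
    done
    return $missing
}

check_hzn_env && echo "all required variables are set" || echo "set the variables listed above, then retry"
```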
3. Set the agent namespace variable to its default value (or to whatever namespace you want to explicitly install the agent into):

       export AGENT_NAMESPACE=openhorizon-agent
4. Configure the storage class that you want the agent to use.

   - To configure an OpenShift Container Platform storage class, complete the following steps:

     1. View the storage classes that are available for your cluster:

            oc get storageclass

     2. To configure your storage class, set the value of the EDGE_CLUSTER_STORAGE_CLASS environment variable. For example:

            export EDGE_CLUSTER_STORAGE_CLASS=rook-ceph-cephfs-internal

   - To configure a storage class if you are using a Kubernetes environment from a public cloud provider, complete the following steps:

     1. View the storage classes that are available for your cluster:

            kubectl get storageclasses

     2. To configure your storage class, use the following storage class examples as a guide:

        - For Red Hat OpenShift Kubernetes Service (ROKS), you might use ibmc-vpc-block-10iops-tier or ibmc-vpc-file-dp2. To use the ibmc-vpc-file-dp2 storage class, ensure that the File Storage for VPC driver is installed. See Enabling the IBM Cloud File Storage for VPC add-on.
        - For Red Hat OpenShift Service on AWS (ROSA), you might use gp2.
        - For Azure Red Hat OpenShift (ARO), you might use managed-csi.
        - For Red Hat OpenShift Container Platform, you might use rook-cephfs.
        - For all other Kubernetes environments, the available storage classes depend on how Kubernetes is configured. For more information, see Configuring a storage class.
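A typo in the exported storage class name only surfaces later, when the agent's PVC stays Pending (see Configuring a storage class). The sketch below is a hypothetical guard, not an IEAM command; it takes the output of `kubectl get storageclass -o name` as its second argument, so the check itself needs no cluster access.

```shell
# Hypothetical guard (not part of IEAM): export EDGE_CLUSTER_STORAGE_CLASS
# only if the class appears in the cluster's storage class listing.
choose_storage_class() {
    local wanted="$1"
    local available="$2"   # expected: output of `kubectl get storageclass -o name`
    if echo "$available" | grep -qx "storageclass.storage.k8s.io/$wanted"; then
        export EDGE_CLUSTER_STORAGE_CLASS="$wanted"
        echo "using storage class: $wanted"
    else
        echo "storage class not found: $wanted" >&2
        return 1
    fi
}

# Example with a canned listing; on a live cluster pass
# "$(kubectl get storageclass -o name)" as the second argument instead:
choose_storage_class rook-cephfs "storageclass.storage.k8s.io/rook-cephfs"
```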
5. Specify the type of image registry that you want to use: a remote image registry or the edge cluster local registry. The image registry is the location of the agent image and agent cronjob image. For more information about how to choose the image registry, see the following sections.
6. To download the agent-install.sh script from the Cloud Sync Service (CSS) and give the script permission to execute, run the following commands:

       curl -u "$HZN_ORG_ID/$HZN_EXCHANGE_USER_AUTH" -k -o agent-install.sh $HZN_FSS_CSSURL/api/v1/objects/IBM/agent_files/agent-install.sh/data
       chmod +x agent-install.sh
7. Run agent-install.sh to get the necessary files from CSS, install and configure the Horizon agent, and register your edge cluster with policy:

       ./agent-install.sh -D cluster -i 'css:'

   Notes:

   - To see all of the available flags, run: ./agent-install.sh -h
   - If agent-install.sh fails because of an error, correct the error, then run agent-install.sh again. If that does not resolve the error, uninstall the agent by running agent-uninstall.sh, then run agent-install.sh again. For more information about agent-uninstall.sh, see Removing agent from edge cluster.
8. Set the current project to the agent namespace, then verify that the agent pod is running:

       oc project $AGENT_NAMESPACE
       oc get pods
9. (Optional) The agent is now installed on your edge cluster. To learn about the Kubernetes resources that are associated with the agent, run the following commands:

       oc get namespace $AGENT_NAMESPACE
       oc project $AGENT_NAMESPACE   # ensure this is the current namespace/project
       oc get deployment -o wide
       oc get deployment agent -o yaml   # get details of the deployment
       oc get configmap openhorizon-agent-config -o yaml
       oc get secret openhorizon-agent-secrets -o yaml
       oc get pvc openhorizon-agent-pvc -o yaml   # persistent volume claim
Often, when an edge cluster is registered with policy but does not have a user-specified node policy, none of the deployment policies deploy edge services to the edge cluster. That is the case with the Horizon examples. To set the node policy so that an edge service is deployed to the edge cluster, complete the steps in Deploying services to your edge cluster.
Setting up an edge cluster local image registry for OCP
Note: Skip this section if you are using a remote image registry.
1. Verify that a default route for the OpenShift image registry is created and that it is accessible from outside of the cluster:

       oc get route default-route -n openshift-image-registry --template='{{ .spec.host }}'

   If the command response indicates that the default-route is not found, you need to expose it (see Exposing the registry for details):

       oc patch configs.imageregistry.operator.openshift.io/cluster --patch '{"spec":{"defaultRoute":true}}' --type=merge
2. Retrieve the repository route name that you need to use:

       export OCP_IMAGE_REGISTRY=$(oc get route default-route -n openshift-image-registry --template='{{ .spec.host }}')
3. Create a new project to store your images:

       export OCP_PROJECT=$AGENT_NAMESPACE
       oc new-project $OCP_PROJECT
4. Create a service account with a name of your choosing:

       export OCP_USER=<service-account-name>
       oc create serviceaccount $OCP_USER
5. Add a role to your service account for the current project:

       oc policy add-role-to-user edit system:serviceaccount:$OCP_PROJECT:$OCP_USER
6. Set your service account token in the following environment variable:

   1. Determine whether you can extract the token with this command:

          oc serviceaccounts get-token $OCP_USER

   2. If the previous command returns a token, run:

          export OCP_TOKEN=$(oc serviceaccounts get-token $OCP_USER)

   3. If the command from step 1 did not return a token, run:

          export OCP_TOKEN=$(oc serviceaccounts new-token $OCP_USER)
7. Get the OpenShift certificate and allow docker to trust it:

       echo | openssl s_client -connect $OCP_IMAGE_REGISTRY:443 -showcerts | sed -n "/-----BEGIN CERTIFICATE-----/,/-----END CERTIFICATE-----/p" > ca.crt

   On Linux:

       mkdir -p /etc/docker/certs.d/$OCP_IMAGE_REGISTRY
       cp ca.crt /etc/docker/certs.d/$OCP_IMAGE_REGISTRY
       systemctl restart docker.service

   On macOS:

       mkdir -p ~/.docker/certs.d/$OCP_IMAGE_REGISTRY
       cp ca.crt ~/.docker/certs.d/$OCP_IMAGE_REGISTRY

   On macOS, restart Docker by clicking the Docker Desktop icon in the menu bar and selecting Restart from the drop-down menu.
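The sed filter in the step above keeps only the certificate blocks from the `openssl s_client` output and discards the connection chatter around them. The effect can be seen on canned input (the sample lines and certificate body below are invented for illustration):

```shell
# Demonstration of the sed range filter on canned `openssl s_client`-style
# output (the surrounding lines and certificate body are invented):
sample='CONNECTED(00000003)
depth=0 CN = registry.example.com
-----BEGIN CERTIFICATE-----
MIIB...sample-body...
-----END CERTIFICATE-----
---
DONE'

echo "$sample" | sed -n "/-----BEGIN CERTIFICATE-----/,/-----END CERTIFICATE-----/p" > ca.crt

# ca.crt now holds only the lines from BEGIN to END CERTIFICATE
cat ca.crt
```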
8. Log in to the OCP Docker host:

       echo "$OCP_TOKEN" | docker login -u $OCP_USER --password-stdin $OCP_IMAGE_REGISTRY
9. Configure additional trust stores for image registry access:

       oc create configmap registry-config --from-file=$OCP_IMAGE_REGISTRY=ca.crt -n openshift-config
10. Edit the new registry-config:

        oc edit image.config.openshift.io cluster

11. Update the spec: section:

        spec:
          additionalTrustedCA:
            name: registry-config
12. The agent-install.sh script stores the IEAM agent in the edge cluster container registry. Set the registry user, password, and the full image path without the tag:

        export EDGE_CLUSTER_REGISTRY_USERNAME=$OCP_USER
        export EDGE_CLUSTER_REGISTRY_TOKEN="$OCP_TOKEN"
        export IMAGE_ON_EDGE_CLUSTER_REGISTRY=$OCP_IMAGE_REGISTRY/$OCP_PROJECT/amd64_anax_k8s

    or, on s390x:

        export IMAGE_ON_EDGE_CLUSTER_REGISTRY=$OCP_IMAGE_REGISTRY/$OCP_PROJECT/s390x_anax_k8s
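Rather than choosing between the two exports by hand, the image name can be derived from the host architecture. The helper below is a hypothetical sketch (not part of IEAM) and covers only the two architectures named above:

```shell
# Hypothetical helper (not part of IEAM): choose the agent image name that
# matches a given architecture, defaulting to this host's `uname -m`.
arch_image_name() {
    case "${1:-$(uname -m)}" in
        x86_64) echo amd64_anax_k8s ;;
        s390x)  echo s390x_anax_k8s ;;
        *)      echo "unsupported architecture: ${1:-$(uname -m)}" >&2; return 1 ;;
    esac
}

# Build the full image path (OCP_IMAGE_REGISTRY and OCP_PROJECT come from
# the earlier steps; x86_64 is passed explicitly here for illustration):
export IMAGE_ON_EDGE_CLUSTER_REGISTRY="$OCP_IMAGE_REGISTRY/$OCP_PROJECT/$(arch_image_name x86_64)"
```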
Note: The IEAM agent image is stored in the local edge cluster registry because the edge cluster Kubernetes needs ongoing access to it, in case it needs to restart the agent or move it to another pod.
Installing the agent on K3s and MicroK8s edge clusters
This content describes how to install the IEAM agent on K3s or MicroK8s, which are lightweight Kubernetes distributions:
1. Log in to your edge cluster as root.

2. If you have not completed the steps in Creating your API key, do that now. This process creates an API key, locates some files, and gathers environment variable values that are needed when setting up edge nodes. Set the same environment variables on this edge cluster:

       export HZN_EXCHANGE_USER_AUTH=iamapikey:<api-key>
       export HZN_ORG_ID=<your-exchange-organization>
       export HZN_FSS_CSSURL=https://<ieam-management-hub-ingress>/edge-css/
3. Copy the agent-install.sh script to your new edge cluster:

       curl -u "$HZN_ORG_ID/$HZN_EXCHANGE_USER_AUTH" -k -o agent-install.sh $HZN_FSS_CSSURL/api/v1/objects/IBM/agent_files/agent-install.sh/data
       chmod +x agent-install.sh
4. Run a command like this to set the agent namespace variable to its default value:

       export AGENT_NAMESPACE=openhorizon-agent
5. Note: Skip this step if you are using a remote image registry. If USE_EDGE_CLUSTER_REGISTRY is set to true, the agent-install.sh script stores the IEAM agent in the edge cluster image registry. Set the full image path to use, without the tag. For example:

   - On K3s:

         REGISTRY_ENDPOINT=$(kubectl get service docker-registry-service | grep docker-registry-service | awk '{print $3;}'):5000
         export IMAGE_ON_EDGE_CLUSTER_REGISTRY=$REGISTRY_ENDPOINT/<agent-namespace>/amd64_anax_k8s

     or, on s390x:

         export IMAGE_ON_EDGE_CLUSTER_REGISTRY=$REGISTRY_ENDPOINT/<agent-namespace>/s390x_anax_k8s

   - On MicroK8s:

         export IMAGE_ON_EDGE_CLUSTER_REGISTRY=localhost:32000/<agent-namespace>/amd64_anax_k8s

     or, on s390x:

         export IMAGE_ON_EDGE_CLUSTER_REGISTRY=localhost:32000/<agent-namespace>/s390x_anax_k8s

   Note: The IEAM agent image is stored in the local edge cluster registry because the edge cluster Kubernetes needs ongoing access to it, in case it needs to restart the agent or move it to another pod.
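The REGISTRY_ENDPOINT line for K3s selects the third column of the `kubectl get service` output, which is the service's cluster IP. The extraction can be seen on a canned line (the IP, age, and other values below are invented):

```shell
# The awk extraction from the K3s step, demonstrated on a canned
# `kubectl get service` line (columns: NAME TYPE CLUSTER-IP EXTERNAL-IP
# PORT(S) AGE; the values are invented):
sample='docker-registry-service ClusterIP 10.43.120.7 <none> 5000/TCP 3d'

REGISTRY_ENDPOINT=$(echo "$sample" | grep docker-registry-service | awk '{print $3;}'):5000
echo "$REGISTRY_ENDPOINT"   # → 10.43.120.7:5000
```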
6. Instruct agent-install.sh to use the default storage class:

   - On K3s:

         export EDGE_CLUSTER_STORAGE_CLASS=local-path

   - On MicroK8s:

         export EDGE_CLUSTER_STORAGE_CLASS=microk8s-hostpath
7. Run agent-install.sh to get the necessary files from CSS (Cloud Sync Service), install and configure the Horizon agent, and register your edge cluster with policy:

       ./agent-install.sh -D cluster -i 'css:'

   Notes:

   - To see all of the available flags, run: ./agent-install.sh -h
   - If an error occurs that causes agent-install.sh to not complete successfully, correct the error that is displayed, and run agent-install.sh again. If that does not work, run agent-uninstall.sh (see Removing agent from edge cluster) before running agent-install.sh again.
   - If you are not logged in as root or elevated to root, begin the agent-install.sh command with sudo -s -E.
8. Verify that the agent pod is running:

       kubectl get namespaces
       kubectl -n openhorizon-agent get pods
Usually, when an edge cluster is registered for policy, but does not have any user-specified node policy, none of the deployment policies deploy edge services to it. This is expected. Proceed to Deploying services to your edge cluster to set node policy so that an edge service will be deployed to this edge cluster.
Installing the agent on other Kubernetes edge clusters
To install the IEAM agent on other Kubernetes environments, you must first set the appropriate environment variables. Complete the following steps:
1. Log in to your edge cluster. If the edge cluster is a Kubernetes cluster that is managed by a cloud provider, authenticate with your cloud provider.

2. Create an API key to access your Kubernetes edge cluster. For more information, see Creating your API key. This procedure creates an API key, locates some files, and gathers environment variable values that you need to configure edge nodes.

3. Set the following environment variables on the edge cluster. Use the environment variable values from the previous step.

       export HZN_EXCHANGE_USER_AUTH=iamapikey:<api-key>
       export HZN_ORG_ID=<your-exchange-organization>
       export HZN_FSS_CSSURL=https://<ieam-management-hub-ingress>/edge-css/
4. To copy the agent-install.sh script to your edge cluster, run commands like these:

       curl -u "$HZN_ORG_ID/$HZN_EXCHANGE_USER_AUTH" -k -o agent-install.sh $HZN_FSS_CSSURL/api/v1/objects/IBM/agent_files/agent-install.sh/data
       chmod +x agent-install.sh
5. To set the agent namespace variable on your edge cluster, run a command like this:

       export AGENT_NAMESPACE=<openhorizon_namespace>

   <openhorizon_namespace> is the namespace where you want to install the agent.
6. Set the environment variables for a remote image registry. For more information, see Remote image registry.
7. Configure a storage class for the agent. If you are using a Kubernetes environment from a public cloud provider, complete the following steps:

   1. View the storage classes that are available for your cluster:

          kubectl get storageclasses

   2. To configure your storage class, use the following storage class examples as a guide:

      - For IBM Cloud Kubernetes Service, you might use ibmc-vpc-block-10iops-tier or ibmc-vpc-file-dp2. To use the ibmc-vpc-file-dp2 storage class, ensure that the File Storage for VPC driver is installed. See Enabling the IBM Cloud File Storage for VPC add-on.
      - For AWS Elastic Kubernetes Service (EKS), you might use ebs-sc. To use the ebs-sc storage class, ensure that the Amazon Elastic Block Store (EBS) driver is installed. See Amazon EBS CSI driver.
      - For Google Kubernetes Engine (GKE), you might use standard-rwo.
      - For all other Kubernetes environments, the available storage classes depend on how Kubernetes is configured. For more information, see Configuring a storage class.
8. Run the agent-install.sh script to get files from the Cloud Sync Service (CSS). The agent-install.sh script installs and configures the Horizon agent, and registers your edge cluster with policy:

       ./agent-install.sh -D cluster -i 'css:'

   To see all the available flags, run ./agent-install.sh -h.

   If the agent-install.sh script fails due to an error, correct the error and run the script again. If the issue persists, uninstall the agent by running agent-uninstall.sh, then run agent-install.sh again. For more information, see Removing agent from edge cluster.
9. To verify that the agent pod is running, run commands like these:

       kubectl get namespaces
       kubectl -n openhorizon-agent get pods
Deploying services to your edge cluster
Setting node policy on this edge cluster enables deployment policies to deploy edge services to it. This content shows an example of doing that.
1. Set some aliases to make it more convenient to run the hzn command. (The hzn command is inside the agent container, but these aliases make it possible to run hzn from this host.)

       cat << 'END_ALIASES' >> ~/.bash_aliases
       alias getagentpod='kubectl -n openhorizon-agent get pods --selector=app=agent -o jsonpath={.items[].metadata.name}'
       alias hzn='kubectl -n openhorizon-agent exec -i $(getagentpod) -- hzn'
       END_ALIASES
       source ~/.bash_aliases
2. Verify that your edge node is configured (registered with the IEAM management hub):

       hzn node list
3. To test your edge cluster agent, set your node policy with a property that deploys the example helloworld operator and service to this edge node:

       cat << 'EOF' > operator-example-node.policy.json
       {
         "properties": [
           { "name": "openhorizon.example", "value": "nginx-operator" }
         ]
       }
       EOF
       cat operator-example-node.policy.json | hzn policy update -f-
       hzn policy list

   Note: Because the real hzn command is running inside the agent container, for any hzn commands that require an input file, you need to pipe the file into the command so that its content is transferred into the container.
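The note above matters because stdin is the only channel `kubectl exec -i` provides into the container. A minimal sketch of the pattern, with a plain `cat > received.json` standing in for `kubectl exec -i <agent-pod> -- hzn policy update -f-` so that it runs anywhere:

```shell
# The pipe pattern from the note, with `cat > received.json` standing in
# for the containerized `hzn policy update -f-` command:
cat << 'EOF' > operator-example-node.policy.json
{ "properties": [ { "name": "openhorizon.example", "value": "nginx-operator" } ] }
EOF

# stdin carries the file content into the (stand-in) command; `-f-` is what
# tells the real hzn command to read its input file from stdin
cat operator-example-node.policy.json | cat > received.json

cmp -s operator-example-node.policy.json received.json && echo "content arrived unchanged"
```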
4. After a minute, check for an agreement and the running edge operator and service containers:

       hzn agreement list
       kubectl -n openhorizon-agent get pods
5. Using the pod IDs from the previous command, view the logs of the edge operator and service:

       kubectl -n openhorizon-agent logs -f <operator-pod-id>   # control-c to get out
       kubectl -n openhorizon-agent logs -f <service-pod-id>   # control-c to get out
6. You can also view the environment variables that the agent passes to the edge service:

       kubectl -n openhorizon-agent exec -i <service-pod-id> -- env | grep HZN_
Changing what services are deployed to your edge cluster
1. To change which services are deployed to your edge cluster, change the node policy:

       cat <new-node-policy>.json | hzn policy update -f-
       hzn policy list

   After a minute or two, the new services are deployed to this edge cluster.

   Note: On some VMs with MicroK8s, service pods that are being stopped (replaced) might stall in the Terminating state. If that happens, run:

       kubectl delete pod <pod-id> -n openhorizon-agent --force --grace-period=0
       pkill -fe <service-process>

2. If you want to use a pattern, instead of policy, to run services on your edge cluster:

       hzn unregister -f
       hzn register -n $HZN_EXCHANGE_NODE_AUTH -p <pattern-name>
Configuring a storage class
Use the following information to help you configure your storage class:
- A persistent volume claim (PVC) is created during the installation of the agent and is used by the agent to store data for the agent and its cron job. The storage class must satisfy the following requirements:

  - Supports read and write access to the volume.
  - Supports the ReadWriteOnce or ReadWriteMany access mode.
  - Is available immediately.

- If a storage class is used that does not exist on the cluster, the PVC does not bind to a persistent volume. The agent pod stays in the Pending status and the installation of the agent times out. You can run the following command to view the error message for the specified storage class:

      kubectl describe persistentvolumeclaims -n <agent_namespace> openhorizon-agent-pvc
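When diagnosing a Pending claim, it can help to poll until the claim binds or a timeout expires instead of re-running the describe command by hand. The helper below is a generic, hypothetical sketch (`wait_for` is not an IEAM tool); against a live cluster, the check command could be the kubectl command above, or a grep for Bound on the claim's status.

```shell
# Generic wait helper (hypothetical, not part of IEAM): retry a check
# command once per second until it succeeds or the timeout expires.
wait_for() {
    local timeout="$1"; shift
    local elapsed=0
    until "$@"; do
        if [ "$elapsed" -ge "$timeout" ]; then
            echo "timed out after ${timeout}s waiting for: $*" >&2
            return 1
        fi
        sleep 1
        elapsed=$((elapsed + 1))
    done
}

# Example with a trivial check command; against a live cluster the check
# could instead test whether the agent's PVC reports a Bound status:
#   wait_for 300 sh -c 'kubectl get pvc -n "$AGENT_NAMESPACE" openhorizon-agent-pvc | grep -q Bound'
wait_for 5 true && echo "check passed"
```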