October 17, 2019 By Sandip A Amin 7 min read

Introducing the Portworx container-native storage and data management solution for IBM Cloud Kubernetes Service and Red Hat OpenShift on IBM Cloud

Today, we are announcing support for the Portworx software-defined storage solution on IBM Cloud Kubernetes Service and Red Hat OpenShift on IBM Cloud. You can now provision Portworx in your Kubernetes or OpenShift clusters via the IBM Cloud Catalog.

This integration allows you to use the Portworx solution with an IBM Cloud Pay-As-You-Go or Subscription account. Charges are incurred on an hourly basis, and integrated billing for Portworx is supported.


The Portworx container-native storage solution provides the following capabilities for stateful workloads:

  • Container-granular volumes that give you the ability to provision volumes as small as 1 GB and dynamically expand them to large multi-terabyte volumes as your workload grows—all without application disruption.
  • Declaratively specify the I/O profile of your application by leveraging one of the application-aware storage classes that are predefined by Portworx.
  • Block and shared volume support.
  • Globally namespaced volumes that make volumes available across a multizone Kubernetes or OpenShift cluster.
  • Replicated and synchronous volume support.
  • Volume encryption via both IBM Key Protect and other key management systems.
  • Local volume snapshots and volume snapshots in IBM Cloud Object Storage.
  • Role-based access control.
  • Application crash-consistent (multi-container) snapshots.
  • Support for both hyper-converged and storage-rich deployment topologies.
  • Ability to perform multi-cluster and multicloud application migrations for Kubernetes resources and data.

For more information, see the Portworx feature list.
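The predefined storage classes mentioned above are ordinary Kubernetes StorageClass objects that use the Portworx provisioner. As a rough sketch—the parameter values here are illustrative assumptions, not the exact definition of the predefined classes—a database-tuned class looks something like this:

```yaml
# Illustrative sketch of a Portworx storage class tuned for database workloads.
# The name and parameter values are assumptions, not the exact definition of
# the predefined portworx-db-sc class.
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: px-db-example
provisioner: kubernetes.io/portworx-volume
parameters:
  repl: "3"          # keep three synchronous replicas of each volume
  io_profile: "db"   # I/O pattern optimized for databases
  priority_io: "high"
```

A persistent volume claim that references such a class inherits its replication factor and I/O profile without the application needing to know anything about the underlying storage.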

What are the available storage deployment topologies in IBM Cloud Kubernetes Service and Red Hat OpenShift on IBM Cloud?

Portworx aggregates and tiers physical storage into a virtual storage pool. It does this by automatically discovering free available raw block storage on your worker nodes. To create the storage cluster, Portworx requires a minimum of three physical worker nodes with additional storage. 

The easiest way to add worker nodes with additional block storage is to leverage SDS worker nodes that already come with extra local SSD disks. Alternatively, it is also possible to attach raw, unformatted block storage to non-SDS nodes.

To gain the best performance and to allow Portworx to schedule your workloads where the container volume data actually resides, we recommend a hyper-converged deployment topology as shown in the following diagram:

In this diagram, the default worker pool is created with SDS worker nodes that come with physical local storage. The physical storage is evenly spread across the worker nodes in a hyper-converged topology.

It is also possible to deploy Portworx in a storage-heavy/storage-rich topology where the actual physical storage is centralized on a subset of worker nodes in the cluster. In this topology, it is recommended that you add a new worker pool for the physical storage pool as shown in the following diagram:

In this diagram, the default worker pool is created with virtual server worker nodes where the workloads run, while the storage worker pool is created with SDS worker nodes. Because Portworx runs as a daemon set on every worker node in the cluster, Portworx storage can be accessed by all applications that run in the cluster regardless of the worker pool that the app belongs to.

How do I deploy Portworx on a Red Hat OpenShift cluster from the IBM Cloud Catalog?


Before you begin installing Portworx on your Red Hat OpenShift cluster, follow the steps to prepare your cluster for the Portworx installation:

  1. Ensure that you created a Red Hat OpenShift cluster with at least three worker nodes with raw unformatted block storage. 
  2. Configure an IBM Cloud Databases for etcd service instance to store Portworx metadata and configuration information. Make sure that you store the credentials to access your service instance in a Kubernetes secret in your cluster. Note the name of your secret and the API endpoint for your Databases for etcd service instance as this information is used during the Portworx installation.
  3. Determine if you want to encrypt volumes by using IBM Key Protect. To encrypt your volumes, you must set up an IBM Key Protect service instance and store your service information in a Kubernetes secret.
  4. Follow the instructions to install the Helm client version 2.14.3 or higher on your local machine, and to install the Helm server (tiller) with a service account in your cluster.

After you finish setting up your cluster, you will have created two Kubernetes secrets, assuming that you decided to encrypt your volumes by using IBM Key Protect:

  • A secret that is named px-etcd-certs in the kube-system namespace that holds the credentials to connect to your Databases for etcd service instance.
  • A secret that is named px-ibm in the portworx namespace that holds the credentials to connect to your IBM Key Protect service instance.
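Both are standard Kubernetes Secret objects. As a minimal sketch of the etcd secret—the key names and values shown here are placeholders, not the exact format that Portworx expects, so follow the service credentials you retrieved in the prerequisites:

```yaml
# Hypothetical sketch of the etcd credentials secret. Key names and values
# are placeholders; use the fields from your Databases for etcd credentials.
apiVersion: v1
kind: Secret
metadata:
  name: px-etcd-certs
  namespace: kube-system
type: Opaque
data:
  ca.pem: <base64-encoded CA certificate of the Databases for etcd instance>
  username: <base64-encoded service username>
  password: <base64-encoded service password>
```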

The following image shows a sample OpenShift cluster that is named myocpcluster-pxstorage and that is created with three SDS worker nodes with extra local storage by using the ms3c-16x64_ssd_encrypted machine flavor. The worker nodes are evenly spread across three zones to achieve a hyper-converged storage topology.

Installing Portworx from the IBM Cloud Catalog

After you complete all of the prerequisites and create the appropriate Kubernetes secrets in your cluster, you are now ready to install Portworx from the IBM Cloud Catalog as shown in the following screenshot: 

After you select the region and resource group where your OpenShift cluster is located, complete the fields as follows: 

  1. Enter a memorable name for your Portworx service, such as Portworx-Enterprise-openshift-hyperconverged.
  2. In the Tag field, enter the name of the OpenShift cluster where you want to install Portworx. The tag associates the Portworx service instance with the OpenShift cluster, which assists in Day-2 lifecycle operations, such as using the PX-Central management console.
  3. Enter an IBM Cloud API Key to retrieve the list of clusters that you have access to. If you don’t have an API key, see Managing user API keys. The list of clusters that are located in the selected region and resource group is dynamically populated as shown in input field 5. 
  4. Enter the API endpoint for the IBM Databases for etcd service instance that you created and retrieved as part of the prerequisites. 
  5. Select the cluster from the drop-down list where you want to install Portworx. In the example above, myocpcluster-pxstorage is selected.
  6. Enter a unique name for your Portworx cluster. 
  7. Enter the name of the Kubernetes secret that you created in your cluster to store the Databases for etcd service credentials. In this example, px-etcd-certs is entered.
  8. Optionally, select whether you want to encrypt your volumes by using a custom key that is defined in a Kubernetes Secret or by using IBM Key Protect. In the example above, we selected IBM Key Protect.

After you enter all the information, you can proceed to create the Portworx service instance. You can navigate to the Resource list to see the Portworx provisioning status:

Verifying your Portworx installation

When your Portworx service instance shows a Provisioned status, check the status of the Portworx storage layer before you start deploying stateful applications that use Portworx volumes. To do this, run the following verification steps:  

  1. Verify that all the required Portworx pods run in your cluster. You must see one portworx, stork, and stork-scheduler pod for each worker node in your cluster. Because the OpenShift multizone cluster in this example has three worker nodes, you see a total of nine pods in the kube-system namespace.
    kubectl get pods -n kube-system | grep 'portworx\|stork'
  2. Log in to one of the Portworx pods and run the /opt/pwx/bin/pxctl status command. This can be easily done via the OpenShift Cluster Console by opening a terminal session to one of the Portworx pods. The command output shows that the status of the Portworx cluster is Online and that it has automatically discovered an extra 1.7 TB of storage per worker node to form a total storage capacity of 5.2 TB. This total physical storage capacity is available to create Portworx-backed persistent volumes that you can mount to your application deployments.
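You can also run both verification steps from any machine with kubectl access to the cluster. The pod selection below is a sketch and assumes that the Portworx pods carry the label name=portworx, which may differ in your installation:

```shell
# List the Portworx and Stork pods (expect one of each per worker node)
kubectl get pods -n kube-system | grep 'portworx\|stork'

# Pick one Portworx pod and check the storage cluster status.
# Assumes the pods are labeled name=portworx.
PX_POD=$(kubectl get pods -n kube-system -l name=portworx \
  -o jsonpath='{.items[0].metadata.name}')
kubectl exec -it "$PX_POD" -n kube-system -- /opt/pwx/bin/pxctl status
```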

Deploying MySQL by using a Portworx encrypted volume

Now that you verified the status of the Portworx cluster, you can deploy a stateful application by using a Portworx volume. In this example, we’ll create a persistent Portworx volume that is encrypted with IBM Key Protect by using the following persistent volume claim. The persistent volume claim references the storage class portworx-db-sc, which is configured by default and optimized to run database workloads in the cluster. To enable volume encryption, the px/secure annotation must be set to true.

kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: secure-pvc
  annotations:
    px/secure: "true"
spec:
  storageClassName: portworx-db-sc
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 200Gi

After you create the persistent volume claim, verify that the claim named secure-pvc is in a Bound status.

Next, you use the following Kubernetes deployment to deploy MySQL in your cluster that uses the secure-pvc persistent volume claim that you created earlier. 

apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: mysql
spec:
  strategy:
    rollingUpdate:
      maxSurge: 1
      maxUnavailable: 1
    type: RollingUpdate
  replicas: 1
  template:
    metadata:
      labels:
        app: mysql
    spec:
      schedulerName: stork
      containers:
      - name: mysql
        image: mysql:5.6
        imagePullPolicy: "Always"
        env:
        - name: MYSQL_ROOT_PASSWORD
          value: password
        ports:
        - containerPort: 3306
        volumeMounts:
        - mountPath: /var/lib/mysql
          name: mysql-data
      volumes:
      - name: mysql-data
        persistentVolumeClaim:
          claimName: secure-pvc

After you create the MySQL deployment in your OpenShift cluster, you can check the status of the mysql pod from the OpenShift Console as shown below:
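To confirm from the command line instead, you can check that the pod is running and that MySQL answers queries from its Portworx-backed volume. The commands below are a sketch; the root password matches the value set in the deployment manifest:

```shell
# Check that the MySQL pod is running
kubectl get pods -l app=mysql

# Optionally, query the server inside the pod
MYSQL_POD=$(kubectl get pods -l app=mysql -o jsonpath='{.items[0].metadata.name}')
kubectl exec -it "$MYSQL_POD" -- mysql -uroot -ppassword -e 'SELECT VERSION();'
```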

More information

The Portworx container-native software-defined storage solution for IBM Cloud Kubernetes Service and Red Hat OpenShift on IBM Cloud provides a variety of features to support stateful applications. To learn more, visit the following links:

If you have questions, engage our team via the IBM Cloud Kubernetes Service Slack. Log in to Slack by using your IBM ID and post your question in the #portworx-on-iks channel. If you do not use an IBM ID for your IBM Cloud account, request an invitation to this Slack.
