Connecting to IBM Cloud Object Store in Kubernetes


Operationalizing IBM SQL Query: Part 2

In this second part of a four-part series on Operationalizing IBM SQL Query, we’ll take a look at storing and retrieving data in IBM Cloud Object Storage, the gateway to Watson Studio. Be sure to check out Part 1 of our series here.

Do you want to run an AI learning model across your application data? Analyze your NoSQL data using ANSI SQL? Do real-time analysis of sentiment from a chatbot? You’ll need somewhere to store and retrieve that data.

IBM Cloud Object Storage (COS) is a datatype-agnostic storage service hosted on the IBM Cloud. It’s a great place to store just about any type of data, and it serves as an entry point into other IBM Cloud services. Data stored in Parquet, CSV, and JSON format can be scrubbed using Data Refinery and imported into many of the Watson AI tools through Watson Studio.

In this article, we’ll take a look at best practices for connecting to COS from Docker containers deployed in the IBM Cloud Kubernetes Service.

Let’s get started.

Creating a Cloud Object Storage bucket

Before we can store anything in Cloud Object Storage (COS), we need a location to store it. COS organizes data into buckets, which act conceptually like folders with globally unique names. To create a bucket, you’ll first need to sign up for the Cloud Object Storage service. Navigate to the IBM Cloud Console and search for Cloud Object Storage in the catalog.

Cloud Object Storage is pay-as-you-go, so there are no fees for creating the service. You only pay for the storage capacity you use. Give your service a name, choose a plan, and click the Create button.

Once your service creation is complete, you’ll be directed to the storage dashboard. From here, you can create a new bucket or set up credentials so we can access the service via the API. Check out the Cloud Object Storage documentation for more information on the Cloud Object Storage API and how to connect to it.

Now that we’ve provisioned the COS service into our IBM Cloud account, let’s create a microservice that can save data to a COS bucket.

Creating a COS Kubernetes service

Before we jump into connecting up our IBM COS instance with Kubernetes, it’ll be helpful to clear up a possible misunderstanding.

A Kubernetes service is essentially just a network name inside the cluster that can be resolved through DNS. The actual implementation behind a service can be handled in a number of ways. Typically, a service either points to a workload (an application deployed in Kubernetes) or redirects to another network location, such as an IP address or an external hostname.

In Rancher, we can create a new service by clicking on the Service Discovery tab of our Kubernetes cluster and clicking on the Add Record button.

Once there, we can determine what name we want to use to access our service and figure out how to resolve that name to either a deployed service, IP address, or external hostname. To connect to COS, we’ll use the COS endpoint from the COS service we provisioned above. Click on the Endpoint link and choose the endpoint closest to your location.

You can then create your new service from the Add Record button in Rancher. Specify that your service resolves to an external hostname by selecting the appropriate option, and use the URL for your COS endpoint as the target hostname.

Click Create and you’ll now see your new service in the Service Discovery tab of Rancher.

You’ll now have access to your COS endpoint using the named service ibmcos, and the Kubernetes DNS system will automatically resolve it for you.
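For readers working outside the Rancher UI, the record Rancher creates corresponds to a Kubernetes ExternalName service. As a sketch, assuming the us-south public endpoint (substitute the endpoint you chose above):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: ibmcos
spec:
  # ExternalName services return a DNS CNAME record
  # instead of routing to pods inside the cluster.
  type: ExternalName
  externalName: s3.us-south.cloud-object-storage.appdomain.cloud
```

Pods in the same namespace can then resolve the name ibmcos to the COS endpoint.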

Using a Config Map to store service details

If you don’t want to use a Kubernetes service to store your hostname, or if there are other configuration settings you’d like to share across pods, it might be helpful to set up a Config Map instead. Select Config Map from the Resources drop-down menu and click Add Config Map. Give your Config Map a name and then add the endpoint as a key-value entry.
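In manifest form, an equivalent Config Map might look like the following sketch (the map name, key name, and endpoint value are illustrative; use your own):

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: cos-config
data:
  # The hostname of the COS endpoint chosen above.
  endpoint: s3.us-south.cloud-object-storage.appdomain.cloud
```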

When we deploy the workload, we’ll take a look at how to inject the items in the config map as environment variables to our container.

Storing credentials using Kubernetes Secrets

The COS service gets us part way, but to gain complete access to COS, we’ll need to authenticate into the service using credentials. We’ll also want a central way to store those credentials so that all of our containers can securely access them. While there are a number of solutions for this, let’s take a look at Kubernetes Secrets, which let us store credentials in a single, cluster-wide store and expose them to specific containers as either a mounted data volume or as environment variables.

Creating our COS credentials

First, we’ll create some access credentials for our COS bucket. Head to the IBM Cloud Object Storage console, select the Service Credentials tab from the left-hand menu, and click the New Credential button.

In the dialog, give the credential a name and select Writer in the role type. You can leave the optional fields blank.

Once the popup closes, you should see your new credentials in the credentials list. Click on the View credentials dropdown to see them.

These credentials, specifically the apikey, are required to access your COS instance.
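The credential JSON contains several fields; the two we’ll rely on below are apikey and resource_instance_id. Abridged, with values redacted and other fields omitted, it looks roughly like this:

```json
{
  "apikey": "<redacted>",
  "resource_instance_id": "crn:v1:bluemix:public:cloud-object-storage:global:a/<account-id>:<instance-id>::"
}
```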

Adding the credentials as a Kubernetes Secret

Now that we have our access credentials, let’s store them securely using Kubernetes Secrets.

Go back to Rancher and select the Secrets item from the Resources drop-down menu. Then, click the Add Secret button to create a new secret.

Credentials are stored as key-value pairs, so we’ll need to read the JSON object provided by IBM COS and add each credential in turn. Click Save on the bottom of the screen to save the credentials.
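Outside the UI, the same secret can be sketched as a manifest. Using the stringData field lets you supply values in plain text, and Kubernetes base64-encodes them on write; the secret name and values here are placeholders:

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: cos-credentials
type: Opaque
stringData:
  # Copy these values from the credential JSON shown
  # under "View credentials" in the COS console.
  apikey: <your-apikey>
  resource_instance_id: <your-resource-instance-id>
```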

Connecting to COS from a Kubernetes Container

We now have everything we need to connect to COS from a Kubernetes container. Let’s try deploying a workload that’s capable of connecting to COS. We can access the S3 API with a simple HTTP call, so let’s select a basic Alpine Linux image to get us started.

First, head to the Workloads section of Rancher and click on the Deploy button.

Then, give your new container a name and type “alpine” into the Image field. You’ll also want to expand the Environment Variables section. Note that we have the option to either add environment variables directly or add them from another source. We’ll want to choose the latter.

Click on the Add From Source button, select a type of secret, and choose your secret from the Source drop-down. Then, select All from the Key field. This will inject each of the secrets we stored as environment variables using the Key field in our secret.

If you chose to store your endpoints in a Config Map, you can add another source and choose your Config Map from there.
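In manifest form, the “add from source” step corresponds to envFrom on the container spec. A sketch, assuming a secret named cos-credentials and a Config Map named cos-config (substitute your own names):

```yaml
containers:
  - name: cos-test
    image: alpine
    command: ["sleep", "infinity"]
    envFrom:
      # Every key in the secret and Config Map becomes
      # an environment variable in the container.
      - secretRef:
          name: cos-credentials
      - configMapRef:
          name: cos-config
```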

Click Launch to launch your new container. This should only take a moment since we’re using such a minimal Docker image.

Now that our container is launched, let’s run a shell on our container and test our connection. You should have been automatically switched back over to the Workloads section, but if not, you can select the Workloads tab from the top menu of Rancher. On the right side of your newly created container, select the context menu and choose the Execute Shell item.

From the command line that pops up, we should be able to access all of our COS credentials as environment variables, along with our COS Kubernetes service. Type env just to make sure it worked; you should see the secrets you entered as environment variables in the list.

IBM COS exposes a RESTful API using an S3-compatible instruction set. You can find a comprehensive set of the available instructions in the API documentation. The simplest test for our connection is to list all of the buckets in our account.

A couple of notes first. The base Alpine image ships with BusyBox wget rather than curl, so if curl isn’t present, install it with apk add --no-cache curl. Also, the raw apikey can’t be used directly as a bearer token; it first needs to be exchanged for an IAM access token:

token=$(curl -s -X POST "https://iam.cloud.ibm.com/identity/token" -H "Content-Type: application/x-www-form-urlencoded" -d "grant_type=urn:ibm:params:oauth:grant-type:apikey&apikey=${apikey}" | sed -n 's/.*"access_token":"\([^"]*\)".*/\1/p')

Using the Config Map:

curl "https://${endpoint}/" -H "Authorization: Bearer ${token}" -H "ibm-service-instance-id: ${resource_instance_id}"

Using the Kubernetes Service (note that the endpoint’s TLS certificate won’t match the ibmcos hostname, so for a quick test you may need curl’s -k flag to skip certificate verification):

curl -k "https://ibmcos/" -H "Authorization: Bearer ${token}" -H "ibm-service-instance-id: ${resource_instance_id}"

If all went well, you should see a list of buckets in XML format:

<?xml version="1.0" encoding="UTF-8" standalone="yes"?>
<ListAllMyBucketsResult xmlns="http://s3.amazonaws.com/doc/2006-03-01/">

Wrapping up and looking ahead

Now that you have access to COS from your Kubernetes containers, you can start storing and retrieving data directly in your COS account using simple RESTful calls to the S3 API. You can also use the S3 libraries in your language of choice to access COS. The next article in this series will demonstrate how to save objects directly into COS from applications written in a variety of programming languages.


Developer Advocate, IBM Watson and Cloud Platform
