Deploying Applications on IBM Cloud with Kubernetes and IBM Cloud Databases for PostgreSQL
IBM Cloud Databases for PostgreSQL + IBM Cloud Kubernetes Service
Deploying a cloud-native application that integrates with IBM Cloud Databases is easy using the IBM Cloud Kubernetes Service. In this tutorial, we’ll show you how to set up IBM Cloud Databases for PostgreSQL and deploy a Node.js application that uses the database with the IBM Cloud Kubernetes Service. The example requires minimal setup, letting you deploy a secure, fast, and scalable cloud-native application within minutes.
IBM Cloud Kubernetes Service allows you to quickly set up and deploy production-grade applications on the IBM Cloud, and integrating your applications with other IBM Cloud services on Kubernetes is straightforward. We’ll cover everything from provisioning a Kubernetes cluster and a PostgreSQL deployment to having a working, cloud-native “Hello World” example running on the web within minutes.
Let’s get started setting things up.
Setting up the IBM Cloud Kubernetes Service
We first need to set up an IBM Cloud Kubernetes Service. To do that, you can provision the service through your IBM Cloud dashboard. Click on Create Resource and then select Kubernetes Service.
Once you’re on the Kubernetes Service page, click on the Create button, which will take you to a new page to create a new Kubernetes cluster.
For the cluster location, select Dallas. This region allows you to create a free cluster with a single worker node. You can name your cluster whatever you’d like; we’ll use the default name mycluster for this example.
After you’ve clicked on the Create Cluster button, you will be taken to the Access tab for your Kubernetes cluster. Follow the directions there to access your cluster from the terminal using the ibmcloud CLI. It will take a few minutes for your cluster to be set up.
Setting up the IBM Cloud Container Registry
Since our application will need to be built as a Docker image and stored privately in a registry, we’ll use the IBM Cloud Container Registry to do that. When we deploy the Kubernetes application, all we’ll need to do is point to the application’s container image and Kubernetes will take care of correctly deploying it.
Make sure you target the IBM Cloud resource group of your Kubernetes cluster. You can do that by running from the terminal:
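For example, assuming your cluster lives in the Default resource group (substitute your own group’s name), the targeting command looks like this:

```shell
# Target the resource group that contains your Kubernetes cluster.
# "Default" is an assumption; replace it with your resource group's name.
ibmcloud target -g Default
```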
Next, in order for the Docker images to be private, we need to create a namespace in the registry that will create a unique URL to your image repository. To create a namespace, run:
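A sketch of the namespace command, with a placeholder for your own namespace name:

```shell
# Create a Container Registry namespace to hold your images.
# <my_namespace> is a placeholder; pick a globally unique name.
ibmcloud cr namespace-add <my_namespace>
```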
And that’s all you need to do for the moment. Now, let’s provision a PostgreSQL database.
Creating an IBM Cloud Databases for PostgreSQL database
Using the IBM Cloud CLI, you can provision a PostgreSQL database from the terminal with the ibmcloud resource service-instance-create command. This command takes a service instance name, a service name, a plan name, and a location. Since we’re provisioning the IBM Cloud Databases for PostgreSQL service, we’ll create a database service instance called “example-postgresql”, and the service name is databases-for-postgresql. We’ll provision the database using the standard plan in the us-south region. The command looks like this:
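Putting the pieces together in order (instance name, service name, plan, location), the command is:

```shell
# Provision a Databases for PostgreSQL instance named example-postgresql
# on the standard plan in the us-south region.
ibmcloud resource service-instance-create example-postgresql databases-for-postgresql standard us-south
```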
Remember the database service name you created because we’ll need that when binding your database to your Kubernetes cluster.
Binding PostgreSQL to Kubernetes
After your Kubernetes cluster is running and your PostgreSQL database is provisioned, you’ll need to bind the services together so that Kubernetes will have access to your database credentials. To do that, run the following command in the terminal:
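With the current ibmcloud CLI, the bind command looks like the following (older CLI versions expose the same operation as ibmcloud ks cluster-service-bind):

```shell
# Bind the example-postgresql instance to the cluster's default namespace.
ibmcloud ks cluster service bind --cluster mycluster --namespace default --service example-postgresql
```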
This command binds your Kubernetes cluster, in its default namespace, with the example-postgresql service, which is the name of the PostgreSQL database you provisioned.
Once you’ve bound the database to Kubernetes, it will return a response with the Kubernetes Secret. Make sure to remember the name of the Secret because you’ll need it to deploy your application:
Secret Name: binding-example-postgresql
A Secret stores sensitive (or non-sensitive) data, such as credentials, in memory rather than on disk. Secrets are delivered only to the nodes that run the pods that need them. In our case, the PostgreSQL credentials will be delivered to the pods that run our application, since we’re not building microservices where Secrets would go only to specific pods.
Now that you’ve bound the database to Kubernetes and have the Secret name, let’s clone the sample application and then configure the Deployment manifest file to deploy it on IBM Cloud.
Setting up the application
Start by cloning the Node.js “Hello World” PostgreSQL example application that we’ve built from GitHub. You can do that by running from the terminal:
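The clone command has the following shape; the repository URL is a placeholder here, so substitute the URL of the example repository:

```shell
# Clone only the Node.js branch of the example repository.
# <repository-url> is a placeholder for the example repository's URL.
git clone -b node <repository-url>
```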
When you clone the repository, make sure that you include -b node to clone only the Node.js branch; each branch contains an example in a different programming language. Once you’ve cloned the Node.js branch of the repository, go into the postgresql directory, which contains all the files you’ll need to deploy the application.
The two files from this directory that we’re primarily concerned with are server.js and clouddb-deployment.yml (also referred to as a deployment manifest). server.js runs the Node.js application using Express and extracts the necessary database credentials from process.env.BINDING so that we can connect to PostgreSQL, create a table, and store and retrieve data.
clouddb-deployment.yml contains the Deployment manifest and also defines the Service that listens to the application on a specified port in each pod and exposes it on a specified port on our cluster. Let’s briefly walk through both parts.
The first part of the file is called a Deployment manifest. It’s responsible for deploying the Node.js application and updating it declaratively on Kubernetes. When we deploy our Node.js application, Kubernetes creates a ReplicaSet resource underneath which then creates, replicates, and manages the pods that your application runs on. You’ll notice that you can define the number of replicas or pods from:
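In the manifest, this is the replicas field. A minimal sketch follows; replicas: 3 matches the three pods we inspect later in this tutorial:

```yaml
spec:
  replicas: 3   # Kubernetes keeps three pods of the application running
```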
The containers section is where we’ll have to make some edits: you’ll need to provide the URL of the Docker image of the Node.js application. Since we have the IBM Cloud Container Registry set up already, we can build a Docker image of this sample application; the application itself doesn’t need to be modified.
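The section in question looks roughly like this (the container name is illustrative; the image placeholder is what we’ll fill in after building the image below):

```yaml
containers:
  - name: clouddb-demo
    image: <registry-url>/<namespace>/clouddb-demo:1   # replace with your image's URL
```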
You’ll need the region of your Container Registry and the corresponding registry URL. To get them, run the following command:
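Assuming you have the Container Registry plug-in installed, the command is ibmcloud cr region; the registry hostname in the comment is one possible value for the us-south region:

```shell
# Show the Container Registry region you are targeting and its registry URL,
# e.g. us.icr.io for the us-south region.
ibmcloud cr region
```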
Now, using the registry URL and the private namespace you created above, use the following command to build the Docker image. We’ll call our image clouddb-demo, matching the clouddb-demo label used in the manifest:
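A sketch of the build command; substitute the registry URL and namespace from the previous steps (the tag is our choice):

```shell
# Build the image in IBM Cloud Container Registry from the current directory.
# <registry-url> and <namespace> come from the previous two steps.
ibmcloud cr build -t <registry-url>/<namespace>/clouddb-demo:1 .
```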
This will build the image and return OK if the image was successfully built and stored in your registry. To see the images in your registry, you can run the following command:
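The listing command is:

```shell
# List the images stored in your Container Registry.
ibmcloud cr images
```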
You’ll get a list of the images in your registry, showing each image’s repository, tag, digest, and namespace.
With the repository name, you can edit your Deployment manifest. Under env, you’ll have to provide the name of the Kubernetes Secret that you got when you bound your PostgreSQL database to your Kubernetes cluster. This is how we will extract the credentials of our PostgreSQL database.
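In the manifest, the env entry pulls the Secret into the container as an environment variable. A sketch of that entry follows; the key name inside the Secret is an assumption and may differ in your binding:

```yaml
env:
  - name: BINDING
    valueFrom:
      secretKeyRef:
        name: <postgres-secret-name>   # replace with your Secret's name
        key: binding                   # key holding the credentials JSON (assumed)
```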
Notice the name BINDING. This is an arbitrary name used only for this example. If you look in server.js, you’ll see that the environment variable process.env.BINDING contains the credentials of our PostgreSQL database; it’s the variable that holds our Secret. So, where you have <postgres-secret-name>, you’ll need to replace that with the name of your Kubernetes Secret. Since Kubernetes stores the credentials as a string, you need to use JSON.parse to parse them as JSON.
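A minimal sketch of that parsing step (not the exact server.js code; the credential layout shown is an assumption standing in for what the bound Secret provides):

```javascript
// Sketch: parse the credentials JSON delivered through the BINDING env var.
// The sample value below is a stand-in for the Secret's contents; the layout
// (connection.postgres.composed) is an assumption, not a confirmed schema.
process.env.BINDING = process.env.BINDING || JSON.stringify({
  connection: {
    postgres: {
      composed: ["postgres://admin:secret@host.example:32112/ibmclouddb?sslmode=verify-full"]
    }
  }
});

const credentials = JSON.parse(process.env.BINDING);
const connectionString = credentials.connection.postgres.composed[0];
```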
Below the Deployment manifest, we create a Kubernetes Service. Services allow you to access pods and define policies for accessing them; they are essentially what keeps your application up while individual pods die and are replicated. We’re running this Service as a NodePort, which means you can expose a set of pods to external clients on a reserved port, in this case 30081. So, in our case, we have the Service as follows:
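A sketch of such a Service; the Service name and the container port are illustrative, while the selector label and node port match the manifest described here:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: clouddb-demo-service
spec:
  type: NodePort
  selector:
    run: clouddb-demo        # must match the Deployment's pod label
  ports:
    - protocol: TCP
      port: 8080             # port the app listens on inside the pod (assumed)
      nodePort: 30081        # reserved external port on each worker node
```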
Under selector, we must use the same label key-value pair, run: clouddb-demo, as our Deployment so that the Service can identify which pods to expose. The protocol we’re using to expose the ports is TCP; this is the default even though we defined it explicitly in the YAML file.
With the Deployment and Service defined, we’re ready to deploy the application to IBM Cloud.
Deploying the application
To deploy the application, you’ll need to access your Kubernetes cluster using the kubectl command-line tool. From your application’s directory, run the following command to use clouddb-deployment.yml to deploy your application:
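The command is:

```shell
# Apply the Deployment and Service defined in the manifest.
kubectl apply -f clouddb-deployment.yml
```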
If successful, you’ll see output confirming that the Deployment and the Service were created.
You can inspect the three pods that you created:
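The inspection command is:

```shell
# List the pods created by the Deployment.
kubectl get pods
```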
The result lists each of the three pods with its readiness, status, restart count, and age.
There are several commands that you can run using kubectl in order to check out your deployment, but what we really want to see is the application running in the browser. To get the public IP for your cluster, run:
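Assuming the current ibmcloud CLI, the workers command looks like the following; the Public IP column holds the address you need:

```shell
# Show the worker nodes for the cluster, including each node's public IP.
ibmcloud ks workers --cluster mycluster
```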
This will show the single worker node, since we’re using the free plan, along with its public IP.
Using the public IP, go to port 30081, which is the port exposed by your Kubernetes Service. You should then see your cloud-native application running successfully, and you can start adding words to your database and see them update in the browser.
This tutorial showed you how to set up a simple web application using IBM Cloud Databases for PostgreSQL and deploy it with the IBM Cloud Kubernetes Service. Once you’ve deployed this application, you can experiment with other Kubernetes configurations and start deploying more complex applications. As we’ve shown, it’s pretty simple to set up and get started. All you really need are the basic building blocks: a cluster, a container registry, some code, and a database. IBM Cloud Kubernetes Service and IBM Cloud Databases for PostgreSQL take care of managing your cluster and your database so that you can concentrate on building the applications that run your business, instead of worrying about high availability, elasticity, and compliance.
Enjoyed this article? Get started with Databases for PostgreSQL now.
Databases for PostgreSQL is a fully managed, enterprise-ready PostgreSQL service with built-in security, high availability, and backup orchestration. Learn more here.