
Deploying Applications on IBM Cloud with Kubernetes and IBM Cloud Databases for PostgreSQL


IBM Cloud Databases for PostgreSQL + IBM Cloud Kubernetes Service

Deploying a cloud-native application and integrating IBM Cloud Databases is easy using the IBM Cloud Kubernetes Service. In this tutorial, we’ll show you how to set up IBM Cloud Databases for PostgreSQL and deploy a Node.js application that uses the database with the IBM Cloud Kubernetes Service. The example requires minimal setup, letting you deploy a secure, fast, and scalable cloud-native application within minutes.

IBM Cloud Kubernetes Service allows you to quickly set up and deploy production-grade applications on the IBM Cloud. Integrating your applications with other IBM Cloud services on Kubernetes is also straightforward, and in this article, we’ll show you how to do it by deploying a Node.js application to the cloud that uses IBM Cloud Databases for PostgreSQL. We’ll cover everything from provisioning a Kubernetes cluster and a PostgreSQL deployment to having a working, cloud-native “Hello World” example running on the web within minutes.

Let’s get started setting things up.

Setting up the IBM Cloud Kubernetes Service

We first need to set up a cluster with the IBM Cloud Kubernetes Service. To do that, you can provision the service through your IBM Cloud dashboard. Click on Create Resource and then select Kubernetes Service.

Once you’re on the Kubernetes Service page, click on the Create button, which will take you to a new page to create a new Kubernetes cluster.

For the cluster location, select Dallas. This region allows you to create a free cluster with a single worker node. You can name your cluster whatever you’d like; we’ll use the default name mycluster for this example.

After you’ve clicked on the Create Cluster button, you will be taken to the Access tab for your Kubernetes cluster. Follow the directions to access your Kubernetes cluster via the terminal using the ibmcloud CLI. It will take a few minutes for your cluster to get set up.

Setting up the IBM Cloud Container Registry

Since our application will need to be built as a Docker image and stored privately in a registry, we’ll use the IBM Cloud Container Registry to do that. When we deploy the Kubernetes application, all we’ll need to do is point to the application’s container image and Kubernetes will take care of correctly deploying it.

Make sure you target the IBM Cloud resource group of your Kubernetes cluster. You can do that by running from the terminal:

     ibmcloud target -g <resource_group_name>

Next, in order to store the Docker images privately, we need to create a namespace in the registry, which gives your image repository a unique URL. To create a namespace, run:

     ibmcloud cr namespace-add <your_namespace>

And that’s all you need to do for the moment. Now, let’s provision the PostgreSQL database.

Creating the IBM Cloud Databases for PostgreSQL database

Using the IBM Cloud CLI, you can provision a PostgreSQL database from the terminal with the ibmcloud resource service-instance-create command. This command takes a service instance name, a service name, a plan name, and a location. Since we’re provisioning the IBM Cloud Databases for PostgreSQL service, we’ll name our instance “example-postgresql”; the service name is databases-for-postgresql. We’ll provision the database using the standard plan in the us-south region. The command looks like this:

     ibmcloud resource service-instance-create example-postgresql databases-for-postgresql standard us-south

Remember the service instance name you chose because you’ll need it when binding your database to your Kubernetes cluster.

Binding PostgreSQL to Kubernetes

After your Kubernetes cluster is running and your PostgreSQL database is provisioned, you’ll need to bind the services together so that Kubernetes will have access to your database credentials. To do that, run the following command in the terminal:

     ibmcloud ks cluster-service-bind mycluster default example-postgresql

This command binds the example-postgresql service, which is the name of the PostgreSQL database you provisioned, to your Kubernetes cluster in the default namespace.

Once you’ve bound the database to Kubernetes, the command will return a response containing the name of the Kubernetes Secret. Make a note of the Secret name because you’ll need it to deploy your application:

     Namespace:     default
     Secret Name:   binding-example-postgresql

A Secret stores sensitive data, such as credentials, in memory rather than writing it to disk, and Secrets are delivered only to the nodes that run the Kubernetes pods that need them. In our case, the PostgreSQL credentials will be delivered to every pod that runs our application; since we’re not building a set of microservices, there’s no need to deliver different Secrets to specific pods.
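To see what a Secret actually holds, it helps to know that Kubernetes stores each value in a Secret’s data field base64-encoded (encoded, not encrypted). Here’s a minimal sketch of that encoding and decoding in Node.js, using a simplified, hypothetical credentials string in place of the real binding data:

```javascript
// Simulate the base64-encoded value Kubernetes stores under the Secret's
// "binding" key. In a live cluster, you could view the encoded value with
// `kubectl get secret binding-example-postgresql -o yaml` (requires your
// cluster to be up and the Secret to exist).
const raw = '{"connection":{"postgres":{}}}'; // hypothetical, simplified binding
const encoded = Buffer.from(raw).toString('base64');

// Secret values are base64-encoded, not encrypted, so decoding is one step:
const decoded = Buffer.from(encoded, 'base64').toString('utf8');
console.log(decoded); // → {"connection":{"postgres":{}}}
```

The decoded string is the JSON document that the application will later parse to get its database credentials.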

Now that you’ve bound the database to Kubernetes and have the Secret name, let’s clone the sample application and then configure the Deployment manifest file to deploy it on IBM Cloud.

Setting up the application

Start by cloning the Node.js “Hello World” PostgreSQL example application that we’ve built from GitHub. You can do that by running from the terminal:

     git clone -b node git@github.com:IBM-Cloud/clouddatabases-helloworld-kubernetes-examples.git

When you clone the repository, make sure that you include -b node to clone only the Node.js branch. Each branch includes examples using different programming languages. Once you’ve cloned the Node.js branch of the repository, go into the postgresql directory, which contains all the files you’ll need to deploy the application.

The two files from this directory that we’re primarily concerned with are server.js and clouddb-deployment.yml (also referred to as a deployment manifest). server.js runs the Node.js application using Express and extracts the necessary database credentials from process.env.BINDING so that we can connect to PostgreSQL, create a table, and store and retrieve data.

clouddb-deployment.yml contains both the Deployment manifest and the Service definition. The Service listens to the application on a specified port in each pod and exposes it on a specified port on our cluster. Let’s briefly walk through both parts.

The first part of the file is the Deployment manifest. It’s responsible for deploying the Node.js application and updating it declaratively on Kubernetes. When we deploy our Node.js application, Kubernetes creates a ReplicaSet resource underneath, which then creates, replicates, and manages the pods that your application runs on. You’ll notice that you can define the number of replicas, or pods, in the manifest:

     spec:
       replicas: 3

Under containers is where we’ll have to make some edits. Where you have:

     containers:
           - name: cloudpostgres-nodejs-app
             image: "registry.<region>.bluemix.net/<namespace>/icdpg" # Edit me
             imagePullPolicy: Always

You will need to provide the URL of the Docker image of the Node.js application. Since we already have the IBM Cloud Container Registry set up, we can build a Docker image of the sample application as-is; the application itself doesn’t need to be modified.

You’ll need the region of your Container Registry. To get that, run:

     ibmcloud cr region

This will give you output like:

     You are targeting region 'us-south', the registry is 'registry.ng.bluemix.net'.

Now, with the registry URL and the private namespace you created above, build the Docker image using the following command. We called our image icdpg:

     ibmcloud cr build -t registry.ng.bluemix.net/<namespace>/icdpg .

This will build the image and return OK if the image was successfully built and stored in your registry. To see the images in your registry, you can run the following command:

     ibmcloud cr images

And you’ll get a list of the images like the following:

     REPOSITORY                                      TAG      DIGEST         NAMESPACE   CREATED          SIZE          SECURITY STATUS
     registry.ng.bluemix.net/mynamespace/icdpg       latest   b4e44bd05387   mynamespace 54 seconds ago   27 MB         No Issues

With the repository name, you can edit your Deployment manifest.

Next, under env, you’ll have to provide the name of the Kubernetes Secret that you got when you bound your PostgreSQL database to your Kubernetes cluster. This is how we will extract the credentials of our PostgreSQL database.

         env:
             - name: BINDING
               valueFrom:
                 secretKeyRef:
                   name: <postgres-secret-name> # Edit me
                   key: binding

Notice the name BINDING. This is an arbitrary name used only for this example. If you look in server.js, you’ll see that this is the environment variable process.env.BINDING that contains the credentials of our PostgreSQL database; it’s the variable that contains our Secret. So, where you have <postgres-secret-name>, you’ll need to replace that with the name of your Kubernetes Secret. Since Kubernetes delivers the credentials as a string, you need to use JSON.parse to turn them into a JSON object.
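As a minimal sketch of that parsing step, here is how an application can read the credentials from the environment. The JSON shape and connection string below are simplified, hypothetical stand-ins for the real binding document that the IBM Cloud Databases Secret provides:

```javascript
// Hypothetical, simplified example of the credentials JSON stored in the
// Secret's "binding" key; the real document contains more fields.
const sampleBinding = JSON.stringify({
  connection: {
    postgres: {
      composed: [
        'postgres://admin:secret@example.databases.appdomain.cloud:32345/ibmclouddb?sslmode=verify-full'
      ]
    }
  }
});

// Kubernetes delivers the Secret to the container as a string, so it must
// be parsed before use. In the cluster, process.env.BINDING is set by the
// secretKeyRef above; sampleBinding is a local fallback for illustration.
const credentials = JSON.parse(process.env.BINDING || sampleBinding);
const connectionString = credentials.connection.postgres.composed[0];
console.log(connectionString);
```

The resulting connection string is what a PostgreSQL driver would use to open the database connection.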

Below the Deployment manifest, we create a Kubernetes Service. A Service defines a policy for accessing a set of pods and gives your application a stable endpoint while individual pods die and are replaced. Here we’re running it as a NodePort Service, which exposes a set of pods to external clients on a reserved port (in this case, 30081). So, in our case, we have the Service as follows:

     apiVersion: v1
     kind: Service
     metadata:
       name: cloudpostgres-service
       labels:
         run: clouddb-demo
     spec:
       type: NodePort
       selector:
         run: clouddb-demo
       ports:
       - protocol: TCP
         port: 8080
         nodePort: 30081

For the selector, we must use the same label key-value pair run: clouddb-demo as our Deployment so that the Service can identify which pods to expose. The protocol we’re using to expose the ports is TCP; it’s the default, so defining it explicitly in the YAML file is optional.

With the Deployment and Service defined, we’re ready to deploy the application to IBM Cloud.

Deploying the application

To deploy the application, you’ll need to access your Kubernetes cluster using the kubectl command line tool. From your application’s directory, run the following command to use the clouddb-deployment.yml to deploy your application:

     kubectl apply -f clouddb-deployment.yml

If successful, you’ll see the following:

     deployment.apps/icdpostgres-app created
     service/cloudpostgres-service created

You can inspect the three pods that you created:

     kubectl get pods -o wide

The result will look like this:

     NAME                             READY   STATUS    RESTARTS   AGE   IP               NODE
     icdpostgres-app-d6d7695d-gwd8d   1/1     Running   0          4h    172.30.173.156   10.76.193.194
     icdpostgres-app-d6d7695d-nxsdq   1/1     Running   0          4h    172.30.173.155   10.76.193.194
     icdpostgres-app-d6d7695d-v7kkg   1/1     Running   0          4h    172.30.173.154   10.76.193.194

There are several kubectl commands you can run to inspect your deployment, but what we really want is to see the application running in the browser. To get the public IP for your cluster, run:

     ibmcloud ks workers <your_cluster_name>

This will give you something like the following, showing the single worker node since we’re using the free plan:

     ID                                                 Public IP      Private IP      Machine Type   State    Status   Zone    Version
     kube-hou02-pa765698911fd04d1193c586e1454ddf5e-w1   184.173.5.54   10.76.193.194   free           normal   Ready    hou02   1.10.11_1537

Using the public IP, visit port 30081, which is the port exposed by your Kubernetes Service. You should then see your cloud-native application running successfully, and you can start adding words to your database and watch them update in the browser.

Summing up

This tutorial showed you how to set up a simple web application using IBM Cloud Databases for PostgreSQL and deploy it with the IBM Cloud Kubernetes Service. Once you’ve deployed this application, you can experiment with other Kubernetes configurations and start deploying more complex applications. As we’ve shown, it’s pretty simple to set up and get started. All you really need are the basic building blocks: a cluster, a container registry, some code, and a database. The IBM Cloud Kubernetes Service and IBM Cloud Databases for PostgreSQL take care of managing your cluster and your database so that you can concentrate on building the applications that run your business, instead of worrying about high availability, elasticity, and compliance.

Enjoyed this article? Get started with Databases for PostgreSQL now.

Databases for PostgreSQL is a fully managed, enterprise-ready PostgreSQL service with built-in security, high availability, and backup orchestration. Learn more here.

Software Engineer and Evangelist - IBM Cloud Databases
