June 23, 2020 By Eric Jodet 7 min read

Learn how to create the “Develop a Kubernetes app with Helm” toolchain and continuously deliver an app to a Kubernetes cluster using a Helm Chart.

In this tutorial, you will create a toolchain from the Helm template that deploys a sample Node application to a Kubernetes cluster using Helm charts. You then modify the code and observe your changes being automatically deployed to IBM Cloud.

Note: This tutorial is a simplified version of the Use the “Develop a Kubernetes app with Helm” toolchain tutorial. Review this tutorial summary for more information.


Task 1: Create the toolchain

1. Navigate to the “Develop a Kubernetes app with Helm” toolchain creation page.

2. Click the About tab to review the diagram of the toolchain. The diagram shows each tool integration in its lifecycle phase in the toolchain.

3. Click the Create tab again, and review the toolchain’s default settings:

  • Toolchain name: Enter a name of your choice, or keep the default
  • Region
  • Resource Group

4. Important: If you need to switch to a different account (one that has access to a shared Kubernetes cluster), click the Profile avatar icon in the banner and select the account.

5. Review the Git Repo and Issue Tracking settings. Each toolchain comes with a sample app, but you can select another repo to use. Learn more about Track deployment of code changes option by reading this blog.

6. Click the Delivery Pipeline tab. These fields are displayed:

  • App name: Enter a name for your application, or keep the default value.
  • IBM Cloud API Key: Click the New button next to the IBM Cloud API Key field, and select OK in the resulting dialog box to create a new, unique API key.
  • Container Registry Region: Select the region in which you want the container images to be created. By default, the registry region matches the cluster region.
  • Container Registry Namespace: This namespace is your folder in the global image registry in a region, and it is used to manage your set of images. Either enter or select a namespace.
  • Cluster Region: Select the region of the target cluster: the Kubernetes cluster that you created at the start of the tutorial, or the cluster in the shared account.
  • Resource Group: Select the resource group for your delivery pipeline. For more information, see “Best practices for organizing resources in a resource group.”
  • Cluster Name: Select the name of the Kubernetes cluster that you created at the start of the tutorial, or of the cluster in the shared account.
  • Cluster Namespace: Use distinct namespaces to insulate deployments within the same Kubernetes cluster. You can leave this setting at its default value, prod.
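The cluster namespace setting corresponds to an ordinary Kubernetes Namespace object. As a hedged sketch (prod is the toolchain default; staging is an assumed second environment), you could pre-create such namespaces yourself from rendered manifests:

```shell
# Sketch: render one Namespace manifest per environment; "prod" is the
# toolchain default, "staging" is an assumed second environment.
for ns in prod staging; do
  cat > "namespace-${ns}.yaml" <<EOF
apiVersion: v1
kind: Namespace
metadata:
  name: ${ns}
EOF
done
# Apply to the cluster with: kubectl apply -f namespace-prod.yaml
```

The toolchain creates its namespace for you, so this is only useful if you want to manage namespaces as source-controlled manifests.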

Note: In these steps, you provided only the names for these resources. The toolchain automatically creates the matching resources for you. If these resources already exist, they are reused.

Note the Secret Picker icon next to the New API key button, which enables you to pick your API key from a secrets store such as Key Protect.

7. Click Create. After a few moments, your new toolchain overview page opens.

Task 2: Explore the pipeline

The following steps help you explore the Delivery Pipeline in your toolchain.

1. On the toolchain Overview page, click Delivery Pipeline to see your toolchain as it is being built and deployed.

The pipeline is automatically triggered on every Git commit push.

Your pipeline might still be running.

Review a stage’s jobs by opening the stage’s menu and selecting the Configure Stage option.

Each job in the pipeline is set up with all of its commands inline. A better way to maintain the pipeline is to externalize the scripts so that they are part of the repo and source-controlled. For example, you can copy the content of the Build script in the job into a ./scripts/do.sh file, then change the Build script in the pipeline job to run only that script; for instance, source ./scripts/do.sh. From then on, you update the pipeline jobs by updating the script files in the repo.
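The externalization described above can be sketched as follows. The do.sh name comes from the text; the script body and the image variables are illustrative assumptions, not the toolchain’s actual Build script:

```shell
# Move the inline Build script into a source-controlled file in the repo.
mkdir -p scripts
cat > scripts/do.sh <<'EOF'
#!/bin/bash
# Formerly the inline Build script; the variables below are illustrative.
echo "Building ${IMAGE_NAME:-hello-app}:${BUILD_NUMBER:-1}"
EOF
chmod +x scripts/do.sh

# The pipeline job's Build script is then reduced to a single line:
source ./scripts/do.sh
```

Because the script now lives in the repo, changes to it go through the same commit-triggered pipeline as the application code.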

For more information about the delivery pipeline, see the Pipeline overview in the IBM Cloud Docs.

To learn about the practice of creating and using delivery pipelines, see “Automate continuous delivery through a delivery pipeline.”

2. Review the BUILD stage:

  • The Fetch code job clones the repository.
  • It also records a few environment variables into a build.properties file.
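A hedged sketch of what such a build.properties file might contain; the key names here are assumptions for illustration, not the toolchain’s actual keys:

```shell
# Record build metadata for later pipeline stages; key names are illustrative.
GIT_COMMIT=$(git rev-parse HEAD 2>/dev/null || echo "unknown")
cat > build.properties <<EOF
GIT_COMMIT=${GIT_COMMIT}
GIT_BRANCH=${GIT_BRANCH:-master}
BUILD_NUMBER=${BUILD_NUMBER:-1}
EOF
# A later stage can load the recorded values with: source build.properties
```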

3. Click CANCEL to return to the pipeline.

4. On the BUILD stage, click View logs and history to review the execution of the stage’s jobs.

5. On the pipeline’s log page, click the Back arrow to return to the pipeline page.

6. On the CONTAINERIZE stage, click the Configure Stage icon to explore the stage.

  • Input tab: Observe that this stage’s input is set to the output of the previous stage.
  • Jobs tab: Select the Check vulnerabilities job. This job is configured to be advisory, so if it fails, it does not block the pipeline. You can change this behavior by selecting the Stop running this stage if this job fails check box.

7. Click CANCEL to return to the pipeline.

8. On the CONTAINERIZE stage, click View logs and history. This stage runs the Vulnerability Advisor on the image to check for known vulnerabilities. If a vulnerability is found, the stage fails, preventing an insecure image from being deployed. The sample image has no vulnerabilities, so it passes. For more information about reviewing vulnerabilities, see Reviewing image security in the IBM Cloud Docs.

9. On the pipeline’s log page, click the Back arrow to return to the pipeline page.

10. On the DEPLOY stage, click the Configure Stage icon to explore the stage. The Deploy with Helm job checks for cluster readiness and namespace existence, configures the cluster namespace, grants access to the private image registry, configures Tiller (the Helm 2 server-side component), and checks the Helm releases in the namespace. It also sets environment variables and deploys the Helm chart into the Kubernetes cluster.
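A rough sketch of those deployment steps, using Helm 2 era commands. The namespace, release, and chart path are assumptions, and DRY_RUN=echo makes the script only print the commands rather than run them against a cluster:

```shell
# Hedged sketch of the Deploy with Helm job (Helm 2; names are assumptions).
DRY_RUN=echo          # set to "" to actually run against a cluster
NAMESPACE=prod
RELEASE=hello-app
CHART_PATH=./chart/hello-app

# Check that the target namespace exists:
$DRY_RUN kubectl get namespace "$NAMESPACE"
# Configure Tiller, Helm 2's in-cluster component:
$DRY_RUN helm init --upgrade
# Review existing releases in the namespace:
$DRY_RUN helm list --namespace "$NAMESPACE"
# Deploy or upgrade the chart:
$DRY_RUN helm upgrade --install "$RELEASE" "$CHART_PATH" --namespace "$NAMESPACE"
```

The actual job also wires in the image tag produced by the CONTAINERIZE stage; that plumbing is omitted here.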

11. Click CANCEL to return to the pipeline.

12. On the DEPLOY stage, click View logs and history and then click the Deploy with Helm job. This job deploys the app into the Kubernetes cluster. At the end of the log file, find the link to http://IP:PORT.

13. On the pipeline’s log page, click the Back arrow to return to the pipeline page.

14. Browse to http://IP:PORT to see the running application.

Task 3: Make, commit, and deploy a change

1. Back on the main toolchain page, click the Orion Web IDE tile.

2. Navigate to and edit app.js.

3. At line 28, modify the app’s welcome message and change the text to “IBM Cloud DevOps in action!”

4. Click the Git icon on the left side.

5. Add a comment and commit your changes by clicking the Commit button on the right.

6. Push your changes to the Git repository.
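If you prefer the git CLI to the Orion Web IDE, steps 2 through 6 can be sketched locally like this. The repo name and file contents below are illustrative stand-ins; in practice you would clone your toolchain’s actual repo:

```shell
# Illustrative stand-in for the sample repo; clone your real toolchain repo instead.
git init demo-repo && cd demo-repo
git config user.email "you@example.com" && git config user.name "You"
cat > app.js <<'EOF'
// illustrative stand-in for line 28 of the sample app
var message = 'Hello World!';
EOF
git add app.js && git commit -m "Initial commit"

# Edit the welcome message (step 3) and commit the change (step 5):
sed -i 's/Hello World!/IBM Cloud DevOps in action!/' app.js
git add app.js
git commit -m "Update welcome message"
# git push origin master   # step 6: the push triggers the Delivery Pipeline
```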

7. Navigate back to your toolchain, and click on the Delivery Pipeline.

Observe that the pipeline was triggered by your commit. Check the job’s build input.

8. Wait for the pipeline to complete, and ensure that the DEPLOY stage is successful. Optionally, check the Deploy with Helm job’s history.

9. Verify that your changes were deployed successfully by refreshing the deployed app’s page.


Congratulations! You created a toolchain that uses Helm charts to deploy an app to a secure container in a Kubernetes cluster. You updated the app and pushed the updates to the Git repo. After the delivery pipeline redeployed the app, you verified the update.

What’s next?

Add more tools to your toolchain:

  • DevOps Insights: Use IBM Cloud DevOps Insights to improve build quality for your projects. It provides data for team insights and deployment risk. Get insights about your application by uploading unit tests, code coverage, functional verification tests, and static security scans for each application to DevOps Insights.
  • Slack: Slack is a cloud-based, real-time messaging and notification system. Slack provides persistent chat, which is a more interactive alternative to email for team collaboration. 


Report a problem or look for help

Get help fast directly from the IBM Cloud development teams by joining us on Slack.
