April 21, 2020 By Powell Quiring 5 min read

As a developer or administrator, learn how to use Tekton in your CI/CD pipelines on IBM Cloud.

IBM has been providing tools for DevOps and CI/CD for decades. One component of IBM Cloud DevOps is the Delivery Pipeline, which has recently been expanded to include Tekton 0.10.1. The Tekton Pipelines project is new, open source, and still evolving. The project has support and active commitment from leading technology companies, including IBM, Red Hat, Google, and CloudBees.

This is Part 2 of a multi-part tutorial that guides you through the Tekton Pipelines technology. See Part 1 of this series for background.

Before you begin

This second part of the multi-part series focuses on parameters and Secrets. Step through the initialization portion of the previous post to create the Tekton Delivery Pipeline. In addition, I'm using my GitHub fork (powellquiring), but you should use your own.

Configure the Tekton Pipeline

  • Open the DevOps Toolchain resource created in Part 1.
  • Open the Delivery Pipeline.
  • Open Configure Pipeline.
  • Select the Definitions panel and update it. If you are continuing from the previous post, simply change the Path to lab2-parameters.
  • Select the Triggers panel and add manual triggers for all EventListeners. Name each trigger the same as the EventListener name. This will result in the following:
    • task-default-variable
    • pipeline-supplied-variable
    • user-defined-variable
    • user-defined-secret-variable

Click Close to return to the Delivery Pipeline dashboard.

Define a parameter for a Task step

The Task below has a parameter specification declaring var with a default of VALUE:

apiVersion: tekton.dev/v1alpha1
kind: Task
metadata:
  name: the-var-task
spec:
  inputs:
    params:
      - name: var
        description: var example in task
        default: VALUE
  steps:
    - name: echoenvvar
      image: ubuntu
      env:
        - name: VAR
          value: $(inputs.params.var)
      command:
        - "/bin/bash"
      args:
        - "-c"
        - "echo 01 lab2 env VAR: $VAR"
    - name: echovar
      image: ubuntu
      command:
        - /bin/echo
      args:
        - $(inputs.params.var)
    - name: shellscript
      image: ubuntu
      env:
        - name: VAR
          value: $(inputs.params.var)
      command: ["/bin/bash", "-c"]
      args:
        - |
          echo this looks just like a shell script and the '$ ( inputs.params.var ) ' is subbed in here: '$(inputs.params.var)'
          env | grep VAR
          echo done with shellscript

Click Run Pipeline and choose task-default-variable. When it completes, click on the pipeline run results to see the following output:

[pdv : echoenvvar]
01 lab2 env VAR: VALUE

[pdv : echovar]
VALUE

[pdv : shellscript]
this looks just like a shell script and the $ ( inputs.params.var )  is subbed in here: VALUE
VAR=VALUE
done with shellscript

WARNING: The Tekton parameter specification $(inputs.params.var) looks like a bash shell variable, but it is not. The Tekton parameter substitution will be done before invoking the container.

Define parameters from a Pipeline to a Task

But how do parameters get to the Task? They are supplied in the Pipeline:

apiVersion: tekton.dev/v1alpha1
kind: Pipeline
metadata:
  name: pipeline-supplied-variable
spec:
  tasks:
    - name: psv
      params:
        - name: var
          value: PIPELINE_SUPPLIED
      taskRef:
        name: the-var-task

The Pipeline Task supplies parameters to the same the-var-task. Click Run Pipeline, choose pipeline-supplied-variable, and check out the results.
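
Assuming the same three steps in the-var-task, the run log should resemble the earlier output with PIPELINE_SUPPLIED substituted in each step (psv is the Pipeline Task name above):

[psv : echoenvvar]
01 lab2 env VAR: PIPELINE_SUPPLIED

[psv : echovar]
PIPELINE_SUPPLIED

[psv : shellscript]
this looks just like a shell script and the $ ( inputs.params.var )  is subbed in here: PIPELINE_SUPPLIED
VAR=PIPELINE_SUPPLIED
done with shellscript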

Define parameters from a user to a task

How do I get values from a user clicking in the Delivery Pipeline console UI into the Pipeline? A parameter specification is declared in the TriggerTemplate. The PipelineRun parameter $(params.var) is expanded when the PipelineRun is created, just as it was expanded above within the Task. In our example, this happens when the Run Pipeline button is clicked, invoking the EventListener and the associated TriggerTemplate:

apiVersion: triggers.tekton.dev/v1alpha1
kind: TriggerTemplate
metadata:
  name: trigger-user-supplied-variable
spec:
  params:
    - name: var
      description: var example
  resourcetemplates:
    - apiVersion: tekton.dev/v1alpha1
      kind: PipelineRun
      metadata:
        name: pipelinerun-$(uid)
      spec:
        pipelineRef:
          name: pipeline-input-parameter-variable
        params:
          - name: var
            value: $(params.var)

Similarly, the Pipeline has a parameter specification and the Pipeline Task is enhanced with a parameter expansion:

apiVersion: tekton.dev/v1alpha1
kind: Pipeline
metadata:
  name: pipeline-input-parameter-variable
spec:
  params:
    - name: var
      description: var example in pipeline
  tasks:
    - name: pipv
      params:
        - name: var
          value: $(params.var)
      taskRef:
        name: the-var-task
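
For reference, each manual trigger is wired to its TriggerTemplate through an EventListener defined in the repository; this is why the manual trigger names in the Triggers panel match the EventListener names. A minimal sketch of the wiring, assuming the Tekton Triggers v1alpha1 API:

apiVersion: triggers.tekton.dev/v1alpha1
kind: EventListener
metadata:
  name: user-defined-variable
spec:
  triggers:
    - template:
        name: trigger-user-supplied-variable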

To see this in action in the GUI, you will create an environment property:

  • Click on Configure Pipeline.
  • Click Environment Properties and add a text property named var with a value like "defined in environment properties".
  • Click Save and click Close.

Now click Run Pipeline with the manual trigger user-defined-variable. Environment properties whose names match a TriggerTemplate parameter specification are expanded; in our case, the var environment property is expanded in the PipelineRun.
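
To illustrate the expansion, the TriggerTemplate above generates roughly the following PipelineRun when var is set to the value from the previous step (the $(uid) placeholder becomes a generated identifier):

apiVersion: tekton.dev/v1alpha1
kind: PipelineRun
metadata:
  name: pipelinerun-<generated-uid>
spec:
  pipelineRef:
    name: pipeline-input-parameter-variable
  params:
    - name: var
      value: defined in environment properties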

Check the output and verify the parameters were passed through correctly.

Secure Property

If the Environment Property is created as a Secure Property, its value is not displayed in the logs. All occurrences of the string in logs stored on IBM's servers are replaced by a string of asterisks.

In the Environment Properties, delete the var text property, add var again as a Secure Property, run the pipeline again, and notice the output change.

Secrets

A Kubernetes Secret object can be created by the TriggerTemplate. This can be handy if one member of your team owns the Secret and a different member of the team is responsible for running the Pipeline. Below, we'll create a Secret object named secret-object. The apikey parameter below identifies a property in the Environment Properties panel of the Pipeline configuration. The Secret object holds {key: value} pairs; in this case, {secret_key: $(params.apikey)}.

apiVersion: triggers.tekton.dev/v1alpha1
kind: TriggerTemplate
metadata:
  name: trigger-user-supplied-secret-variable
spec:
  params:
    - name: apikey
      description: the ibmcloud api key
  resourcetemplates:
    - apiVersion: v1
      kind: Secret
      metadata:
        name: secret-object
      type: Opaque
      stringData:
        secret_key: $(params.apikey)

Note that the parameter is not passed through the PipelineRun into the Pipeline. Instead, the Task can pull the value from the Secret object. In the excerpt below (completed with an illustrative name, image, and command that the original elides):

  • secret-object: name of the Secret resource
  • secret_key: key of the {key: value} pair
apiVersion: tekton.dev/v1alpha1
kind: Task
metadata:
  name: the-secret-task        # illustrative name; the excerpt elides it
spec:
  steps:
    - name: echoenvvar
      image: ubuntu            # illustrative image, added to complete the excerpt
      env:
        - name: API_KEY
          valueFrom:
            secretKeyRef:
              name: secret-object   # name of the Secret resource
              key: secret_key       # key of the {key: value} pair
      command: ["/bin/bash", "-c"]
      args:
        - "echo lab2 env API_KEY: $API_KEY"   # illustrative command

To see this in action in the GUI, follow these steps:

  • Click on Configure Pipeline.
  • Click on Environment Properties.
  • Add a Secure property apikey with a value like “veryprivate.”
  • Click Save and then Close.

Now Run Pipeline with the manual trigger user-defined-secret-variable, and when it completes, click on the PipelineRun results to verify the output.

Learn more

There is a vibrant, open source community working on the Tekton Pipelines project, and this is a great chance to join the fun. Tekton has been integrated into the IBM DevOps environment, and you can leverage IBM Cloud so that you can work on your business instead of your infrastructure.

This is Part 2 of a multi-part tutorial series—more Tekton posts are coming to explain topics like workspaces and sharing.

Report a problem or ask for help

Get help fast directly from the IBM Cloud development teams by joining us on Slack.
