April 10, 2020 By Henrik Loeser 4 min read

A hands-on example demonstrating how you can increase (multi)cloud security.

Both Key Protect and Hyper Protect Crypto Services on IBM Cloud are appreciated as highly secure tools for managing the lifecycle of encryption keys. But did you know that both can also be used as vaults for credentials? Their fine-grained access control and audit logging come in handy when there is a need to protect external credentials in a multicloud environment.

In this blog post, I am going to provide some hands-on insights on how to utilize Key Protect as such a vault. In the example, I demonstrate how to encode JSON-structured credentials and upload them to Key Protect (an example could be the key ID and secret to access AWS S3 storage for importing data into IBM Cloud). I will then show a Cloud Function action that retrieves the access data and makes it available.

Encode credentials as a standard key

Key Protect and Hyper Protect Crypto Services are solutions that help manage encryption keys on IBM Cloud. Both integrate with many cloud services, and you can generate new keys or bring or keep your own encryption keys (BYOK/KYOK). Basically, any base64-encoded string can be imported as a so-called standard key. This allows you to store custom credentials, not just encryption keys.

The following shows a JSON object with a userID and password that could be stored:

{
  "USERID": "my-user-id",
  "PASSWORD": "my-password"
}

What you want to do, in this case, is encode the JSON object to a base64 string and store (create) the resulting string as a new key in Key Protect. Assuming the JSON object is stored in a variable creds in a bash script, the following lines encode the object and then create the key:

# encode the credentials as a single-line base64 string (GNU base64; macOS base64 does not wrap and needs no "-w 0")
encoded=$(base64 -w 0 <<< "${creds}")

# import the encoded string into Key Protect as a new standard key named MY_CREDS
ibmcloud kp create MY_CREDS --standard-key --key-material "${encoded}" -i my-key-protect-instance-id --output json

You can find the full script with comments in the linked Gist on GitHub. The script makes use of the IBM Cloud CLI with the Key Protect plugin, as well as the standard “base64” command. Some required values are read from environment variables.

Upon completion, the script will return output similar to the following. It is the metadata for the new key, showing its ID and the resource identifier (CRN):

Sample output for new key with key ID and CRN.
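For illustration, such a response might look like the following sketch. All values here are made up, and the actual response includes additional attributes:

{
  "metadata": {
    "collectionType": "application/vnd.ibm.kms.key+json",
    "collectionTotal": 1
  },
  "resources": [
    {
      "id": "1234abcd-56ef-47ab-89cd-0123456789ab",
      "name": "MY_CREDS",
      "type": "application/vnd.ibm.kms.key+json",
      "extractable": true,
      "state": 1,
      "crn": "crn:v1:bluemix:public:kms:us-south:a/0123456789abcdef0123456789abcdef:1111aaaa-22bb-33cc-44dd-5555eeee6666:key:1234abcd-56ef-47ab-89cd-0123456789ab"
    }
  ]
}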

Security configuration

You need to set up user roles and permissions for access to the Key Protect instance and the managed keys (credentials). Key Protect has a special ReaderPlus role. Similar to the Reader role, it allows you to retrieve information about keys. In addition, users with the ReaderPlus role can access the actual key material of standard keys (i.e., the stored credentials). This means that the ReaderPlus role is needed for all users and service IDs that need to access the credentials.

The Writer role is required to create/store new keys. Note that it is possible to grant access to individual keys, further enhancing security by fine-tuning access on a need-to-know basis.

In the introduction, I promised to provide a Cloud Function action for retrieving the credentials from the vault. Cloud Functions organizes objects such as actions and packages in namespaces. These namespaces are controlled by IAM concepts (hence, IAM namespaces), and each namespace maps to an IAM service ID. In order for the action to be able to access the key in Key Protect, we need to grant that service ID the ReaderPlus service access role. Optionally, access could be limited to just the keys (credentials) needed to perform the designated tasks.

You can grant the access directly to the service ID. The best practice, however, is to first create an access group, which is used to manage the relationship between the users and service IDs in the group and their assigned privileges. The following screenshot shows an access group “KP Access” with the assigned ReaderPlus role for a specific Key Protect instance.

ReaderPlus service access role to retrieve key material.
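If you prefer scripting over the console, the same setup can be done with the IBM Cloud CLI. The following is a minimal sketch; the group name, instance ID, and service ID are placeholders, and Key Protect is identified by its IAM service name kms:

# create an access group for credential readers
ibmcloud iam access-group-create "KP Access" -d "Read access to credentials in Key Protect"

# assign the ReaderPlus service access role for the Key Protect instance to the group
ibmcloud iam access-group-policy-create "KP Access" --roles ReaderPlus --service-name kms --service-instance my-key-protect-instance-id

# add the service ID of the Cloud Functions namespace to the group
ibmcloud iam access-group-service-id-add "KP Access" my-namespace-service-id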

Retrieve credentials from the vault

Once access to Key Protect is configured, the designated users or apps can access the stored credentials. I implemented a Cloud Function action in Python to retrieve the credentials and pass them on, again as a JSON object. Subsequent actions in a sequence can then use the credentials to access an external service (e.g., an Amazon AWS S3 storage bucket or an emailing service).

All it takes is an IAM access token and information about the Key Protect instance and the key in question. This information can be passed in or obtained from the execution environment:

import base64
import json
import requests

def getKeyFromKeyProtect(access_token, kpInstId, kpKeyId):
    # replace REGION with, e.g., us-south
    url = "https://REGION.kms.cloud.ibm.com/api/v2/keys/%s" % kpKeyId
    headers = {"accept": "application/vnd.ibm.kms.key+json",
               "bluemix-instance": kpInstId,
               "Authorization": "Bearer " + access_token}
    response = requests.get(url, headers=headers).json()
    # the credentials are in the payload of the standard key
    kpPayload = response["resources"][0]["payload"]
    # decode the base64 payload back into the original JSON object
    return json.loads(base64.b64decode(kpPayload).decode())
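As mentioned, the access token can be passed in or obtained from the execution environment. For a standalone test, you could exchange an IBM Cloud API key for an IAM access token yourself. A minimal sketch (the helper getIAMToken and the placeholder values are mine, not part of the Gist):

def getIAMToken(api_key):
    # exchange an IBM Cloud API key for an IAM access token
    response = requests.post(
        "https://iam.cloud.ibm.com/identity/token",
        data={"grant_type": "urn:ibm:params:oauth:grant-type:apikey",
              "apikey": api_key},
        headers={"Content-Type": "application/x-www-form-urlencoded"})
    return response.json()["access_token"]

# retrieve the stored credentials and use them
creds = getKeyFromKeyProtect(getIAMToken("MY_API_KEY"),
                             "MY_KP_INSTANCE_ID", "MY_KEY_ID")
print(creds["USERID"])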

You can find the source for the whole Cloud Function action in this Gist on GitHub. Of course, you can use similar code in a (containerized) app on Cloud Foundry, Kubernetes, or OpenShift.

Conclusions

It is fairly easy to use Key Protect as a vault for credentials. A common use case is the secure management of external credentials (e.g., in a multicloud scenario). By making use of the IBM Cloud IAM capabilities, it is possible to grant users or service IDs (read) privileges on a need-to-know basis. All access is logged for compliance.

If you have feedback, suggestions, or questions about this post, please reach out to me on Twitter (@data_henrik) or LinkedIn.
