Financial services clients are increasingly looking to modernize their applications. This includes modernizing code development and maintenance (addressing scarce skills and enabling the innovation and new technologies that end users require), as well as improving deployment and operations through agile techniques and DevSecOps.
As part of their modernization journey, clients want the flexibility to determine the best “fit for purpose” deployment location for their applications. This may be any of the environments that hybrid cloud supports (on premises, on a private cloud, on a public cloud or at the edge). IBM Cloud Satellite® fulfills this requirement by allowing modern, cloud-native applications to run anywhere the client requires, while maintaining a standard and consistent control plane for administering applications across the hybrid cloud.
Moreover, many of these financial services applications support regulated workloads, which require strict levels of security and compliance, including Zero Trust protection of the workloads. IBM Cloud for Financial Services fulfills that requirement by providing an end-to-end security and compliance framework that can be used to implement and/or modernize applications securely across the hybrid cloud.
In this blog, we showcase how to easily deploy a banking application on both IBM Cloud for Financial Services and Satellite, using automated continuous integration, continuous delivery and continuous compliance (CI/CD/CC) pipelines in a common and consistent manner. This requires a deep level of security and compliance throughout the entire build and deployment process.
The purpose of IBM Cloud for Financial Services is to provide security and compliance for financial services companies. It does so by leveraging industry standards like NIST 800-53 and the expertise of more than a hundred financial services clients who are part of the Financial Services Cloud Council. It provides a control framework that can be easily implemented by using Reference Architectures, Validated Cloud Services and ISVs, as well as the highest levels of encryption and continuous compliance (CC) across the hybrid cloud.
IBM Cloud Satellite provides a true hybrid cloud experience. Satellite allows workloads to be run anywhere without compromising security, and a single pane of glass makes it easy to see all resources in one dashboard. To deploy applications onto these varying environments, we developed a set of robust DevSecOps toolchains to build applications, deploy them to a Satellite location in a secure and consistent manner, and monitor the environment using the best DevOps practices.
In this project, we used a loan origination application that was modernized to use Kubernetes and microservices. To deliver this service, the bank application employs an ecosystem of partner applications interoperating using the BIAN framework.
The application used in this project is a loan origination application developed as part of the BIAN Coreless 2.0 initiative. A customer obtains a personalized loan through a safe and secure online channel offered by a bank. The application employs an ecosystem of partner applications interoperating on the BIAN architecture, which is deployed on IBM Cloud for Financial Services. The BIAN Coreless Initiative empowers financial institutions to select the best partners to bring new services to market quickly and efficiently through BIAN architectures. Each component, or BIAN Service Domain, is implemented as a microservice deployed on a Red Hat OpenShift Container Platform (OCP) cluster on IBM Cloud.
Figure: Application components based on BIAN Service Domains
An agile DevSecOps workflow was used to complete the deployments across the hybrid cloud. DevSecOps workflows focus on a frequent and reliable software delivery process. The methodology is iterative rather than linear, which allows DevOps teams to write code, integrate it, run tests, deliver releases and deploy changes collaboratively and in real time, while keeping security and compliance in check.
The IBM Cloud for Financial Services deployment was achieved on a secure landing zone cluster, and the infrastructure deployment is automated using policy as code (Terraform). The application comprises various components, each deployed using its own continuous integration (CI), continuous delivery (CD) and continuous compliance (CC) pipeline on a Red Hat OpenShift cluster. To achieve the deployment on Satellite, the CI/CC pipelines were reused and a new CD pipeline was created.
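As a rough illustration of the policy-as-code approach, a pipeline stage could apply the landing-zone Terraform along the following lines (the directory name and exact workflow are assumptions for the example, not the project's actual automation):

```sh
# Hypothetical pipeline step: provision the secure landing zone with Terraform.
# The "landing-zone" directory name is a placeholder.
terraform -chdir=landing-zone init -input=false
terraform -chdir=landing-zone plan -input=false -out=tfplan
terraform -chdir=landing-zone apply -input=false tfplan
```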
Each component of the IBM Cloud deployment had its own CI pipeline, and a set of recommended procedures and approaches is included in the CI toolchain. A static code scanner inspects the application repository for any secrets stored in the application source code, as well as any vulnerable packages used as dependencies within the application’s code. For each Git commit, a container image is created and tagged based on the build number, timestamp and commit ID; this tagging system ensures the image’s traceability. Prior to creating the image, the Dockerfile is tested. The created image is saved in a private image registry, and the access privileges for the target cluster deployment are automatically configured using API tokens, which can be revoked. A security vulnerability scan is performed on the container image, and a Docker signature is applied upon successful completion. The deployment record is updated instantly with the newly created image tag. An explicit namespace within the cluster is used to isolate each deployment. Any code merged into the specified branch of the Git repository, expressly for deployment on the Kubernetes cluster, is automatically built, verified and deployed.
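To make the tagging scheme concrete, the sketch below shows how a CI run might build, tag and push a single component image. The registry, namespace and image names are placeholders rather than the project's actual values, and the scan shown is only one possible check.

```sh
# Illustrative CI step: derive a traceable tag from the build number,
# timestamp and commit ID, then build and push the component image.
IMAGE_TAG="${BUILD_NUMBER}-$(date -u +%Y%m%d%H%M%S)-$(git rev-parse --short HEAD)"
IMAGE="us.icr.io/bian-coreless/consumer-loan:${IMAGE_TAG}"

docker build -t "${IMAGE}" .
docker push "${IMAGE}"

# The toolchain then scans and signs the pushed image; for example, the
# IBM Container Registry Vulnerability Advisor report can be checked from the CLI.
ibmcloud cr va "${IMAGE}"
```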
Details of each Docker image are stored in an inventory repository, which is explained in detail in the Continuous Deployment section of this blog. In addition, evidence is gathered throughout every pipeline run. This evidence describes which tasks were carried out in the toolchain, such as vulnerability scans and unit tests, and it is stored in a Git repository and a Cloud Object Storage bucket so that it can be audited if necessary.
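As an example of how one piece of evidence might be persisted, the following sketch uploads a scan summary to a bucket with the IBM Cloud COS CLI plugin; the bucket name, object key and file name are placeholders invented for the example.

```sh
# Illustrative only: store a pipeline evidence file in a Cloud Object Storage
# bucket so it can be audited later (requires the COS CLI plugin).
ibmcloud cos upload \
  --bucket devsecops-evidence \
  --key "ci/${PIPELINE_RUN_ID}/va-scan-summary.json" \
  --file va-scan-summary.json
```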
For the Satellite deployment, we reused the CI toolchains from the IBM Cloud deployment described above. Because the application remained unchanged, it was unnecessary to rebuild the CI pipelines for the new deployment.
The inventory serves as the source of truth regarding which artifacts are deployed in which environment and region; this is achieved by using Git branches to represent environments, with a promotion pipeline updating environments in a GitOps-based approach. In previous deployments, the inventory also hosted the deployment files, the YAML Kubernetes resource files that describe each component. These deployment files would be updated with the correct namespace descriptors, along with the newest version of the Docker image for each component.
However, we found this approach difficult for a few reasons. From the application’s perspective, changing so many image tag values and namespaces using YAML replacement tools (such as yq) was crude and complicated. For Satellite itself, we are using the direct upload strategy, with each YAML file provided counting as a “version”; we would prefer a version to correspond to the entire application, not just one component or microservice.
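For illustration, the previous approach looked roughly like the following for each component, with the file paths, namespace and image names invented for the example:

```sh
# Sketch of the old approach: patch the namespace and image tag in every
# component's deployment YAML with yq (v4 syntax) before upload.
yq -i ".metadata.namespace = \"loan-origination\"" deployments/consumer-loan.yaml
yq -i ".spec.template.spec.containers[0].image = \"us.icr.io/bian-coreless/consumer-loan:${IMAGE_TAG}\"" deployments/consumer-loan.yaml
# ...repeated for every microservice and every target environment.
```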
A different approach was desired, so we rearchitected the deployment process around a Helm chart instead. This allowed us to parametrize the important values, such as namespaces and image tags, and inject them at deployment time. Using these variables removes much of the difficulty of parsing YAML files for a given value. The Helm chart was created separately and stored in the same container registry as the built BIAN images. We are currently developing a dedicated CI pipeline for validating Helm charts; it will lint the chart, package it, sign it (with the signature verified at deployment time) and store it. For now, these steps are performed manually while the chart is being developed.

There is one issue with combining Helm charts and Satellite configurations: Helm works most effectively with a direct connection to a Kubernetes or OpenShift cluster, which Satellite, of course, does not allow. To solve this, we use the “helm template” command to render the chart into correctly formatted YAML and pass the resulting file to the Satellite upload function, which uses the IBM Cloud Satellite CLI to create a configuration version containing the application YAML. There are some drawbacks: we lose useful Helm functionality, such as rolling back to a previous chart version and running Helm tests to confirm that the application is functioning correctly. However, we can use the Satellite rollback mechanism as a replacement and build on its versioning.
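To make the helm-template-to-Satellite flow concrete, here is a minimal sketch. The release, chart, configuration and version names are invented for the example, and the exact Satellite CLI flags are an assumption here; check the current help for “ibmcloud sat config version create” before relying on them.

```sh
# Render the Helm chart to plain YAML, injecting namespace and image tag,
# then upload the result as a new Satellite configuration version.
helm template bian-loan-origination ./bian-chart \
  --set global.namespace="loan-origination" \
  --set consumerLoan.image.tag="${IMAGE_TAG}" \
  > rendered-application.yaml

# Flag names below are assumed; verify against the Satellite CLI documentation.
ibmcloud sat config version create \
  --config bian-loan-origination \
  --name "v${BUILD_NUMBER}" \
  --file-format yaml \
  --read-config rendered-application.yaml
```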
The CC pipeline is important for continuous scanning of deployed artifacts and repositories. The value here is in finding newly reported vulnerabilities that may have been discovered after the application has been deployed. The latest definitions of vulnerabilities from organizations such as Snyk and the CVE Program are used to track these new issues. The CC toolchain runs a static code scanner at user-defined intervals on the application repositories that are provided to detect secrets in the application source code and vulnerabilities in application dependencies.
The pipeline also scans container images for security vulnerabilities. Any incident issue that is found or updated during the scan is marked with a due date. At the end of every run, evidence summarizing the details of the scan is created and stored in IBM Cloud Object Storage.
DevOps Insights is valuable for keeping track of issues and the overall security posture of your application. This tool contains all the metrics from previous toolchain runs across all three systems: continuous integration, continuous deployment and continuous compliance. Every scan or test result is uploaded to it, so over time you can observe how your security posture is evolving.
Achieving continuous compliance in a cloud environment is significant for highly regulated industries like financial services that want to protect customer and application data. In the past, this process was difficult and manual, which put organizations at risk. But with IBM Cloud Security and Compliance Center, you can add daily, automatic compliance checks to your development lifecycle to help reduce this risk. These checks include various assessments of DevSecOps toolchains to ensure security and compliance.
Based on our experience with this project and other similar projects, we created a set of best practices to help teams implement hybrid cloud solutions for IBM Cloud for Financial Services and IBM Cloud Satellite:
In this blog, we showcased our experience implementing a banking application based on BIAN across the hybrid cloud, using DevSecOps pipelines to deploy the workload both on IBM Cloud and in a Satellite environment. We discussed the pros and cons of different approaches and the best practices we derived from this project. We hope this helps other teams pursue their hybrid cloud journey with more consistency and speed. Let us know your thoughts.