Part 3 of the Hybrid Cloud Architecture series: Security
Hey everyone, we’re excited to close out the hybrid cloud architecture series with our last installment on Security. Security continues to be incredibly important as government regulations become stricter and data breaches get larger and more expensive to combat. In this lightboarding video, I’m going to cover the basics of hybrid cloud security by hitting three major topics:
North-south traffic: Requests between end-users and your applications, like a frontend request from a web app.
East-west traffic: Requests between your environments, such as backend requests between microservices.
DevSecOps: The practice of building security considerations and implementation into your existing DevOps workflows.
We’ll cover best practices—like the principle of least privilege for authorizations—and mTLS encryption with Istio. We’ll also talk about general cloud-provided capabilities, like checks to ensure your images are secure and free of vulnerabilities. There’s a lot to cover with the three categories I mentioned, and I’ll touch on the most important aspects of security when planning your hybrid cloud architecture. Tune in and let us know what you think!
If you haven’t caught up on the other parts of this series, please see the links below.
Introduction—An overview of the topics to be covered in the hybrid cloud architecture lightboarding video series. (View video here.)
Part 1: Connectivity—How do we securely connect between all the environments in our hybrid cloud architecture? (View video here.)
Part 2: Modernization—What are key strategies to modernize legacy applications by utilizing hybrid cloud capabilities? (View video here.)
Learn more about hybrid cloud
If you’re interested in learning more about hybrid cloud, its capabilities, and how it fits in with public cloud and private cloud, please check out the resources below.
Hi everyone, my name is Sai Vennam, and I’m a developer advocate with IBM. Today, I want to talk about security with hybrid cloud architectures. This is going to be Part 3 of the hybrid cloud architecture series.
North-south vs. east-west network traffic
Security is a nuanced topic, but to kind of help me explain, I’m going to start with two major concepts: north-south network traffic vs. east-west network traffic.
On-prem, north-south network traffic
When I walked into the office today, I had to pull out my badge and scan to get into the building. This is something called perimeter security, and it’s a core part of north-south network traffic. Essentially, what that refers to is any traffic that’s traveling from end-user applications to your data centers or public or private cloud environments.
Let’s take a step back and kind of explain these pieces here. So, we talked about this in the previous videos, but what we’ve got here is the Stock Trader monolith, which is going to be on an on-prem data center. We’ve got a couple of services here—maybe something to help us talk to the cloud and maybe a data store as well.
So, we mentioned perimeter security, and that’s something you honestly take as a given with data centers—you have that firewall sitting in front of the data center, giving you a private network for the applications and workloads running within it. This made security a lot easier to tackle when working with monolithic applications, but it did put the onus of security on the enterprise application developer.
The main thing here to actually secure these endpoints was to make sure that all the capabilities that this monolith exposes (those API endpoints) were secured. And to do that, we could take advantage of something like an API gateway. So, traditionally what we would see is a gateway that’s set up in front of that on-prem application with key capabilities exposed that may be required by that frontend to render the application. And potentially the same for a mobile app as well. That, I think, helps tackle security with north-south network traffic on the on-prem side.
North-south traffic in the public or private cloud
Let’s shift gears here for a second and talk about the public cloud side or even potentially a private cloud.
I’ll talk about the different components here later in the video but let’s start with this piece right here, which is the Kubernetes worker. Within the Kubernetes worker, we can assume that we have a couple of services that we need to actually render the Stock Trader application, whether it’s mobile or in a web app. We have a couple of services and can assume they talk to one another.
So, what happens when an end user actually accesses the application? Well, first, they’ll hit that endpoint that becomes available, at which point they will enter the public cloud. At that layer, we get things like denial-of-service protection and other capabilities the cloud provider offers to make sure those requests are authenticated and arriving in a safe manner.
The next thing that happens, that request will get forwarded to your actual Kubernetes worker node with the capabilities that it exposes. So, at that level, we have a couple of options for securing those endpoints.
Layer 3 vs. Layer 7 security
Let’s say we want to hit this first microservice running in a Kubernetes worker—there are two ways we can configure security policies. The first is at Layer 3, which, if you’re familiar, is things like IPs and ports—basically, it allows you to configure policies for any network interface. That’s going to be done with things like Calico or native Kubernetes network policies. So, that handles the Layer 3 security level.
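As a minimal sketch, a native Kubernetes network policy at this level might look like the following (the namespace, app labels, and port are hypothetical, borrowed from the Stock Trader example):

```yaml
# Hypothetical Layer 3/4 policy: only pods labeled app=trader may reach
# the "portfolio" microservice, and only on TCP port 9080.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: portfolio-ingress
  namespace: stock-trader
spec:
  podSelector:
    matchLabels:
      app: portfolio
  policyTypes:
    - Ingress
  ingress:
    - from:
        - podSelector:
            matchLabels:
              app: trader
      ports:
        - protocol: TCP
          port: 9080
```

Notice the policy only speaks in terms of selectors, IPs, and ports—it has no notion of HTTP methods or paths, which is exactly where Layer 7 tooling comes in.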
The other option we have here is to use something like Istio for Layer 7 network policies and routing for security. Together, with those two capabilities, we can cover everywhere from Layer 3 to Layer 7 network security policies.
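At Layer 7, an Istio authorization policy can make decisions based on HTTP details that a Layer 3 policy can’t see. A sketch, again with hypothetical workload labels and paths:

```yaml
# Hypothetical Layer 7 policy with Istio: only allow GET requests
# to /quotes paths on the "portfolio" workload.
apiVersion: security.istio.io/v1beta1
kind: AuthorizationPolicy
metadata:
  name: portfolio-viewer
  namespace: stock-trader
spec:
  selector:
    matchLabels:
      app: portfolio
  action: ALLOW
  rules:
    - to:
        - operation:
            methods: ["GET"]
            paths: ["/quotes*"]
```

With a NetworkPolicy handling Layers 3–4 and an Istio policy handling Layer 7, the two together cover the full range described above.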
Security for ingress and egress application flow
So, the request comes in and, provided it passes those policies, it gets forwarded to your worker and whatever services it might hit. So, this is the ingress application flow. And then, for external requests that a service might make (egress calls), the same can be configured in Istio or Calico, covering everywhere from Layer 3 to Layer 7.
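Egress can be locked down the same way. A hedged sketch of a Layer 3 egress policy (the label and external IP are placeholders):

```yaml
# Hypothetical egress policy: pods labeled app=trader may only call
# out to one external service at 203.0.113.10 over HTTPS.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: trader-egress
  namespace: stock-trader
spec:
  podSelector:
    matchLabels:
      app: trader
  policyTypes:
    - Egress
  egress:
    - to:
        - ipBlock:
            cidr: 203.0.113.10/32
      ports:
        - protocol: TCP
          port: 443
```

In practice, a policy like this would also need to allow DNS (UDP/TCP port 53) so the pod can resolve the external hostname in the first place.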
So that kind of talks about north-south traffic—ingress and egress—communication with the clients as well as a data center or a public/private cloud environment. So, that tackles north-south network flows.
East-west network traffic
Next, let’s talk about east-west. So these are going to be, essentially, communication happening between services running on-prem or in your public/private cloud environments.
So, for east-west, going back to my analogy—I badged into my building, they let me into the perimeter, but to actually get to my floor where I work every day, I have to badge again. That’s going to be on the third floor of the building, right? So, I go up to the third floor, and I’m forced to actually scan my badge again. If I try to enter the fourth floor, I actually wouldn’t be allowed to enter as I’m not on the design team.
Segmentation in the application infrastructure
So, essentially, what that refers to is a concept called segmentation. Within the actual building, or within an application infrastructure like a public cloud environment, we want to create segments defining what users are allowed to access, what admins are allowed to access, and what processes are allowed to access when talking to one another.
At that level, in Kubernetes environments, we call this micro-segmentation. In the customer-managed environment, what that would look like is using something like Istio to set up mutual TLS (mTLS) between all requests going between microservices.
The thing about encryption—it’s one of those things that you want to encrypt any requests as early as possible and decrypt as late as possible. So, with traditional Kubernetes microservices architectures, you want to make sure that all of those requests are being encrypted at the earliest level possible.
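With Istio, enforcing mTLS for all service-to-service traffic can be as small as a single resource. A sketch, assuming the workloads live in a `stock-trader` namespace (the name is hypothetical):

```yaml
# Require mutual TLS for all service-to-service traffic in the
# stock-trader namespace; plaintext requests are rejected.
apiVersion: security.istio.io/v1beta1
kind: PeerAuthentication
metadata:
  name: default
  namespace: stock-trader
spec:
  mtls:
    mode: STRICT
```

Because the Istio sidecars terminate the TLS, the requests are encrypted as soon as they leave a pod and decrypted only when they arrive—exactly the "encrypt early, decrypt late" principle described above.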
That kinda handles microservice-to-microservice communication, but we didn’t really need to consider that with the monolith because, again, as we mentioned, monoliths would be using software-based calls—procedure calls within the same process—which remove the requirement of talking over a network, so we wouldn’t actually have to take advantage of TLS. But you can imagine that you do want to make sure the network calls made to the database are secured with TLS.
The Kubernetes master node and etcd
The next concept I want to introduce is what we have sketched out here on the cloud-managed side of our cloud. So, what we’ve got here is the Kubernetes master node. And one thing to remember here is that when you’re working with a managed Kubernetes service, the master node is actually going to be managed by the cloud provider. So, whereas you control the worker nodes, the master is completely managed and houses a very important piece of the architecture—the etcd data store.
So, in the Kubernetes world, the etcd data store is something that you want to be really careful about protecting because that has all the information about your services, your deployments, and all of the Kubernetes API resources. So, securing the etcd is going to be very important; it’s paramount to your security architecture.
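On a managed service, the provider handles this protection for you, but as an illustration of what it involves, self-managed clusters can encrypt Secret objects at rest in etcd with an API server encryption configuration along these lines (the key value is a placeholder, not a real key):

```yaml
# API server configuration that encrypts Secret objects before they
# are written to etcd; <base64-encoded-32-byte-key> is a placeholder.
apiVersion: apiserver.config.k8s.io/v1
kind: EncryptionConfiguration
resources:
  - resources:
      - secrets
    providers:
      - aescbc:
          keys:
            - name: key1
              secret: <base64-encoded-32-byte-key>
      - identity: {}
```

The `identity: {}` provider at the end lets the API server still read any older, unencrypted records while new writes are encrypted.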
Authentication, authorization, and admission controller
And to secure that, the cloud provider will traditionally have a kind of three-phase process. Step one is authentication, typically over TLS with client certificates or tokens. Next, we’ve got RBAC, which is Kubernetes role-based access control, for authorization. And then, finally, the last piece of the puzzle is the admission controller, a Kubernetes concept that adds one more level of checks once you’ve made it through authentication and authorization: admission controllers can validate (and, where configured, mutate) those API requests to make sure they’re in the right format before they reach that data.
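To make the RBAC step concrete, here is a minimal sketch of a role and binding (the namespace and user name are hypothetical):

```yaml
# Hypothetical RBAC: a read-only role for pods in the stock-trader
# namespace, bound to a single user.
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: pod-reader
  namespace: stock-trader
rules:
  - apiGroups: [""]
    resources: ["pods"]
    verbs: ["get", "list", "watch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: read-pods
  namespace: stock-trader
subjects:
  - kind: User
    name: sai          # hypothetical user
    apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: Role
  name: pod-reader
  apiGroup: rbac.authorization.k8s.io
```

This follows the principle of least privilege mentioned at the top of the post: the user can read pod information in one namespace and nothing else.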
So, they’ll access that etcd data, and to send it back to your worker node—where your application pods need to request that information or pass information to it—there’s an OpenVPN server, and there’s going to be a client as well. That tunnel is what enables you to access that etcd data store and return data back into the Kubernetes worker.
So that kind of covers the pattern of how Kubernetes is set up in a cloud provider service, with the master node being managed and the worker node being able to, kind of work, with that master node in a secure fashion to make sure your assets are protected at all times. The other thing I want to mention here—that etcd data store is going to be backed up in a cloud object storage capability to make sure that, you know, worst case scenario, you do have those assets in a secure place.
So I think that covers north-south network traffic as well as east-west, where we talked about network traffic coming in from clients or, at least, network traffic going between services in your data center and in your private or public cloud environments.
The last thing I want to talk about is a concept called DevSecOps. You’ll notice here that it’s DevOps with the word security right in the middle, and, essentially, it’s a way to ensure that security is something that you think about from the ground up when you start architecting the application all the way until you move into production. And that’s something you want to take advantage of to make sure that, you know, you don’t have any issues when moving to production. You don’t want to architect an application the incorrect way and then realize you have to go back and rework all of that. So, thinking about security from the beginning is going to be an important thing.
When working with a cloud-provided Kubernetes service there’s something that makes it a little bit easier to make sure your flows are secure. One consideration you want to have here is to make sure that your CI workflow—that DevOps flow—has security embedded within it and is automated.
So, you can imagine, maybe you have your favorite code repository holding your application code—your Dockerfiles, whatever they might be. We’re going to automate that process and make sure that, say, only the developers who are building that application have access to that Git repo.
Next, you want to make sure you have a trusted signer, so that when that code gets pushed into a registry as an image, it gets signed as a trusted image—a capability that’s available with a cloud-managed registry.
So, we’ll push that image into the registry. Once there, there’s a capability called Vulnerability Advisor that’s gonna scan that image—everywhere from the base operating system to maybe the runtime that you’re using—and if any issues or vulnerabilities are detected, you’ll be made aware of them.
Once it passes that vulnerability assessment, you can tie that in to build that image and push it directly into Kubernetes. At that stage, you can use something like an admission controller (which we talked about in the Kubernetes master) to make sure that that image is, again, secure and without vulnerabilities.
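As a sketch of what that deploy-time check can look like, a validating admission webhook can be registered so the cluster calls out to an image-checking service before a Deployment is admitted. The service name, namespace, and CA bundle below are placeholders:

```yaml
# Sketch: route Deployment create/update requests through a
# hypothetical image-policy service before admitting them.
apiVersion: admissionregistration.k8s.io/v1
kind: ValidatingWebhookConfiguration
metadata:
  name: image-policy
webhooks:
  - name: image-policy.example.com
    admissionReviewVersions: ["v1"]
    sideEffects: None
    rules:
      - apiGroups: ["apps"]
        apiVersions: ["v1"]
        operations: ["CREATE", "UPDATE"]
        resources: ["deployments"]
    clientConfig:
      service:
        name: image-policy-checker   # hypothetical service
        namespace: kube-system
      caBundle: <base64-CA-cert>     # placeholder
    failurePolicy: Fail
```

With `failurePolicy: Fail`, a workload whose image can’t be verified is rejected outright rather than quietly admitted.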
Finally, there’s a live-scanning capability to allow you to scan your images running in production to make sure that there are no vulnerabilities in there.
So, DevSecOps is a very important concept that ensures that, from the ground up, you’re managing security when doing DevOps.