Part 1 of the hybrid cloud architecture video series: Connectivity
As promised in the introduction to our hybrid cloud architecture lightboarding video series, we’re excited to bring you Part 1: Connectivity.
In this lightboarding video, I’m going to explain how you can connect your various cloud environments in the overall hybrid cloud. Topics that we’ll cover include the following:
How do you connect private cloud and public cloud environments?
How does a service mesh work to allow your applications and microservices to work together as a single unit?
What kind of integration tools will make it easier to connect with your services and other third-party services?
Using the example of a sample application called Stock Trader, we’re going to break down the essentials of how to stay connected throughout your hybrid cloud architecture.
Stay tuned for the rest of the videos in this hybrid cloud architecture series, which are coming soon.
Part 3: Security—What are the right solutions to ensure that you can take advantage of your existing on-prem assets (and the security and the ease of use that you have there) while securely moving some of your assets to a public or private cloud? (Video coming soon.)
Learn more about hybrid cloud
If you’re interested in learning more about hybrid cloud, its capabilities, and how it fits in with public cloud and private cloud, please check out the resources below.
Hi everyone, my name is Sai Vennam, and I’m a developer advocate with IBM. Today, I want to start the first part of the Hybrid Cloud Architecture series with a deep dive into Connectivity.
Connectivity is an important concern when you’re starting with your hybrid cloud strategy, and that’s why I want to start with it first. By establishing connectivity, we can then start thinking about other requirements and then move on to the other parts of the series.
The three important aspects of connectivity
There are three major aspects of connectivity that I want to hit in this video—starting with, very simply: how do you actually connect private and public cloud environments?
Next, I’ll be moving on to the service mesh—essentially enabling your applications and microservices to work with one another as one singular mesh.
And, we’ll close it off by talking about some integration tools that we have available to make it easier to connect up your services with third-party services.
Introducing the Stock Trader sample application
To better explain and set the stage for the rest of the topics, I want to introduce a Stock Trader sample application that we’ll be revisiting throughout this architecture. So, let’s get started—over here, we have a consumer application—whether it’s a mobile app, a web browser, or something else—and whenever a user accesses the Stock Trader application, they’ll be hitting the private cloud endpoint. At this point, they’ll be fed into the Kubernetes cluster that we have here. And within this Kubernetes cluster, we have a number of services.
The first service they’re going to hit is the Trader, which is the front end of the application, so the Trader is exposed outside of that cluster.
The Trader, in turn, goes and creates Portfolios. So this, essentially, is the reason why people use Stock Trader—to create these portfolios to manage their investments and their trades and that kind of thing.
This Portfolios app, in turn, takes advantage of a couple of services that it pulls from the public cloud. First, it needs to get the price of a stock; to do that, we have a service in the public cloud, which we’ll call Get, that goes off to the Investors Exchange API (IEX) to access the current stock price.
So it’ll take advantage of that, and then to kind of feed that data back, we have an egress set up—external API request—that allows the Portfolio app to work directly with the service that we have in the public cloud.
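To make that egress call concrete, here’s a minimal Python sketch of what the Get service might do. The endpoint URL and the latestPrice field are illustrative assumptions rather than the actual IEX contract, and the fetch function is injectable so the call can be exercised without real network egress:

```python
import json
from urllib.request import urlopen

# Hypothetical endpoint shape; the real IEX API paths and response
# fields may differ.
IEX_QUOTE_URL = "https://api.iextrading.com/1.0/stock/{symbol}/quote"

def fetch_quote_json(symbol):
    """Fetch raw quote JSON from the external exchange API (the egress call)."""
    with urlopen(IEX_QUOTE_URL.format(symbol=symbol)) as resp:
        return json.load(resp)

def get_price(symbol, fetch=fetch_quote_json):
    """Return the latest price for a symbol.

    `fetch` is injectable so the service can be tested without
    real network egress from the cluster.
    """
    quote = fetch(symbol)
    return quote["latestPrice"]
```

The Portfolio app would then call a service like this through the external API request path set up as egress.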
Another service backing this Stock Trader application is the MQ service, which is essentially a message queuing capability. We’re going to use it to keep track of the loyalty levels a user has when working with their portfolios; various commissions change based on how long they’ve kept a particular stock in their portfolio.
And the same thing here; so, in addition to Portfolios working with the public cloud, the MQ service is also going to be accessing the public cloud. However, the MQ service isn’t concerned with getting stock prices; instead, it wants to notify users whenever there is a change in their loyalty or in their portfolio. And to do that, we’re actually going to take advantage of serverless capabilities using Cloud Functions, which, in turn, will go and send a message to the user using a Slack integration.
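Here’s a rough Python sketch of that notification flow, using the standard library’s queue.Queue as a stand-in for the MQ service and a simple function in place of the Cloud Function that posts to Slack (both names are illustrative):

```python
import queue

def send_slack_message(text):
    """Stand-in for the Cloud Function that posts to a Slack webhook."""
    return f"[slack] {text}"

def process_loyalty_events(mq, send=send_slack_message):
    """Drain loyalty-change messages from the queue and notify each user."""
    sent = []
    while not mq.empty():
        event = mq.get()
        # Each message describes a loyalty-level change for one user.
        sent.append(send(f"{event['user']}: loyalty level is now {event['level']}"))
    return sent
```

In the real architecture, the consumer would run serverlessly, triggered by messages arriving on the queue rather than polling it.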
So, this kind of sets the stage for the various parts that we have within the Stock Trader application. And actually, before I forget, there’s one more piece—to actually persist the data for the Portfolios, we have a dedicated database service that’s hosted in the private cloud outside of the cluster that the Portfolios application will be using to persist the data.
Connect: Making private cloud applications work with public cloud services
This sets the stage for us to jump into the very first piece that I want to talk about, which is Connect. Although we’ve laid out the architecture here, we haven’t really talked about how these applications are able to work with the public cloud services, given that a private cloud is generally going to be behind a firewall, in a restricted network.
There’s one very easy way to expose services from a private cloud to a public cloud, and that is by taking advantage of a VPN tunnel—it’s one of the easiest ways to get started. An IPsec VPN tunnel essentially exposes a subnet of IPs between the private cloud and the public cloud, enabling those connections to happen. So, we’ll create that VPN tunnel between the two environments.
One key thing to note here is that this is all happening over the public internet, which comes with some caveats. Although the VPN was very easy to set up, requests traveling over the public internet are subject to variable latency between the private and public cloud. In addition, with a VPN, you’re not going to get the best bandwidth, because the traffic flows over the public internet.
Direct Link capabilities
There’s an alternative to VPNs, and that’s taking advantage of Direct Link capabilities to create entirely private connections between the private and public cloud. This is made possible by a PoP (point of presence), generally provided by the public cloud provider, which enables completely private connections to that private cloud. This dedicated connection is always in place.
To enable your existing architecture to fit into this, you’ll need to work with your network service provider and create a direct connection for all connections coming out of your private cloud—maybe you have a WAN (wide area network)—to make sure that all of these connections flow privately. This way, you’re never actually using the public internet for this connection. It’s all private, and the big advantage is that you get much higher bandwidth capabilities.
There is one thing I want to mention, though: once you have a Direct Link set up like this, it’s also possible to have a failover, so that if the Direct Link goes down, traffic falls back to the VPN over the internet. Using those two in conjunction is probably the best way to connect up your networks across private and public cloud environments.
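As a sketch of that failover behavior, here’s a small Python example; the send functions are hypothetical stand-ins for making a request over the Direct Link and over the VPN tunnel:

```python
def send_over_direct_link(payload):
    """Stand-in for sending a request over the private Direct Link path."""
    return ("direct-link", payload)

def send_over_vpn(payload):
    """Stand-in for sending the same request over the IPsec VPN tunnel."""
    return ("vpn", payload)

def send_with_failover(payload, primary=send_over_direct_link,
                       fallback=send_over_vpn):
    """Prefer the Direct Link path; if it fails, fall back to the VPN."""
    try:
        return primary(payload)
    except ConnectionError:
        # Direct Link outage: retry the same request over the VPN.
        return fallback(payload)
```

In practice, this failover is usually handled at the network routing layer rather than in application code, but the priority ordering is the same.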
Service mesh and Istio
Next, I want to talk about the service mesh. There’s a great project out there that you might be familiar with—it’s completely open source, it’s called Istio, and it was created by industry leaders like Google, IBM, and Lyft. As we’ve seen, we’re taking advantage of Kubernetes on our private cloud, and let’s say we’re also using Kubernetes on the public cloud; we only have one service there so far, but we’ll get around to creating some additional ones later on.
So, what we have is two different clusters in different environments. We want to make sure they’re managed in an easy way, so that your operations teams don’t have to concern themselves with working across multiple environments and multiple clusters, which increases their load and can be quite difficult to manage.
So, a service mesh—generally, with Istio, you manage the services within a single cluster. But there have been new developments in Istio that allow you to connect multiple clusters together and have the services behave as if they were in one singular cluster—one mesh across multiple environments.
To better explain this, let’s say that Stock Trader wants to create a new version of Trader—so we’ve got v1 here and we want to make v2. And this time, we want to host it on the public cloud—let’s say because we want to have the front end of the application geographically closer to where most of our customers are. So we’ll create another Trader application, and this one is going to be v2.
Creating policies with Istio
Right now, 100% of the traffic is flowing into the v1 application. With Istio, what we’re essentially going to set up is a gateway right here. This gateway has a number of policies that are enabled and set up by a control plane, so we’ll create an Istio control plane. That control plane essentially enables us to create policies for this ingress gateway—that is, for all requests that are flowing in.
Now, let’s say we want 50% of traffic to flow to v1 and 50% to flow to v2. Very simply, once we have the service mesh set up, all we have to do is create a policy in Istio that tells the gateway how to split the traffic. The gateway will then take advantage of the VPN or Direct Link connection we have to move 50% of all traffic to the new version of the Trader application.
So, very simply, by taking advantage of Istio, our existing connections between environments, and the control plane, we were able to create a policy that routes a certain percentage of traffic to the new version of the application. This is very useful when you start thinking about creating new versions of your app and rolling them out to your users.
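The 50/50 split can be sketched as a tiny weighted router in Python. Istio actually does this declaratively, with weights in the gateway’s routing policy, so this is just an illustration of the routing behavior, and the service names are made up for the example:

```python
import random

def make_weighted_router(routes, seed=None):
    """Build a router that splits traffic by percentage weights,
    e.g. {"trader-v1": 50, "trader-v2": 50}."""
    names = list(routes)
    weights = [routes[name] for name in names]
    rng = random.Random(seed)

    def route_one_request():
        # Each incoming request is sent to one version, chosen with
        # probability proportional to its weight.
        return rng.choices(names, weights=weights)[0]

    return route_one_request
```

Changing the rollout from 50/50 to, say, 90/10 is then just a change to the weights—which is exactly the appeal of doing this with a gateway policy instead of application code.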
Analytics and metrics through Istio
The last thing I want to mention: along with Istio, you also get a number of awesome analytics and metrics (tracing and management capabilities). All of those health-management capabilities that Istio offers aren’t limited to a single cluster—they actually cover the request flow between all of the services your Istio mesh is connected to. Essentially, this gives your operations team a single point of management for all of your services across your environments.
Integration through a suite of tools
And the last thing I want to touch on is Integration.
So, there are certain integration tasks out there that come up again and again—things customers are constantly doing—and IBM has created a suite of tools to make integration with these services easier.
For example, let’s imagine that you have a set of user data that’s stored in Salesforce. You’ve already taken advantage of this data—this account data—in your on-prem, private cloud application, but you want to start reusing those capabilities in the public cloud.
So, in the public cloud, maybe there are certain network challenges that change how this is implemented; you can take advantage of these integration tools to very quickly move that data between Salesforce and your public cloud microservice applications. This is made possible through connectors that connect up not only Salesforce but a lot of other services out there—things we notice our customers doing quite often.
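As an illustration of what such a connector does under the covers, here’s a hypothetical Python sketch of a field mapping; the Salesforce field names (including the custom Email__c field) and the target record shape are assumptions for the example, and a real connector tool would define this declaratively:

```python
def map_salesforce_account(record):
    """Map a Salesforce account record to the shape a public cloud
    microservice expects (hypothetical field names on both sides)."""
    return {
        "user_id": record["Id"],
        "name": record["Name"],
        # Custom Salesforce fields conventionally end in "__c";
        # this one is purely illustrative.
        "email": record.get("Email__c"),
    }
```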
Another integration tool I want to talk about is an API gateway. We’re noticing more and more that this is something that’s really important to the overall hybrid cloud architecture, especially when you’re working with third-party services.
So here, we actually have a number of them: Salesforce, Slack, and the Investors Exchange. Let’s say one of our engineers has a bug that accidentally hits the Investors Exchange way too many times, and they start throttling us, which ends up bogging down the whole system. To prevent that from happening—or just to be more secure about how we’re accessing third-party services—we can create a gateway that sits between the public cloud and those third-party services. It allows you to do things like manage rate limits and create authentication (things like OAuth or maybe even basic keys) to really restrict how your public cloud services, as well as users, access those third-party services.
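To show the kind of policy such a gateway enforces, here’s a minimal token-bucket rate limiter in Python. It’s a sketch of the rate-limiting idea under a fake clock, not how any particular gateway product implements it:

```python
import time

class RateLimitedGateway:
    """Token-bucket sketch of a gateway policy that caps how often
    callers may hit a third-party API."""

    def __init__(self, rate_per_sec, burst, clock=time.monotonic):
        self.rate = rate_per_sec   # tokens refilled per second
        self.burst = burst         # maximum bucket size
        self.tokens = burst
        self.clock = clock         # injectable for testing
        self.last = clock()

    def allow(self):
        """Return True if a request may pass, False if it is throttled."""
        now = self.clock()
        # Refill tokens based on elapsed time, capped at the burst size.
        self.tokens = min(self.burst, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False
```

A buggy service hammering the Investors Exchange would exhaust the bucket and get throttled at the gateway, instead of triggering throttling (or a ban) from the third party itself.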
So, notice that API gateways, in addition to the suite of tools I mentioned, are a core part of connecting your cloud services to third-party services, as well as to some of the things you have running in your private cloud.
I’d say that these three topics are the main things you want to think about when figuring out connectivity with your hybrid cloud architecture.