To VPC or not to VPC
trossman
If you’re running workloads on IBM Cloud, you’ve probably seen all sorts of announcements for our latest and greatest advances with our next generation infrastructure services, which introduce a new virtual private cloud (VPC) concept. But what does that mean for you with an existing estate on our classic infrastructure? This blog post is the first of several aimed at answering these questions and more. We’ll go into specific examples describing common infrastructures customers are running on IBM Cloud today and walk through the benefits, caveats, and rationale to make the right migration decisions for your situation. If you don’t see examples that cover your needs, please let us know and we’ll do our best to capture the issues in a subsequent blog post.
Why Adopt VPC?
Before getting into specific examples of classic infrastructure, why would you even consider VPC? There are plenty of great reasons why VPC is a clear improvement over our classic infrastructure, starting with something as simple as fine-grained identity and access control integration across the entire IBM Cloud portfolio. But rather than listing all the wonderful reasons, I’d like to focus on the one thing I believe is the crux of it all.
Let’s face it, most enterprises are looking for hybrid multi-cloud solutions. And the second you have more than one estate involved, as you would with a hybrid of on-premises systems integrated with public cloud systems, you will be faced with a networking problem. At the very least you need to carefully design your systems to avoid IPv4 address overlaps. We refer to this as “Bring your own IP” (BYoIP) which is a fundamental feature of virtual private clouds.
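To make the overlap problem concrete, here is a minimal sketch using Python’s standard `ipaddress` module that flags colliding address ranges between two estates. The CIDR blocks are illustrative, not a recommended address plan:

```python
import ipaddress

def find_overlaps(on_prem_cidrs, cloud_cidrs):
    """Return every (on-prem, cloud) CIDR pair whose address ranges collide."""
    overlaps = []
    for a in map(ipaddress.ip_network, on_prem_cidrs):
        for b in map(ipaddress.ip_network, cloud_cidrs):
            if a.overlaps(b):
                overlaps.append((str(a), str(b)))
    return overlaps

# Hypothetical plan: the corporate network already uses 10.0.0.0/8, so a
# cloud estate carved from 10.240.0.0/16 would collide with it.
print(find_overlaps(["10.0.0.0/8", "192.168.0.0/16"],
                    ["10.240.0.0/16", "172.16.0.0/20"]))
# → [('10.0.0.0/8', '10.240.0.0/16')]
```

With BYoIP, you avoid ever landing in the conflicting case: you assign the cloud estate prefixes that you know are free in your corporate plan.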
Furthermore, when using more than one public cloud it can become very complex to deal with all the differences between vendors. Fortunately, all the major public clouds offer some notion of VPC. Sure there are differences, but they have much more in common than differences.
All vendors’ VPCs are fully programmable via APIs; there is no need to deploy and operate your own network appliances, which is commonplace in our classic infrastructure. Likewise, all vendors’ VPCs allow users to define their own subnets and addressing across the VPC, which, by definition, is an isolated network. All parts of a VPC are connected to one another via layer 3 routing. Internal security group concepts enable users to define firewall rules to control traffic across their VPCs. And every vendor’s VPC has mechanisms to interconnect with the internet, on-premises networks, and more.
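The “define your own subnets” point is easy to picture: you pick a VPC-wide prefix (BYoIP) and carve it into per-zone subnets. A sketch with the stdlib `ipaddress` module, where the zone names and the 10.10.0.0/16 prefix are illustrative rather than real IBM Cloud identifiers:

```python
import ipaddress

# A VPC-wide prefix you chose because it is free in your corporate plan.
vpc_cidr = ipaddress.ip_network("10.10.0.0/16")

# Carve one /24 per availability zone (illustrative zone names).
zones = ["zone-1", "zone-2", "zone-3"]
subnets = dict(zip(zones, vpc_cidr.subnets(new_prefix=24)))

for zone, subnet in subnets.items():
    print(f"{zone}: {subnet}")
# → zone-1: 10.10.0.0/24
#   zone-2: 10.10.1.0/24
#   zone-3: 10.10.2.0/24
```

The actual carving is done through each vendor’s API or console, but the planning step (non-overlapping per-zone prefixes inside one VPC prefix) is the same everywhere.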
In short, VPCs are becoming somewhat of a de facto standard for public cloud usage. And that is great for every enterprise adopting a hybrid multi-cloud strategy. Keeping your cloud usage as standard as possible will help avoid vendor lock-in and even simplify workload migration across vendor estates as needed.
Hi Everyone, I’m Andrew and I am an IKS User
Now this is nothing to be ashamed of at all! IBM’s Kubernetes Service is arguably the best Kubernetes service in the business, as tens of thousands of customers like you will tell you. But I’m not here to sell you IKS; rather, I’d like to explore the challenges many of you have and how our next generation infrastructure can help (or not).
Like many of our IKS users, you’re probably using one or more of our cloud services like our Cloud Database Services (CDS) such as Cloudant or PostgreSQL and many more. For all but the casual demo user, these databases are private isolated services that you can reach via unmetered private networks using Cloud Service Endpoints (CSEs). For developing and operating a single application, this works really well out of the box. But when you try connecting with other systems, things get a bit trickier.
One of the first hurdles we commonly see a successful cloud native application team encounter occurs when they attempt to establish dedicated connections to their corporate networks. This is often a hurdle because there are address conflicts between the corporate network and our classic infrastructure. Many customers work around this by adding complexity to their systems in one way or another. Since there are so many different ways users circumvent this problem, I won’t attempt to list them all. Suffice it to say that BYoIP, combined with a flexible WAN connection service, can simplify the resulting solution.
Prototypical IKS Scenario
In this first example, a customer started by developing a cloud native application using IKS and Cloudant. The application was first piloted within the organization and considered very successful. However, to establish the corporate network connectivity and to comply with various regulations, time limited exceptions had to be granted.
Figure 1 Classic Infrastructure
Rationale for Migration
Now that this first application has proven successful, it’s critical to address the broader hybrid connectivity issues, because the organization wants to expand to more and more workloads, which would otherwise compound the problem. Furthermore, as the organization standardizes on modern cloud native techniques using containers, Kubernetes, microservices, service meshes, and Red Hat OpenShift, it should be able to take advantage of multiple public clouds as warranted. VPC will further simplify this progression.
Step by Step Non-Disruptive Migration
Let’s assume that corporate users access the application securely using TLS over the internet via standard web browsers. Using IBM’s Cloud Internet Services (CIS) for DDoS protection, requests are distributed across IKS workers deployed across at least three availability zones in a metro region. Each of these hosts has access to the Cloudant database, which is also deployed across the same three availability zones. Let’s also assume that the primary use of the on-premises Direct Link dedicated circuit is to integrate user identity with an on-premises Active Directory or similar technology.
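As a toy model of that request path, the sketch below round-robins incoming requests across workers in three zones, standing in for CIS’s load balancing. The endpoint names are purely illustrative:

```python
from itertools import cycle

# Illustrative stand-ins for IKS workers in three availability zones.
zone_endpoints = ["worker-zone1", "worker-zone2", "worker-zone3"]

def distribute(requests, endpoints):
    """Round-robin each request to a zone endpoint, a simple stand-in for
    the load balancing CIS performs in front of the application."""
    rr = cycle(endpoints)
    return [(req, next(rr)) for req in requests]

routed = distribute([f"req-{i}" for i in range(6)], zone_endpoints)
for req, endpoint in routed:
    print(f"{req} -> {endpoint}")
```

The important property for the migration story is that no client ever addresses a worker directly, so the set of endpoints behind the front door can change without the client noticing.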
Step 1 – Add Another IKS Cluster in a VPC
Since the application was designed using cloud native techniques, there’s no reason multiple IKS clusters cannot be used. However, since both the existing IKS on classic infrastructure components and the new IKS on VPC components need access to the corporate Active Directory, this is easily handled by using a classic access VPC, which lets the new VPC resources reach the classic infrastructure (and the existing Direct Link connection) over the private network.
Figure 2 Classic and VPC Infrastructure
Step 2 – Divert 100% of Traffic to IKS in VPC
With the new IKS cluster running in a VPC, all traffic can be diverted to it, leaving the classic infrastructure idle. This likely involves changes to the CI/CD pipeline so it no longer depends on the classic IKS cluster.
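In practice the diversion is typically done by reweighting origin pools at the front door. Here is a minimal sketch of weighted origin selection; the pool names are hypothetical, and the real change would be made through the load balancer’s configuration rather than application code:

```python
import random

def pick_origin(weights, rng=random.random):
    """Choose an origin pool in proportion to its weight.
    A weight of 0 means the pool receives no traffic."""
    total = sum(weights.values())
    r = rng() * total
    for pool, w in weights.items():
        r -= w
        if r < 0:
            return pool
    return pool  # fallback for floating-point edge cases

# Step 2 sets the classic pool's weight to 0, so every request lands on VPC.
weights = {"iks-classic": 0, "iks-vpc": 100}
print(pick_origin(weights))  # → iks-vpc, always
```

Because the weights can also be set to intermediate values (say 90/10), the same mechanism supports a gradual, canary-style cutover before committing to 100% on VPC.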
Step 3 - Decommission Classic IKS
Once everything is working properly with IKS on VPC, you can decommission the original IKS cluster running on classic infrastructure. Note that, so far, there has been zero downtime.
Step 4 – Establish Direct Link 2.0 to Replace DL 1.0
Now that your application is running in a VPC, you are no longer constrained by the IP addressing of Direct Link 1.0. You can establish a new DL 2.0 connection, even using the same underlying WAN links if need be. However, this incurs a brief outage, which could be mitigated via a redundant VPN if necessary.
Figure 3 Fully BYoIP - IKS on VPC with Direct Link 2.0
Step 5 – Build Your Next Cloud Native Application
Now that you have a DL 2.0 connection to your premises, you can add new VPCs to the connection at any time. This way you can create another fully isolated cloud native application workload in another VPC running IKS. Direct Link 2.0 enables many classic and next generation estates to share the same underlying network connections to your corporate network.
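Sharing one Direct Link among several isolated VPCs works only if every estate keeps a distinct prefix. A last sketch with `ipaddress` verifies an address plan is pairwise disjoint; the plan below is illustrative:

```python
import ipaddress
from itertools import combinations

# Hypothetical address plan: the corporate network plus two isolated VPCs,
# all sharing one Direct Link 2.0 connection.
address_plan = {
    "corporate": "10.0.0.0/12",
    "vpc-app-1": "10.16.0.0/16",
    "vpc-app-2": "10.17.0.0/16",
}

nets = {name: ipaddress.ip_network(cidr) for name, cidr in address_plan.items()}
conflicts = [(a, b) for a, b in combinations(nets, 2) if nets[a].overlaps(nets[b])]
print(conflicts)  # → [] — an empty list means the estates can share the link
```

Running this check before attaching each new VPC keeps the shared connection routable as the estate grows.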
Figure 4 Multiple VPCs Sharing Direct Link 2.0
I hope this has been helpful. Next time we’ll explore a different example. Suggestions greatly appreciated!