Let’s get specific about how Kubernetes everywhere helps an established business move into new markets and regions. I’ll lead with a financial services example that can be applied to other regulated industries.
Efficiency: A fictional North American company needs to expand into Southeast Asia to capture that market’s growth. This financial services company is successful in its home region, licensing a payment-processing solution for integration into its customers’ applications. The solution already runs in containers on Kubernetes, which makes deploying to new sites within the existing region very efficient.
Local regulations: To extend their payment-processing solution into the new region, the company must meet new data locality and performance requirements. Data handled by the payment solution must be kept within the country where business is transacted, and any payments must be acknowledged within two seconds of being processed.
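To make the two-second requirement concrete, here is a minimal Python sketch of a deadline check around a payment call. Every name here (`process_with_deadline`, `process_payment`, the payment record) is a hypothetical placeholder, not part of any real payment API:

```python
import time

ACK_DEADLINE_SECONDS = 2.0  # regulatory requirement: acknowledge within two seconds

def process_with_deadline(process_payment, payment):
    """Run a payment call and report whether it was acknowledged in time.

    `process_payment` and `payment` are hypothetical stand-ins for the
    solution's real processing function and payment record.
    """
    start = time.monotonic()
    ack = process_payment(payment)
    elapsed = time.monotonic() - start
    return ack, elapsed, elapsed <= ACK_DEADLINE_SECONDS

# Example with a stand-in processor that acknowledges immediately:
ack, elapsed, within_sla = process_with_deadline(
    lambda p: {"status": "acknowledged"}, {"amount": 100}
)
```

In a production system this check would sit in monitoring rather than inline, but the sketch shows the shape of the requirement: the clock starts at processing and the acknowledgment must land inside the window.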
Time to market: The time and expense of building up a new colocation center in Southeast Asia (infrastructure, staffing, and consulting) would drive up the overall cost of the payment solution, making the company less competitive in the new market.
Managed as-a-service: This is where distributed cloud delivery of Kubernetes provides clear business benefits. In a true distributed cloud architecture, Kubernetes and all the other cloud services the solution needs are delivered as-a-service into on-premises, public cloud, or edge computing environments. The first big payoff is scaling the North American solution consistently in each new market. The second is the as-a-service model itself: the vendor manages everything related to keeping the services up to date and available, so local staff are not burdened with software upgrades and other maintenance.
Using distributed cloud services across environments — as in our example — depends on three other things:
Consistent security in all locations
User-controlled connectivity between the client location and the cloud vendor
Centralized monitoring and management
Accommodating new use cases
Adapting successful solutions to new use cases is a business strategy with obvious upsides — development costs are low, and implementations leverage established efficiencies.
This second example comes from the domain of workplace safety, where technology operates in milliseconds. One of our clients offers a system for monitoring work sites that meets its latency requirements by processing all data (ingesting and analyzing it) where the data is generated. The system operates within a consistent processing window of five milliseconds.
Latency: Our client uses on-site cameras that capture movement and determine whether a visitor is wearing a hard hat. If not, the system is fast enough to flag the entry, averting injuries related to non-compliance with safety requirements. Imagine this: a light beacon flashes and a recording says, “You can’t go into this area. You’re not wearing a hard hat.” Without very low latency, someone entering unprotected could already be two steps over the line before the signal comes back and warns, “Hey, you’re not wearing a hard hat.”
Velocity: While meeting local low-latency requirements, this workplace safety system remains connected to the public cloud. Its monitoring and analytics components are deployed in Kubernetes clusters on site. The teams overseeing the system, whether the IBM team managing the cluster software or the client team managing the safety system, can update it as often as needed.
Adaptability: What’s interesting to me here is the system’s adaptability. The analytics technology is flexible enough for the video system to apply different movement rules. For example, paired with a thermal camera to support COVID-19 safety protocols, the system can instantaneously read a visitor’s temperature. Other cameras can monitor distances between people and determine whether they’re six feet apart, or whether a room has been used and now needs to be sanitized.
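One way to picture that flexibility is as a set of pluggable rules evaluated against each frame’s analytics output. This Python sketch is purely illustrative: the field names, thresholds, and rule functions are assumptions for the example, not details of the client’s actual system:

```python
# Illustrative only: each rule inspects the analytics output for one frame
# and returns an alert message, or None if the rule is satisfied.

def hard_hat_rule(frame):
    # Flag anyone detected without a hard hat.
    if frame.get("person_detected") and not frame.get("hard_hat"):
        return "You can't go into this area. You're not wearing a hard hat."
    return None

def temperature_rule(frame, max_temp_f=100.4):
    # Flag an elevated temperature reading from a thermal camera.
    temp = frame.get("temperature_f")
    if temp is not None and temp >= max_temp_f:
        return "Elevated temperature detected."
    return None

def distancing_rule(frame, min_feet=6.0):
    # Flag any pair of people standing closer than the minimum distance.
    if any(d < min_feet for d in frame.get("pairwise_distances_ft", [])):
        return "Please keep six feet apart."
    return None

def evaluate(frame, rules):
    """Apply every active rule to one frame and collect the alerts."""
    return [alert for rule in rules if (alert := rule(frame)) is not None]

alerts = evaluate(
    {"person_detected": True, "hard_hat": False, "temperature_f": 98.6,
     "pairwise_distances_ft": [4.5, 7.2]},
    [hard_hat_rule, temperature_rule, distancing_rule],
)
```

The point of the shape is that adding a new use case means adding one small rule function and redeploying, which is exactly the kind of frequent, low-friction update the distributed cloud model makes practical.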
Since software updates to the system are made so efficiently in this distributed cloud service model, software development teams can quickly adapt the system to rapidly evolving use cases. It will be interesting to see what ideas emerge next.