November 11, 2019 By Douglas Paris-White 3 min read

Here are six must-see sessions at KubeCon 2019 (November 18 – 21, 2019) that focus on the challenges for DevOps teams doing cloud native development with Kubernetes.

Cloud native application development, widely regarded as the best way to compete as a digital business, is driving the innovation of cloud platforms themselves.

Kubernetes, by enabling application development and operations teams to do cloud-based work at velocity and scale, has become a foundational technology in the ecosystem of continuous delivery tools and practices.

Automating operations is the key. By automating operations, Kubernetes frees DevOps teams to keep their focus on what the customers want from the applications that are designed to grow the business. The declarative configuration model gives teams the power to chart a normal state for an application in production. Kubernetes takes care of running pods on the right nodes in a cluster and across regions, scaling up and down instances of relevant microservices to meet user demand.
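That declarative model boils down to a configuration file describing the desired state. As a minimal sketch (the application name, image, and port here are hypothetical placeholders), a Deployment asking Kubernetes to keep three replicas running might look like this:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web-frontend          # hypothetical application name
spec:
  replicas: 3                 # desired state: keep three pods running
  selector:
    matchLabels:
      app: web-frontend
  template:
    metadata:
      labels:
        app: web-frontend
    spec:
      containers:
      - name: web-frontend
        image: example.com/web-frontend:1.0   # placeholder image
        ports:
        - containerPort: 8080
```

You declare the normal state; the Kubernetes control loop continuously reconciles the cluster toward it, rescheduling pods onto healthy nodes as needed.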

Making your plan for KubeCon 2019

Here are six sessions at KubeCon 2019 (November 18 – 21, 2019) that focus on the challenges for DevOps teams doing cloud native development with Kubernetes.

Running Kubernetes in production at high scale

Containerd Mini-Summit: Phil Estes (Distinguished Engineer & CTO, Container and Linux OS) leads this program on the container engine that enables Kubernetes to operate at very high scale.

Managing a DevOps pipeline

Mario’s Adventure in Tekton Land (Tuesday, November 19, 4:25pm – 5:00pm): Vincent Demeester (Principal Software Engineer, Red Hat) and Andrea Frittoli (Open Source Developer Advocate, IBM Cloud) take you through replacing a pipeline defined with bash scripts with one based on reusable Tekton modules.

Managing deployment patterns

Advanced Model Inferencing Leveraging Knative, Istio, and Kubeflow Serving (Wednesday, November 20, 10:55am – 11:30am): Animesh Singh (STSM and Program Director) covers how to handle autoscaling, scale-to-zero, canary, and other deployment patterns using Kubeflow Serving and the native Kubernetes stack (Knative, Istio).  

Managing microservices

Ready to Serve! Speeding-Up Startup Time of Istio-Powered Workloads (Thursday, November 21, 4:25pm – 5:00pm): Michal Malka (Manager, IBM Research) and Etai Lev Ran (System Architect, IBM) analyze the latency the Istio service mesh adds to pod startup, from pod creation up to the pod becoming ready to serve requests. They’ll also examine various techniques to reduce it, including using Istio CNI to bootstrap the pod’s network, launching the sidecar proxy with an initial routing configuration, and using manual sidecar injection.

Enabling serverless workloads on Kubernetes

CNCF’s Serverless Working Group: Tell Me Where it Hurts (Thursday, November 21, 4:25pm – 5:55pm): Doug Davis (STSM & Offering Manager, Knative) provides a community update on the state of serverless in Kubernetes. This session involves active discussion, so come with your pain points and opinions on the interoperability and portability of serverless workloads.

Kubernetes does drones?

Flying Kubernetes: Using Drones to Understand how Kubernetes Works (IBM Cloud Theater, 11:00-11:30 am on November 19 and 20): Watch IBM Cloud leaders Briana Frank and Jason McGee put Kubernetes through its paces with a fleet of drones.

Running an application in Kubernetes involves creating and applying a deployment configuration file. Scaling involves manually setting a replica count in that configuration or defining an autoscaling rule (specifying a minimum and maximum number of pods to run) to tell Kubernetes how to respond to traffic fluctuations. That same deployment configuration enables an application to self-recover; should one of its instances go down, Kubernetes takes action to match the configuration.
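An autoscaling rule of that kind can be sketched as a HorizontalPodAutoscaler. This is a minimal example, assuming a hypothetical Deployment named web-frontend; the replica bounds and CPU target are illustrative values:

```yaml
apiVersion: autoscaling/v1
kind: HorizontalPodAutoscaler
metadata:
  name: web-frontend
spec:
  scaleTargetRef:              # the workload to scale
    apiVersion: apps/v1
    kind: Deployment
    name: web-frontend
  minReplicas: 2               # never scale below two pods
  maxReplicas: 10              # cap the scale-out at ten pods
  targetCPUUtilizationPercentage: 70   # add pods when average CPU exceeds 70%
```

With this in place, Kubernetes adjusts the replica count between the stated minimum and maximum as observed load changes.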

In terms of managing load among the worker nodes assigned to host instances of an application, Kubernetes supports affinity rules; by default, Kubernetes assigns pods to a worker that is not yet running any of the application’s pods and then round-robins additional pods onto the workers assigned to the application. You can take control of pod-worker assignments, as needed, by configuring affinity more strictly.
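One way to enforce that spreading explicitly is a pod anti-affinity rule. As a sketch (again assuming a hypothetical app label web-frontend), this snippet in a Deployment’s pod template forbids two of the application’s pods from landing on the same node:

```yaml
spec:
  affinity:
    podAntiAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
      - labelSelector:
          matchLabels:
            app: web-frontend          # pods carrying this label repel each other
        topologyKey: kubernetes.io/hostname   # "same node" is the unit of spreading
```

Using preferredDuringSchedulingIgnoredDuringExecution instead makes this a soft preference rather than a hard scheduling requirement.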

Maybe you already know those Kubernetes concepts and have seen them in action with your applications. But have you seen the concepts used to control the behavior of drones?

The drones are programmed to watch Kubernetes for configuration changes to the application whose pods they represent. As a bonus, you’ll see the drones demonstrate a canary testing configuration for an application using the open source Istio microservices mesh.
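Canary testing with Istio is typically expressed as weighted routing in a VirtualService. A minimal sketch, assuming a hypothetical web-frontend service with stable and canary subsets (the subsets themselves would be defined in a companion DestinationRule):

```yaml
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: web-frontend
spec:
  hosts:
  - web-frontend
  http:
  - route:
    - destination:
        host: web-frontend
        subset: stable
      weight: 90               # 90% of traffic stays on the stable version
    - destination:
        host: web-frontend
        subset: canary
      weight: 10               # 10% is diverted to the canary for evaluation
```

Shifting the weights gradually toward the canary, while watching its metrics, is the essence of the pattern.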

More about Kubernetes, containers, and the IBM Cloud Kubernetes Service

Explore the IBM Cloud Kubernetes Service.

Check out our lightboarding video series on various Kubernetes and container topics.
