November 11, 2019 | By Jason McAlpin | 3 min read

Here are six must-see sessions at KubeCon 2019 (November 18 – 21, 2019) that focus on the challenges for DevOps teams doing cloud native development with Kubernetes.

Cloud native application development, widely seen as the best way to compete as a digital business, is driving innovation in the cloud platforms themselves.

Kubernetes, by enabling application development and operations teams to do cloud-based work at velocity and scale, has become a foundational technology in the ecosystem of continuous delivery tools and practices.

Automating operations is the key. By automating operations, Kubernetes frees DevOps teams to stay focused on what customers want from the applications designed to grow the business. The declarative configuration model lets teams declare the desired state of an application in production; Kubernetes takes care of running pods on the right nodes in a cluster and across regions, scaling instances of the relevant microservices up and down to meet user demand.
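To make that declarative model concrete, here is a minimal sketch of a deployment configuration. The application name, labels, and image are hypothetical placeholders.

```yaml
# Minimal, hypothetical Deployment manifest. It declares the desired
# state (three replicas of a web frontend); Kubernetes keeps the
# cluster converged on that state.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: storefront-web              # hypothetical application name
spec:
  replicas: 3                       # desired number of pod instances
  selector:
    matchLabels:
      app: storefront-web
  template:
    metadata:
      labels:
        app: storefront-web
    spec:
      containers:
      - name: web
        image: example.com/storefront-web:1.0   # hypothetical image
        ports:
        - containerPort: 8080
```

Apply the file with kubectl apply -f, and Kubernetes schedules the pods onto suitable nodes and replaces any instance that fails.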

Making your plan for KubeCon 2019

Below are the six sessions, grouped by the DevOps challenge each one addresses.

Running Kubernetes in production at high scale

Containerd Mini-Summit: Phil Estes (Distinguished Engineer & CTO, Container and Linux OS) leads this program on containerd, the container runtime that enables Kubernetes to operate at very high scale.

Managing a DevOps pipeline

Mario’s Adventure in Tekton Land (Tuesday, November 19, 4:25pm – 5:00pm): Vincent Demeester (Principal Software Engineer, Red Hat) and Andrea Frittoli (Open Source Developer Advocate, IBM Cloud) take you through replacing a pipeline defined with bash scripts with one built from reusable Tekton modules.

Managing deployment patterns

Advanced Model Inferencing Leveraging Knative, Istio, and Kubeflow Serving (Wednesday, November 20, 10:55am – 11:30am): Animesh Singh (STSM and Program Director) covers how to handle autoscaling, scale-to-zero, canary, and other deployment patterns using Kubeflow Serving and the native Kubernetes stack (Knative, Istio).  

Managing microservices

Ready to Serve! Speeding-Up Startup Time of Istio-Powered Workloads (Thursday, November 21, 4:25pm – 5:00pm): Michal Malka (Manager, IBM Research) and Etai Lev Ran (System Architect, IBM) analyze the latency the Istio service mesh adds to pod startup time, from pod creation up to the pod becoming ready to serve requests. They’ll also examine techniques to reduce it, including using Istio CNI to bootstrap the pod’s network, launching the sidecar proxy with an initial routing configuration, and using manual sidecar injection.

Enabling serverless workloads on Kubernetes

CNCF’s Serverless Working Group: Tell Me Where it Hurts (Thursday, November 21, 4:25pm – 5:55pm): Doug Davis (STSM & Offering Manager, Knative) provides a community update on the state of serverless in Kubernetes. This session involves active discussion, so come with your pain points and opinions on the interoperability and portability of serverless workloads.

Kubernetes does drones?

Flying Kubernetes: Using Drones to Understand how Kubernetes Works (IBM Cloud Theater, 11:00-11:30 am on November 19 and 20): Watch IBM Cloud leaders Briana Frank and Jason McGee put Kubernetes through its paces with a fleet of drones.

Running an application in Kubernetes involves creating and applying a deployment configuration file. Scaling involves manually setting a replica count in that configuration or defining an autoscaling rule (specifying a minimum and maximum number of pods to run) that tells Kubernetes how to respond to traffic fluctuations. That same deployment configuration enables an application to self-recover; should one of its instances go down, Kubernetes takes action to bring the running state back in line with the configuration.
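An autoscaling rule of that kind is typically expressed as a HorizontalPodAutoscaler. The sketch below is hypothetical and assumes the storefront-web Deployment from the earlier example.

```yaml
# Hypothetical HorizontalPodAutoscaler: keeps between 2 and 10 pods
# running, scaling the Deployment as average CPU utilization moves
# above or below the target.
apiVersion: autoscaling/v1
kind: HorizontalPodAutoscaler
metadata:
  name: storefront-web-hpa
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: storefront-web            # hypothetical Deployment name
  minReplicas: 2                    # minimum number of pods
  maxReplicas: 10                   # maximum number of pods
  targetCPUUtilizationPercentage: 70
```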

In terms of managing load among the worker nodes assigned to host instances of an application, Kubernetes supports affinity rules. By default, Kubernetes assigns a pod to a worker that is not yet running one of the application's pods and then round-robins additional pods onto the workers assigned to that application. You can take control of pod-to-worker assignments, as needed, by configuring stricter affinity or anti-affinity rules.
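As a sketch of taking that control, the snippet below (placed under the pod template's spec in the hypothetical Deployment shown earlier) uses pod anti-affinity to ask the scheduler not to put two of the application's pods on the same worker node.

```yaml
# Hypothetical pod anti-affinity rule: never co-locate two
# storefront-web pods on the same node (keyed on the node hostname).
affinity:
  podAntiAffinity:
    requiredDuringSchedulingIgnoredDuringExecution:
    - labelSelector:
        matchLabels:
          app: storefront-web
      topologyKey: kubernetes.io/hostname
```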

Maybe you already know those Kubernetes concepts and have seen them in action with your applications. But have you seen the concepts used to control the behavior of drones?

The drones are programmed to watch Kubernetes for configuration changes to the application whose pods they represent. As a bonus, you’ll see the drones demonstrate a canary testing configuration for an application using the open source Istio service mesh.
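For reference, a canary split with Istio is usually expressed as weighted routes in a VirtualService. This sketch is hypothetical (the host and subset names are placeholders, and it assumes a matching DestinationRule that defines the subsets); the demo's actual configuration may differ.

```yaml
# Hypothetical Istio VirtualService: sends 90% of traffic to the
# stable version of the service and 10% to the canary version.
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: storefront-web
spec:
  hosts:
  - storefront-web
  http:
  - route:
    - destination:
        host: storefront-web
        subset: stable              # subsets come from a DestinationRule
      weight: 90                    # (not shown)
    - destination:
        host: storefront-web
        subset: canary
      weight: 10
```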

More about Kubernetes, containers, and the IBM Cloud Kubernetes Service

Explore the IBM Cloud Kubernetes Service.

Check out our lightboarding video series on various Kubernetes and containers topics.
