Key concepts for Red Hat OpenShift Container Platform
The following section covers basic concepts of Operators, containers, and Kubernetes. Learning these concepts can help you understand the benefits of migration.
For more information about working with Red Hat OCP, see the following topics:
- Operators and dependencies
- Operator Lifecycle Manager (OLM)
- Containerization, Kubernetes, and Red Hat OpenShift Container Platform (OCP)
Operators and dependencies
What is an Operator?
An Operator is a set of Kubernetes-native resources that packages, deploys, and manages a Kubernetes application by extending the Kubernetes API.
What is a Kubernetes application?
A Kubernetes application is an application deployed on Kubernetes and managed by using Kubernetes APIs and kubectl tooling.
How does an Operator work?
An Operator consists of two main components that together enable efficient management of applications on Kubernetes: a controller and one or more custom resource definitions (CRDs).
The controller is custom code deployed to a Kubernetes cluster that watches for changes to custom Kubernetes resources and reacts to them. A custom resource is an extension of the Kubernetes API that provides capability not available in a default Kubernetes installation, allowing for customization and modularization of Kubernetes.
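The controller pattern described above can be sketched, independently of any cluster, as a reconciliation loop: compare the desired state declared in a custom resource with the observed state, then act on the difference. The resource shape and names below are purely illustrative; a real controller would watch the Kubernetes API rather than in-memory dictionaries.

```python
# Minimal reconciliation sketch (illustrative, not a real controller).

# Desired state, as a user would declare it in a custom resource.
desired = {"name": "my-app", "replicas": 3}

# Observed state of the cluster (how many replicas actually run).
observed = {"name": "my-app", "replicas": 1}

def reconcile(desired, observed):
    """Return the actions needed to move observed state toward desired state."""
    actions = []
    diff = desired["replicas"] - observed["replicas"]
    if diff > 0:
        actions.append(f"create {diff} replica(s) of {desired['name']}")
    elif diff < 0:
        actions.append(f"delete {-diff} replica(s) of {desired['name']}")
    return actions

print(reconcile(desired, observed))  # -> ['create 2 replica(s) of my-app']
```

A real controller runs this comparison continuously, so the cluster converges on the declared state even after failures.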
What is a dependency?
A dependency is a prerequisite that must be satisfied before processing can proceed. That is, when one entity in a system cannot meaningfully function without another entity, it is said to be dependent on it. For example, an application might have dependencies on a server, database, or other services to which it is connected. In cloud migration, such application dependencies are a possible risk. Discovery tools can provide a clear picture of the relationship between each application and its dependencies so that you can successfully migrate all critical applications and services to the cloud.
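As a toy illustration of why dependency discovery matters, the dependencies among a set of applications can be modeled as a graph and sorted so that each service is migrated only after everything it depends on. The application names and edges here are invented for the example.

```python
from graphlib import TopologicalSorter  # standard library, Python 3.9+

# Hypothetical dependency map: each app lists the services it depends on.
deps = {
    "web-frontend": {"orders-api"},
    "orders-api": {"database", "auth-service"},
    "auth-service": {"database"},
    "database": set(),
}

# static_order() yields a migration order in which every service
# appears after all of its dependencies.
order = list(TopologicalSorter(deps).static_order())
print(order)  # "database" comes first, "web-frontend" last
```

If the graph contains a cycle (two services that depend on each other), `static_order()` raises an error, which is itself a useful migration-planning signal.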
Operator Lifecycle Manager (OLM)
What is Operator Lifecycle Manager?
Operator Lifecycle Manager (OLM) extends the capability of Kubernetes by enabling users to install, manage, and upgrade Operators and their dependencies in a cluster.
Why use Operator Lifecycle Manager?
OLM enables users to do the following:
- Easily manage applications by defining each application as a single Kubernetes resource with its requirements and metadata. OLM requires this metadata to verify that an Operator can run safely on a cluster and to understand how updates should be applied.
- Automate application installation and dependency resolution, or install manually with nothing but kubectl
- Automate application updates and apply a different approval policy to each application
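The last point, combining automated updates with per-application approval policies, can be sketched as a small decision function. OLM does distinguish Automatic and Manual approval for updates; everything else here (the function name, the tuple version format) is an illustrative simplification, not OLM's actual API.

```python
# Toy model of update approval (illustrative; not OLM's real interface).
def needs_approval(current, available, policy):
    """Decide what to do about an available Operator update.

    current, available: versions as (major, minor, patch) tuples.
    policy: "Automatic" applies any newer version; "Manual" waits for approval.
    """
    if available <= current:
        return None              # already up to date; nothing to do
    if policy == "Automatic":
        return "apply"
    return "await-approval"

print(needs_approval((1, 2, 0), (1, 3, 0), "Automatic"))  # prints: apply
print(needs_approval((1, 2, 0), (1, 3, 0), "Manual"))     # prints: await-approval
```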
Containerization, Kubernetes, and Red Hat OpenShift Container Platform (OCP)
What is a container?
A container is an executable unit of software in which application code is packaged, together with its libraries and dependencies, in a standard way so that it can run anywhere: on the desktop, in traditional IT, or in the cloud. Containers take advantage of a form of operating system (OS) virtualization that lets multiple applications share the OS by isolating processes and controlling the amount of CPU, memory, and disk those processes can access.
What is containerization?
Containerization is the process of packaging up software code and all its dependencies so that it can run consistently on any infrastructure. That is, containerization allows applications to be "written once and run anywhere".
Benefits of containerization
Containerization offers the following benefits to developers and development teams:
- Portability: A container creates an executable package of software that is abstracted away from (not tied to or dependent upon) the host operating system, and hence, is portable and able to run uniformly and consistently across any platform or cloud.
- Agility: The open source Docker Engine for running containers jump-started the industry standard for containers with simple developer tools and a universal packaging approach that works on both Linux and Windows operating systems. The container ecosystem has since shifted to engines governed by the Open Container Initiative (OCI). Software developers can continue to use agile or DevOps tools and processes for rapid application development and enhancement.
- Speed: Containers are often described as "lightweight" because they share the machine's operating system (OS) kernel and avoid the overhead of running a separate OS for each application. Not only does this drive higher server efficiencies, it also reduces server and licensing costs while speeding up start times, as there is no operating system to boot.
- Fault isolation: Each containerized application is isolated and operates independently of others. The failure of one container does not affect the continued operation of any other containers. Development teams can identify and correct any technical issues within one container without any downtime in other containers. Also, the container engine can use any OS security isolation techniques—such as SELinux access control—to isolate faults within containers.
- Efficiency: Software running in containerized environments shares the machine's OS kernel, and application layers within a container can be shared across containers. Thus, containers are inherently smaller than a VM and require less start-up time, allowing far more containers to run on the same compute capacity as a single VM. This drives higher server efficiencies, reducing server and licensing costs.
- Ease of management: A container orchestration platform automates the installation, scaling, and management of containerized workloads and services. Container orchestration platforms can ease management tasks such as scaling containerized apps, rolling out new versions of apps, and providing monitoring, logging, and debugging, among other functions. Kubernetes, perhaps the most popular container orchestration system available, is an open source technology (originally open-sourced by Google, based on its internal project called Borg) that was originally designed to automate the management of Linux containers. Kubernetes works with many container engines, such as Docker, and also with any container system that conforms to the Open Container Initiative (OCI) standards for container image formats and runtimes.
- Security: The isolation of applications as containers inherently prevents the invasion of malicious code from affecting other containers or the host system. Additionally, security permissions can be defined to automatically block unwanted components from entering containers or limit communications with unnecessary resources.
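The layer-sharing point in the Efficiency bullet can be made concrete with a toy storage calculation: when several container images share base layers, only one copy of each distinct layer needs to be stored. The layer names and sizes below are invented for the example.

```python
# Hypothetical image definitions: each image is a list of (layer, size_mb).
images = {
    "app-a": [("os-base", 80), ("python", 50), ("app-a-code", 5)],
    "app-b": [("os-base", 80), ("python", 50), ("app-b-code", 7)],
}

# Naive storage: every image carries full copies of all of its layers.
naive = sum(size for layers in images.values() for _, size in layers)

# Shared storage: each distinct layer is stored exactly once.
shared = sum(dict(layer for layers in images.values() for layer in layers).values())

print(naive, shared)  # prints: 272 142
```

Here the two apps together need 142 MB instead of 272 MB because the 80 MB base layer and the 50 MB runtime layer are stored once and shared.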
To learn more about containerization, see Containerization Explained.
What is Kubernetes?
Kubernetes — also known as "k8s" or "kube" — is a container orchestration platform for scheduling and automating the deployment, management, and scaling of containerized applications.
Why use Kubernetes?
As containers proliferated — today, an organization might have hundreds or thousands of them — operations teams needed to schedule and automate container deployment, networking, scalability, and availability.
Developers choose Kubernetes for its breadth of functionality, its vast and growing ecosystem of open source supporting tools, and its support and portability across the leading cloud providers (some of whom now offer fully managed Kubernetes services).
What does Kubernetes do?
Kubernetes schedules and automates the following tasks:
- Deployment: Deploy a specified number of containers to a specified host and keep them running in a desired state.
- Rollouts: A rollout is a change to a deployment. Kubernetes lets you initiate, pause, resume, or roll back rollouts.
- Service discovery: Kubernetes can automatically expose a container to the internet or to other containers by using a DNS name or IP address.
- Storage provisioning: Set Kubernetes to mount persistent local or cloud storage for your containers as needed.
- Load balancing and scaling: When traffic to a container spikes, Kubernetes can employ load balancing and scaling to distribute that traffic across the network and maintain stability.
- Self-healing for high availability: When a container fails, Kubernetes can restart or replace it automatically; it can also take down containers that don’t meet your health-check requirements.
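The self-healing behavior in the last bullet can be caricatured as a pass over container health checks: restart what fails, leave the rest alone. Everything here (the container list, the health flags) is simulated; in Kubernetes this work is done by the kubelet and controllers, not by a simple loop.

```python
# Simulated self-healing pass (illustrative only).
containers = [
    {"name": "web-1", "healthy": True},
    {"name": "web-2", "healthy": False},  # fails its health check
    {"name": "web-3", "healthy": True},
]

def heal(containers):
    """Restart any container that fails its health check; report actions taken."""
    actions = []
    for c in containers:
        if not c["healthy"]:
            actions.append(f"restart {c['name']}")
            c["healthy"] = True  # assume the restart succeeds
    return actions

print(heal(containers))  # -> ['restart web-2']
```

Run repeatedly, a loop like this keeps the observed state converging toward the desired "all containers healthy" state, which is the same declarative idea behind deployments and rollouts above.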
What is Red Hat OpenShift Container Platform?
OpenShift Container Platform is a platform for automating the deployment and management of containerized applications. While OpenShift Container Platform uses Kubernetes to orchestrate containers, Kubernetes does not manage platform-level requirements or deployment processes. Therefore, OpenShift Container Platform enhances the capability of Kubernetes by providing platform management tools and processes.
For more information, see Red Hat OpenShift Container Platform.