Terminology

General Terminology

Terminology Description
management node In IBM® Cloud Infrastructure Center, a management node is a Linux machine that runs a set of services that manage the cloud. Only one management node is required per cloud. The relationship between the management node and compute nodes is 1:n, which means one management node manages multiple compute nodes.
compute node In IBM Cloud Infrastructure Center, a compute node is a Linux® machine that runs a set of agents for a hypervisor so that the hypervisor can be managed by the management node. A compute node has a 1:1 relationship with its hypervisor. For example, you need to create one compute node for each managed z/VM®, and through that compute node the management node runs workloads on the z/VM (for example, scheduling a virtual machine to run on it).
compute template Default configurations for virtual machines, such as the CPU, memory, and disk size. Compute templates are called flavors in OpenStack.
deploy templates Deploy templates allow authorized users to quickly, easily, and reliably deploy an image. A deploy template contains everything that you need to deploy an image, including the deploy target, the storage connectivity group to use, the compute template to use, the size of the virtual machine to create, and so on. After the deploy template has been created, it is simple to use the template to create one or more virtual machines.
host In the IBM Cloud Infrastructure Center, a host is a hypervisor (for example, z/VM) that contains processors, memory, and I/O resources.
project Resources belong to a project. This allows for resources to be made accessible to a single user by giving that user their own project. Alternatively, multiple users can be given access to the same resource by giving each of those users a role on the project in question. Projects are sometimes referred to as tenants.
requests Requests are any actions that require administrator approval before they can be completed. IBM Cloud Infrastructure Center sends an approval request when a user attempts an operation that an administrator has configured to require approval.
self-service portal The portal through which self-service users work with cloud resources, including the ability to create and use deploy templates. Accessing the self-service portal requires self_service or administrator authority.
self-service user A user that has been granted self_service authority. Self-service users can access the self-service portal.
telemetry The collection of data on the utilization of the physical and virtual resources that comprise deployed virtual servers. These data are persisted for subsequent retrieval and analysis.
virtual server (VM) Also called a virtual machine, a virtual server is a collection of processor, memory, and I/O resources that are defined to run an operating system and its applications. Virtual machines are also called instances (in an OpenStack context).
z/VM z/VM is a highly secure and scalable virtualization technology for cloud infrastructure and for running critical applications on IBM® Z and LinuxONE servers. Directory Maintenance Facility for z/VM (DIRMAINT) provides efficient and secure interactive facilities for maintaining your z/VM system directory. Resource Access Control Facility (RACF®) manages system data security and integrity on z/VM systems. In this document, DIRMAINT and RACF are used to represent the mandatory and optional configuration on z/VM; you might choose other third-party tools to complete the same tasks.
KVM Red Hat Enterprise Linux (RHEL) Kernel-based Virtual Machine (KVM) provides the virtualization functions that enable a machine running RHEL to host multiple virtual machines (VMs), also referred to as guests. VMs use the host’s physical hardware and computing resources to run a separate, virtualized operating system (guest OS) as a user-space process on the host’s operating system. KVM is used to represent Red Hat Enterprise Linux (RHEL) Kernel-based Virtual Machine (KVM) in the rest of the documentation.
RHCOS Red Hat Enterprise Linux CoreOS (RHCOS) represents the next generation of single-purpose container operating system technology. RHCOS combines the quality standards of Red Hat Enterprise Linux (RHEL) with the automated, remote upgrade features from Container Linux.

Network Terminology

Terminology Description
Network bonding Network bonding is a method to combine or aggregate network interfaces to provide a logical interface with higher throughput or redundancy. Network bonding supports various modes. The active-backup, balance-tlb, and balance-alb modes do not require any specific configuration of the network switch. However, other bonding modes require configuring the switch to aggregate the links.
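As a rough illustration of the active-backup mode mentioned above (which requires no switch-side configuration), a RHEL-style interface configuration might look like the following sketch. The device names and addresses are placeholders, not values from IBM Cloud Infrastructure Center:

```
# /etc/sysconfig/network-scripts/ifcfg-bond0  (illustrative placeholder values)
DEVICE=bond0
TYPE=Bond
BONDING_MASTER=yes
BONDING_OPTS="mode=active-backup miimon=100"  # failover bond, link checked every 100 ms
BOOTPROTO=none
IPADDR=192.0.2.10
PREFIX=24
ONBOOT=yes

# /etc/sysconfig/network-scripts/ifcfg-enc1000  (one of the aggregated interfaces)
DEVICE=enc1000
MASTER=bond0
SLAVE=yes
BOOTPROTO=none
ONBOOT=yes
```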
Network teaming Network teaming is a feature that combines or aggregates network interfaces to provide a logical interface with higher throughput or redundancy. Network teaming uses a kernel driver to implement fast handling of packet flows, and user-space libraries and services for other tasks. This way, network teaming is an easily extensible and scalable solution for load-balancing and redundancy requirements.
Security group A security group defines which traffic is allowed through to a virtual machine. Security groups are associated with one or more network ports and let you define access controls for communication. Administrators can use security groups to define a consistent policy for applying data protocol controls to virtual machines. These controls are applied at the hypervisor level and do not require operating system-dependent firewall technologies.
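The rule-matching, default-deny behavior described above can be sketched in a few lines. The rule model and helper below are hypothetical simplifications for illustration, not the product's actual data structures:

```python
from dataclasses import dataclass
from ipaddress import ip_address, ip_network

@dataclass
class Rule:
    protocol: str      # "tcp", "udp", or "icmp"
    port_min: int      # lowest destination port the rule allows
    port_max: int      # highest destination port the rule allows
    remote_cidr: str   # source addresses the rule allows

def is_allowed(rules, protocol, port, source_ip):
    """Return True if any rule permits this inbound packet."""
    for r in rules:
        if (r.protocol == protocol
                and r.port_min <= port <= r.port_max
                and ip_address(source_ip) in ip_network(r.remote_cidr)):
            return True
    return False  # default deny: traffic not matched by any rule is dropped

rules = [Rule("tcp", 22, 22, "203.0.113.0/24"),   # SSH from one subnet only
         Rule("tcp", 80, 443, "0.0.0.0/0")]       # HTTP/HTTPS from anywhere

print(is_allowed(rules, "tcp", 22, "203.0.113.7"))   # True
print(is_allowed(rules, "tcp", 22, "198.51.100.9"))  # False
```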
Layer 3 network In the seven-layer OSI model of computer networking, layer 3 is the network layer. Layer 3 performs packet-forwarding functions.
Private tenant network Tenant networks are created by tenants for use by their instances and are always private. Tenant networks can be instantiated by using either underlay or overlay technologies. In general, tenants are unaware of how their tenant networks are physically realized.
OVN OVN (Open Virtual Network) is a series of daemons for Open vSwitch that convert virtual network configurations into OpenFlow. OVN provides a higher layer of abstraction than Open vSwitch, working with logical routers and logical switches rather than flows. OVN is also an Open vSwitch-based software-defined networking (SDN) solution for supplying network services to instances and virtual machines. It is now promoted as the Open vSwitch-based networking implementation in OpenStack releases.
Geneve Geneve stands for Generic Network Virtualization Encapsulation. It is a network virtualization technology, also known as an overlay tunnel protocol, that defines an encapsulation data format for adoption in large multi-tenant clouds. Geneve-encapsulated packets are transmitted over standard networking equipment. Packets are sent from one tunnel endpoint to one or more tunnel endpoints by using either unicast or multicast addressing. Geneve has been adopted as the default tunneling protocol for OVN (Open Virtual Network).
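Because Geneve is defined by its encapsulation data format, a small sketch of packing the 8-byte Geneve base header (as specified in RFC 8926) can make the term concrete. The function below is an illustration, not code from OVN:

```python
import struct

def geneve_header(vni, protocol=0x6558, opt_len=0, oam=False, critical=False):
    """Pack the 8-byte Geneve base header (RFC 8926).

    vni      -- 24-bit Virtual Network Identifier
    protocol -- encapsulated payload type (0x6558 = Transparent Ethernet Bridging)
    opt_len  -- length of variable options, in 4-byte words
    """
    byte0 = (0 << 6) | (opt_len & 0x3F)              # version 0, option length
    byte1 = (int(oam) << 7) | (int(critical) << 6)   # O and C flags
    vni_and_rsvd = (vni & 0xFFFFFF) << 8             # 24-bit VNI + reserved byte
    return struct.pack("!BBHI", byte0, byte1, protocol, vni_and_rsvd)

hdr = geneve_header(vni=5001)
print(hdr.hex())  # 0000655800138900
```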
Network Routing Routing is the process of selecting a path for traffic in a network or between or across multiple networks.
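Path selection at layer 3 typically means longest-prefix matching: among all routes whose prefix contains the destination address, the most specific one wins. A toy lookup, with made-up route and gateway names, might look like:

```python
from ipaddress import ip_address, ip_network

# Illustrative routing table; prefixes and next-hop names are placeholders.
routes = {
    "0.0.0.0/0":   "gateway-default",    # catch-all route
    "10.0.0.0/8":  "gateway-internal",
    "10.1.2.0/24": "gateway-rack42",
}

def next_hop(destination):
    """Pick the most specific route whose prefix contains the destination."""
    matches = [net for net in routes
               if ip_address(destination) in ip_network(net)]
    best = max(matches, key=lambda net: ip_network(net).prefixlen)
    return routes[best]

print(next_hop("10.1.2.9"))   # gateway-rack42 (the /24 is most specific)
print(next_hop("192.0.2.1"))  # gateway-default
```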
Floating IP A floating IP is a public, routable IP address that is not automatically assigned to an instance or virtual machine. Instead, a project administrator assigns one to an instance. The instance has an automatically assigned, private IP for communication between instances in a private, nonroutable network area, as well as the manually assigned floating IP. With a floating IP, such an instance can accept incoming connections from the internet.

Storage Terminology

Terminology Description
local storage Storage that is allocated in support of a virtual machine; its lifecycle cannot exceed the owning virtual machine's lifecycle. The disks disappear when the virtual machine that uses the local storage is decommissioned. Usually, it is storage that is managed by the underlying hypervisor of the compute node, for example, the ECKD or FBA DASD group on a z/VM hypervisor or the storage of a KVM hypervisor. Root disks, ephemeral disks, and swap disks can be allocated from local storage.
shared local storage A special type of local storage. The shared local storage is shared among multiple compute nodes and treated as their own local storage by each compute node.
persistent storage Persistent storage consists of storage objects (for example, volumes) that come from a storage provider. Access rights and bindings between virtual machines and persistent storage objects can be established and released dynamically. A volume can be attached to a virtual machine and remains available even after the virtual machine is decommissioned.
volume A storage object that is allocated from persistent storage; multiple volumes can be allocated and attached to a virtual machine.
ephemeral disk The disks that are associated with virtual machines are ephemeral, meaning that from the user’s point of view they disappear when the virtual machine is decommissioned. One virtual machine can have multiple ephemeral disks.
boot from volume Booting a virtual machine from a volume that serves as its root disk.
agent node of a storage provider The node where the agent service of a storage provider runs. When registering a storage provider with the IBM Cloud Infrastructure Center, you must select an agent node. It can be one of the compute nodes.
storage template A template that specifies the properties of a volume, such as thin provisioning, compression, storage pool, and storage provider. Storage templates make it easy to quickly and accurately create volumes and deploy virtual machines from images. A storage template is called a volume type in OpenStack.
root disk The disk that the virtual machine boots from.
swap disk The disk that is used for memory swapping when memory usage is high.
data disk A disk that is distinct from the root disk that the virtual machine boots from; data disks are mostly used for data storage.
multiattach-capable volume A volume that can be attached to more than one virtual machine.
FCP multipath Template The FCP multipath template feature provides a flexible and efficient way to use FCP devices. For each virtual machine (VM) on a z/VM host, different FCP multipath templates can be used when you attach volumes (including booting from a volume) from different persistent storage providers through Fibre Channel Protocol. FCP devices are then allocated from the selected FCP multipath template so that the choice of FCP devices fits the corresponding customer workload.
Consistency group Consistency groups are groups of volumes with consistent data that are used to create a point-in-time snapshot or a consistent copy of volumes. They are commonly used in scenarios where application data spans multiple volumes and data integrity must be preserved across those volumes. For example, the logs for a particular database usually reside on a different volume than the volume that contains the data, so a consistency group can create a snapshot of those volumes at exactly the same time.

Topology Terminology

Terminology Description
availability zone A way to create logical groupings of hosts. For more information, see Availability Zones.
host group A way to create virtual boundaries around a group of hosts and logically partition hosts; it is called host aggregates in OpenStack. For more information, see Host Aggregates.
collocation rules The rules are used to specify that the selected virtual machines must always be kept on the same host or can never be placed on the same host. In OpenStack, the collocation rules are called Server Groups. For more information, see Collocation Rules.
placement In a narrow sense, placement is used to track resource provider inventories and usages, along with different classes of resources. For example, a resource provider can be a compute node, a shared storage pool, or an IP allocation pool. For more information, see Placement. In a broad sense, placement is used to select the compute node that a virtual machine lands on when you deploy the virtual machine, including the placement mentioned previously and different compute schedulers. For more information, see Compute Schedulers.
instance extra specs A set of extra specs that provides finer-grained control over the definition of the virtual machine that is created.

Multi-node Cluster Terminology

Terminology Description
multi-node cluster A high-availability cluster that is deployed with three management nodes.
Stand-alone deployment An installation as in previous releases, with one management node; no high-availability components are used.
Pacemaker Pacemaker (PCS) is a component provided by the Red Hat Enterprise Linux High Availability Add-On cluster infrastructure that offers the basic functions for a group of computers (called nodes or members) to work together as a cluster.
HAProxy HAProxy is a component provided by the Red Hat Enterprise Linux High Availability Add-On cluster infrastructure that offers load balancing for HTTP- and TCP-based services.
virtual IP A virtual IP address (VIP) that internally routes requests to one of the three management nodes with the help of a load balancer (HAProxy).
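Assuming a VIP fronting three management nodes, an HAProxy configuration fragment in this spirit could look like the sketch below. The names and addresses are placeholders, not the product's shipped configuration:

```
frontend api_vip
    mode tcp                      # pass TLS through untouched in this sketch
    bind 192.0.2.100:443          # the virtual IP (VIP) that clients connect to
    default_backend mgmt_nodes

backend mgmt_nodes
    mode tcp
    balance roundrobin            # spread requests across the three nodes
    server mgmt1 192.0.2.11:443 check
    server mgmt2 192.0.2.12:443 check
    server mgmt3 192.0.2.13:443 check
```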
Galera Galera Cluster for MySQL is a database clustering solution that uses synchronous replication to build a single cluster entity, a so-called multi-master cluster, in which all servers hold identical data at all times.
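A minimal my.cnf fragment for a three-node Galera cluster might look like the following; the addresses, cluster name, and library path are placeholders (the path varies by distribution), not the settings that IBM Cloud Infrastructure Center generates:

```
[mysqld]
binlog_format=ROW
default_storage_engine=InnoDB
wsrep_on=ON
wsrep_provider=/usr/lib64/galera/libgalera_smm.so
wsrep_cluster_name=example_cluster
wsrep_cluster_address=gcomm://192.0.2.11,192.0.2.12,192.0.2.13
wsrep_sst_method=rsync
```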
Fencing If communication with a single node in the cluster fails, then other nodes in the cluster must be able to restrict or release access to resources that the failed cluster node may have access to. This cannot be accomplished by contacting the cluster node itself as the cluster node may not be responsive. Instead, you must provide an external method, which is called fencing with a fence agent. A fence device is an external device that can be used by the cluster to restrict access to shared resources by an errant node, or to issue a hard restart on the cluster node.
primary node The primary node is the management node that Pacemaker assigns as the designated controller. All other nodes in the multi-node cluster are labeled as secondary nodes.
secondary node All nonprimary management nodes in the multi-node cluster are labeled as secondary nodes.
operation manager Operation manager (OpsMgr) is an administrative tool that enables administrators to perform day-one or day-two operations against the multi-node cluster environment.