Blueprints for Linux on IBM systems

Plans for success using IBM hardware

A Linux® blueprint is a detailed plan of action for a specific task involving Linux on IBM hardware.

Installing and configuring Xen

Xen is a virtual machine monitor (also known as a hypervisor), which lets you deploy multiple virtual servers on a single physical server. Using software to make a single server appear to be many servers is known as virtualization. Virtualization offers many benefits such as server consolidation, increased utilization of server resources, and the ability to simultaneously run a mix of operating systems and their applications on a single hardware platform.

Installing and configuring Xen provides step-by-step instructions for installing and configuring a Xen host and paravirtual guests on RHEL 5.2-based NUMA and non-NUMA systems running on System x hardware.
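
If you want a quick sanity check that a Xen host is up before working through the blueprint, a short script like the following can help. This is a minimal Python 3 sketch, not part of the blueprint itself; it assumes the xm toolstack from RHEL 5.2 is installed and that the script runs as root.

    # check_xen.py - minimal sketch: confirm the Xen hypervisor and toolstack respond.
    # Assumes the RHEL 5.2 "xm" command is installed and the script runs as root.
    import subprocess
    import sys

    def running_domains():
        """Return the list of domain names reported by 'xm list'."""
        out = subprocess.run(["xm", "list"], capture_output=True, text=True, check=True)
        # Skip the header line; the first column of each row is the domain name.
        return [line.split()[0] for line in out.stdout.splitlines()[1:] if line.strip()]

    if __name__ == "__main__":
        try:
            domains = running_domains()
        except (OSError, subprocess.CalledProcessError) as err:
            sys.exit("xm not usable - is the Xen kernel booted? (%s)" % err)
        print("Domains:", ", ".join(domains))
        if "Domain-0" in domains:
            print("Xen hypervisor is active (Domain-0 present).")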

To discuss this blueprint, visit the Xen Virtualization Blueprint forum.

Protecting your data at rest with Linux

There are a number of data protection mechanisms already built into your Linux system that you may not be taking full advantage of.

Protecting your data at rest with Linux provides concrete advice on how to use encryption to protect your data at rest and vastly improve your organization's resistance to data leaks. This blueprint demonstrates, step by step, how to set up a new encrypted data partition, an encrypted swap partition, and an encrypted temporary file system, and how to migrate your old data to a new encrypted partition. The blueprint also shows you how to use a new file system, eCryptfs, to encrypt your data.
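
For readers who want to see the general shape of the dm-crypt workflow before opening the blueprint, here is a minimal Python 3 sketch that drives cryptsetup. The device path, mapping name, and mount point are placeholders, and the commands are only printed unless you pass --run, because luksFormat destroys existing data on the target device.

    # luks_sketch.py - outline of the dm-crypt/LUKS steps for a new encrypted data partition.
    # WARNING: luksFormat erases the target device. The device, name, and mount point
    # below are placeholders; commands are only printed unless --run is given.
    import subprocess
    import sys

    DEVICE = "/dev/sdb1"          # placeholder block device
    NAME = "cryptdata"            # device-mapper name
    MOUNTPOINT = "/srv/secure"    # placeholder mount point

    STEPS = [
        ["cryptsetup", "luksFormat", DEVICE],           # initialize the LUKS header (prompts for a passphrase)
        ["cryptsetup", "luksOpen", DEVICE, NAME],       # map the decrypted device to /dev/mapper/cryptdata
        ["mkfs.ext3", "/dev/mapper/%s" % NAME],         # create a file system on the mapped device
        ["mount", "/dev/mapper/%s" % NAME, MOUNTPOINT], # mount it like any other partition
    ]

    if __name__ == "__main__":
        execute = "--run" in sys.argv[1:]
        for cmd in STEPS:
            print(" ".join(cmd))
            if execute:
                subprocess.run(cmd, check=True)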

To discuss this blueprint, visit the Security Blueprint forum.

Installing Linux distributions on multipathed devices

Multipath connectivity refers to a system configuration where multiple connection paths exist between a server and a storage unit (a logical unit, or LUN) within a storage subsystem. This configuration can be used to provide redundancy or increased bandwidth.

Installing Linux distributions on multipathed devices provides step-by-step instructions for installing Red Hat Enterprise Linux 5.2 and SUSE Linux Enterprise Server 10 SP2 on a LUN in a multipath disk storage environment. The procedure demonstrated here is performed on a System x host connected to a DS6000 storage server through a Fibre Channel fabric, but it can be adapted for installing either of these Linux distributions onto other supported models of storage devices.
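
As a quick way to confirm that more than one path to each LUN is actually visible after such an installation, the following Python 3 sketch does a rough parse of the output of multipath -ll. It is only an illustration, the parsing is heuristic, and it assumes the device-mapper multipath tools are installed and the script runs as root.

    # count_paths.py - rough count of active paths per multipath map from 'multipath -ll'.
    # Illustration only; the parsing below is a heuristic and may need adjusting for your
    # multipath version's output format. Run as root with device-mapper-multipath installed.
    import re
    import subprocess

    def path_counts():
        out = subprocess.run(["multipath", "-ll"], capture_output=True, text=True, check=True)
        counts, current = {}, None
        for line in out.stdout.splitlines():
            if re.search(r"\bdm-\d+\b", line):                 # a map header line names the dm device
                current = line.split()[0]
                counts[current] = 0
            elif current and re.search(r"\b\d+:\d+:\d+:\d+\b", line):  # an individual path line (H:C:T:L)
                counts[current] += 1
        return counts

    if __name__ == "__main__":
        for name, paths in path_counts().items():
            print("%-30s %d path(s)" % (name, paths))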

To discuss this blueprint, visit the IBM Linux Storage Connectivity Forum: iSCSI and Multipath.

Installing the IBM System x and System p Blade iSCSI software initiator

The iSCSI standard (RFC 3720) defines transporting the SCSI protocol over a TCP/IP network in a way that allows block access to target devices. A host connection to the network can be provided by an iSCSI host bus adapter or an iSCSI software initiator that uses the standard network interface card in the host.

iSCSI-capable IBM Blades utilize two iSCSI initiators during the boot process. One iSCSI software initiator is contained in the system firmware or BIOS. The other iSCSI software initiator is provided by the operating system.

The System x and System p Blades provide a non-volatile area that can be written with iSCSI parameters. These parameters are used by the system firmware/BIOS during the boot process to obtain the boot loader. The iSCSI parameter values can also be used by enabled Linux distributions for configuring the software iSCSI initiator during installation and boot.
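
To make the software-initiator side of this more concrete, here is a small Python 3 sketch of the usual open-iscsi discovery-and-login sequence driven through iscsiadm. The portal address is a placeholder, and this is an illustration rather than the blueprint's install procedure.

    # iscsi_login.py - sketch of open-iscsi discovery and login using iscsiadm.
    # The portal IP below is a placeholder; run as root with iscsi-initiator-utils installed.
    import subprocess

    PORTAL = "192.0.2.10:3260"   # placeholder iSCSI target portal

    def run(cmd):
        print("+", " ".join(cmd))
        return subprocess.run(cmd, capture_output=True, text=True, check=True).stdout

    if __name__ == "__main__":
        # Ask the portal which targets it exposes (sendtargets discovery).
        discovery = run(["iscsiadm", "-m", "discovery", "-t", "sendtargets", "-p", PORTAL])
        print(discovery)
        # Each discovery line looks like "<portal>,<tpgt> <target IQN>"; log in to every target found.
        for line in discovery.splitlines():
            portal_part, iqn = line.split()[0], line.split()[-1]
            run(["iscsiadm", "-m", "node", "-T", iqn, "-p", portal_part.split(",")[0], "--login"])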

IBM System x and System p Blade iSCSI software initiator install provides instructions to install Red Hat Enterprise Linux 5 and SUSE Linux Enterprise Server 10 to an iSCSI target using the iSCSI software initiator capabilities on supported IBM System x and System p Blades.

To discuss this blueprint, visit the IBM Linux Storage Connectivity Forum: iSCSI and Multipath.

Installing and using real time Linux

Real time Linux covers the installation, setup, and use of Red Hat Enterprise MRG real time Linux on supported IBM platforms. It includes a brief introduction to basic real time concepts and pointers to related information for downloading and installing IBM's WebSphere Real Time product. Along with the installation and setup tasks, it describes the tools used to view and change the real time attributes of a task.

To discuss this blueprint, visit the IBM Real Time Linux Blueprint forum.

Implementing the IBM HPC Open Software Stack on IBM Power servers

The IBM HPC Open Software Stack is a set of open source components that can be used with software included in Red Hat Enterprise Linux 5.2 to build a high-performance computing solution.

IBM HPC Open Software Stack for RHEL 5.2 on IBM Power servers describes a reference implementation on IBM Power servers using this open source software along with IBM's XL compilers and IBM's ESSL Math Libraries, which offer optimization and performance-tuning features to exploit the POWER6 hardware architecture. This reference implementation uses a cluster of JS22 servers.

To discuss this blueprint, visit the IBM HPC Open Software Stack forum.

Monitoring and controlling power consumption using Linux

Learn to monitor and control power use on System x servers that have the relevant hardware features: support for Intel's Enhanced SpeedStep Technology (EIST) or AMD's PowerNow!, plus installed power-meter hardware.

Monitoring and controlling power consumption using Linux includes an installation guide for pwrkap, an LTC-developed package that monitors and displays power utilization, CPU load, and CPU frequency; instructions for seeing the power-capping mechanism in action; the key ideas behind the pwrkap algorithms; and a how-to on measuring the amount of energy consumed by a task; plus troubleshooting tips and tricks and a list of related information and downloads.

To discuss this blueprint, visit the Energy Management for System x Community Forum.

Installing Linux distributions on a Multipathed Device using the RDAC Driver

Multipath configurations provide multiple paths to a single device; the purpose of a multipath configuration is to allow for both path failover and increased throughput. Although many IBM storage platforms use the available open source dm multipath driver, the Redundant Disk Array Controller (RDAC) driver (also known as LSI's mpp driver) is the only driver certified on many DS3K and DS4K disk storage platforms.

Installing Linux distributions on a Multipathed Device using the RDAC Driver provides a basic set of instructions for correctly installing and configuring RHEL 5.2 and SLES 10 SP2 on a multipath device using the RDAC driver, including a tutorial on multipath connectivity, how to set up the hardware for a test environment, and how to install each Linux distribution on the system, plus troubleshooting tips and tricks and a list of related information and downloads.

To discuss this blueprint, visit the IBM Linux Storage Connectivity Blueprint Forum: iSCSI, Multipath, and Installing Linux on a Multipathed Device using RDAC.

Installing and using SystemTap

SystemTap is a tool that allows developers and administrators to write and reuse simple scripts to finely examine the activities of a live Linux system. Data may be extracted, filtered, and summarized quickly and safely to enable diagnoses of complex performance or functional problems.

Installing and using SystemTap includes an installation guide, scripting examples, command-line options, and a safety features list, as well as tutorials on function boundary tracing probes and root/non-root user probes; plus, troubleshooting tips and tricks and a list of related information and downloads.
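
If you just want to see a probe fire before working through the blueprint, the following Python 3 sketch runs a one-line SystemTap script for a few seconds. It assumes the stap command and matching kernel debug information are installed, and the probe point shown is only a common example.

    # stap_hello.py - run a tiny SystemTap one-liner for five seconds.
    # Assumes systemtap and the kernel debuginfo packages are installed; run as root.
    import subprocess

    # Print the process name and PID each time the open() system call is entered,
    # then exit after five seconds via a timer probe.
    SCRIPT = 'probe syscall.open { printf("%s (%d) called open\\n", execname(), pid()) } ' \
             'probe timer.s(5) { exit() }'

    if __name__ == "__main__":
        subprocess.run(["stap", "-e", SCRIPT], check=True)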

To discuss this Blueprint, visit the SystemTap Blueprint Support Forum.

Installing and configuring eCryptfs with a trusted platform module (TPM) key

Key management is traditionally the weakest point in deployed data encryption mechanisms since the majority of encryption solutions employ only basic password protection schemes and disregard the "best practices" tenet of multi-factor authentication. Remember, most passwords that users can reasonably expect to memorize can be successfully attacked with straightforward algorithms running on existing commodity computing devices.

Installing and configuring eCryptfs with a trusted platform module (TPM) key explains how to build and install the eCryptfs software along with its dependencies, how to set up encrypted swap, how to generate a TPM-sealed key, and how to perform the eCryptfs mount with the TPM-sealed key on RHEL 5.2; plus, you get troubleshooting tips and tricks and a list of related information and downloads.
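
To get a rough feel for what "TPM-sealed" means in practice, the Python 3 sketch below generates a random secret and seals and unseals it with the tpm_sealdata and tpm_unsealdata utilities from tpm-tools. The file names are placeholders, and the exact options of those utilities should be checked against your distribution's man pages before use.

    # tpm_seal_sketch.py - seal a randomly generated secret to the TPM and read it back.
    # Assumes tpm-tools is installed, the TPM is owned, and tcsd is running.
    # File names are placeholders; verify tpm_sealdata/tpm_unsealdata options on your system.
    import filecmp
    import os
    import subprocess

    SECRET = "/root/ecryptfs-key"          # placeholder: cleartext key material
    SEALED = "/root/ecryptfs-key.sealed"   # placeholder: TPM-sealed blob
    CHECK  = "/root/ecryptfs-key.check"    # placeholder: unsealed copy for comparison

    if __name__ == "__main__":
        # Generate 32 random bytes to use as key material (hex-encoded for readability).
        with open(SECRET, "w") as f:
            f.write(os.urandom(32).hex())
        os.chmod(SECRET, 0o600)
        # Seal the secret to this TPM; only this machine's TPM can unseal it again.
        subprocess.run(["tpm_sealdata", "-i", SECRET, "-o", SEALED], check=True)
        # Unseal it again to confirm the round trip works.
        subprocess.run(["tpm_unsealdata", "-i", SEALED, "-o", CHECK], check=True)
        print("Round trip OK:", filecmp.cmp(SECRET, CHECK, shallow=False))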

To discuss this Blueprint, visit the Linux Security Support Forum.

Using Intelligent Platform Management Interface (IPMI) on IBM Linux Platforms

IPMI is a standardized message-based hardware management interface; the Baseboard Management Controller (BMC) or Management Controller (MC) hardware chip implements the core of IPMI.

Using IPMI on IBM Linux Platforms details how to use the BMC to check system health, set up password controls, and power-cycle the system, and how to use the hardware timer to recover from a crash. It also lists the features in the newest version, IPMI 2.0.
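
For a flavor of what talking to the BMC looks like, this Python 3 sketch wraps a few common ipmitool queries. It assumes ipmitool is installed and the in-band OpenIPMI kernel driver is loaded, and the watchdog query is only one example of the timer functionality mentioned above.

    # bmc_health.py - quick in-band queries against the BMC via ipmitool.
    # Assumes ipmitool is installed and the OpenIPMI kernel driver is loaded; run as root.
    import subprocess

    QUERIES = {
        "Chassis status":   ["ipmitool", "chassis", "status"],
        "Sensor readings":  ["ipmitool", "sdr", "list"],
        "Watchdog timer":   ["ipmitool", "mc", "watchdog", "get"],
    }

    if __name__ == "__main__":
        for title, cmd in QUERIES.items():
            print("==", title)
            result = subprocess.run(cmd, capture_output=True, text=True)
            print(result.stdout if result.returncode == 0 else "command failed: " + result.stderr)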

To discuss this Blueprint, visit the IPMI Blueprint Community Forum.

Using MIT-Kerberos with IBM Tivoli Directory Server backend

Kerberos is a network authentication protocol designed to provide strong, mutual authentication between client and server applications. MIT-Kerberos started as a reference implementation from the original creators of the protocol and has since become one of the most widely used implementations.

Using MIT-Kerberos with IBM Tivoli Directory Server backend describes how to configure a Kerberos authentication realm using MIT-Kerberos and IBM Tivoli Directory Server (ITDS) 6.2 to store authentication data. It covers setting up IBM Tivoli Directory Server and the Kerberos KDC server and includes troubleshooting tips and considerations for using LDAP/OpenLDAP.
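
Once a realm like the one the blueprint builds is in place, verifying it from a client is a short exercise; the Python 3 sketch below runs kinit and klist for a test principal. The realm and principal names are placeholders, and the sketch assumes /etc/krb5.conf already points at your KDC.

    # krb_check.py - obtain and display a Kerberos ticket for a test principal.
    # The principal and realm below are placeholders; assumes the MIT Kerberos client tools
    # are installed and /etc/krb5.conf points at the KDC built in the blueprint.
    import getpass
    import subprocess

    PRINCIPAL = "testuser@EXAMPLE.COM"   # placeholder principal in a placeholder realm

    if __name__ == "__main__":
        password = getpass.getpass("Password for %s: " % PRINCIPAL)
        # Feed the password to kinit on stdin, then show the resulting ticket cache.
        subprocess.run(["kinit", PRINCIPAL], input=password + "\n", text=True, check=True)
        subprocess.run(["klist"], check=True)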

To discuss this Blueprint, visit the section on "Using MIT-Kerberos" in the Linux Security Support Forum.

Configuring Multipath Storage in the BladeCenter S Chassis

The BladeCenter S chassis has slots that accept multiple blades of various capabilities, as well as modules to provide other functions such as network connectivity and administration. In addition to these standard features, it contains room for up to two SAS (Serial Attached SCSI) Storage Modules, each of which can contain up to six SAS disk drives.

Configuring Multipath Storage in the BladeCenter S Chassis describes how to configure the SAS Switch Modules in a BladeCenter S chassis to enable multipath access to the disks in the Storage Modules in the chassis. Once multipath access is enabled, you can add the multipathed disks to a blade that has Linux (RHEL 5.2 or SLES 10 SP2) already installed onto its local disk. The multipath environment provides redundant paths for data access to the internal storage in the chassis. (For more on multipath, you might also want to look at the Installing Linux distributions on a Multipathed Device using the RDAC Driver blueprint.)

To discuss this Blueprint, visit the section on "Configuring Multipath Storage in the BladeCenter S Chassis" in the IBM Linux Storage Connectivity Blueprint Forum.

Installing and using the SystemTap GUI

SystemTap is a tool that allows developers and administrators to write and reuse simple scripts to finely examine the activities of a live Linux system. Data may be extracted, filtered, and summarized quickly and safely to enable diagnoses of complex performance or functional problems.

SystemTap GUI is a tool built on Eclipse that makes it easier to write SystemTap scripts and visualize kernel events. From the SystemTap GUI, you can use SystemTap scripts to insert probes into the Linux kernel that monitor system activities and collect related data. This blueprint shows you how to install the SystemTap GUI software on a system running Red Hat Enterprise Linux (RHEL) version 5.2; it also shows you how to use the IDE Perspective to write and run SystemTap scripts and how to use the Graphing Perspective to visualize collected data.

To discuss this blueprint, visit the SystemTap Blueprint Support Forum.

First Steps with Security-Enhanced Linux (SELinux): Hardening the Apache Web Server

Security-Enhanced Linux (SELinux) provides a method for creating and enforcing mandatory access control (MAC) policies, which confine users and processes to the smallest amount of privilege required to perform their assigned tasks.

This blueprint introduces basic SELinux commands and concepts, and demonstrates how you can increase the security of the Apache Web server by using these concepts. It focuses on SELinux's use of Boolean variables, a set of built-in switches, or conditional policies, that you can use to turn specific SELinux features on or off.
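
To see what those Boolean variables look like on a running system, this small Python 3 sketch lists the httpd-related Booleans with getsebool and shows (commented out) how one of them would be changed persistently with setsebool. The Boolean name used is just a common example, not a recommendation from the blueprint.

    # httpd_booleans.py - list Apache-related SELinux Booleans and show how to toggle one persistently.
    # Assumes an SELinux-enabled system with policycoreutils installed; run as root to change values.
    import subprocess

    if __name__ == "__main__":
        # 'getsebool -a' prints every Boolean as "name --> on|off"; keep only the httpd ones.
        out = subprocess.run(["getsebool", "-a"], capture_output=True, text=True, check=True).stdout
        for line in out.splitlines():
            if line.startswith("httpd_"):
                print(line)
        # Example only: persistently (-P) allow Apache to make outbound network connections.
        # subprocess.run(["setsebool", "-P", "httpd_can_network_connect", "on"], check=True)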

To discuss this blueprint, visit the Linux Security Community Forum.

Installing Linux on a Multipath iSCSI LUN on an IP Network

The iSCSI standard (RFC 3720) defines transporting the SCSI protocol over a TCP/IP network in a way that allows block access to target devices. A host connection to the network can be provided by an iSCSI host bus adapter or iSCSI software initiator that uses the standard network interface card in the host.

This blueprint delivers step-by-step instructions for installing Red Hat Enterprise Linux® (RHEL) 5.3 and SUSE Linux Enterprise Server (SLES) 11 on a multipath iSCSI logical unit (LUN). The procedures covered are tested on System x® and System p® blades connected to a NETAPP storage server through an Ethernet IP network. The instructions can be adapted to install either of these Linux distributions onto other supported models of iSCSI storage devices.

To discuss this blueprint, visit the IBM Linux Storage Connectivity Blueprint Forum.

Moving partitions into an Active Memory Sharing (AMS) environment and tracking performance on SLES11

The Active Memory Sharing (AMS) environment, available on selected POWER6 models, can be used to optimize memory utilization by allowing partitions to share memory in much the same way that they share CPUs. The PowerVM hypervisor manages real memory across multiple AMS-enabled partitions, distributing memory to each partition based on its workload. With AMS, you can virtualize and share memory between partitions based on demand, which can help you reduce your investment in memory modules.

This blueprint shows you how to determine whether your particular setup is right for AMS and how to size the memory usage of multiple partitions in order to integrate them into a single AMS environment. It also demonstrates how to track the memory performance of the partitions and provides tips on fine-tuning the memory assignment in an AMS environment.

To discuss this blueprint, visit either the Community Forum for IBM's Systems Management on Linux Blueprints or the Linux for Power Architecture Forum.

Configuring Remote Crash Dump on Linux Systems

A kernel crash dump is the memory image of an operating system kernel that is written to a file. Typically, the system writes a crash dump file when the operating system experiences a serious problem such as a hang or crash.

This blueprint shows you how to enable kernel crash dumps with Kdump on systems running RHEL 5.3 and SLES 10 SP2. It includes instructions for setting up a remote server (running the same operating system version as the client) to receive crash dumps and describes how to trigger Kdump for unresponsive and hang conditions other than a kernel panic.
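
Before relying on Kdump, it is worth checking that a crash kernel is actually reserved and that a configuration file is in place; the following Python 3 sketch performs those two local checks. It is only a sanity check, not the remote-dump configuration the blueprint describes.

    # kdump_check.py - basic sanity checks that Kdump could work on this system.
    # Checks only local prerequisites; the remote-server setup itself is in the blueprint.
    import os

    if __name__ == "__main__":
        cmdline = open("/proc/cmdline").read()
        # A crashkernel= argument means memory is reserved for the capture kernel.
        if "crashkernel=" in cmdline:
            print("crashkernel reservation found on the kernel command line.")
        else:
            print("No crashkernel= argument; add one and reboot before enabling kdump.")
        # /etc/kdump.conf tells the capture kernel where to write the dump (disk, NFS, SSH, ...).
        if os.path.exists("/etc/kdump.conf"):
            print("/etc/kdump.conf is present.")
        else:
            print("/etc/kdump.conf is missing; install and configure the kdump packages.")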

To discuss this blueprint, visit the Community Forum for IBM's Systems Management on Linux Blueprints.

Using the Linux CPUFreq Subsystem for Energy Management

The Linux CPUFreq subsystem can be configured to statically set the processor operating frequency or to dynamically scale the frequency based on system load. By dynamically reducing the frequency, you can consume less power without significant performance loss.
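
The governors and frequencies involved are all exposed through sysfs, so a quick look at the current settings needs nothing more than a few file reads, as in this Python 3 sketch. The paths assume a cpufreq driver is loaded; this is an illustration, not part of the blueprint's exercise.

    # cpufreq_status.py - show the CPUFreq governor and frequency for each CPU via sysfs.
    # Assumes a cpufreq driver is loaded so /sys/devices/system/cpu/cpu*/cpufreq exists.
    import glob
    import os

    def read(path):
        with open(path) as f:
            return f.read().strip()

    if __name__ == "__main__":
        for cpu_dir in sorted(glob.glob("/sys/devices/system/cpu/cpu[0-9]*/cpufreq")):
            cpu = cpu_dir.split("/")[-2]
            governor = read(os.path.join(cpu_dir, "scaling_governor"))
            cur_khz = int(read(os.path.join(cpu_dir, "scaling_cur_freq")))
            print("%s: governor=%s current=%.0f MHz" % (cpu, governor, cur_khz / 1000))
        # To switch a CPU to on-demand scaling (as root):
        #   echo ondemand > /sys/devices/system/cpu/cpu0/cpufreq/scaling_governor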

This blueprint, built from tests on RHEL 5.3, shows you how to enable and configure the CPUFreq subsystem to control processor power-saving features and reduce system power consumption. A hands-on exercise lets you experiment.

To discuss this blueprint, visit the Energy Management for System x Blueprint Forum.

Quick Start Guide for installing and running KVM

The KVM kernel virtualization hypervisor allows you to host different guest operating systems.

Quick Start Guide for installing and running KVM delivers a handy instruction guide to get you up and running quickly with the Kernel-based Virtual Machine, a hardware-assisted, full virtualization solution for Linux on x86 hardware with virtualization extensions such as Intel® VT or AMD-V. With KVM installed, you can run multiple virtual machines, each one running a different operating system image. Each of these VMs possesses private, virtualized hardware, including a network card, storage, memory, and a graphics adapter.
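
A common first step before installing KVM is to confirm that the processor exposes the required virtualization extensions and that the kvm modules are loaded; the Python 3 sketch below checks both. It is only a readiness check under those assumptions, not part of the quick start guide itself.

    # kvm_ready.py - check for hardware virtualization support and a usable /dev/kvm.
    # Works on x86 hosts; the vmx flag indicates Intel VT and svm indicates AMD-V.
    import os

    if __name__ == "__main__":
        flags = set()
        with open("/proc/cpuinfo") as f:
            for line in f:
                if line.startswith("flags"):
                    flags = set(line.split())
                    break
        if "vmx" in flags or "svm" in flags:
            print("CPU exposes hardware virtualization extensions (Intel VT or AMD-V).")
        else:
            print("No vmx/svm flag found; check that VT/AMD-V is enabled in the BIOS.")
        if os.path.exists("/dev/kvm"):
            print("/dev/kvm exists, so the kvm kernel modules are loaded.")
        else:
            print("/dev/kvm missing; load the kvm and kvm_intel/kvm_amd modules.")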

To discuss this blueprint, visit the Linux Virtualization Blueprint Community forum.

Securing sensitive files with TPM keys

Entering multiple passphrases during the boot sequence is an impediment to automation in the server environment; a TPM can provide those passphrases automatically.

When you use encrypted partitions, you typically have to enter one or more passphrases during the boot sequence so that the kernel can decrypt them; this is not desirable in an automated server environment. Securing sensitive files with TPM keys shows you how the Trusted Platform Module (TPM), supported in enterprise Linux distributions since SLES 11 and RHEL 5.3, can be used in this type of environment to wrap the passphrases and provide them automatically to the cryptsetup command. This blueprint also describes how to set up TPM-protected dm-crypt passphrases on your system.

To discuss this blueprint, visit the Security Blueprint forum.

Using a prototype version of WBEM-SMT to manage Linux Containers (LXC)

Web-Based Enterprise Management (WBEM) technologies can greatly simplify the management of various virtualized systems, but the current release is not recommended for this purpose; this prototype version may fill that gap.

The Using a prototype version of WBEM-SMT to manage Linux Containers (LXC) blueprint demonstrates how to use a prototype version of WBEM-SMT to simplify the otherwise cumbersome task of managing Linux Containers, which provide the lightweight virtualization that makes it easier to isolate processes and resources and to handle migration without the overhead of full virtualization.

To discuss this blueprint, visit the Linux Virtualization Blueprint Community forum.

Securing KVM guests and the host system

Virtualization features such as management simplification and resource sharing require that your system be secure; this blueprint presents a variety of options for setting up your KVM host so that you can manage your KVM guests securely and remotely.

This blueprint is loaded with security techniques and options. It demonstrates secured access using SSH, SASL, and TLS. You'll learn two secured network-port-sharing methods (a simple Linux bridge and 802.1Q VLANs) and how to create ebtables rules to filter the traffic of these bridges, how to use the Linux Audit subsystem to log changes and suspicious activities, and how to use block-level disk encryption to secure your KVM guests' images at rest. (Based on a RHEL 5.5 host system.)
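
As one small example of the remote-management side, libvirt connections can be tunnelled over SSH instead of using an unencrypted TCP socket; the Python 3 sketch below lists a remote host's guests that way. The host name is a placeholder, and this shows only one of the transport options (SSH) that the blueprint compares.

    # remote_guests.py - list the guests on a remote KVM host over an SSH-tunnelled libvirt connection.
    # The host name is a placeholder; assumes virsh is installed locally and SSH key access to the host.
    import subprocess

    HOST = "kvmhost.example.com"   # placeholder remote KVM host

    if __name__ == "__main__":
        uri = "qemu+ssh://root@%s/system" % HOST
        subprocess.run(["virsh", "-c", uri, "list", "--all"], check=True)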

To discuss this blueprint, visit the Linux Virtualization Blueprint Community forum.

Managing Linux On Power virtual appliances using IBM Systems Director VMControl

Use the VMControl plug-in for IBM Systems Director to capture existing installations as virtual appliances and to deploy virtual appliances to create new virtual servers that are configured with these operating systems and software applications.

This blueprint discusses how to use VMControl to manage images of operating systems and software applications and how to capture a software and hardware configuration that you can deploy as needed. It covers such key topics as VMControl, image repositories, and virtual appliances and outlines the steps necessary to set up the environment.

To discuss this blueprint, visit the Linux Virtualization Blueprint Community forum.

SAP 2-tier Sales and Distribution Tunings for Linux on POWER7

Learn about performance tuning for the SAP 2-tier Sales and Distribution (SD) application running on IBM POWER7-based systems. The SAP SD module is a part of the SAP Enterprise Resource Planning suite.

This blueprint discusses key tools and technologies such as NUMA, the CFQ scheduler, SAP instances, the SD benchmark, SMT4 mode, and barriers. In previous tests, performance of the test environment increased by a factor of two after the recommended tuning steps in this blueprint were applied.

To discuss this blueprint, visit the Linux Virtualization Blueprint Community forum.

NEW: Setting up an HPC cluster with Red Hat Enterprise Linux

Learn to set up and manage a cluster of compute nodes using common open-source-based cluster and resource management tools on POWER®-based systems running Red Hat Enterprise Linux 5.5.

Key tools and technologies detailed in this blueprint include ATLAS, TORQUE, Maui, and Open MPI; examples show how to use the Open MPI, TORQUE, and Maui open source software to define and manage a small set of compute servers interconnected with InfiniBand in a classic High Performance Computing (HPC) cluster.
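
To give a feel for how such a cluster is used once it is running, the Python 3 sketch below writes a minimal TORQUE job script that launches an Open MPI program and submits it with qsub. The resource request and the program path are placeholders for your own cluster and binary, and this is an illustration rather than an excerpt from the blueprint.

    # submit_mpi_job.py - create and submit a minimal TORQUE job that runs an Open MPI program.
    # The nodes/ppn request and the program path are placeholders for your own cluster and binary.
    import subprocess

    JOB_SCRIPT = """#!/bin/bash
    #PBS -N mpi_hello
    #PBS -l nodes=2:ppn=4
    #PBS -j oe
    cd $PBS_O_WORKDIR
    mpirun ./mpi_hello   # Open MPI picks up the allocated nodes from the TORQUE environment
    """

    if __name__ == "__main__":
        with open("mpi_hello.pbs", "w") as f:
            # Strip the indentation used for display so the job script starts at column one.
            f.write("\n".join(line.lstrip() for line in JOB_SCRIPT.splitlines()) + "\n")
        # qsub prints the new job identifier on success.
        job_id = subprocess.run(["qsub", "mpi_hello.pbs"],
                                capture_output=True, text=True, check=True).stdout.strip()
        print("Submitted job", job_id)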

To discuss this blueprint, visit the HPC Central Technical Forum.

NEW: Setting up an HPC cluster with SUSE Linux Enterprise Server

Learn to set up and manage a cluster of compute nodes using common open-source-based cluster and resource management tools on a small cluster of POWER6® or POWER7® systems that are running SUSE Linux Enterprise Server 11 SP1.

Key tools and technologies detailed in this blueprint include ATLAS, TORQUE, Maui, and Open MPI; examples show how to use the Open MPI, TORQUE, and Maui open source software to define and manage a small set of compute servers interconnected with InfiniBand in a classic High Performance Computing (HPC) compute-intensive cluster.

To discuss this blueprint, visit the HPC Central Technical Forum.