By: Breno Leitão
This technical preview tutorial explains how users of IBM's latest POWER8-based scale-out Linux servers can try Ubuntu running non-virtualized. We show how Ubuntu can be installed directly on the OPAL firmware, and run as a single-image operating system directly on the system.
Ubuntu 14.04 is generally available today and fully supported as a PowerKVM guest on the IBM Power Systems shown below:
For more details on Ubuntu 14.04 - see Canonical's What’s new in 14.04 LTS document.
It is also possible to run Ubuntu 14.04 directly on these systems, which the development teams refer to as non-virtualized mode, or "bare-metal". There is no PowerVM LPAR layer, and there is no PowerKVM hosting layer. This capability is available in the open-source communities, so over time new versions of other Linux distros are expected to be enabled for this support as well.
Note: If you are running Ubuntu 14.04 non-virtualized, you need to upgrade the kernel packages to get cpufreq support. The 3.13.0-32 kernel level works.
The OPAL firmware referenced below is designed to allow a Linux operating system to run directly on the POWER8 system. Running directly on the system enables the operating system to be a KVM host, creating and controlling KVM guests. In the scenario described in this article, there is no KVM hosting, and Ubuntu 14.04 runs as an operating system directly on the system.
Because the OPAL firmware enables the PowerKVM mode, the terminology used in selecting the firmware below is targeted at that mode. In practice, OPAL firmware enables a Linux operating system to run directly on the system, and running KVM in that operating system is not a requirement.
Technical Preview only at this time. The ability to run Ubuntu directly on the POWER8 Linux-only system is provided as-is, is not a supported configuration option at this time, and therefore, is not for production use. This ability is provided as a technical preview only. If you should encounter any problems running with the non-virtualized technical preview you can report your bugs against Ubuntu in Launchpad. Alternatively, you can always ask a question in the Forums here on the Community!
You are over-writing your PowerKVM install. These instructions replace (destroy) your existing PowerKVM installed host and all of the guests. The PowerKVM software can be re-installed at a later time, and your guests can be re-created.
Your system must have access to the external web for access to Canonical's netboot server - or you will need a DVD image downloaded and burned to a DVD. These instructions assume a netboot load.
1. In order to install Ubuntu 14.04 on the IBM Power system, the system needs to be set for KVM as the Hypervisor mode. This step selects the OPAL Firmware to be loaded. If you build the system with the PowerKVM configuration, you are ready to go, otherwise, you can configure it using the following steps:
Turn off the server, then go to the server's Advanced System Management (ASM) interface, under System Configuration ⇒ Hypervisor Configuration, set the hypervisor mode to KVM (or OPAL), and choose an IPMI password.
2. Once the machine is in PowerKVM mode, you need to connect to the FSP using IPMI to get the machine console. Run the following IPMI commands:
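A typical sequence with ipmitool is sketched below; the FSP address and password are placeholders for the values you configured in ASM, not values from this article:

```shell
# Power the system on via the FSP (placeholder address and password)
ipmitool -I lanplus -H <fsp-ip-address> -P <ipmi-password> chassis power on

# Attach to the machine console over IPMI Serial-over-LAN
ipmitool -I lanplus -H <fsp-ip-address> -P <ipmi-password> sol activate
```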
Once you run the last command, you will see the machine console, and everything you type will be sent to the machine. To exit the console, type ~. (and ~? shows the help menu).
3. Once the machine is booted up, you will see the petitboot console, as shown below:
Petitboot is the bootloader for the IBM Power machines configured with PowerKVM. From here, you can insert an Ubuntu DVD into the machine's DVD-ROM drive and boot from it. You can also boot from the network.
This document will explain how to install from the network.
4. In order to install from the network, you need to configure the system network in the 'System Configuration' menu entry. Once the network is configured, you can create a new entry in petitboot by pressing the letter 'n'. By creating a new entry, you will go into the Option Editor to configure the entry details, as shown:
Once you are editing the boot entry, you should choose the 'Specify paths/URLs manually' option and you must provide the installer kernel and initrd. For Ubuntu, you should point these to the Canonical Ubuntu 14.04 netboot website.
In this example, I used version 14.04 with the following URLs:
For a more recent version, such as 14.04.1 or 14.04.2 and later updates, check the download section of the Ubuntu ppc64el wiki page.
You do not need to fill out the other entries if you just want a default installation. Once you finish the configuration, go back to the petitboot menu and find the entry you just configured, "User Item 1".
Then boot that entry by pressing 'Enter', and the Ubuntu 14.04 installer launches, as shown:
When you see this screen, select the language you want to use during the installation and proceed through normal Ubuntu 14.04 installation processes.
For more information about Ubuntu installation process, check Ubuntu 14.04 Installation guide. For more information about the Petitboot, you can check the IBM PowerKVM RedBook.
By: Brent Baude.
On May 11th, Fedora announced a beta for F17 for ppc64. This is another milestone in the march towards Fedora 17 for the powerpc architecture.
The beta announcement itself can be found here.
Yeah, lots of packages have been updated and so forth but there are two interesting pieces I'd like to draw some attention towards. Firstly, adoption of grub2 continues in the beta. We smoothed out some of the rough edges since the alpha timeframe and have a number of additional patches we'll push as well.
Secondly is that rpm and yum now are equipped to deal with a ppc64p7 subarch. I'll write up more on this topic as we near or pass Fedora 17 General Availability, but the basic function is that we have a POWER7 subarch (akin to i686 for x86) where certain optimizations are passed to the binary rpms. This is an exciting time for the architecture and Fedora!
Stay tuned as we catch our breath and begin to share more!
Need Java 7? We got Java 7.
By: Bill Buros.
Last year we posted that the IBM Java 7 was in open beta mode, but along the way we neglected to post the news that the IBM Java 7 kits are now fully available across a number of platforms and operating systems. By now this is relatively old news (IBM Java 7 was announced back in September 2011), but it's always good to remind people and teams that it's available.
This latest IBM Java is available on Power systems and across the RHEL and SLES distros available for Power. We use the latest Java 7 across a number of performance workloads and benchmarks. Customers and product teams can and do take advantage of the new features available in Java 7.
A news update on IBM SDK Java Technology Edition Version 7 is available here
There's an extensive set of Information Center materials available for the Java SDK. Included in the Information Center materials are sections for
- Planning, installing, and configuring the SDK and Runtime Environments
- Developing and running Java applications
- Performance and security considerations
- and of course troubleshooting and support
Want to see a video? Check out an interview with Trent Gray-Donald on IBM SDK Java Technology Edition V7 over on YouTube. In that video, Trent (the IBM Java 7 Technical Lead) joins Scott Laningham to talk about how the IBM SDK Java Technology Edition V7 differs from previous releases, the impact of IBM joining OpenJDK, and more.
By: Bill Buros.
This morning we got into a good discussion about the simple things we check on a new system or partition for Power Linux.
The list was easy.
- How many cores?
- What SMT mode are the cores running in?
- How fast are the cores running?
- How much memory is available?
- How is the memory balanced?
- And is any of the memory tied up in Hugepages?
So we wrote a quick script to get that information and thought we'd post it here.
#!/bin/bash
# An easy script to check vital system info
echo "Check the number of cores enabled"
ls /proc/device-tree/cpus/ | grep -c PowerPC
echo -e "\nCheck the SMT mode:"
ppc64_cpu --smt
echo -e "\nCheck the expected number of CPUs that are online (based on SMT):"
grep -c proc /proc/cpuinfo
echo -e "\nCheck how fast the CPUs are running"
ppc64_cpu --frequency
echo -e "\nCheck the total system memory and free memory available:"
grep Mem /proc/meminfo
echo -e "\nCheck how the total memory for each NUMA node is balanced:"
cat /sys/devices/system/node/node*/meminfo | grep MemTotal
echo -e "\nCheck how the free memory for each NUMA node is balanced:"
cat /sys/devices/system/node/node*/meminfo | grep MemFree
echo -e "\nCheck for any HugePage usage:"
grep Huge /proc/meminfo
Copy/paste that to your system into a script, turn the executable bit on, and give it a shot.
> vi verify.sh
(copy/paste the above into the script - file and exit)
> chmod +x verify.sh
When you run it, it should give you a quick listing of what you have on your system or partition. Here's a quick example, which I ran in a POWER7 LPAR defined with 8 cores and 64GB of memory.
Check the number of cores enabled
Check the SMT mode:
SMT is on
Check the expected number of CPUs that are online (based on SMT):
Check how fast the CPUs are running
min: 3.56 GHz (cpu 30)
max: 3.56 GHz (cpu 18)
avg: 3.56 GHz
Check the total system memory and free memory available:
MemTotal: 64208512 kB
MemFree: 62304000 kB
Check how the total memory for each NUMA node is balanced:
Node 0 MemTotal: 65536000 kB
Check how the free memory for each NUMA node is balanced:
Node 0 MemFree: 62304000 kB
Check for any HugePage usage:
Hugepagesize: 16384 kB
Thanks to Jenifer for the quick script. Next we'll check the distro and the various important packages that we use for performance.
By Jeff Scheel
As you likely have heard, Arvind Krishna, IBM General Manager for Development and Manufacturing in the IBM Systems & Technology Group, announced that Power Systems would be supporting KVM. This is an exciting announcement for numerous reasons that I'll defer to another posting. For this blog entry, I thought I'd do a question-and-answer session based on common questions I've been asked in the past couple of weeks. However, before I do so, I need to remind you that these are our current thoughts at this time: things may change.
Q: When will KVM be available on Power?
A: The outlook for general availability is next year. However, IBM has already started releasing patches to various KVM communities to support the POWER platform.
Q: On what systems does IBM intend to support KVM?
A: IBM intends to initially support KVM on a limited set of models, targeted at the entry end of the system servers. This strategy supports IBM's efforts to capture the fastest-growing market: x86 Linux servers in the 2-socket and smaller space.
Q: How does IBM plan to position KVM against PowerVM?
A: IBM remains committed to PowerVM being the premier enterprise virtualization software in the industry. With KVM on Power, IBM will be targeting x86 customers on entry servers but will offer both KVM and PowerVM to meet the varying virtualization needs of PowerLinux customers. However, KVM virtualization technology represents an opportunity to simplify customers' virtualization infrastructure with a single hypervisor and management software across multiple platforms.
Q: What Linux versions from Red Hat and SUSE will provide KVM hosts support on Power?
A: The decision to provide KVM on PowerLinux will be made by Red Hat and SUSE. IBM will be working with them in the months to come and would welcome their support.
Q: What management and cloud software will support KVM on Power?
A: For KVM node management, IBM intends to work with multiple vendors, including Red Hat and SUSE to certify KVM on Power into their system management software offerings. Additionally, IBM plans to contribute any patches necessary to OpenStack to extend the KVM driver to Power. Using this foundation, additional IBM and third-party software should provide a diverse set of management software.
Q: What will software providers need to do to support KVM on Power?
A: Most software providers have become comfortable with some form of virtualization such as PowerVM, VMware, and KVM. Just as with applications on Linux, software providers should find that applications in the KVM environment behave similarly on x86 and Power platforms. As such, each vendor should understand any challenges KVM on Power would present.
Q: What operating systems will be supported as guests in KVM on Power?
A: Given that KVM is initially targeted to be released on Linux-only servers, only Linux is planned at this time. IBM plans to certify the latest updates of RHEL 6 and SLES 11 as KVM guests.
Q: How will KVM run on the Power Systems?
A: The design goal of KVM on Power is to be just another hardware platform supporting KVM. As such, the KVM on Power will be true to the KVM design point of a KVM host image that supports one or more guests. PowerVM constructs such as the HMC, IVM, and VIOS will not exist in KVM. Management and virtualization will occur through the KVM host image.
Q: Will KVM run in a PowerVM logical partition (LPAR)?
A: While KVM supports a user-mode virtualization that can run on any Linux operating system, KVM on Power is being developed to run natively on the system, not nested in PowerVM. This is done to enable KVM to run optimally using the POWER processor Hypervisor Mode. As such, the system will make a decision very early in the boot process to run KVM or PowerVM. This is envisioned as a selectable option managed by the Service Processor (FSP).
Q: Will it be possible to migrate from KVM on Power to PowerVM or vice versa?
A: While the virtualization mode will be selectable on systems, the process of migrating between KVM and PowerVM will require additional steps, such that frequent migrations will be unlikely. However, in the case where a customer wishes to upgrade to PowerVM to acquire advanced virtualization capabilities, this migration should be supported. Steps to back up and restore the VM image will be required when migrating in either direction.
Q: Will AIX or IBM i run in KVM on Power?
A: Given that KVM initially runs on Linux-only platforms, support for non-Linux operating systems has not been planned at this time.
Q: Will Windows run in KVM on Power?
A: Windows does not run on Power Systems. As such, supporting it in a KVM guest VM will not work.
Hopefully, these questions were helpful to folks. As usual, follow-up questions/comments appreciated.
By: Ralph Nissler. If your company is an IBM Business Partner or if you are an IBM employee, this might interest you. No-cost remote access to POWER5, POWER6, and POWER7 systems will also still be provided:
Today the Virtual Loaner Program (VLP) is announcing support for IBM's latest Power Systems hardware using the POWER7+ processor. Standard reservations can now be processed on systems using the new POWER7+ processor.
In addition, several OS images have been updated to the latest OS support levels. Currently provided OSs and releases for the POWER7+ systems are AIX 7.1 TL2 SP2, RHEL 6.4, and SUSE 11 SP2. Images saved from these OS releases are supported on previous hardware levels as well; however, system images saved prior to this announcement will not restore on the POWER7+ systems.
For access to these new systems, please go to the VLP web site at http://ibm.com//systems/vlp
For more information about IBM POWER7+ Systems - http://www-03.ibm.com/systems/power
By Kent Yoder
I've just released a white paper with some info on using the encryption and random number generation hardware accelerators in the POWER7+ CPU in Fedora Linux. It covers a few areas:
1. Background on the hardware and software architecture
2. Setting up Fedora to use the accelerators for disk encryption and IPSec
3. How to monitor that you're really getting hardware acceleration
Hope you find it useful, and don't hesitate to send me questions or comments!
The MiniCloud provides free access to Power® virtual machines. It allows easy access to an environment that can be configured for development, testing or migration of applications to Power. The virtual machines of MiniCloud run on PowerKVM™, which supports running a large number of virtual machines on a single scale-out Linux server. MiniCloud is hosted at State University of Campinas - Unicamp, Brazil.
To request access to a VM, you first need to access the MiniCloud website, read the terms and conditions of usage, and fill out the form with the following information:
Name, which identifies the requester;
Email, which allows contact with the requester;
The desired operating system, which will be installed on the VM;
A message, which must contain a brief explanation of why you are requesting access;
SSH public key, for a more secure way of logging into your VM.
Figure 1: Request access form
After pressing the submit button you will see this message:
Figure 2: Request confirmation
Also, a confirmation message will be sent to your email address:
Figure 3: Confirmation message
Then, you must wait. The MiniCloud team will evaluate your request and create your machine as soon as possible.
Note: the machine creation depends on the availability of resources.
Once your machine is ready, you will receive an email with the access information. It looks like:
Figure 4: Access Information
Now you can access your VM and start working.
Important: the VM will be available for one month. If you intend to use it for an extended period of time, let the MiniCloud team know.
Request access; it is free!
A new Solution Guide on implementing MongoDB on IBM Power Systems running Linux, featuring the new IBM POWER8 technology, is available.
This white paper describes MongoDB (from "humongous"), an open source NoSQL-style document database with dynamic schema. The paper discusses the main features of the MongoDB solution and its architecture and implementation on IBM Power Systems running Linux. The target audience is users and system integrators who implement MongoDB on IBM Power Systems running Linux servers. No familiarity with MongoDB or IBM Power Systems running Linux is required; however, some familiarity with the basics of Linux commands is required.
OS: Red Hat Enterprise Linux 6.5
MongoDB: 2.4.9 (ppc64 version)
- Both mongod and mongo shell instances ran on the same server
- UNIX domain sockets were used as the communication method (also the default)
There are two ways in which users can install MongoDB on Power Systems running Linux:
Install from prebuilt MongoDB binary files available from IBM.
Build and install MongoDB binary files from the MongoDB source.
Prebuilt MongoDB package
By Rafael Folco, Advisory Software Engineer
In today's post we show an example of how to work around virtio limitations and successfully spawn a little-endian instance in OpenStack. This post assumes you have an upstream version of OpenStack already installed with Nova (compute) and Glance (image) services up and running. Check Devstack for a quick way to deploy OpenStack on your system.
Currently, PowerKVM supports LE (Little-Endian) guests with Canonical’s Ubuntu Server 14.04 distribution. LE cloud images can be found at https://cloud-images.ubuntu.com/. This blog post has a good explanation about Little-Endian support on Power.
At the time of writing, there is a limitation for ppc64el images with the 'virtio' driver model, which doesn't work with vhost (kernelspace). OpenStack generates the libvirt XML for its guests using virtio settings with vhost. In other words, there is no way to use virtio without vhost in OpenStack. Default 'virtio' configuration (with vhost) adds "<model type='virtio'/>" to the guest libvirt settings.
To work around this issue, use one of the following approaches:
Use ibmveth (spapr-vlan) driver (recommended)
Disable vhost-net module
Turn vhost mode off by using driver name='qemu' in the libvirt XML configuration
Use virtio-net model type (not officially supported)
ibmveth (spapr-vlan) driver
ibmveth and ibmvscsi are legacy drivers derived from PowerVM and are now supported on PowerKVM, as explained in this blog post.
Currently, OpenStack upstream does not support the ibmveth driver. This driver support in OpenStack is being addressed by https://review.openstack.org/#/c/106451/. You may need to manually apply this change until it gets merged upstream.
Disable vhost-net module
This workaround doesn't require any change in OpenStack. By disabling the vhost-net module, you force vhost=off when qemu runs the virtual machine.
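A minimal sketch of this workaround (run as root; the blacklist file name below is an arbitrary choice, not a standard one):

```shell
# Unload vhost-net so qemu starts guests with vhost=off
modprobe -r vhost_net

# Keep the module from being loaded again on reboot
echo "blacklist vhost_net" > /etc/modprobe.d/blacklist-vhost-net.conf
```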
This option uses qemu driver (userspace) instead of vhost (kernelspace) for the virtio model.
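As a sketch, the guest's interface definition in the libvirt XML would carry the qemu driver name; the surrounding interface element here is illustrative, not taken from an actual OpenStack-generated domain:

```xml
<interface type='network'>
  <model type='virtio'/>
  <!-- driver name='qemu' selects the userspace backend, i.e. vhost=off -->
  <driver name='qemu'/>
</interface>
```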
This model type 'accidentally' worked in my tests. Although it works, it is not officially supported and should not be relied on for long-term usage. Here is an example of the 'virtio-net' model configuration in the libvirt XML:
<interface type='network'>
  <model type='virtio-net'/>
  <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x0'/>
</interface>
In order to boot a little-endian image in OpenStack, you first need to upload it to the image service component, Glance. The following commands download and upload the image to Glance:
$ wget https://cloud-images.ubuntu.com/trusty/current/trusty-server-cloudimg-ppc64el-disk1.img
$ glance image-create --name Little-Endian --file trusty-server-cloudimg-ppc64el-disk1.img --disk-format=qcow2 --container-format=bare --property hw_vif_model='spapr-vlan'
To spawn an instance of the Little-Endian image, run:
$ nova boot --image 7febf98f-93ac-48e7-9377-a17f2bfa2077 --key-name mykey --flavor 3 test
Log in to the guest using the key you provided, as shown below:
ssh -i mykey.pem ubuntu@10.0.0.2
The authenticity of host '10.0.0.2 (10.0.0.2)' can't be established.
ECDSA key fingerprint is d8:39:f4:32:cd:04:e9:3b:17:c1:f9:44:d6:91:1b:e0.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added '10.0.0.2' (ECDSA) to the list of known hosts.
Welcome to Ubuntu 14.04 LTS (GNU/Linux 3.13.0-30-generic ppc64le)
* Documentation: https://help.ubuntu.com/
System information as of Mon Jul 14 22:21:54 UTC 2014
System load: 6.7 Memory usage: 2% Processes: 68
Usage of /: 58.3% of 1.32GB Swap usage: 0% Users logged in: 0
Graph this data and manage this system at:
Get cloud support with Ubuntu Advantage Cloud Guest:
0 packages can be updated.
0 updates are security updates.
The programs included with the Ubuntu system are free software;
the exact distribution terms for each program are described in the
individual files in /usr/share/doc/*/copyright.
Ubuntu comes with ABSOLUTELY NO WARRANTY, to the extent permitted by applicable law.
ubuntu@test:~$ uname -m
ppc64le
Confirm your network device is using ibmveth driver:
ubuntu@test:~$ find /sys/devices/vio/ -iname "*eth*"
For more information about IBM PowerKVM refer to the Redbook.
by Jeff Scheel, IBM Linux on Power Chief Engineer
As promised, here is my first blog post on little endian, or "LE" as we call it. What better place to start than with a list of frequently asked questions (FAQs)? Hopefully you'll find this helpful. Let me know if you have any questions I missed.
What is big endian and little endian, anyway?
In order to perform operations on data, computers routinely load and store bytes of data from and to memory, the network, and disk. This data management generally follows one of two schemes: little endian or big endian.
Imagine the number one hundred twenty three. When representing this number with numerals, we typically write it with the most significant digit first and the least significant digit last: 123. This is big endian. Mainframes and RISC architectures like POWER default to big endian when manipulating data.
Some microprocessor architectures store the digits of one hundred twenty three in reverse, with the least significant digit first and the most significant digit last: 321. This is little endian. x86 architectures use little endian when storing data.
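You can see which scheme your machine uses from the shell. This sketch writes the two bytes 0x01 0x00 to a pipe and reads them back as a single 16-bit word in the host's byte order:

```shell
# A little endian host reads the bytes 01 00 as the word 0001 (LSB stored
# first); a big endian host reads them as 0100.
printf '\1\0' | od -An -tx2 | tr -d ' \n'
```

On an x86 machine this prints 0001; on a big endian Power Linux system it prints 0100.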
Why do people care about what endian mode their platform runs?
Most users do not care which endian mode their platform is using. They simply care about which applications are supported by their Linux operating systems. Only application providers care about endianness. For example:
A software developer that has code manipulating data through pointer casting or bitfields would not be able to simply recompile an application for one endian mode to another.
A user with large amounts of data stored to disk or exchanged among systems over network connections without consideration of endian schemes risks a range of application failures from very subtle to complete failures.
A system accelerator programmer (GPU or FPGA) who needs to share memory with applications running in the system processor must share data in a pre-determined endianness for correct application functionality.
Why is Linux on Power transitioning from big endian to little endian?
The Power architecture is bi-endian in that it supports accessing data in both little endian and big endian modes. Although Power already has Linux distributions and supporting applications that run in big endian mode, the Linux application ecosystem for x86 platforms is much larger and Linux on x86 uses little endian mode. Numerous clients, software partners, and IBM’s own software developers have told us that porting their software to Power becomes simpler if the Linux environment on Power supports little endian mode, more closely matching the environment provided by Linux on x86. This new level of support will lower the barrier to entry for porting Linux on x86 software to Linux on Power.
Which Linux distributions will support little endian on Power?
So far, only Canonical’s Ubuntu Server 14.04 distribution supports little endian on Power. Plans are underway in the community distributions of Debian and openSUSE for little endian releases.
Additionally, SUSE has stated publicly that SLES 12 will be little endian when it becomes available. See SUSE Conversations for more information.
Red Hat has not yet publicly disclosed their plans around a little endian operating system. However, work to create a ppc64le architecture has started in Fedora.
Which Linux distributions will support big endian on Power?
It is IBM's understanding that Red Hat and SUSE will continue to support their existing big endian releases on Power for their full product lifecycles.
While SUSE has announced their plans to transition their distribution to little endian (see above), Red Hat has not disclosed anything. The newly available Red Hat Enterprise Linux 7 operates in big endian mode on Power. Specifics about the transition to little endian will be decided and disclosed by Red Hat.
What about Linux applications that have already been optimized for big endian on Power?
The existing PowerLinux application portfolio supports only big endian modes today. Open source applications have begun extending their support to little endian mode on Power Systems. Existing third party and IBM applications will likely migrate more slowly and deliberately. As such, Power hardware will support both endian modes for the foreseeable future so that existing Linux applications optimized for a big endian platform will continue to run unchanged while new applications optimized to little endian mode are added.
Can applications compiled for x86 (Windows or Linux) run without change on little endian Power?
Because the x86 and Power processors use different instruction set architectures (ISAs) – the binary executable known to the processor – compiled applications will need at least a recompile on the new platform. Whether source code changes are required depends on how many optimizations have been made in the application source – such as the use of assembler language and any assumptions about page size or cache line size, etc.
However, interpreted applications such as those in Java, perl, python, php, ruby and others should be capable of migrating with little to no change.
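One concrete example of the page-size assumption mentioned above: applications can query the page size at run time rather than hard-coding it, since the value differs across platforms:

```shell
# Print the system page size in bytes: typically 4096 on x86 Linux,
# and commonly 65536 on ppc64/ppc64le Linux distributions
getconf PAGESIZE
```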
Does this transition affect application ecosystems for AIX or IBM i?
No, there will be no effect on AIX or IBM i application environments as a result of this change.
What if I want to run a mix of big endian and little endian applications on the same Power System?
Each Linux distribution will support a particular endian mode, little or big. Applications always certify to specific distributions. As such, endian mode decisions should be transparent to the end user. Customers should not have to consider endianness in their application choice.
If one requires different Linux distributions or the same distribution at different releases on a single server, then Power Systems virtualization (LPARs or VMs) allows customers to run applications supported by a big endian Linux distribution like RHEL6 as well as applications supported by a little endian distribution like Canonical’s Ubuntu Server at the same time. However, concurrent little endian and big endian support on the same server will not be available until a future date. See more details in the questions below.
Which POWER processors support little endian mode?
The POWER8 processor is the first processor to support little endian and big endian modes equivalently. Although previous generations of the POWER processors had basic little endian functionality, they did not fully implement the necessary instructions in such a way to enable enterprise operating system offerings.
Where can little endian distributions run on Power?
When IBM announced POWER8 in April 2014, little endian (LE) operating systems were initially supported as KVM guests. Further, KVM support was limited to only include all LE or all big endian (BE) guests. In coming releases, IBM expects to support concurrent LE and BE guests in KVM, as well as the support of LE guests on PowerVM.
Do POWER systems support the running of mixed environments of big and little endian operating systems?
The POWER8 processor supports mixing of big and little endian memory accesses at the core level, through the use of SPR (special purpose register) settings. While this could technically support the running of both big and little endian software threads, the complexity of implementing such a design point would be high. Therefore, IBM has elected to enable operating system versions as completely big endian or little endian by design.
The virtualization capabilities of the POWER platform have allowed for mixed environments of operating system levels and types. This same isolation mechanism applies to big and little endian operating systems. However, in implementing the initial releases of little endian, IBM has introduced some short-term limitations on where LE operating systems can run. Over time, these will be removed and both KVM and PowerVM will support concurrent mixing of LE and BE operating systems.
See the previous question for more information.
Does PowerVM support little endian operating systems?
While the POWER8 systems support little endian (LE) mode, IBM has not yet completed the software development and testing to enable LE operating systems on PowerVM. The outlook is that this function will be delivered around mid-2015. When this capability is delivered, PowerVM will support the mixing of both big endian (BE) and LE operating systems. This enablement will also enable the running of LE operating systems on the Power Integrated Facilities for Linux (IFLs).
Does PowerKVM support mixing of little endian and big endian operating systems?
Testing has not yet completed to enable the mixing of little endian (LE) and big endian (BE) guests for KVM. Until this completes, IBM supports guests of the same type – all LE or all BE.
IBM hopes to support mixing of guest types around mid-2015.
Can I run big endian applications on a little endian operating system or vice versa?
No, the operating system enablement only supports applications of the same type. As such, a little endian operating system (ppc64le or ppc64el) can only run little endian applications built for this software platform. Likewise, big endian operating systems (ppc64) only support software built for big endian.
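The byte-order difference is easy to see from a shell. GNU od (coreutils 8.23 and later) can decode the same four bytes under either convention:

```shell
# Interpret the byte sequence 01 02 03 04 as a single 32-bit word:
printf '\001\002\003\004' | od -An -tx4 --endian=little | tr -d ' \n'; echo
printf '\001\002\003\004' | od -An -tx4 --endian=big    | tr -d ' \n'; echo
# little endian reads it as 04030201; big endian as 01020304. Binaries
# and on-disk data built for one layout are not usable by the other.
```

This is exactly why an LE operating system can only run LE applications: the two conventions disagree about what every multi-byte value in memory means.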
January 23, 2015 - Author's update
A couple of noteworthy activities have occurred since this blog was originally published.
A little endian (LE) version of RHEL 7.1 has been released in beta form. This announcement indicates that RHEL 7 updates will have both the existing big endian (BE) offering and a new LE offering. For more information about the beta, see the RHEL 7.1 beta announcement information. This means that all three Linux on Power distribution partners -- SUSE, Canonical, and now Red Hat -- have LE operating systems.
IBM PowerKVM now supports the mixture of BE and LE guests beginning with the 2.1.1 update in October 2014. This was a subtle change that is hard to find in documentation.
Support for LE operating systems on PowerVM continues to make progress toward a delivery sooner rather than later this year. When this is delivered, the mixing of BE and LE logical partitions will be supported.
Additionally, the following question keeps being asked and needs its own FAQ:
Can I run x86 Linux applications on LE Linux on Power operating systems unchanged?
If your application was written in a dynamic language, it is highly portable and often migrates to BE and LE Linux on Power operating system environments without change. Examples include applications written in Java, PHP, Perl, Python, etc.
If your application was written in a compiled language like C/C++, it must be recompiled on Power in both the BE and LE operating systems. Applications migrating from x86 Linux onto an LE Linux operating system on Power will migrate without concern for data layout (endianness). Applications migrating onto BE operating systems need to be reviewed for consistent data access, especially if they will share data using disk or networking with LE systems.
Modified on by PowerLinuxTeam
Over on IBM's DeveloperWorks Linux Zone, there's a new offering from SiteOx.com which allows anyone to easily get access to a deployed Ubuntu image running on IBM's Power systems.
This service is offered as an easy way for all developers to get access to Ubuntu and Power. There's even a free two-week trial period. Costs start at $3 per day.
(Update: See Day Two experiences in the next article)
Interesting.. I wonder how easy this really is. Let's try.
Quiet disclaimer: while I am generally well-wired into many things around the Linux on Power platform in IBM, in this case, I'm approaching this from my own world, my personal gmail account, my own personal credit card, and poking at things like I want to. I'll provide screen shots.. tests.. observations. It's a good way to catch up on some technical work anyway.
How hard is this to get a system?
Starting from siteox.com.
I "Registered" myself as a new account. Normal email validation. There's of course a variety of T's and C's to agree to. I generally don't pay much attention, but they seemed reasonable. No production work. Don't put confidential or personal information here. They're not liable for anything. Blah blah blah.
I selected a new service order. I wanted an Ubuntu guest. The default account is 1 processor, 2 GB memory, 20GB storage. Their terminology is "Linux on Power VM". The two week trial was good for $27 ($3 a day). I selected 4 processors. That added $3 to the two week period. I left the 2GB memory and 20GB storage as the default.
Various screen sequences to enter my credit card and confirm the order. Completed the order at 7:30am my time. Quickly received numerous confirmation type emails. Around 7:40am, I got the email that said my system was ready. Login information provided.
I ssh'ed to the server. Checked the processors. Power7 and four CPUs.
Quick observation here. Ubuntu was built to run on both POWER7 and POWER8 systems, but it is only officially *supported* on POWER8, where it is optimized and exploits the capabilities of the POWER8 processor. Ubuntu will not be officially supported on POWER7. Since POWER8 servers are not yet shipping (coming in June), a POWER8 system cannot be used for this open hosting setup yet. Thus, POWER7.
I've added a couple of screen shots below.
Logged in. Check.
What processors? What SMT mode? What frequency? Oops. Need to be root. Yep, root access provided. Check.
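For anyone repeating the exercise, the checks I ran amount to a few commands. Note that ppc64_cpu comes from the powerpc-utils package and only exists on Power; nproc and /proc/cpuinfo work on any Linux:

```shell
# Quick system checks after logging in:
nproc                                      # online logical CPUs
grep -m1 '^cpu' /proc/cpuinfo || true      # processor model line (field names vary by arch)
ppc64_cpu --smt 2>/dev/null || true        # SMT mode (Power only, needs root)
ppc64_cpu --frequency 2>/dev/null || true  # frequency (Power only, needs root)
```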
That's enough to start the day. More later... Next I have to try copying files to that guest image, building, compiling, running, etc.
I'm a little disappointed that the Advance Toolchain (recently enabled for Ubuntu) isn't already preloaded on the image. Will need to check on that. 'Course, that just came out last week.
Modified on by Bill_Buros
Debian is one of my favorite distributions with its fast installation and light-weight deployment.
Actually, it is not officially supported on PowerLinux machines, but I knew that a ppc version exists, and I wanted to see one in action.
Recently, there was an opportunity to test Debian on Power.
I downloaded the basic image and gave it a try. It was a nice experience; there were no problems with the installation.
Checking the version and kernel:
root@debian:~# cat /etc/debian_version
root@debian:~# uname -a
Linux debian 3.2.0-4-powerpc64 #1 SMP Debian 3.2.41-2+deb7u2 ppc64 GNU/Linux
All hardware devices seem to be present (not the full output, just a partial listing):
width: 32 bits
physical id: 0
physical id: 0
logical name: /proc/device-tree
I installed MySQL and Apache and put some load on them. My tests were successful.
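If you want to reproduce a similar smoke test, something like the following works; the package names are as on Debian 7, and ab (ApacheBench) comes with apache2-utils — treat this as a sketch rather than the exact commands I ran:

```shell
# Install the stack and throw a little HTTP load at it (needs root):
apt-get install -y apache2 mysql-server apache2-utils
# 1000 requests, 10 concurrent, against the default page:
ab -n 1000 -c 10 http://localhost/
```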
There were some issues:
DLPAR does not seem to work at the moment.
There were also some issues with entitled capacity usage reporting.
If you have the time, you can give it a try yourself.
Modified on by jerberstark
By: Anirban Chatterjee.
Last year, our research team published a research paper showing how a 10-node Hadoop cluster of IBM PowerLinux 7R2 servers could sort through a terabyte of data in less than 9 minutes. At the time, this beat the best known result achieved with a comparable cluster composed of x86 nodes by over a factor of two.
The team has not been standing still, however. With the launch in February of our new 7R2s that included enhanced POWER7+ processors, the team has pushed the envelope even further on these systems and, with a similarly sized cluster, is now able to sort a terabyte of data in less than 6.7 minutes.
The IBM China Research Lab reached this milestone using a 10-node cluster running RHEL 6.2 and Hadoop 1.1.3, managed with IBM Platform Symphony. The cluster comprised one master control node and nine compute nodes. At 16 cores per compute node, this amounts to a sorting rate of 1.04 GB/min/core. (By comparison, a recent benchmark using an 18-node Cloudera Hadoop cluster of HP ProLiant Gen8 DL380 systems achieved a sorting rate of 0.57 GB/min/core.*)
We’ll have more information on the details of the testing environment coming soon, but proof points like this show the ability of Power Systems and Platform Symphony to provide high performance data analytics platforms at a reasonable cost. IBM solutions can provide rapid results to big data challenges, often in half the time as other solutions.
Modified on by jhopper
The Linux kernel feature "zswap" is discussed, with some initial performance data provided to demonstrate the potential benefits for a system (partition or guest) which has constrained memory and is beginning to swap memory pages to disk. The technique improves the throughput of a system, while significantly reducing the disk I/O activity normally associated with page swapping. We also explore how zswap works in conjunction with the new compression accelerator feature of the POWER7+ processor to potentially improve the system throughput even more than software compression alone.
This article is a good example of the ongoing collaboration that occurs in the Linux open-source community. New implementations are proposed, discussed, debated, refined and updated across developers, community members, interested customers, and performance teams. Here on the PowerLinux technical community, we are working to highlight more of these examples of work-in-progress from the broader Linux community. These proposals are applicable to both x86 systems and Power systems, so examples shown below cover both realms.
What is zswap?
Zswap is a new lightweight backend framework that takes pages that are in the process of being swapped out and attempts to compress them and store them in a RAM-based memory pool. Aside from a small reserved portion intended for very low-memory situations, the zswap pool is not pre-allocated; it grows on demand, and its maximum size is user-configurable. Zswap leverages an existing frontend already in mainline called frontswap. The zswap/frontswap process intercepts the normal swap path before the page is actually swapped out, so the existing swap page selection algorithms are unchanged. Zswap also introduces key functionality that automatically evicts pages from the zswap pool to a swap device when the pool is full. This prevents stale pages from filling up the pool.
The zswap patches have been submitted to the Linux Kernel Mailing List (lkml) for review; you can view them in this post
Instructions for building a zswap-enabled kernel on a system installed with Fedora 17 can be found on this wiki
What are the benefits?
When a page is compressed and stored in a RAM-based memory pool instead of actually being swapped out to a swap device, this results in a significant I/O reduction and in some cases can significantly improve workload performance. The same is true when a page is "swapped back in" - retrieving the desired page from the in-memory zswap pool and decompressing it can result in performance improvements and I/O reductions compared to actually retrieving the page from a swap device.
Using the SPECjbb2005 workload for our engineering tests, we gathered some performance data to show the benefits of zswap. SPECjbb2005 is a Java™ benchmark that evaluates server performance and calculates a throughput metric called "bops" (business operations per second). To find out more about this benchmark or see the latest official results, see the SPEC web site
. Note that the following results are not tuned for optimal performance and should not be considered official benchmark results for the system, but rather results obtained for research purposes. We liked this benchmark for this use case because we could more carefully control the amount of active memory being used in increments.
The SPECjbb2005 workload ramps up a specified number of "warehouses", or units of stored data, during the run. The number of warehouses is a user-controlled setting that is configured depending on the number of threads available to the JVM. As the benchmark increases the number of warehouses throughout the run, the system utilization level increases. A bops score is reported for each warehouse run. For this work, we focused on the bops score from the warehouse that keeps the system about 50% utilized. We also increased the default runtime for each warehouse to 5 minutes since swapping can be bursty and a longer runtime helps to achieve more consistent results.
For these results, the system was assigned 2 cores, 10 GB of memory, and a 20 GB swap device. A single JVM was created for the SPECjbb2005 runs, using IBM Java. First, a baseline measurement was taken where normal swapping activity occurred, then a run with zswap enabled was measured to show the benefits of zswap. We gathered results on both a Power7+ system and an x86 system to observe the performance impacts on different architecture types. The mpstat, vmstat, and iostat profilers from the sysstat package were used to record CPU utilization, memory usage, and I/O statistics. We would recommend taking advantage of the lpcpu
package to gather these data points.
To demonstrate the performance effects of swapping and compression, we started with a JVM heap size that could be covered by available memory, and then increased the JVM heap size in increments until we were well beyond the amount of free memory, which forced swapping and/or compression to occur. We recorded the throughput metric and swap rate at each data point to measure the impacts as the workload demanded more and more pages.
Setting up zswap
With the current implementation, zswap is enabled by this kernel boot parameter:
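The parameter itself did not survive editing here. As a point of reference, in mainline kernels (where zswap later landed, in 3.11) the switch is zswap.enabled=1 on the kernel command line; the patch set under review may have used a slightly different knob. You can check what your running kernel exposes:

```shell
# Check whether the running kernel has zswap built in and whether it is
# enabled. The sysfs path follows the mainline layout; the patch set
# under review may expose different knobs.
if [ -d /sys/module/zswap ]; then
    cat /sys/module/zswap/parameters/enabled
else
    echo "zswap not available on this kernel"
fi
```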
We looked at several new in-kernel stats to determine the characteristics of compression during the run. The metrics used were as follows:
pool_pages - number of pages backing the compressed memory pool
reject_compress_poor - rejected pages due to poor compression (cumulative) (see the max_compressed_page_size sysfs attribute)
reject_zsmalloc_fail - rejected pages due to zsmalloc failure (cumulative)
reject_kmemcache_fail - rejected pages due to kmem failure (cumulative)
reject_tmppage_fail - rejected pages due to tmppage failure (cumulative)
reject_flush_attempted - reject flush attempted (cumulative)
reject_flush_fail - reject flush failed (cumulative)
stored_pages - number of compressed pages stored in zswap
outstanding_flushes - the number of pages queued to be written back
flushed_pages - the number of pages written back from zswap to the swap device (cumulative)
saved_by_flush - the number of stores that succeeded after an initial failure due to reclaim by flushing pages to the swap device
pool_limit_hit - the zswap pool limit has been reached
failed_stores - how many store attempts have failed (cumulative)
loads - how many loads were attempted (all should succeed) (cumulative)
succ_stores - how many store attempts have succeeded (cumulative)
invalidates - how many invalidates were attempted (cumulative)
There are two user-configurable zswap attributes:
max_pool_percent - the maximum percentage of memory that the compressed pool can occupy
max_compressed_page_size - the maximum size of an acceptable compressed page. Any pages that do not compress to be less than or equal to this size will be rejected (i.e. sent to the actual swap device)
To observe performance and swapping behavior once the zswap pool becomes full, we set the max_pool_percent parameter to 20 - this means that zswap can use up to 20% of the 10GB of total memory.
The following graphs represent the SPECjbb2005 performance and swap rate for a run using the normal swapping mechanism.
Note that as "available" memory is used up around 10GB, the performance falls off very quickly (the blue line) and normal page swapping to disk (the red line) increases. The behavior is consistent on both Power7+ and x86 systems.
Power7+ baseline results:
x86 baseline results:
As you can see, performance dramatically decreased once the system started swapping and continued to level off as the JVM heap was increased.
The following graphs represent the SPECjbb2005 performance and swap rate for a run when zswap is enabled. In these cases, memory is now being compressed, which significantly reduces the need to go to disk for swapped pages. Performance of the workload (the blue line) still drops off, though not as sharply; more importantly, the system load on I/O drops dramatically.
Power7+ with zswap compression:
x86 with zswap compression:
As you can see, the swap (I/O) rate was dramatically reduced. This is because most pages were compressed and stored in the zswap pool instead of swapped to disk, and taken from the zswap pool and decompressed instead of swapped in from disk when the page was requested again. The small amount of "real" swapping that occurred is due to the fact that some pages compressed poorly - which means they did not meet a user-defined max compressed page size - and were therefore swapped out to the disk, and/or stale pages were evicted from the zswap pool.
Looking at the zswap metrics for each run, we can calculate some interesting statistics from this set of runs - keep in mind the base page size is different between Power (64K pages) and x86 (4K pages), which accounts for some of the different behaviour. Also note that we set the max zswap pool size to 20% of total memory for these runs, as mentioned above - this max setting can be adjusted as needed. On Power, the average zswap compression ratio was 4.3. On x86, the average zswap compression ratio was 3.6. For the Power runs, we saw entries for "pool_limit_hit" starting at the 17 GB data point. For the x86 runs, the pool limit was hit earlier - starting at the 15.5 GB data point. For the Power runs, at most the zswap pool stored 139,759 pages. For the x86 runs, the max number of stored pages was 1,914,720. This means all those pages were compressed and stored in the zswap pool, rather than being swapped out to disk, which results in the performance improvements seen here.
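As a sanity check, the quoted page counts line up with the 20% pool cap. Multiplying stored pages by the base page size gives the uncompressed data each pool held; the numbers and page sizes below are taken from the runs above, and a quick awk one-liner does the arithmetic:

```shell
# Uncompressed data held in each zswap pool, in GiB:
awk 'BEGIN { printf "x86:   %.1f GiB\n", 1914720 * 4096  / (1024 ^ 3) }'
awk 'BEGIN { printf "Power: %.1f GiB\n", 139759  * 65536 / (1024 ^ 3) }'
# At the measured compression ratios (3.6 and 4.3 respectively), both
# compress to roughly 2 GiB -- consistent with max_pool_percent=20
# of the 10 GB of total memory.
```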
POWER7+ hardware acceleration
The POWER7+ processor introduces new onboard hardware assist accelerators that offer memory compression and decompression capabilities, which can provide significant performance advantages over software compression. As an example, the system specifications for the IBM Flex System p260 and p460 Compute Nodes
mention the "Memory Expansion acceleration" feature of the processor.
The current zswap implementation is designed to work with these hardware accelerators when they are available, allowing for either software compression or hardware compression. When a user enables zswap and the hardware accelerator, zswap simply passes the pages to be compressed or decompressed off to the accelerator instead of performing the work in software. Here we demonstrate the performance advantages that can result from leveraging the POWER7+ on-chip memory compression accelerator.
POWER7+ hardware compression results
Because the hardware accelerator speeds up compression, looking at the zswap metrics we observed that there were more store and load requests in a given amount of time, which filled up the zswap pool faster than a software compression run. Because of this behavior, we set the max_pool_percent parameter to 30 for the hardware compression runs - this means that zswap can use up to 30% of the 10GB of total memory.
The following graph represents the SPECjbb2005 performance and swap rate for a run when zswap and the POWER7+ hardware accelerator are enabled. In this case, memory is now being compressed in hardware instead of software, and this results in a significant performance improvement. Performance of the workload (the blue line) still drops off, but even less sharply than the zswap software compression case, and the system load on I/O still remains very low.
Power7+ hardware compression:
As you can see, the swap (I/O) rate was dramatically reduced. This is because most pages were compressed using the hardware accelerator and stored in the zswap pool instead of swapped to disk, and taken from the zswap pool and decompressed in the hardware accelerator instead of swapped in from disk when the page was requested again. The small amount of "real" swapping that occurred is due to the fact that some pages compressed poorly - which means they did not meet a user-defined max compressed page size - and were therefore swapped out to the disk, and/or stale pages were evicted from the zswap pool.
The following graphs show the performance comparison between normal swapping and zswap compression, and the POWER7+ graph also includes the hardware compression results, showing that the hardware accelerator provides even more performance advantages over software compression alone:
Power7+ performance comparison:
x86 performance comparison:
As you can see, this workload shows up to a 40% performance improvement in some cases after the heap size exceeds available memory when zswap is enabled, and the POWER7+ results show that the hardware accelerator can improve the performance by up to 60% in some cases compared to the baseline performance.
Swap (I/O) comparison
The following graphs show the swap rate comparison between normal swapping and zswap compression, and the POWER7+ graph includes the hardware compression results, showing that the hardware accelerator also reduces the swap rate dramatically. Swap rates are dramatically reduced on both architectures when zswap is enabled, including the POWER7+ hardware compression results.
Power7+ swap I/O comparison:
x86 swap I/O comparison:
The new zswap implementation can improve performance while reducing swap I/O, which can also have positive effects on other partitions that share the same I/O bus. The new POWER7+ on-chip memory compression accelerator can be leveraged to provide performance improvements while still keeping swap I/O very low.
By: Jeff Scheel.
What an exciting week in Miami, FL!!! I spent last week at Power Technical University, helping people Think Power Linux. We had lots of great discussions. A big "thank you" goes out to all who attended sessions, a bigger "THANK YOU" to those who asked questions and participated in the discussion.
Here are some of my key thoughts from the event:
- The interest in Linux continues to increase. Although I don't keep formal counts, attendance at the Linux sessions was up over last year, which in turn was better than the year before. The first Trends and Directions presentation was standing-room-only, largely due to overflow from the other sessions. But even before the overflow wave started, we had at least 40 attendees in the room. I've posted the deck for people who didn't make the session to review.
- Power customers continue to grapple with the question of "Why Power Linux?" Those attending the sessions frequently feel like they're trying to convince their enterprises to consider Power when deploying Linux. When I provide the simplified answer that there are two reasons to do Power Linux -- the value of the Power platform (virtualization, RAS, and performance) and all of the additional value-add items that we provide (pre-load, Installation Toolkit, Simplified Setup Tool, Software Development Toolkit, and the Think Power Linux community) -- the answer seems to resonate. Folks understand that the platform provides value to all Power operating systems. They also appreciate the value-add initiatives that reduce their time-to-value for Linux solutions on Power Systems.
- The 2011 focus items on the SDK for application development and the new Think Power Linux community are definitely needed and timely. The reception of these items has been resoundingly positive. Customers are happy that we're working to simplify the porting process with the SDK, and they're looking for places to ask their questions and find the latest information on the product.
- In a great discussion with a Power Linux customer, I learned that customers are still grappling with backup solutions similar to makesysb. While we have an open source solution that we're looking at for our Installation Toolkit next year, this customer discovered that Storix has made their SBAdmin product available for Power Linux. He implemented and was very impressed with the function, support, and price. What a great thing to learn and hear from a customer!
If you attended the conference, I hope you found as much value as I did. If you didn't attend, perhaps you'll join us at a future event.
I may be asking an obvious question, but I have been searching the IBM developerworks site for the last hour with no hint of any answers to my question.
Can I run Ubuntu 14.x in a PowerVM partition??
This seems simple enough. Redhat and SUSE have been able to do this for a long time. But I cannot find any definitive reference to running Ubuntu in a LPAR.
The follow-on question is...
Can I run Ubuntu 14.X in little endian mode in an LPAR under PowerVM on one of the new Power8 servers that support little endian?
Again this seems like a simple extension of technologies that are already in place. However, I may be missing some major issue. If so, can someone please enlighten me?
Thanks for any input.
Modified on by jscheel
by Jeff Scheel, IBM Linux on Power Chief Engineer
In June of last year, I started publicly discussing the role that little endian (LE) plays in our Linux on Power strategy with the blog, Just the FAQs about Little Endian. Then, in August I attempted to eliminate uncertainty in my Removing the FUD and Demystifying LE (little endian) article. With the announcement of the Red Hat Enterprise Linux 7.1 beta delivering an LE version, it is time to revisit little endian from the perspective of an application developer.
The release of RHEL 7.1 LE completes the offerings of little endian operating systems. Canonical had Ubuntu 14.04 ready for POWER8 launch in May. SUSE supported the launch with public statements by Michael Miller about SLES 12 being LE in May, and publicly released in October. It is now time for application developers to get busy: little endian Linux on Power is here!
One thing that being a developer by training has taught me is that “we” often need to be convinced that work is worth doing. Little endian Linux on Power is about reducing the cost of migrating an application AND providing additional value to the end application.
Being able to run Linux on Power in LE mode means that applications have one less thing – data endianness – to worry about in the port. While technical differences such as assembler language, page size, and cache size still exist, developers and architects tend to worry most about data endianness because finding and fixing all the problems can be very time-consuming. By enabling Power to run in the same endian mode as x86 (the de facto Linux platform of choice for developers), applications can simply be recompiled without having to worry about endianness. Further, if one is going to build a solution mixing x86 and POWER systems, exchanging data on disk or across the network in the same endian mode greatly simplifies the application as well. Then, add in the ability to accelerate Power applications with (inherently little endian) GPUs, and the benefits of little endian become “a no brainer”.
So, hopefully, we're past the “why should I do this?” phase, and now we can address the list of technical resources for migrating to Linux on Power. My favorite resources include:
The Linux on Power community in developerWorks has a wiki page, Porting from Intel x86 to Power systems running Linux, that provides a great starting point for the process.
If you are migrating your application from x86 Linux and like bundles or toolkits, the Software Development Toolkit for Linux on Power provides an Eclipse-based environment for C/C++ applications with a porting wizard (the Migration Advisor) and a tuning wizard (the Source Code Analyzer) for efficient development. This bundle also provides the latest free software (GNU) tools, oprofile, gdb, and several Power-unique tools such as FDPR for post-link optimization, pthread-mon to analyze highly threaded applications, and CPI (cycles per instruction) tooling to visually show inefficiencies.
For the best advice on tuning your application, I recommend starting at the Performance Rocks – Best Practices wiki page in developerWorks.
The Performance Optimization and Tuning Techniques for IBM Processors, including IBM POWER8 Redbook provides excellent insight into the Power processor.
Now let us take a look at “where can I get started?” The answer depends on your role in the software ecosystem. If you are a software provider, my colleague Bob Dick recently published his thoughts on how to get started in the Using the IBM Power Development Cloud for Red Hat Enterprise Linux 7.1 (little endian) Beta application testing blog posting. Programs like IBM PartnerWorld provide this and more resources to facilitate porting. Check them out.
If you are an “in house” owner of an application in your enterprise, finding a system on which to port your application could be challenging. Of course, your IBM Sales contact or your business partner can provide alternatives such as try-and-buy or proof-of-concept systems. Do not hesitate to start with them. If you do not know them, or if this does not work out, go to the cloud! Site Ox offers a two-week free trial for development purposes. Visit their website for details. As we move forward, I remain hopeful that other vendors will provide public offerings of Linux on Power images. Further, if you do not at first see the particular release for which you are looking, reach out to the service provider and request it. They might just surprise you and have a plan to provide it. If not, it helps them to hear your needs.
For open source developers, access to free cloud images is increasing. The Open Source Labs at Oregon State University hosts Power development images (VMs). The University of Campinas (UNICAMP) also hosts a minicloud in Brazil. In China, the SuperVessel Cloud
provides a similar service to developers. In addition to these three locations, we are hoping to extend our offerings in both Europe and India in the near future. Again, the particular releases hosted at these sites may vary, but will generally include the little endian versions of Fedora, openSUSE, and Debian. If none of these sites or offerings work for you, feel free to reach out to me on Google+ (loaner post) to explore a dedicated loaner system.
With a complete set of little endian Linux on Power distributions, a robust list of technical resources, and plenty of resources for porting applications, the future is here. Take the first step. Seize the moment. Let's see what you can do with Linux on Power!
As tremendous progress continues across the Linux on Power offerings, Red Hat has announced the availability of the beta for the next release of their Red Hat Enterprise Linux 7.1 product.
Noteworthy in this release is the availability of Little Endian support coming with RHEL 7.1. Big Endian support continues.
"Additionally, for customers who are using the IBM Power Systems platform as part of their datacenter infrastructure, Red Hat Enterprise Linux 7.1 beta now includes support for POWER8 on IBM Power Systems (based on little endian). Running in little endian mode accelerates innovation on the Power platform by removing an application portability barrier and allowing customers utilizing IBM Power Systems to leverage the existing ecosystem of Linux applications as developed for the x86 architecture."
Modified on by jhopper
By: Jenifer Hopper
This article discusses some basic XML tuning tips for PowerKVM guests. It helps new users get started with editing guest XML definitions, and walks through some simple tuning examples.
The article covers various options to tune the guest disk, network, cpu, and memory. It also includes some example guest resource pinning configurations for different scenarios. Applying these tips may help improve application performance by ensuring your guest is configured properly and optimized for the KVM environment.
Modified on by jerberstark
A new Service and Productivity tools repository update is available.
This release adds support to SUSE Linux Enterprise Server 12, along with updates to other supported Linux distributions:
Red Hat Enterprise Linux 7
Red Hat Enterprise Linux 6
Red Hat Enterprise Linux 5
SUSE Linux Enterprise Server 12 (new)
SUSE Linux Enterprise Server 11
The Yum repository is easily configured by downloading and installing the Configuration RPM from the IBM Service and Productivity Tools website. The website includes installation instructions for the recommended tools.
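For illustration, the setup typically reduces to a two-step sequence. The RPM file name below is a placeholder (take the real one from the download page), and lsvpd and servicelog are just examples of tools carried in the repository:

```shell
# Placeholder file name -- download the actual Configuration RPM from
# the IBM Service and Productivity Tools website:
rpm -ivh ibm-power-repo.noarch.rpm   # installs the Yum repo definition
yum install lsvpd servicelog         # example tools from the repository
```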
SLES 12 customers can also get access to Linux on Power repositories through the new YaST software add-ons IBM DLPAR Utils for SLE 12 ppc64le and IBM DLPAR sdk for SLE 12 ppc64le (which also includes the Advance Toolchain repository).
For more information on Service and Productivity Tools available in the repositories, check the documentation.
Modified on by rfolco
IBM PowerKVM runs the OpenStack integration tests using a "CirrOS-like" image, a minimal Linux distribution that was designed for use as a test image on clouds.
This article shows how to build a custom cloud image for Power and how to use it on OpenStack Tempest suite.
The tiny image is less than 10 MB in size and boots a custom mainline kernel with built-in virtio drivers in less than 9 seconds. The rootfs is built with Buildroot release 2014.05, and the resulting image is bundled with the scripts provided in CirrOS 0.3.2.
For more information about IBM PowerKVM refer to the Redbook.
Modified on by Bill_Buros
We get a number of questions these days about porting from x86 systems to Linux on Power, in particular around little endian. It turns out there are a number of things to consider, so let's make a list of things to be aware of:
ppc64le is 64-bit only; code and/or makefiles may need changes
x86-specific intrinsics/built-ins are often hidden in the code
x86-specific APIs, which are easy for the tools to find
x86 asm code, not common, but surprisingly used here and there
natural build-flag differences (e.g., -mcpu=)
potential packaging changes (RPM vs. deb), package scripting
tuning/optimization differences, but that's generally the follow-on phase
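Several of the items above (intrinsics, x86 asm) can be smoked out with nothing fancier than grep before reaching for the SDK. A minimal sketch, with an illustrative pattern list that is by no means exhaustive:

```shell
# Scan a source tree for common x86-isms ahead of a ppc64le port.
# The sample file below stands in for a real project.
tmp=$(mktemp -d)
cat > "$tmp/hot_loop.c" <<'EOF'
#include <emmintrin.h>   /* SSE2 intrinsics header: x86-only */
void spin(void) { __asm__("pause"); }
EOF
# Count lines that reference x86-only headers, intrinsics, or inline asm.
hits=$(grep -rnE '_mm_|__asm__|emmintrin\.h' "$tmp" | wc -l)
echo "x86-specific hits: $hits"
rm -rf "$tmp"
```

Each hit is a place to inspect by hand; the SDK's Migration Advisor automates this class of search with far better coverage.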
The IBM SDK (Eclipse based) tool for PowerLinux provides a very cool set of tools for analyzing source code and finding these issues.
Out on YouTube, there are a number of videos on using the IBM SDK. Browse and review as your interest drives you.
On IBM's DeveloperWorks, there's a quick video available of a demo of the Migration Advisor.
On IBM's InfoCenter, there's a good guide that describes using the IBM SDK, with a specific chapter on the Migration Advisor.
For a feel of what's available in the Code Analysis section, and a sample assessment of a common x86 application, see the following images.
We highly recommend downloading the IBM SDK and starting to play! http://www-304.ibm.com/webapp/set2/sas/f/lopdiags/sdklop.html
Modified on by JeffAntley
By: Jeff Scheel, PowerLinux Chief Engineer
A software provider recently said, point blank, at the beginning of our discussion, “Convince me that PowerLinux is not a new platform.”
The core of my answer went like this: the value of Linux is the same for software providers as it is for customers – Linux provides a single operating system environment across different hardware platforms. Customers and partners who have both x86 Linux skills and Power System skills have all the knowledge they need to run Linux on IBM Power Systems and PowerLinux servers. Customers and partners who only have x86 skills will need some training in PowerVM and the platform, but will quickly find commonality in the operating system.
This software provider's simple request strikes at the heart of the IT challenge – managing expense for the data center. Everyone is struggling to control software deployments in the enterprise. One can visualize this challenge as being depicted by a two-dimensional matrix (spreadsheet) of applications (across the top) against platforms (down the left side).
The complexity of this problem becomes most daunting when one considers two key factors:
Adding a new platform to the matrix means addressing all the applications and vice versa.
The application list continuously grows, seemingly daily.
The good news for all of us in the IT industry is that Linux and open source are driving commonality into software and redefining the “platform” to be the most basic components of the hardware platform. To fully understand this conclusion, we need to look at how the definition of “platform” has evolved over time.
Software Engineers typically draw the software stack with hardware on the bottom of the stack and applications on top, as shown below.
Applications run on operating systems, hosted by hypervisors, which virtualize the hardware architecture.
For the sake of simplicity, I have grouped several categories together. For example, middleware is really an enabling component of applications and as such could be a separate layer below “Applications.” Additionally, runtime environments like Java or Perl have been lumped into the operating systems level instead of being placed as a unique layer on top of “Operating Systems.” One could just as easily include hypervisors as part of the hardware architecture. To enable a technology trends discussion later in this blog entry, I have elected to explicitly separate “Hypervisors” as a distinct layer.
To appreciate the impacts on the business, you must consider the visibility of each software stack component to users in the IT infrastructure. With a little insight, you can see that most people in the enterprise touch the application layer, while the fewest touch the hardware. This observation suggests a roughly graduated pyramid, as depicted below.
The implication of this graduated visibility means that changing an application causes more expense for an enterprise than does changing hardware architectures, especially when changes in a top layer drive changes in the foundational layers below it.
Does anyone remember the day when applications ran on only one platform? Not too long ago, “platform” meant the whole software stack, a silo top-to-bottom. If I wanted to run the AmiPro word processor, I did so on my DOS-based personal computer. To do word processing at work, I used BookMaster on the mainframe. Applications were the “platform.”
Over time, application providers began to write applications that could handle the disparate operating system environments. Applications (middleware) like IBM's DB2 database are available for Windows and UNIX operating systems. The advent of interpreted languages like Java further helped application portability. In this time frame, people began to think of the “platform” as the layers from the operating system downward.
With the adoption of the Linux operating system, the definition of “platform” continues to move further down the stack. When customers run RHEL 6.3 or SLES 11 SP3 in their environment, they get the same kernel version, built with the same compiler (which of course generated different binaries for each processor instruction set), running the same libraries, leveraging the same file systems, and including the same levels of many common applications such as the Apache webserver. In this structure, properly written applications can expect common application environments, even though they may be running in different hypervisors on different hardware platforms. For well written applications, “porting” to a new platform generally becomes a recompile and a regression test.
The value of Linux to the enterprise comes from commonality. Differences drive expense. Commonality saves money. The greater the “visibility” for a component of the software stack, the larger the potential for savings from commonality. So, it should not surprise us that the historical trend of convergence has been top-down in our software stack.
While the majority of commonality has been with applications and operating systems, some convergence has occurred within the hardware architecture layer. I/O has developed standards like PCI. Networking converged to Ethernet, despite a competing standard of Token Ring. Likewise, storage area networking selected the Fibre-channel protocol over SSA. While these gains have been beneficial, they likely are limited in opportunity because processor architectures will likely never converge to a single architecture. Every processor type and system architecture focuses on solving a different challenge.
As we look forward toward the future, opportunity exists for continued convergence in the software stack with hypervisors. Today's enterprises have their virtualization options dictated by the hardware platform. Power systems virtualize with PowerVM, mainframes use z/VM and PR/SM, and x86 systems use VMware, Hyper-V, Xen, or KVM. Enterprises with heterogeneous hardware platforms run different hypervisors on each platform -- not by choice, but by mandate.
With Linux embracing KVM, the potential for a single, cross-platform hypervisor emerges. In the not too distant future, enterprises should be able to leverage KVM to reduce their virtualization expense, again changing the definition of “platform.” In this final picture, commonality has driven the definition of “platform” to simply being a new hardware architecture, reducing the impact (and resulting expense) to the most minimal definition.
Now that we have completed the analysis of the software stack convergence, we can provide a deeper answer to the original question: Is PowerLinux a new platform? Not really. Linux on Power is “just Linux”, RHEL 6.5 or SLES 11 SP3. For a software provider such as the one who posed the original challenge, their x86 Linux expertise combined with their Power System skills facilitate support of Linux on Power in a very cost effective fashion. As Power embraces KVM, even the hypervisor differences will be eliminated and the impact of adopting Linux on Power has truly been isolated to a few people for both software providers and customers alike.
While this convergence can be viewed from a historical perspective, one could also view these steps as phases of application maturity. New applications typically begin on a single operating system and hardware stack. Over time, applications wishing to support multiple hardware platforms will port. Modern applications, such as those written in interpreted languages like Java or Python or those compiled using open source compilers like gcc, will migrate to new hardware platforms quickly. Linux and open source tools enable software vendors to maximize their addressable market by writing to common libraries and compiling with standard tools common to all platforms that run the Linux operating system.
The evolution of the software stack has been on an exciting trend. Open source software has driven and will continue to drive a convergence of the software stack as the IT industry evolves. We have come a long way from the days of “application silos.” Today's runtime environment with the Linux operating system provides a common architecture that enables cross-platform application support. With the emergence of KVM as a virtualization technology, convergence will continue into the hypervisor layer. Once this transformation is complete, platform differences will be reduced to the fundamentals of the processor architecture, enabling IT customers to minimize expense while leveraging the fundamental advantages each platform can provide to their solution. What enterprise would not want to be allowed the flexibility of selecting the best platform for their solution while minimizing expenses?
If you are an IBM Business Partner or IBM employee who needed remote access to Power Systems servers in the past, you might have come across the Virtual Loaner Program (VLP). The VLP is gone now, but not to worry, it has been replaced by the Power Development Platform (PDP).
In addition to the name change, the program added new features and comes with an improved web interface. As did the VLP, the PDP focuses on bringing ISVs, other Business Partners and IBM employees worldwide remote access to IBM POWER processor-based servers on the IBM AIX, IBM i and Linux operating systems. The PDP brings the latest in IBM Power Systems hardware for porting, testing, certifying and demonstrating applications.
Outside of a new Linux porting image with IBM DB2 10.x and IBM WebSphere 8.5.5, which will especially interest PowerLinux community members, the other enhancements include improved reservation navigation, the capability to expand beyond Virtual Server Access, and deeper social media integration to provide users with more news and information.
So, check out the PDP site at ibm.com/partnerworld/pdp and see it for yourselves. Please note that you need to be a PartnerWorld member or an IBM employee to reserve a virtual system.
Modified on by Bill_Buros
An interesting new Facebook page was created in Austin, Texas, stemming from the new OpenPOWER consortium announcement. It'll be interesting to watch what's announced, discussed, and talked about there.
See https://www.facebook.com/powertoyall !
Power to Y'all is dedicated to sharing information and insights about the IBM POWER servers and ecosystems as seen through the eyes of the developers of the POWER systems.
If you're interested, there are two other Facebook pages which you should check out as well:
Modified on by jerberstark
At the Red Hat Summit this week, in one of the opening keynote presentations, Arvind Krishna (the IBM General Manager for Development and Manufacturing in the IBM Systems & Technology Group), presented a good summary and strategic view of IBM's continuing contributions to open-source communities, technologies, and customer solutions.
The video of his presentation is available here
Near the end of the video (right before the 23:00 mark), Arvind makes two announcements which help demonstrate the continuing excitement and investments around Power and Linux for customers and new technologies.
With KVM support coming to the Linux-only Power servers next year, a new realm of virtualization becomes available for Power customers.
June 18th 2013: Adding link to IBM Press release: http://www-03.ibm.com/press/us/en/pressrelease/41255.wss
In the category of learning something new on a regular basis, over the last week I discovered some commands on Linux running on Power systems which were new to me. Turns out "lparstat" has been implemented, and a colleague here in the LTC pointed out two commands "lscpu" and "lsblk" which I hadn't seen before.
Trying these out on a system with POWER7,
# cat /etc/*release*
SUSE Linux Enterprise Server 11 (ppc64)
VERSION = 11
PATCHLEVEL = 2
# rpm -qf `which lscpu`
# rpm -qf `which lsblk`
# rpm -qf `which lparstat`
For lparstat, there is a utilization view and an informational view of the LPAR.
type=Dedicated mode=Capped smt=On lcpu=16 mem=130797952 kB cpus=0 ent=16.0
%user %sys %wait %idle physc %entc lbusy vcsw phint
----- ----- ----- ----- ----- ----- ----- ----- -----
0.00 0.00 0.00 99.99 0.00 0.00 0.00 5279201 962
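For scripting, the columns of the utilization view are easy to pull apart with standard tools. A small sketch against the sample line above (the field positions assume the default lparstat layout):

```shell
# Pull the %idle column (4th field) out of a captured lparstat line.
# Field order: %user %sys %wait %idle physc %entc lbusy vcsw phint
sample='0.00 0.00 0.00 99.99 0.00 0.00 0.00 5279201 962'
idle=$(echo "$sample" | awk '{print $4}')
echo "idle=$idle"
```

In practice you would pipe `lparstat <interval> <count>` into the same awk one-liner rather than hard-coding a sample line.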
# lparstat -i
Node Name : testsys
Partition Name : lpar1
Partition Number : 1
Type : Dedicated
Mode : Capped
Entitled Capacity : 16.0
Partition Group-ID : 32769
Online Virtual CPUs : 16
Maximum Virtual CPUs : 16
Minimum Virtual CPUs : 1
Online Memory : 130797952 kB
Minimum Memory : 256
Desired Variable Capacity Weight : 0
Minimum Capacity : 1.0
Maximum Capacity : 16.0
Capacity Increment : 1.0
Active Physical CPUs in system : 16
Active CPUs in Pool : 0
Maximum Capacity of Pool : 0.0
Entitled Capacity of Pool : 0
Unallocated Processor Capacity : 0
Physical CPU Percentage : 100
Unallocated Weight : 0
Memory Mode : Shared
Total I/O Memory Entitlement : 134754598912
Variable Memory Capacity Weight : 0
Memory Pool ID : 65535
Unallocated Variable Memory Capacity Weight : 0
Unallocated I/O Memory Entitlement : 0
Memory Group ID of LPAR : 32769
Desired Variable Capacity Weight : 0
lscpu is available, although the socket calculation isn't consistent with the terminology we more typically use on POWER systems. I'll need to follow up on that; in our thinking, the two nodes are sockets, and these processors are 8 cores per socket.
Byte Order: Big Endian
On-line CPU(s) list: 0-63
Thread(s) per core: 4
Core(s) per socket: 1
CPU socket(s): 16
NUMA node(s): 2
Hypervisor vendor: pHyp
Virtualization type: full
L1d cache: 32K
L1i cache: 32K
L2 cache: 256K
L3 cache: 10240K
NUMA node0 CPU(s): 0-31
NUMA node1 CPU(s): 32-63
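One way to sanity-check the topology lscpu reports is to multiply the three factors out: threads per core × cores per socket × sockets should equal the logical CPU count. Against the sample above, 4 × 1 × 16 = 64, which matches CPUs 0-63. A quick sketch using the sample values:

```shell
# Recompute the logical CPU count from the lscpu topology fields.
# The sample text reuses the values shown above.
lscpu_sample='Thread(s) per core:    4
Core(s) per socket:    1
CPU socket(s):         16'
threads=$(echo "$lscpu_sample" | awk -F: '/Thread/  {gsub(/ /,"",$2); print $2}')
cores=$(echo "$lscpu_sample"   | awk -F: '/Core/    {gsub(/ /,"",$2); print $2}')
sockets=$(echo "$lscpu_sample" | awk -F: '/socket\(s\)/ {gsub(/ /,"",$2); print $2}')
total=$((threads * cores * sockets))
echo "logical CPUs: $total"
```

Run against live `lscpu` output instead of the sample, the same arithmetic holds; only the labels are assumed to match the standard lscpu format.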
lsblk is available. It provides another view of the block devices on a system.
NAME MAJ:MIN RM SIZE RO MOUNTPOINT
sdb 8:16 0 136.7G 0
├─sdb1 8:17 0 399.5K 0
└─sdb2 8:18 0 136.5G 0 /
sda 8:0 0 136.7G 0
├─sda1 8:1 0 4M 0
├─sda2 8:2 0 500M 0
├─sda3 8:3 0 4G 0 [SWAP]
├─sda4 8:4 0 1K 0
└─sda5 8:5 0 132.2G 0
sdc 8:32 0 136.7G 0
sdf 8:80 0 136.7G 0
sdd 8:48 0 136.7G 0
sde 8:64 0 136.7G 0
sr0 11:0 1 1024M 0
# lsblk -t
NAME ALIGNMENT MIN-IO OPT-IO PHY-SEC LOG-SEC ROTA SCHED
sdb 0 512 0 512 512 1 cfq
├─sdb1 0 512 0 512 512 1 cfq
└─sdb2 0 512 0 512 512 1 cfq
sda 0 512 0 512 512 1 cfq
├─sda1 0 512 0 512 512 1 cfq
├─sda2 0 512 0 512 512 1 cfq
├─sda3 0 512 0 512 512 1 cfq
├─sda4 0 512 0 512 512 1 cfq
└─sda5 0 512 0 512 512 1 cfq
sdc 0 512 0 512 512 1 cfq
sdf 0 512 0 512 512 1 cfq
sdd 0 512 0 512 512 1 cfq
sde 0 512 0 512 512 1 cfq
sr0 0 512 0 512 512 1 cfq
By: Kersten Richter. In the PowerLinux world where the installation choices are seemingly endless, a clear path for installing Linux on a PowerLinux server can be somewhat difficult to find. In late January, Steve Champagne and I (Kersten Richter) traveled to Rochester, Minnesota for our own installation experience. We received an IBM PowerLinux 7R2 (8246-L2C) still in the shipping box. We spent several days exploring various installation scenarios and, based on our experiences, have recently published several resources to use for installing Linux on a PowerLinux server.
The first of these publications are the Quick start guides. From start to finish, these guides provide a specific path for installing Linux on a PowerLinux server. One guide illustrates setting up and installing Linux through the use of a console. This method is perfect for a system that does not have a graphics card installed. Another method demonstrates setting up and installing Linux using a monitor, keyboard, and mouse: a path designed for a system that does have a graphics card. Both of these guides are intended for a stand-alone system.
For those of you who would rather watch an installation than read about it, we also have Quick start videos illustrating similar installation paths to the guides. The videos contain many tips for setting up your PowerLinux server, in a format that is easy to follow. Similar to the quick start guides, the videos highlight a path for a specific setup scenario.
Take a look and let us know what pathways you found most helpful.
By: Fabio Dassan dos Santos
One of the new and noteworthy features for this 5.3 release, the LPAR Cloning and Restoration tool, focuses on extending value in this category by providing a quick and easy way of creating re-usable system images across LPARs.
Through very few steps, it is possible to achieve the following with this new tool:
- Save all available devices of the LPAR in backup images
- Use compression methods to decrease the size of the backup image
- Store these images on an NFS server share
- Associate previously saved images with available devices of an LPAR and restore the system
This function is especially useful when there is a need to preserve a certain system level, or to quickly replicate system images to multiple LPARs, in a virtualized environment.
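The save/compress/restore cycle the tool automates can be pictured with plain dd and gzip (a rough sketch only; a small scratch file stands in for a real LPAR disk device, and all paths here are illustrative):

```shell
# Save -> compress -> restore, in miniature.
src=$(mktemp)
dd if=/dev/urandom of="$src" bs=1024 count=64 2>/dev/null  # stand-in "device"
gzip -c "$src" > "$src.img.gz"                             # compressed backup image
dst=$(mktemp)
gunzip -c "$src.img.gz" > "$dst"                           # restore to another "device"
cmp -s "$src" "$dst" && status=ok || status=fail           # verify the round trip
echo "restore: $status"
rm -f "$src" "$src.img.gz" "$dst"
```

The real tool layers device discovery, NFS storage of the images, and LPAR association on top of this basic idea.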