Whilst preparing for a Linux on Power workshop, I came to the step where I had to power on the S822L server and found out that I needed to use something called IPMI. Using IPMI, I would be able to power on the server and get a console.
IPMI is short for Intelligent Platform Management Interface, but in my case it stood for 'I Probably Miss Instructions'.
I gave it some thought but decided that it would be a weird workaround, so I started searching for an alternative that could be used on Windows, and I found one called IPMIUTIL. It even has an MSI installer package available for download.
You can download this tool from the following page:
Click on "Download Primary Packages", then choose one of the Windows versions (32-bit or 64-bit) of the file. The MSI is the automatic installer. At the time of writing, these are the versions available:
The MSI version installs the command-line utility in C:\Program Files\sourceforge\ipmiutil, so you will need to open a CMD prompt and change your directory to where it is installed.
The commands and syntax differ a bit from IPMITOOL, so I will list them below:
To power on your server, run the following command:
ipmiutil power -u -N fsp_ip_address -P ipmi_password
To activate your IPMI console run the following command:
ipmiutil sol -a -r -N fsp_ip_address -P ipmi_password
(where fsp_ip_address is the IP address of the Power system and ipmi_password is the password set up for IPMI - see the guide referenced above for more information).
When I used the above commands, I felt happy and overwhelmed when the server powered on and the fans started making their sound. Special thanks to Maarten Kreuger, who helped solve a piece of this puzzle by supporting me with his knowledge of IPMITOOL.
If the session can't be opened because it's already active somewhere else, you can issue the following command to de-activate it:
ipmiutil sol -d -r -N fsp_ip_address -P ipmi_password
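Putting it all together, a typical session from the CMD prompt looks like this (the IP address and password below are made-up placeholders for your own FSP values):

cd "C:\Program Files\sourceforge\ipmiutil"
rem power the server on
ipmiutil power -u -N 192.168.1.100 -P mypassword
rem open the IPMI console
ipmiutil sol -a -r -N 192.168.1.100 -P mypassword
rem if the console is stuck open elsewhere, deactivate it first
ipmiutil sol -d -r -N 192.168.1.100 -P mypassword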
EDIT: Some later versions of this utility seem to cause issues for Windows 7 users (missing DLL files and so on). If that happens, try an earlier version from the archive:
If you are an IBM Business Partner or IBM employee who needed remote access to Power Systems servers in the past, you might have come across the Virtual Loaner Program (VLP). The VLP is gone now, but not to worry, it has been replaced by the Power Development Platform (PDP).
In addition to the name change, the program has added new features and comes with an improved web interface. Like the VLP, the PDP focuses on giving ISVs, other Business Partners and IBM employees worldwide remote access to IBM POWER processor-based servers running the IBM AIX, IBM i and Linux operating systems. The PDP offers the latest IBM Power Systems hardware for porting, testing, certifying and demonstrating applications.
Besides a new Linux porting image with IBM DB2 10.x and IBM WebSphere 8.5.5, which will especially interest PowerLinux community members, the other enhancements include improved reservation navigation, the capability to expand beyond Virtual Server Access, and deeper social media integration to provide users with more news and information.
So, check out the PDP site at ibm.com/partnerworld/pdp and see for yourself. Please note that you need to be a PartnerWorld member or an IBM employee to reserve a virtual system.
One of the new and noteworthy features for this 5.3 release, the LPAR Cloning and Restoration tool, focuses on extending value in this category by providing a quick and easy way of creating re-usable system images across LPARs.
In just a few steps, it is possible to achieve the following with this new tool:
Save all available devices of the LPAR in backup images;
Use compression methods to decrease the size of the backup image;
Store these images on an NFS server share;
Associate previously saved images with available devices of an LPAR and restore the system.
This function is especially useful in situations where there is a need to preserve a certain system level, or to quickly replicate system images to multiple LPARs, in a virtualized environment.
Jason Furmanek has published a nice article on his experiences and insights in setting up a NIM server (something more typically done for AIX installations) to install RHEL 6 on a Power system. If you're an AIX customer, this could be very beneficial. Questions are welcome on the forums; this page emerged from an ongoing discussion there.
From his article:
The process of booting and installing a Linux server from the network is valuable for many reasons. It is efficient, predictable, and customizable. The time spent up front on boot server configuration is usually well worth the effort.
This page will walk through the methods of setting up a boot server on AIX that can be used to install Red Hat Enterprise Linux onto Power servers or LPARs over the network.
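To give a flavor of what the setup involves: on the AIX side, the network boot is driven by bootp and tftp, so the core steps look roughly like the sketch below (the addresses, MAC address, and boot file name are made up; Jason's article has the real, complete procedure):

# make sure the bootps and tftp lines are uncommented in /etc/inetd.conf, then refresh inetd
refresh -s inetd
# add a bootp entry for the LPAR to be installed (made-up values)
echo "rhel6lpar:bf=/tftpboot/yaboot:ip=192.168.1.20:ht=ethernet:ha=00145E99AABB:sa=192.168.1.10:sm=255.255.255.0:" >> /etc/bootptab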
By: Jeff Scheel. What an exciting week in Miami, FL!!! I spent last week at Power Technical University, helping people Think Power Linux. We had lots of great discussions. A big "thank you" goes out to all who attended sessions, a bigger "THANK YOU" to those who asked questions and participated in the discussion.
Here are some of my key thoughts from the event:
The interest in Linux continues to increase. Although I don't keep formal counts, attendance at the Linux sessions was up over last year, which in turn was better than the year before. The first Trends and Directions presentation was standing-room-only, largely due to overflow from the other sessions. But even before the overflow wave started, we had at least 40 attendees in the room. I've posted the deck for those who didn't make the session to review.
Power customers continue to grapple with the question of "Why Power Linux?" Those attending the sessions frequently feel like they're trying to convince their enterprises to consider Power when deploying Linux. When I provide the simplified answer that there are two reasons to do Power Linux -- the value of the Power platform (virtualization, RAS, and performance) and all of the additional value-add items that we provide (pre-load, Installation Toolkit, Simplified Setup Tool, Software Development Toolkit, and the Think Power Linux community) -- the answer seems to resonate. Folks understand that the platform provides value to all Power operating systems. They also appreciate the value-add initiatives that reduce their time-to-value for Linux solutions on Power Systems.
The 2011 focus items on the SDK for application development and the new Think Power Linux community are definitely needed and timely. The reception of these items has been resoundingly positive. Customers are happy that we're working to simplify the porting process with the SDK, and they're looking for places to ask their questions and find the latest information on the product.
In a great discussion with a Power Linux customer, I learned that customers are still grappling with backup solutions similar to mksysb. While we have an open source solution that we're looking at for our Installation Toolkit next year, this customer discovered that Storix has made their SBAdmin product available for Power Linux. He implemented it and was very impressed with the function, support, and price. What a great thing to learn and hear from a customer!
If you attended the conference, I hope you found as much value as I did. If you didn't attend, perhaps you'll join us at a future event.
OpenStack uses the virtio driver as the default model for both networking and disk devices. Virtio takes advantage of paravirtualization and enables guests to achieve high-performance network and disk operations. However, OpenStack also supports some alternate options to enable custom models for the KVM/QEMU hypervisor.
In order to use custom models, KVM users can change the image_meta properties and select the desired driver.
IBM PowerKVM provides support for the ibmveth and ibmvscsi drivers as an alternative to the default virtio drivers. These legacy drivers originated on the PowerVM hypervisor and are now supported by PowerKVM.
At the time of writing, for ibmvscsi, there is no change needed in OpenStack code. On the other hand, for ibmveth, a patch is required. The upstream change that reflects this driver support has been addressed in https://review.openstack.org/#/c/106451/.
The image_meta properties to change are hw_vif_model for ibmveth and hw_disk_bus for ibmvscsi, as shown in the example below.
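Assuming an image has already been uploaded to Glance (the image name rhel65-ppc64 here is just a placeholder), the properties can be set with the glance client:

# use the ibmveth network model and the ibmvscsi disk bus for guests booted from this image
glance image-update --property hw_vif_model=ibmveth rhel65-ppc64
glance image-update --property hw_disk_bus=ibmvscsi rhel65-ppc64

Guests subsequently booted from that image will then use the ibmveth network device and the ibmvscsi disk bus instead of virtio.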
Last year, our research team published a research paper showing how a 10-node Hadoop cluster of IBM PowerLinux 7R2 servers could sort through a terabyte of data in less than 9 minutes. At the time, this beat the best known result achieved with a comparable cluster composed of x86 nodes by over a factor of two.
The team has not been standing still, however. With the launch in February of our new 7R2s that included enhanced POWER7+ processors, the team has pushed the envelope even further on these systems and, with a similarly sized cluster, is now able to sort a terabyte of data in less than 6.7 minutes.
The IBM China Research Lab reached this milestone using a 10-node cluster running RHEL 6.2 and Hadoop 1.1.3, managed with IBM Platform Symphony. The cluster comprised one master control node and nine compute nodes. At 16 cores per compute node (144 compute cores in total), this amounts to a sorting rate of 1.04 GB/min/core. (By comparison, a recent benchmark using an 18-node Cloudera Hadoop cluster of HP ProLiant Gen8 DL380 systems achieved a sorting rate of 0.57 GB/min/core.*)
We’ll have more information on the details of the testing environment coming soon, but proof points like this show the ability of Power Systems and Platform Symphony to provide high-performance data analytics platforms at a reasonable cost. IBM solutions can provide rapid results to big data challenges, often in half the time of other solutions.
Big data has taken a toll on traditional computing systems.
Microprocessor performance simply has not kept pace with exponential increases in information, coupled with heightened demands for speed, analytics and cost effectiveness.
Until the introduction of the IBM POWER8 Coherent Accelerator Processor Interface, or CAPI, industry attempts to deploy hardware acceleration solutions that addressed the performance shortfall had not measured up to the challenge.
Two key hurdles prevented the broad deployment of hardware acceleration via hybrid computing platforms that attempted to integrate a typical processor with a customized hardware accelerator. The first is that the inefficiencies introduced by controlling an accelerator device through a traditional I/O subsystem overshadowed the benefits of acceleration for many workloads. The second is that the programming complexities of I/O-attached acceleration put it out of reach for many companies with time-to-market constraints and finite resources.
The POWER8 CAPI technology works around those issues with breakthrough innovations that have made hardware acceleration easily accessible and affordable for a broader base of clients than ever before. At the same time, CAPI has also laid the groundwork for an unprecedented level of partnership and wide range of new possibilities.
Available only on POWER8, and central to the concept of the Open Platform, CAPI serves as a “hollow core” that can be integrated with existing cores and is easily programmable to perform a specific function with amazing speed and efficiency. Drawing data directly from the processors and main system memory without any I/O overhead, CAPI runs as a peer to the POWER8 cores, accessing memory using the same programming methods and virtual address space.
By enabling a customer-defined FPGA (field programmable gate array) to function as a peer to the POWER8 cores in terms of memory access, developers are able to simplify the programming model of the accelerator and achieve huge efficiencies as a result.
Among the many metrics demonstrating greater efficiencies and cost savings:
CAPI reduces the typical seven-step I/O model flow to three steps.
Running NoSQL on POWER8 with CAPI reduces costs by a factor of three.
In overcoming the fundamental inhibitors to hybrid computing, CAPI enables a higher-performing solution with a much smaller programming investment. As a result of lower costs and highly simplified programming, hardware acceleration is now accessible to a broader segment of the market than ever before.
The overall value proposition of CAPI is that it significantly reduces development time for new implementations and improves performance by connecting the processor to hardware accelerators and allowing them to communicate in the same language – eliminating the I/O “middleman.”
Ultimately CAPI is opening up new levels of partnership by opening up the POWER8 architecture to innovation and customization. Tens of thousands of developers from around the world are now able to work on the platform and build solutions that we could never have imagined.
Learn more about CAPI, and other unique advantages offered by Power Systems scale-out servers, in this paper written by Robert Frances Group, "The IBM Power Scale-out Advantage."
If you read my recent post, Backing Your Guests With Huge Pages, you might remember me mentioning that the ability to back your KVM guests with hugepages is on its way to OpenStack. This functionality is planned for the Juno release. If you are just learning about hugepages, as I am, you may be wondering how this functionality could benefit you.
What This Means for OpenStack
OpenStack Compute, also known as Nova, is a feature-rich service that streamlines the creation, use, and management of virtual machines. That being said, it still is not able to expose all of the functionality that your hypervisor provides. The hugepages blueprint, among others, brings OpenStack closer to providing you all of the options you could use to tweak your guest machines.
It should be noted that the hugepages blueprint is targeted towards hypervisors that use libvirt.
General Benefits of Hugepages
The advantages of hugepages are not widely applicable. Such benefits are seen in very specific workloads. Nevertheless, your workload may be one that does benefit from hugepages so it is worth knowing about.
Hugepages have several interesting properties. Two of these properties can be quite useful, and though I mentioned them in my previous post, allow me to expand upon each.
Property: Each hugepage is, as the name implies, large. Its size is a multiple of the standard page size (for example, 16 MB versus a 64 KB base page on POWER, or 2 MB versus a 4 KB base page on x86_64).
Benefits: It goes without saying that you can fit more in a hugepage than a standard page. What you might not be thinking is how this improves the performance of TLB lookups. With hugepages, finding an address is faster as fewer entries are needed in the TLB to provide memory coverage.
Drawbacks and Other Observations: You should only use hugepages if the workload you are performing will actually use the space in the hugepages. Otherwise you will end up with severely fragmented memory. It should also be noted that the speedup seen in TLB lookups will not be the same on every machine. Let's take a look at an example comparing two hypothetical machines, A and B. For simplicity, let's assume that the standard page size is the same on both machines, while the hugepage size is larger on Machine A. On Machine A, 10 hugepages take up 40% of memory. On Machine B, 10 hugepages take up only 20% of memory. This means that to achieve memory coverage, more TLB entries will be needed on Machine B than on Machine A. Therefore, the benefits of hugepages would be seen more on Machine A than on Machine B.
Property: Hugepages are pinned to memory. This means they do not get swapped in and out from RAM.
Benefits: The overhead for paging in and out is effectively eliminated. This provides a significant boost in performance.
Drawbacks and Other Observations: Main memory can fill up pretty quickly, and if you aren't swapping pages in and out, the memory effectively available will be much smaller. Place that on top of underutilized hugepages and you have a real disaster on your hands! That being said, you do not have to back all of your host machine's memory with hugepages, nor would I necessarily recommend that you do. Your host can have a portion of memory backed by hugepages, leaving the rest to be backed by normal pages.
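For example, a portion of host memory can be reserved as hugepages at run time while the rest stays backed by normal pages (a sketch; the count of 512 is made up, and on most distributions you would add vm.nr_hugepages to /etc/sysctl.conf to make the reservation persistent):

# reserve 512 hugepages on the host (run as root)
echo 512 > /proc/sys/vm/nr_hugepages
# check how many were actually allocated and what the hugepage size is
grep Huge /proc/meminfo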
It is difficult to think of use cases that are general enough to apply to a wide audience, yet specific enough to be actionable. So while the following section may be helpful, my advice is that you keep the properties and benefits mentioned above in the back of your head, considering their applicability to new workloads. Keep in mind that these are all strictly theoretical and should be taken with a grain of salt.
In broad terms, what you are looking for are workloads that frequently require large chunks of data. For example, perhaps a chunk of code needs to be accessed very frequently but is too large to fit in a single standard page. While you could of course disperse this chunk of code across several pages, you could instead take the opportunity to benefit from hugepages, potentially improving performance.
In a more specific case, imagine you have a database of books. Each record has a title field, an author field, and so on. However, this database is unique in that each record has a field that contains the entire text of the book. For simplicity, assume that each book's text will use most of a hugepage but is unable to fit in a standard page. You write an application to utilize this database and want to bring the text of several books into memory so you can search them for a certain phrase.
Let's consider solving this problem with standard pages. Searching can take a bit of time, and in that time a chunk of one of the books gets marked for paging by the operating system. Once you reach that chunk of text to search, you now need to go out to the disk, find the page, etc. Additionally, the text of a book is quite large and will need to be dispersed across several standard pages.
By instead using hugepages, you reduce the overhead of checking the TLB as each book only requires one entry. As well, you are assured that the book's full text is in memory which means you won't incur the overhead of paging.
Finally, I'd like to direct you to a success story involving hugepages.
Once again, expect to see hugepage support for KVM guests in OpenStack's Juno release. Also look out for a post on using this feature on POWER8 machines around release time. There, I'll be posting step-by-step instructions for creating guests with hugepages in OpenStack. If you are interested in keeping up with the development of this feature, or simply want more information, I highly suggest that you look through the blueprint (click here to check it out).
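To give a rough idea of how the blueprint proposes exposing this, a guest could request hugepage backing through a flavor extra spec along these lines (a sketch based on the in-progress hw:mem_page_size property; the flavor and image names here are made up, and details may change before Juno ships):

# "large" asks for the host's default hugepage size; an explicit size can also be requested
nova flavor-key hp.medium set hw:mem_page_size=large
nova boot --flavor hp.medium --image rhel7-ppc64 hugepage-guest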
Hopefully this post helped you further understand this interesting feature. Who knows, maybe the next time you need to optimize your machine for a certain problem, Hugepages will come to your rescue?
Businesses today are steeped in an overabundance of data that they collect and create every day. While IT is being called upon to leverage this data for insights that can transform the organization, data centers have essentially been maxed out. The mandate to deliver the insights and analytics that are hiding in the data is not feasible or cost effective within the framework of current computing capabilities.
There’s no question that it’s time for an entirely new and scalable analytics engine, driven by bigger memory, lower latency and better reliability.
With a vision for enhanced bandwidth, IBM POWER8 has achieved vast improvements in latency, two-and-a-half times better memory performance, and a lot more.
POWER8 offers up to 32 channels of DDR memory funneling into the POWER8 processor. This is two times the 16-channel capacity of POWER7, and four times the eight-channel capacity of most competitors.
The result of a depth and breadth of innovation focused on optimizing for data centers while increasing efficiency and lowering infrastructure cost, POWER8's bandwidth contributes to a better system that does more, while making technology leadership attainable for customers.
Each POWER8 socket supports up to 1 TB of DRAM in the initial server configurations, yielding 2 TB of capacity in Scale-out systems and 16 TB in Enterprise systems, and supports up to 230 GB per second of sustained memory bandwidth per socket.
POWER8 is the first processor designed for Big Data, with massive parallelism and bandwidth for real-time results. Coupled with IBM DB2 with BLU Acceleration and Cognos analytics software, POWER8 far outpaces industry-standard options, with 82x faster delivery of insights.
Far more than a function of size, the sophisticated innovations in the POWER8 memory organization are designed to enhance both reliability and performance. Key among the innovations:
Up to eight high-speed memory channels, each running at up to 9.6 GHz, for up to 230 GB/sec of sustained bandwidth
Up to 32 total DDR ports yielding 410 GB/sec peak at the DRAM
Up to 1 TB memory capacity per fully configured processor socket
Big Data’s Big Memory requirements call for nothing less than the industry’s most innovative, scalable, and massive bandwidth and capacity. POWER8 thrives on the kinds of complexities that your organization faces in the current environment, with a platform to keep you ahead of the game as unforeseen challenges and opportunities emerge.
Learn more about the unique advantages offered by Power Systems scale-out servers, in this paper written by Robert Frances Group, "The Power Scale-out Advantage."
This article explains an example method to tune a full-system PowerKVM guest to achieve CPU and memory performance that is very close to non-virtualized speeds, demonstrating very low KVM overhead on a POWER8 system. It also provides some common tuning tips for running SPECjbb2005 and the STREAM memory bandwidth workload on POWER8 systems.
The example tutorial starts by measuring non-virtualized performance to provide a system baseline (non-virtualized mode is currently a technical preview), then tests PowerKVM performance using both an "out of the box" and a "tuned" full-system guest configuration. It describes how applying some common guest tuning to pin the vCPUs and memory can help achieve near-bare-metal performance for some workload scenarios, as illustrated in the sketch below.
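As an illustration of the kind of tuning involved, vCPU and memory pinning can be applied from the host with virsh (a sketch with a made-up guest name and made-up CPU and NUMA numbers; the article works through the actual values for the system under test):

# pin guest vCPUs to specific host CPUs
virsh vcpupin fullsys-guest 0 8
virsh vcpupin fullsys-guest 1 16
# keep the guest's memory allocated from host NUMA node 0
virsh numatune fullsys-guest --mode strict --nodeset 0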
By: Bill Buros. There's quite a bit going on in the world of Linux on Power, where several of us are focused on performance improvements. Lately, a series of articles has been published on developerWorks which nicely highlights the performance gains that the gcc packaged in the Advance Toolchain provides over the gcc packaged with the Linux operating system.
Two articles are available which dive into performance gains across a number of workloads embedded in the SPECcpu2006 suite. The approach is simple: use the gcc bundled with the version and release of the operating system and measure the performance. Then install the Advance Toolchain (a couple of RPMs), change the path to gcc, rebuild, rerun, and compare the performance.
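The "change the path to gcc" step essentially amounts to putting the Advance Toolchain's bin directory first on the PATH (a sketch; the /opt/atX.Y directory depends on which Advance Toolchain version is actually installed):

# assuming the Advance Toolchain 5.0 RPMs were installed under /opt/at5.0
export PATH=/opt/at5.0/bin:$PATH
which gcc        # should now report /opt/at5.0/bin/gcc
gcc --version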
By: Jeff Scheel. Two weeks ago, I blogged about my thoughts after attending Power Technical University in Miami. This week, I bring you my thoughts from our event in Copenhagen, Denmark.
It never ceases to amaze me what I learn at these events. While the topics I presented were identical to the session in Miami two weeks ago, I still learned a bundle from the Power Linux customers who attended in Copenhagen.
Here are my thoughts from this week:
Again, I took a significant number of business cards to Copenhagen and still ran out before the week was over. Interest in Power Linux was definitely greater than in Lyon, France, last year!!! Power customers are definitely "thinking Linux". I did my best to help extend this to "Think Power Linux". I believe we see this growth reflected here in our new community, where our membership continues to grow. We've passed the yearly goal of 150. Can we top 200 before year-end?
If you remember my blog from Miami, I was surprised to meet a customer who taught me that Storix had a solution for Power Linux. In Copenhagen, I met another Storix customer and had more discussions about mksysb-type backups. There's a real need for this solution in the marketplace. While some open source solutions exist, none yet support Power. Having Storix support the platform is great because of their deep heritage in the UNIX marketplace.
On the theme of surprising solutions, I found an answer to a frequent question: How do I size my Linux partition? Midrange Performance Group provides a Power Navigator product to perform capacity planning on AIX, Linux, and other operating systems. As I understand this solution, it can help you migrate a Linux workload from x86 to Power using data from the nmon tool. Give it a look if this is a problem with which you've been grappling. Oh, by the way, did I mention that the Linux Installation Toolkit includes nmon?
I attended a great presentation on Linux on Power best practices in virtualized environments by Dr. Michael Perzl. He did a terrific job of detailing HA configurations for Power Linux and showing the similarities and differences with the AIX equivalents. I've posted a PDF export of his presentation to our community. (Please note, the formatting issues in the PDF are a result of my export, not Michael's presentation.) Feel free to reach out to him as one of our many technical experts.
Finally, the issues in Europe are the same as in the United States: How do we differentiate Power Linux from Intel Linux? How do we "sell" Power Linux within a business that believes Linux is x86-only? If you haven't read my approach to answering these questions, feel free to refer back to my blog about Power Tech U. in Miami.
If I met you in Copenhagen and you joined because of a presentation, feel free to comment on this blog and provide feedback. Welcome to our community! Help make us better.
If you just follow along and my postings spur any thoughts, comments, or questions, feel free to comment as well.
Well, that's all. Thanks for Thinking Power Linux today. -Jeff Scheel
This article discusses some basic XML tuning tips for PowerKVM guests. It helps new users get started with editing guest XML definitions and walks through some simple tuning examples.
The article covers various options to tune the guest disk, network, cpu, and memory. It also includes some example guest resource pinning configurations for different scenarios. Applying these tips may help improve application performance by ensuring your guest is configured properly and optimized for the KVM environment.
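As a small taste of the XML involved, the fragment below pins an 8-vCPU guest's vCPUs and memory and uses virtio disk and network devices (a sketch with made-up CPU numbers, NUMA node, and image path, not the article's own example):

<vcpu placement='static' cpuset='0-7'>8</vcpu>
<cputune>
  <vcpupin vcpu='0' cpuset='0'/>
  <vcpupin vcpu='1' cpuset='1'/>
  <!-- one vcpupin entry per vCPU -->
</cputune>
<numatune>
  <memory mode='strict' nodeset='0'/>
</numatune>
<devices>
  <disk type='file' device='disk'>
    <driver name='qemu' type='qcow2' cache='none'/>
    <source file='/var/lib/libvirt/images/guest1.qcow2'/>
    <target dev='vda' bus='virtio'/>
  </disk>
  <interface type='network'>
    <source network='default'/>
    <model type='virtio'/>
  </interface>
</devices>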