Pulse Behind The Scenes, Part 2
Pulse labs behind the scenes
by David Ross
This is the second in a series of postings that will show you some of the effort that goes into hosting live code exercises at a conference such as IBM Pulse. The first article in the series is linked here.
In the first article in this series, we discussed why on-site laptops were the method of choice for hosting labs. In this article, the process of configuring and preparing the 400 host laptops will be covered.
So, we chose laptops to run the lab virtual machines. Now what?
One of my biggest complaints of the past decade, which still holds true today, is this: it is frustrating to run enterprise-level software, designed for large corporate server farms, storage arrays, and networking equipment, on end-user laptops and desktops. This was particularly true when RAM and CPU limits were prohibitive: if a product needed a minimum of 4GB of RAM to operate, hosting a virtual machine on a 4GB host would invariably create performance issues, because the host needed some of that RAM itself, leaving less than the full 4GB for the virtual machine.
In years past, when the hardware was a limiting factor, lab developers had to take advantage of everything possible to maximize performance. As a result, it has been many years since we chose to use a Linux hosting environment. I don't want to start a flame war, and I personally use both Linux and Windows on a regular basis in order to do my job here at IBM. However, if you go back five or more years and consider what we were facing, Linux was the better choice. Remember, Windows XP would not recognize the full 4GB of RAM that we had, and for some of our products at that time, such a limitation was a huge factor in the decision. When deciding for the Pulse 2013 event, I was open to running a Windows host if it proved to be the best option.
Fortunately for all of us, both Windows and Linux have come a long way in supporting USB3, newer wireless network chips, and so on. My personal comfort level is with Linux, primarily for managing the environments, pushing out changes on-site if needed, and the reduced disk footprint of the base OS. In our situation, the Linux install was about 10GB smaller overall than the Windows 7 install. That's not much on its own, but when you have to replicate 400 laptops, every savings adds up quickly: the difference in base footprint means 4000GB (4 terabytes!) less data has to be replicated in the end.
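The back-of-the-envelope math here is simple enough to sketch in a few lines of shell. The figures below are the ones from this article (roughly 10GB saved per laptop, 400 laptops); treat them as illustrative inputs rather than exact measurements.

```shell
#!/bin/sh
# Rough replication-savings estimate: a smaller base OS image,
# multiplied across every laptop that must be cloned.
per_laptop_savings_gb=10   # approx. Linux vs. Windows 7 base footprint difference
laptops=400                # number of host laptops to replicate

total_gb=$((per_laptop_savings_gb * laptops))
echo "Total replication savings: ${total_gb}GB (about $((total_gb / 1000))TB)"
```

Running this prints a savings of 4000GB, which is the 4 terabytes cited above.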
So, host environment: Linux
Once that decision was made, the rest flowed pretty quickly. In configuring the laptops, we added support for every possible contingency we could imagine. The laptops support pretty much every Linux partition type out there, in addition to CIFS/NTFS partitions in case we need to transfer an updated image from a Windows-formatted disk. Further, we have full support for the USB3 ports and the wireless networking adapter, and we included every compression tool we could think of, ready to be used: gzip, bzip2, zip, 7-zip, tar, rar, and on it goes. A file transfer client was put in place to ease the process of moving images to and from the master. A standard location for the images was established, and the "base master" was complete. As icing on the cake, one of my colleagues, a conky fan, created a nifty desktop for our use. Very nice, indeed.
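A pre-flight check along these lines can confirm that a freshly imaged laptop actually has the tools it is supposed to have. This is a minimal sketch, not the script we used on-site; the tool list is a subset of the ones named above (7-Zip and rar ship under distribution-specific command names, so they are omitted here).

```shell
#!/bin/sh
# Hypothetical pre-flight check for a host laptop: verify that the
# expected compression tools are installed and on the PATH.
missing=0
for tool in gzip bzip2 zip tar; do
    if command -v "$tool" >/dev/null 2>&1; then
        echo "$tool: ok"
    else
        echo "$tool: MISSING"
        missing=$((missing + 1))
    fi
done
echo "$missing tool(s) missing"
```

A script like this is cheap to run against all 400 machines and catches an incomplete image before it reaches the lab floor.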
With the base master finished, we move on to master processing in the next article.