Pulse Behind the Scenes Part 1
Countless decisions go into planning labs;
Where the software should be hosted is among the first
by David Ross
This is the first in a series of postings that will show you some of the effort that goes into hosting live code exercises at a conference.
It has been about seven years since Tivoli Technical Enablement had the notion that participants at a conference would like to experience our software for themselves. What started out as a small room apart from the main venue at a user conference in Chicago has grown – and grown – and grown – into what we currently put on each year at Pulse. This year's event will mirror last year's overall: nearly 200 labs hosted on approximately 400 laptops, proctored by about 50 people in a huge room off the main arena. Needless to say, a lot of planning and work goes into making this happen.
This series includes articles on:
Where to host
When it comes time to provide a hands-on lab experience, many decisions must be made, starting with a question as basic as where to host the lab images. Each approach has advantages and shortcomings, and in the end, I hope you will see why we think the solution we use is our best choice.
For many years, IBM Technical Enablement has standardized on virtual-machine images whenever possible for classroom exercises. Virtualized systems provide known software and operating system levels, the ability to revert to known configurations, and portability across multiple delivery environments. Tivoli Technical Enablement provides training across a number of venues and delivery scenarios. Sometimes we go on-site to a customer and perform the training in the client's classroom or lab. Other times, a public course is held at a public venue, such as a training center. Then there are the remote teaching situations, usually on-line with remote access to lab equipment for the labs.
When it comes to Pulse, our goal is to provide access to existing training materials as much as possible, chopped up into smaller, self-contained tasks that a participant can finish in under an hour. In addition, no formal instruction takes place, so the materials are modified to include more descriptions, explanations, and context to enable a stand-alone experience as much as possible. The proctors can – and do – answer technical questions about the products, but our goal is for the labs to “run themselves” as much as possible.
Re-using our materials is key to providing a rich variety of topics and products to participants. If we had to start over for each Pulse event, the number of labs would be greatly reduced. Likewise, our classroom images are re-used, and that begins the winnowing process for deciding how to host the labs.
One option we use for many formal classes is a remote lab. Classes can be hosted on IBM-provided hardware, such as the Austin, Texas labs, or by third-party vendors. For some labs that require specialized hardware, that is still the case, even at Pulse: it's hard to ship a mainframe or a SAN to a conference solely for the purpose of user labs. In those cases, participants connect remotely from the laptops in the room and perform their labs against the remote hardware. However, these are the exception to the general rule.
Remote connections are always a risk. At some venues, the outgoing Internet connections are slow, unreliable, and sometimes cost-prohibitive. Further, as the recent storm that hit the East Coast showed us, even a reliable connection can be affected by external factors. If our labs relied solely on remote access to operate, and that connection were, for some reason, unavailable, we would lose ALL labs for the conference. Clearly, we cannot allow such a thing to happen if we can prevent it. For the same reason, a cloud-based offering is also not the best choice: no matter how robust a cloud connection might be, connectivity becomes a single point of failure that can bring everything to a stop. Even as remote access has become faster and more reliable, I am far more comfortable having my labs configured so that they do not need that connection if at all possible.
Removing remote access as the potential bottleneck leaves on-site hosting as your main delivery method. At the Pulse 2012 event, an on-site cloud pilot was held and showed promise, but a variety of factors have delayed such a solution for the present. Since we are not running an on-site cloud, and we do not want remote access to be the only method of running labs, we are left with hosting on on-site hardware. For our team, this means laptops. Hosting locally on a laptop means you are limited only by the hardware in front of you. Thankfully, the T61 ThinkPads are finally obsolete enough to have been dropped from our list this year, and everyone will get to work on W520 systems. These newer, more robust systems should give you a great experience working with our products at the event this year.
So, now that we are using laptops, a whole new series of questions and processes must be put in place to make it happen. Part 2 will cover some of the laptop configuration and verification process we use.
David Ross is a Senior Enablement Specialist with IBM Software Services for Tivoli, Cloud Enablement. He develops courseware, teaches classes and configures training systems to help customers get the most from their IBM training experience.