Earlier this month, at the SHARE conference in Pittsburgh, my predecessor, Glenn Anderson, received the John R. Ehrman award for sustained excellence in Training Education. John Ehrman was "the father of High Level Assembler (HLASM)," which is still used today.
Here is SHARE president, Jason Bastin (left), with Glenn Anderson (right).
Here is a close-up picture of the award itself.
For the past 18 years, Glenn was the Content Manager for IBM Z and LinuxONE at IBM Systems Technical University (TechU) events. He also was active at SHARE and other Z-related events.
But managing IBM Z and LinuxONE content was not all he did. Several years ago, Glenn also launched the "Leadership and Professional Development" track at TechU. This track helped IT leaders, and those aspiring to become leaders, learn how to take projects from proof-of-concept into production. He also included soft skills, such as how to be a better public speaker, how to lead projects, and how to run meetings more effectively.
Since retiring from IBM last December, he has launched his own public speaking practice and has spoken at a variety of events. Check out his website [Glenn Anderson The Performance Catalyst Speaker].
Next month, [NewEra Software] starts a monthly webcast featuring Glenn called "Up your game".
Glenn has been an excellent mentor to me, as I take over his responsibilities at IBM TechU events, and I am glad he is doing well in retirement!
For those in the US, a comedian named Carlos Mencia has a great TV show, Mind of Mencia, and one of my favorite segments is "Why the @#$% is this news?!" where he goes about showing blatantly obvious things that were reported in various channels.
So, when I saw that IBM once again, for the third year in a row, has the fastest disk system, the IBM System Storage SAN Volume Controller (SVC), based on widely-accepted industry benchmarks, I thought of that segment.
(Last year, I received comments from Woody Hutsell, VP of Texas Memory Systems, because I pointed out that their "World's Fastest Storage"® cache-only system was not as fast as IBM's SVC. You can read my opinions, and the various comments that ensued, here and here.)
That all changed when EMC uber-blogger Chuck Hollis forgot his own Lessons in Marketing when he posted his rant, Does Anyone Take The SPC Seriously? That's like asking, "Does anyone take book and movie reviews seriously?" Of course they do! In fact, if a movie doesn't make a big deal of its "Two thumbs up!" rating, you know it did not sit well with the reviewers. It's even more critical for books. I guess this latest news from the SPC really got under EMC's skin.
For medium and large businesses, storage is expensive, and customers want to do as much research as possible ahead of time to make informed decisions. A lot of money is at stake, and often, once you choose a product, you are stuck with that vendor for many years to come, sometimes paying software renewals after only 90 days, and hardware maintenance renewals after only a year when the warranty runs out.
Customers shopping for storage like the idea of a standardized test that is representative, so they can compare one vendor's claims with another's. The Storage Performance Council (SPC), much like the Transaction Processing Performance Council (TPC) for servers, requires full disclosure of the test environment so people can see what was measured and make their own judgment on whether or not it reflects their workloads. Chuck pours scorn on the SPC, but I would point to TPC-C as a great success story and ask why the same can't happen for storage. Server performance is also a complicated subject, but people compare TPC-C and TPC-H benchmarks all the time.
Note: This blog post has been updated. I am retracting comments that were unfair generalizations. The next two paragraphs are different than originally posted.
Chuck states that "Anyone is free, however, to download the SPC code, lash it up to their CLARiiON, and have at it." I encourage every customer to do this with whatever disk systems they already have installed. Judge for yourself how each benchmark compares to your experience with your application workload, and consider publishing the results for the benefit of others, or at least send me the results, so that I can better understand all of these "use cases" that Chuck talks about so often. I agree that real-world performance measurements using real applications and real data are always going to be more accurate and more relevant to that particular customer. Unfortunately, few or no such results are made public. They are noticeably absent. With thousands of customers running storage from all the major storage vendors, as well as storage from smaller start-up companies, I would expect more performance comparison data to be readily available.
In my opinion, customers would benefit by seeing the performance results obtained by others. SPC benchmarks help fill this void, providing customers who have not yet purchased equipment with guidance on which vendors to work with and which products to put into their consideration set.
Truth is, benchmarks are just one of many ways to evaluate storage vendors and their products. There are also customer references, industry awards, and corporate statements of a company's financial health, strategy and vision. Like anything, it is information to weigh against other factors when making expensive decisions. And I am sure the SPC would be glad to hear any suggestions for a third, SPC-3 benchmark, if the first two don't provide you enough guidance.
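To illustrate how published benchmark numbers get compared, here is a minimal sketch of the kind of arithmetic customers do with SPC-1 style results. The vendor names and all the price and IOPS figures below are made up for illustration; only the metric itself (total tested price divided by throughput, i.e. price-performance) reflects how SPC-1 results are commonly compared.

```python
# Hypothetical SPC-1 style comparison. All numbers below are invented
# for illustration; they are NOT actual published results.

def price_performance(total_price_usd, spc1_iops):
    """Price-performance: total tested configuration price divided by IOPS."""
    return total_price_usd / spc1_iops

# Fictional submissions (price of tested configuration, peak IOPS achieved).
submissions = {
    "Vendor A": {"price": 1_200_000, "iops": 270_000},
    "Vendor B": {"price": 900_000, "iops": 150_000},
}

for name, s in submissions.items():
    dollars_per_iops = price_performance(s["price"], s["iops"])
    print(f"{name}: ${dollars_per_iops:.2f} per IOPS")
```

The point of full disclosure is that both inputs to this division are published, so anyone can reproduce the comparison and judge whether the tested configuration resembles their own.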
So, if you are not delighted with the performance you are getting from your storage now, or would benefit from having even faster I/O, consider improving its performance by adding the SAN Volume Controller. SVC is like salt or soy sauce: it makes everything taste better. IBM would be glad to help you with a try-and-buy or proof-of-concept approach, and even help you compare the performance, before and after, with whatever gear you have now. You might just be surprised how much better life is with SVC. And if, for some reason, the performance boost you experience for your unique workload is only 10-30% better with SVC, you are free to tell the world about your disappointment.
technorati tags: Carlos Mencia, Mind of Mencia, IBM, system, storage, SVC, SAN Volume Controller, Storage Performance Council, SPC, benchmarks, Texas Memory Systems, Woody Hutsell, EMC, Chuck Hollis, movie, book, reviews, awards, salt, soy sauce
The Belgium IT Security and Storage Expo was a great success!
(I am back to the USA in Portland, Oregon this week, so these posts relate to last week.)
However, that wasn't to say I didn't encounter a few challenges during my week in Belgium. The first was getting to the venue. The Belgium Expo is a large complex of buildings to the north of the city. The local IBM team suggested I go to the facility a day in advance so that I would be able to see where it was and how to get there.
I was staying in the center of town, in Place Rogier section. I had many transportation options:
Upon arrival to the building complex, I was unsure of which building I needed to be in. Standing in front of the beautiful Building 5, I found this legend that provided the answer: Building 8. In front of Building 12 was a map that showed where Building 8 was located on the campus.
For this event, IBM joined forces with IBM Business Partner I.R.I.S-ICT to have a fabulous booth, with plenty of experts and equipment demos. As is often the case, the team had to work late into the night to get all the equipment set up, all the podiums and counters constructed, and the demos fully operational.
Apparently, I was not the only one to have troubles finding the place, so I did not feel alone. Some with cars drove around the complex several times before figuring out which parking lot to park in. Others parked at the first spot they found, and still ended up walking as much as I did.
For future reference, if you plan to attend any event at the Belgium Expo: (a) ask for more explicit directions, and (b) plan to do lots of walking!
Many people have asked me if there was any logic with the IBM naming convention of IBM Systems branded servers. Here's your quick and easy cheat sheet:
From a storage perspective, we often joked that the "i" stood for "island", as most System i machines used internal disk, or attached externally to only a few selected models of disk from IBM and EMC that had special support for i5/OS using a special, non-standard 520-byte disk block size. This meant only our popular IBM System Storage DS6000 and DS8000 series disk systems were available. This block size requirement only applies to disk; for tape, i5/OS supports both IBM TS1120 and LTO tape systems. For the most part, System i machines stood separate from the mainframe and the rest of the Linux, UNIX and Windows distributed servers on the data center floor.
Often, when I am talking to customers, they ask when product xyz will be supported on System z or System i. I explain that IBM's strategy is not to make all storage devices connect via ESCON/FICON or support non-standard block sizes, but rather to get the servers to use the standard 512-byte block size, Fibre Channel, and other standard protocols. (The old adage applies: if you can't get Muhammad to move to the mountain, get the mountain to move to Muhammad.)
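For a sense of why the 520-byte block size matters, here is a rough capacity sketch. It assumes, as is commonly described for i5/OS disks, that each 520-byte sector carries 512 bytes of user data plus 8 bytes of system metadata; real formatted capacity also depends on sparing, RAID, and other factors this sketch ignores.

```python
# Rough capacity arithmetic for i5/OS-style 520-byte sectors.
# Assumption: each sector = 512 data bytes + 8 metadata bytes.
# Illustrative only; real formatted capacity varies with sparing, RAID, etc.

SECTOR_RAW = 520    # bytes physically stored per sector
SECTOR_DATA = 512   # bytes of user data per sector

def usable_fraction():
    """Fraction of raw capacity available for user data."""
    return SECTOR_DATA / SECTOR_RAW

def usable_bytes(raw_bytes):
    """User data that fits when every sector devotes 8 bytes to metadata."""
    return raw_bytes * usable_fraction()

overhead_pct = (1 - usable_fraction()) * 100
print(f"Per-sector metadata overhead: {overhead_pct:.2f}%")  # about 1.54%
```

The capacity overhead is small; the real cost of the non-standard block size was compatibility, since only disk systems built to understand 520-byte sectors could attach at all.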
On the System z mainframe, we are 60 percent there, allowing three of the five operating systems (z/VM, z/VSE and Linux) to access FCP-based disk and tape devices. (Four out of six if you include [OpenSolaris for the mainframe].) But what about System i? As the characters on the popular television show [LOST] would say: it's time to get off the island!
Last week, IBM announced the new [i5/OS V6R1 operating system] with features that will greatly improve the use of external storage on this platform. Check this out:
Now that's exciting!
technorati tags: IBM, System x, System p, System i, System z, island, COMMON, AIX, Linux, POWER, POWER6, Windows, EMC, DS6000, DS8000, TS1120, LTO, ESCON, FICON, 520-byte, z/VM, z/VSE, z/OS, z/TPF, OpenSolaris, mainframe, LOST, CPW, x86, VMware, VMotion, BladeCenter, JS22, i5/OS, V6R1, PowerVM, VIOS, LPAR, DS4700, DS4800, disk, SAN, tape, storage
Continuing this week's theme on virtual worlds, I saw that Gartner predicts 80% of the online community will be using virtual worlds like Second Life by 2011. ComputerWorld ranks the top 8 corporations present in Second Life, with IBM at #1.
Well, I'm off on another business trip.
I didn't really have a theme this week, as I am still recovering from jet lag from my travels through Japan, Australia, and China.
Gary Diskman has an amusing blog entry about a Funny disaster recovery job posting. It is not clear if he is being completely tongue-in-cheek, or a bit cynical. However, it rings true that you get what you measure, and some managers look for easy metrics, even if there are unintended consequences.
Western medicine works this way. Rather than paying your doctor to keep you healthy, you pay him per visit, to get refills on prescriptions, check-ups on medical conditions, surgeries and so on. While Eastern medicine is focused on keeping people healthy, Western medicine profits more from resolving "situations".
I have seen similar situations with the "health" of the data center. In one case, the admins were measured on how quickly they brought their web servers back up after a crash. They had this process down to a science, because they were measured on how quickly they resolved the situation. I suggested switching from Windows to Linux, a much more reliable operating system for web serving, and showed examples of web servers running Linux that had been up for 1000 days or more. Management changed the metric to "average up-time in days" and, magically, the re-boots all but disappeared, thanks to Linux, but also thanks in part to the shifted incentive structure. Perhaps some of those earlier situations were "artificially created"?
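The "average up-time in days" metric can be sketched in a few lines. The boot and crash timestamps below are hypothetical; the point is just that averaging uptime rewards long, uninterrupted runs, whereas measuring time-to-recover rewards frequent, quickly-fixed outages.

```python
# Minimal sketch of the "average up-time in days" metric described above.
# The timestamps are hypothetical, for illustration only.
from datetime import datetime

# Each tuple is (boot_time, crash_or_shutdown_time) for one web server run.
runs = [
    (datetime(2007, 1, 1), datetime(2007, 1, 15)),   # 14-day run
    (datetime(2007, 1, 15), datetime(2007, 3, 1)),   # 45-day run
    (datetime(2007, 3, 1), datetime(2007, 6, 1)),    # 92-day run
]

def average_uptime_days(runs):
    """Mean number of days each server run stayed up before going down."""
    total_days = sum((down - up).days for up, down in runs)
    return total_days / len(runs)

print(f"Average up-time: {average_uptime_days(runs):.1f} days")  # 50.3 days
```

Note that a single reboot now drags the average down for a long time, which is exactly the incentive management wanted.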
Back in the 1980s, I was working on a small software project that was about 5000 lines of code. In those days, testers were measured by the number of "successful" testcases that ran without incident. Testcases that uncovered an error were labeled as "failures" to be re-run after the developers fixed the code. When I declared my code ready for test, the test team ran 110 testcases, all successfully, and they were all rewarded for meeting their schedule. I, on the other hand, did not accept these results, met with them and told them I would give them $100 each if they could find a bug in my code in the next week. Nobody writes 5000 lines of code without some error along the way, not even me. (As one author put it, more people have left earth's gravity to orbit the planet than have written perfect code that did not require subsequent review or testing. It's so true. Good software is difficult to write.)
The test team accepted the challenge, and found 6 problems, more than I expected, but at least I felt more confident of the code quality after fixing them. As I suspected, the unintended consequence of counting "successful" testcases was that testers would write the simplest, most basic testcases, the ones least likely to uncover a problem.
So, my advice is to determine metrics that have the intended consequences you want, while avoiding any negative unintended consequences that might undermine your eventual success. People will quickly figure out how to maximize the results, and if you can align their goals to company goals, then everybody benefits.
Well, I'll be blogging from Mexico next week (yes, it is a business trip!). Enjoy the weekend.
Jon W Toigo over at Drunkendata has had a great set of posts on his skepticism of storage vendors touting their "green storage" solutions. My apologies for my "unnecessary" use of quotation marks.
The ones I liked specifically were:
The last of these refers to the ComputerWorld article "EPA: U.S. needs more power plants to support data centers", which claims "from a technology perspective, the systems most responsible for gobbling up power are the relatively low-cost x86 servers ..." The article is based on the recent EPA report that was just released.
Last month, in my post How Many Watts per Terabyte, I mentioned:
Some people find it surprising that it is often more cost-effective, and power-efficient, to run workloads on mainframe logical partitions (LPARs) than a stack of x86 servers running VMware.
Perhaps they won't be surprised any more. Here is an article in eWeek that explains how IBM is reducing energy costs 80% by consolidating 3,900 rack-optimized servers to 33 IBM System z mainframe servers, running Linux, in its own data centers. Since 1997, IBM has consolidated its 155 strategic worldwide data center locations down to just seven.
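As a back-of-the-envelope check on what that consolidation means, here is a quick sketch. Only the 3,900-to-33 footprint and the roughly 80% savings come from the article; the per-server wattage is an assumed, illustrative figure.

```python
# Back-of-the-envelope arithmetic for the consolidation described above.
# The 400 W per x86 server figure is an assumption for illustration;
# the 3,900 -> 33 counts and ~80% savings come from the article.

old_servers = 3_900
new_mainframes = 33

consolidation_ratio = old_servers / new_mainframes
print(f"About {consolidation_ratio:.0f} distributed servers per mainframe")

old_kw = old_servers * 400 / 1000   # assumed 400 W average per server
new_kw = old_kw * 0.20              # an 80% reduction leaves 20% of the draw
print(f"Roughly {old_kw:.0f} kW before, {new_kw:.0f} kW after")
```

Even with a conservative wattage assumption, that is over a megawatt of continuous draw removed, before counting the cooling it no longer requires.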
I am very pleased that IBM has invested heavily into Linux, with support across servers, storage, software and services. Linux is allowing IBM to deliver clever, innovative solutions that may not be possible with other operating systems. If you are in storage, you should consider becoming more knowledgeable in Linux.
The older systems won't just end up in a landfill somewhere. Instead, the details are spelled out in the IBM Press Release:
As part of the effort to protect the environment, IBM Global Asset Recovery Services, the refurbishment and recycling unit of IBM, will process and properly dispose of the 3,900 reclaimed systems. Newer units will be refurbished and resold through IBM's sales force and partner network, while older systems will be harvested for parts or sold for scrap. Prior to disposition, the machines will be scrubbed of all sensitive data. Any unusable e-waste will be properly disposed following environmentally compliant processes perfected over 20 years of leading environmental skill and experience in the area of IT asset disposition.
Whereas other vendors might think that some operational improvements will be enough, such as switching to higher-capacity SATA drives, or virtualizing x86 servers, IBM recognizes that sometimes more fundamental changes are required to effect real changes and real results.
NetworkWorld has compiled an interlude with storage videos, a follow-up to last year's Yikes! Exploding Servers.
I've blogged about some of these videos already, but since there are probably a few people out there buying the brand-new Apple iPhone and looking for YouTube videos to play on it, these links might provide some examples.
Next week has the "Fourth of July" Independence Day holiday in the USA smack in the middle of the week, so I suspect the blogosphere will quiet down a bit. So whether you are working next week or not, in the USA or elsewhere, take some time to enjoy your friends and family.
I had an interesting query about my last blog post [Enterprise Systems are Security-Ready], basically asking me what I decided to do for Full-Disk Encryption (FDE) for my laptop.
Earlier this year, IBM mandated that every employee provided with a laptop had to implement Full-Disk Encryption for their primary hard drive, and for any other drive, internal or external, that contained sensitive information. An exception was granted to anyone who NEVER took their laptop out of the IBM building. At IBM Tucson, we have five buildings, so if you are in the habit of taking your laptop from one building to another, then encryption is required!
The need to secure the information on your laptop has existed ever since laptops were given to employees. In my blog post [Biggest Mistakes of 2006], I wrote the following:
"Laptops made the news this year in a variety of ways. #1 was exploding batteries, and #6 were the stolen laptops that exposed private personal information. Someone I know was listed in one of these stolen databases, so this last one hits close to home. Security is becoming a bigger issue now, and IBM was the first to deliver device-based encryption with the TS1120 enterprise tape drive."
Not surprisingly, IBM laptops are tracked and monitored. In my blog post [Using ILM to Save Trees], I wrote the following:
"Some assets might be declared a 'necessary evil' like laptops, but are tracked to the n'th degree to ensure they are not lost, stolen or taken out of the building. Other assets are declared "strategically important" but are readily discarded, or at least allowed to [walk out the door each evening]."
When it was [time for a new laptop] in 2010, I spent a week [re-partitioning the drive], [transferring files], [installing programs], [re-organizing my folders], and finally [testing my system]. It was dual-boot so that I could run either Windows or Linux, as needed, to demonstrate various software solutions at the IBM Tucson Executive Briefing Center.
Unfortunately, dual-boot environments won't cut it for Full-Disk Encryption. For Windows users, IBM has chosen Pretty Good Privacy [PGP]. For Linux users, IBM has chosen Linux Unified Key Setup [LUKS]. PGP doesn't work with Linux, and LUKS doesn't work with Windows.
Those of us who need access to both operating systems have to choose: select one as the primary OS, and run the other as a guest virtual machine. I opted for Red Hat Enterprise Linux 6 as my primary, with LUKS encryption, and Linux KVM to run Windows as the guest.
I am not alone. While I chose the Linux method voluntarily, IBM has decided that 70,000 employees must also set up their systems this way, switching them from Windows to Linux by year end, but allowing them to run Windows as a KVM guest image if needed.
Let's take a look at the pros and cons:
In theory, I could have tried the Windows/PGP method for a few weeks, then gone through the entire process to switch over to Linux/LUKS, and then draw my comparisons that way. Instead, I just chose the Linux/LUKS method, and am happy with my decision.
Thirteen months ago, fellow IBM blogger Bob Sutor suggested the potential for avatars to [move from one virtual world to another]. I thought this was far, far in the future myself, but this week, IBM and Linden Labs, the makers of Second Life, successfully teleported an avatar from Second Life over to OpenSim. Here is the [Press Release].
If you are thinking there is no business value here, consider that Cisco has this incredible [11-minute demonstration video] that has presenters in one city on the stage at another city.
Well, my job is done here in Tokyo, and my team is off next to Mumbai, India. This of course will take the bulk of tomorrow in airplanes and airports, and will not be as easy as teleporting in the metaverse!