This blog is for the open exchange of ideas relating to IBM Systems, storage and storage networking hardware, software and services.
(Short URL for this blog: ibm.co/Pearson )
Tony Pearson is a Master Inventor, Senior IT Architect and Event Content Manager for [IBM Systems Technical University] events. With over 30 years in IBM Systems, Tony is a frequent traveler, speaking to clients at events throughout the world.
Lloyd Dean is an IBM Senior Certified Executive IT Architect in Infrastructure Architecture. Lloyd has held numerous senior technical roles during his 19-plus years at IBM. Most recently, he has been leading efforts across the Communication/CSI market as a senior Storage Solution Architect/CTS covering the Kansas City territory. In prior years, Lloyd supported industry accounts as a Storage Solution Architect, and before that as a Storage Software Solutions specialist during his time in the ATS organization.
Lloyd currently supports North America storage sales teams in his Storage Software Solution Architecture SME role on the Washington Systems Center team. His current focus is IBM Cloud Private, and he will be delivering and supporting sessions at Think 2019 and Storage Technical University on the value of IBM storage in this high-value solution, part of the IBM Cloud strategy. Lloyd maintains Subject Matter Expert status across the IBM Spectrum Storage software solutions. You can follow Lloyd on Twitter @ldean0558 and on LinkedIn as Lloyd Dean.
Tony Pearson's books are available on Lulu.com! Order your copies today!
Safe Harbor Statement: The information on IBM products is intended to outline IBM's general product direction and it should not be relied on in making a purchasing decision. The information on the new products is for informational purposes only and may not be incorporated into any contract. The information on IBM products is not a commitment, promise, or legal obligation to deliver any material, code, or functionality. The development, release, and timing of any features or functionality described for IBM products remains at IBM's sole discretion.
Tony Pearson is an active participant in local, regional, and industry-specific interests, and does not receive any special payments to mention them on this blog.
Tony Pearson receives part of the revenue proceeds from sales of books he has authored listed in the side panel.
Tony Pearson is not a medical doctor, and this blog does not reference any IBM product or service that is intended for use in the diagnosis, treatment, cure, prevention or monitoring of a disease or medical condition, unless otherwise specified on individual posts.
Chris Evans over at Storage Architect posts about Hardware Replacement Lifecycle Update, on how storage virtualization can help with storage hardware replacement. He makes two points that I would like to comment on.
... indeed products such as USP, SVC and Invista can help in this regard. However at some stage even the virtualisation tools need replacing and the problem remains, although in a different place.
Knowing that replacement of technologies at all levels is inevitable, IBM System Storage SAN Volume Controller is actually designed to allow non-disruptive cluster upgrade, which we announced in May 2006.
The process is quite elegant. The SVC consists of one or more node-pairs, and can be upgraded while the system is up and running by replacing nodes one at a time in a sequence of suspend and resume. All of the mapping tables are loaded onto the new nodes from the rest of the still-active nodes.
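The key invariant can be pictured with a toy model (purely illustrative, not SVC's actual code): as each node is suspended and replaced, its partner in the pair keeps serving I/O, so the cluster never goes down.

```python
# Toy model of a rolling node upgrade: the cluster is a list of node
# pairs, and it counts as "up" while every pair has at least one
# active node. Names and structure are illustrative only.

def rolling_upgrade(cluster):
    """Upgrade every node in place, one at a time, recording availability."""
    history = []
    for pair in cluster:
        for i, node in enumerate(pair):
            pair[i] = None                # suspend the old node
            history.append(all(any(n is not None for n in p) for p in cluster))
            pair[i] = node + "-v2"        # resume at the new code level
    return history

cluster = [["node1", "node2"], ["node3", "node4"]]
availability = rolling_upgrade(cluster)
print(all(availability))   # the partner node covered I/O at every step
print(cluster)             # every node now runs the new code level
```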
I was hoping as part of the USP-V announcement HDS would indicate how they intend to help customers migrate from an existing USP which is virtualising storage, but alas it didn't happen.
Unlike the SVC, one cannot just upgrade the USP in place and make it into a USP-V. While it might be possible to unplug external disk from the old USP, and re-plug into the new USP-V, what do you do about the internal disk data? I doubt you can just move drawers and trays of disk from the old to the new. The data has to be moved some other way.
Some have asked why not just put an SVC in front of both the old USP and the new USP-V and transfer the data that way. While SVC does support virtualizing the old USP device, IBM is still testing the new USP-V as a managed device, and so this solution is not yet available, and would only apply to the LUNs in the USP-V, not the volumes specifically formatted for System i or System z.
An alternative is to take advantage of IBM's Data Mobility Services, the result of our recent acquisition of Softek. IBM can help you migrate both mainframe and distributed systems data from any device to any device.
In a typical four year lifecycle of storage arrays, it might take six months or so to fill up the box, and might take as much as a year at the end to move the data out to other equipment. SVC can greatly reduce both of these, so that you can take immediate advantage of new equipment as soon as possible, and keep using it for close to the full four years, migrating weeks or days before your lease expires.
Last week, a writer for a magazine contacted us at IBM to confirm a quote that writing a Terabyte (TB) on disk saves 50,000 trees. I explained that this was cited from UC Berkeley's famous How Much Information? 2003 study.
To be fair, the USA Today article explains that AT&T also offers "summary billing" as well as "on-line billing", but apparently neither of these are the default choice. I can understand that phone companies send out bills on paper because not everyone who has a phone has internet access, but in the case of its iPhone customers, internet access is in the palm of your hands! Since all iPhone customers have internet access, and AT&T knows which customers are using an iPhone, it would make sense for either on-line billing or summary billing to be the default choice, and let only those that hate trees explicitly request the full billing option.
Sending a box of 300 pages of printed paper is expensive, both for the sender and the recipient. This information could have been shipped less expensively on computer media, a single floppy diskette or CD-ROM for example. For those who prefer getting this level of detail, a searchable digitized version might be more useful to the consumer.
Which brings me to the concept of Information Lifecycle Management (ILM). You can read my recent posts on ILM by clicking the Lifecycle tab on the right panel, or my now infamous post from last year about ILM for my iPod.
His recollection of the history and evolution of ILM fairly matches mine:
The phrase "Information Lifecycle Management" was originally coined by StorageTek in early 1990s as a way to sell its tape systems into mainframe environments. Automated tape libraries eliminated most if not all of the concerns that disk-only vendors tout as the problem with manual tape. I began my IBM career in a product now called DFSMShsm which specifically moved data from disk to tape when it no longer needed the service level of disk. IBM had been delivering ILM offerings since the 1970s, so while StorageTek can't claim inventing the concept, we give them credit for giving it a catchy phrase.
EMC then started using the phrase four years ago in its marketing to sell its disk systems, including slower less-expensive SATA disk. The ILM concept helped EMC provide context for the many acquisitions of smaller companies that filled gaps in the EMC portfolio. Question: Why did EMC acquire company X? Answer: To be more like IBM and broaden its ILM solution portfolio.
Information Lifecycle Management is comprised of the policies, processes, practices, and tools used to align the business value of information with the most appropriate and cost-effective IT infrastructure from the time information is conceived through its final disposition. Information is aligned with business requirements through management policies and service levels associated with applications, metadata, and data.
Whitepapers and other materials you might read from IBM, EMC, Sun/StorageTek, HP and others will all pretty much tell you what ILM is, consistent with this SNIA definition, why it is good for most companies, and how it is not just about buying disk and tape hardware. Software, services, and some discipline are needed to complete the implementation.
While the SNIA definition provides a vendor-independent platform to start the conversation, it can be intimidating to some, and is difficult to memorize word for word. When I am briefing clients, especially high-level executives, they often ask for ILM to be explained in simpler terms. My simplified version is:
Information starts its life captured or entered as an "asset" ...
This asset can sometimes provide competitive advantage, or is just something needed for daily operations. Digital assets vary in business value in much the same way that other physical assets for a company might. Some assets might be declared a "necessary evil" like laptops, but are tracked to the n'th degree to ensure they are not lost, stolen or taken out of the building. Other assets are declared "strategically important" but are readily discarded, or at least allowed to walk out the door each evening.
... then transitions into becoming just an "expense" ...
After 30-60 days, many of the pieces of information are kept around for a variety of reasons. However, if it isn't needed for daily operations, you might save some money moving it to less expensive storage media, through less expensive SAN or LAN network gear, via less expensive host application servers. If you don't need instant access, then perhaps the 30 seconds or so to fetch it from much-less-expensive tape in an automated tape library could be a reasonable business trade-off.
... and ends up as a "liability".
Keeping data around too long can be a problem. In some cases, incriminating, and in other cases, just having too much data clogs up your datacenter arteries. If not handled properly within privacy guidelines, data potentially exposes sensitive personal or financial information of your employees and clients. Most regulations require certain data to be kept, in a manner protected against unexpected loss, unethical tampering, and unauthorized access, for a specific amount of time, after which it can be destroyed, deleted or shredded.
So ILM is not just a good idea to save a company money, it can keep them out of the court room, as well as help save the environment and not kill so many trees. Now that 100 percent of iPhone customers have internet access, and a good number of non-iPhone customers have internet access at home, work, school or public library, it makes sense for companies to ask people to "opt-in" to getting their statements on paper, rather than forcing them to "opt-out".
Eventually hardware fails, ... ... eventually software works.
For a solid backup product, consider using IBM Tivoli Storage Manager. I use it to protect all my data on my laptop. And when switching recently from my old Thinkpad T30 to my new Thinkpad T60, I used it to transfer my data over as well.
In addition to creating the Dilbert cartoon, Scott Adams has a blog, which sometimes is quite serious, and other times quite funny. The anticipated 30x cost of "Flash Drives" for Enterprise disk systems reminded me of one of Scott's articles from November 2007 titled [Urge to Simplify]. Here's an excerpt:
Now the casinos have people trained, like chickens hoping for pellets, to take money from one machine (the ATM), carry it across a room and deposit in another machine (the slot machine). I believe B.F. Skinner would agree with me that there is room for even more efficiency: The ATM and the slot machine need to be the same machine.
The casinos lose a lot of money waiting for the portly gamblers with respiratory issues to waddle from the ATM to the slot machines. A better solution would be for the losers, euphemistically called “players,” to stand at the ATM and watch their funds be transferred to the hotel, while hoping to somehow “win.” The ATM could be redesigned to blink and make exciting sounds, so it seems less like robbery.
I’m sure this is in the five-year plan. Longer term, people will be trained to set up automatic transfers from their banks to the casinos. People will just fly to Vegas, wander around on the tarmac while the casino drains their bank accounts, then board the plane and fly home. The airlines are already in on this concept, and stopped feeding you sandwiches a while ago.
Perhaps EMC can redesign its DMX-4 to "blink and make exciting sounds" as well. The Flash Drives were designed for the financial services industry, so those disk systems could be directly connected to make transfers between the appropriate bank accounts.
Back then, IBM allowed its employees the option to run Windows, Linux or Mac OS. Since then, dual-boot Windows/Linux configurations, like the one I had back then on my Thinkpad T410, proved too difficult for our help desk, so these are no longer allowed.
In 2015, I received my new Thinkpad T440p to replace the old T410 model. For the 20 to 25 percent of the IBM employee population that manage, support and connect directly to client networks, IBM required Linux encrypted with LUKS, using Windows as KVM guests when needed for specific applications. This is more secure than running Windows natively, preventing viruses and other malware from spreading between IBM and its clients.
As I am occasionally asked to help out our colleagues in lab services or with critical situations, I decided to configure my laptop to match, just in case. RHEL is rock solid, and running Windows as KVM guests could not be easier. Not having to worry about Windows viruses while travelling on business is a huge benefit as well.
Upgrading from RHEL 6.1 all the way up to RHEL 6.9 was simply a push of a button: all the new applications and the kernel get installed, followed by a quick reboot. The migration from RHEL 6.9 to RHEL 7.4, however, was a major undertaking.
In past migrations, I moved from a working laptop to a second laptop, allowing me to remain fully productive on the old machine until I was ready to cut over. In this case, I am performing a fresh install on my existing machine. To avoid any problems or delays, I wrote myself an 8-page, 17-step migration plan to capture all the tasks I needed to do to minimize the impact on my productivity.
(Of course, IBM has a help desk. You hand over your laptop; they back up the home directory, wipe your system clean, do a fresh install, restore your home directory, and return the laptop to you 3-5 days later, leaving the rest of the tasks up to you. Basically, this would merely replace the first three of my 17 steps below. I did not feel like burdening our help desk, nor waiting 3-5 days without a laptop!)
Here were my steps:
Backup my existing system
In addition to backing up all my individual files to the Cloud, I also used [Clonezilla] to create a full image backup of my 500GB drive to an external USB drive.
Not all data is in file form. I also exported my browser bookmarks, so that I could import them back later. I also ran an "rpm -qa" to get a list of my existing applications installed.
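That package list earns its keep after the fresh install: diffing the old list against the new one shows exactly what needs reinstalling. A sketch of the approach (the filenames and sample package names here are illustrative):

```shell
# On each system you would capture the real list with:
#   rpm -qa --qf '%{NAME}\n' | sort -u > packages.txt
# Two small sample lists stand in for the real systems here:
printf 'gimp\ninkscape\ntsm-client\n' > rhel69-packages.txt
printf 'gimp\n' > rhel74-packages.txt

# Packages on the old RHEL 6.9 build that are missing from the new one
# (comm -23 prints lines unique to the first sorted file):
comm -23 rhel69-packages.txt rhel74-packages.txt
```

With the real lists in place, the output becomes your reinstall checklist.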
Initially, I thought to format the 4TB external drive in UDF format, which is readable by Windows, Linux and Mac OS and supports files that are larger than 4GB in size.
Not knowing whether I should use [ExFAT] or Universal Disk Format [UDF], I split the 4TB into two 1.9TB partitions, and formatted one as ExFAT and the other as UDF. Both formats support files greater than 4GB in size, which I have, but I discovered that on the older RHEL 6.9 release, based on a 2.6 Linux kernel, you can only write 68GB of data to a UDF partition. This is fixed in later kernels, but that doesn't help with my existing RHEL 6.9 release.
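If you are unsure whether you even need one of these formats, a quick scan will confirm whether any files exceed the old FAT32 single-file limit of just under 4 GiB, which is what rules out the simpler FAT32 option. A hedged sketch (adjust the starting directory to taste):

```python
import os

FAT32_LIMIT = 4 * 1024**3 - 1   # FAT32 caps a single file just under 4 GiB

def oversized(root):
    """Yield (path, size) for files too large for a FAT32-formatted drive."""
    for dirpath, _dirs, files in os.walk(root):
        for name in files:
            path = os.path.join(dirpath, name)
            try:
                size = os.path.getsize(path)
            except OSError:
                continue          # skip files that vanish or are unreadable
            if size > FAT32_LIMIT:
                yield path, size

for path, size in oversized(os.path.expanduser("~")):
    print("%5.1f GiB  %s" % (size / 1024**3, path))
```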
Fortunately, the latest Clonezilla LiveCD chops up the cloned images into files small enough to write to a variety of formats, and has a newer kernel that allows writing the full capacity of a UDF partition.
In a crisis, I can restore back to RHEL 6.9 within 2 hours. This was my "relief valve" if I encountered any major delays and had to go travel for business on short notice.
Fresh install of RHEL 7.4 Linux
This completely wipes my drive clean and installs two partitions: a tiny "/boot" partition needed to boot the system, and the remaining drive capacity as a large LUKS-encrypted LVM, internally partitioned between "/" and "swap" logical volumes.
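In kickstart terms, such a layout looks roughly like the following (an illustrative sketch; the partition sizes and volume-group names are my assumptions, not the actual IBM build):

```
# Illustrative RHEL 7 kickstart storage section
zerombr
clearpart --all --initlabel
part /boot --fstype=xfs --size=1024
part pv.01 --size=1 --grow --encrypted     # LUKS; prompts for a passphrase
volgroup vg_system pv.01
logvol swap --vgname=vg_system --name=lv_swap --size=8192
logvol /    --vgname=vg_system --name=lv_root --fstype=xfs --size=1 --grow
```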
Copy all of my files back
The challenge is that some files might clobber some of the configurations of the new applications. For this reason, I created /home/tpearson/RHEL69 and put everything there, so that I can move them to the correct locations as appropriate.
Copying all the files back in this manner eliminated having to be tethered to the external USB drive.
Setup LAN connectivity
I have to connect to IBM and guest systems, so this configuration is important. This includes EAP, TLS and VPN configurations. I thought I could just re-use the certificates I have for RHEL 6.9, but no, I had to create and register fresh new certificates for RHEL 7.4 release.
Configure Cinnamon Desktop
RHEL 7.4 uses Gnome 3 by default, which is quite different from the Gnome 2 used in the RHEL 6.9 release. I don't care for it, so I configured the [Cinnamon desktop] instead. Many people who use Linux Mint or Ubuntu may be familiar with it, and for those switching from Windows or RHEL 6.9 Linux, Cinnamon has a familiar "Start" button in the lower left corner.
By default, our RHEL 7.4 image comes with Firefox and Chrome browsers, so all I needed to do was import the bookmarks that I had exported in step 1 above.
Configure KVM guests
I was able to bring over my Windows7 Kernel-based Virtual Machine [KVM] guest from RHEL 6.9 and run it without problems, but it was bloated, now consuming nearly 60GB of space. Therefore, I decided to build fresh Windows7 and Windows10 guest images instead.
Like with Linux, I wrote down what applications I had installed on Windows, and used that to configure the Windows guests. Nearly everything I do runs natively on Linux, but I do use Microsoft Office (Powerpoint, Excel, Word) and a nice tool called [CutePDF] that allows me to print to PDF instead of an actual printer.
Windows10 comes with the "Print-to-PDF" feature built-in, so no need for CutePDF on that one.
Configure IBM Notes, Sametime and Gnote
IBM is a heavy user of [IBM Notes] (formerly called Lotus Notes), not just for email but also for its document management and database capabilities. Sametime is our "Instant Messenger" app. [Gnote] is a Linux-based tool for storing short notes; I use it for all of my email templates for quick copy-and-paste responses.
IBM recently made using printers super easy. Print to the common "Cloud printer", and then pick up your print-outs from any printer in the building, any IBM building, worldwide. I could print in Tucson, for example, and pick up my print-outs when I am in the IBM buildings in Austin, Texas!
I also had to configure my printer at home, for those days where I need to print a boarding pass or quick document.
Configure File Sharing
IBM has deployed IBM [Spectrum Scale] internally for employees to share files across the company called "Global Storage Architecture" (GSA). Configuration for me just meant having to find my local cell (tucgsa) for Tucson, and entering my credentials.
Install Docker and DSX Desktop
[DSX Desktop] is the local laptop version of IBM's cloud-based [Data Science Experience], allowing me to perform Hadoop and Spark analytics for the various projects I work on. It runs as a Docker container, so I had to configure Docker as well.
Install Multimedia Codecs
One of the big drawbacks of Linux, compared to Windows or Mac OS, is the lack of multimedia support out of the box. Linux distros, like Red Hat, don't ship with these codecs pre-installed, leaving this as an exercise for the end user.
IBM produces a lot of audio and video files, including replays of conference calls and webinars for internal training. I keep a collection of different audio and video files to ensure that I have everything configured correctly for proper playback.
Install GIMP and other software
The GNU Image Manipulation Program [GIMP] is a great tool for quick editing of graphics. Another tool, Inkscape is designed for vector graphics.
Configure file-level backup
In addition to doing full-volume image backups with Clonezilla, I back up individual files, which are sent over the IBM internal network to a central server. All I needed to do was point the client to my previous backup set and create the appropriate include/exclude list.
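For the Tivoli Storage Manager backup-archive client, that list lives in an include/exclude options file referenced from dsm.sys via the INCLEXCL option. The entries below are purely illustrative examples of the syntax, not my actual list; TSM evaluates the statements for each file from the bottom up, so the most specific rules go last:

```
* Illustrative TSM include/exclude list -- paths are examples only
exclude.dir /tmp
exclude.dir /var/cache
exclude     /home/.../.cache/.../*
include     /etc/.../*
include     /home/.../*
```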
Many employees might just back up their home directory, but I customize a lot of the Linux configuration, so I like to backup a few more directories. Here is what I choose to back up:
Configure Grub2 boot configuration
RHEL 7.4 supports [Grub2], which allows you to boot ISO files directly. I like to add Clonezilla and [SystemRescueCD] as boot options. These were simple enough to add: follow the instructions, copy the files to the /boot directory, and create a menuentry for each.
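For reference, a menuentry for loopback-booting an ISO looks something like this (a sketch only; the kernel and initrd paths inside the ISO vary by Clonezilla release, so check the ISO's own /live directory first):

```
menuentry "Clonezilla Live (ISO)" {
    set isofile="/clonezilla-live.iso"     # ISO copied into /boot
    loopback loop ($root)$isofile
    linux  (loop)/live/vmlinuz boot=live union=overlay components findiso=$isofile
    initrd (loop)/live/initrd.img
}
```

After adding the entry to /etc/grub.d/40_custom, regenerate the menu with "grub2-mkconfig -o /boot/grub2/grub.cfg".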
Validate final configuration
After eight days, I have finally completed all these steps, and am able to validate that everything is working correctly. I did some sample workflows, such as:
Verify that I can launch Windows KVM guest, edit Powerpoint presentation, and print to PDF file.
Verify that I can open email, launching embedded URL links, and copy-and-paste templates from Gnote
Launch GIMP, verify that I can edit graphics, and import the results in a Powerpoint presentation.
Download and play a Webinar replay MP4 file
Fresh Clone of full volume image
Using the Clonezilla entry that I added to the Grub2 boot menu, I am able to back up my full 500GB drive. At this point, I will keep the RHEL 6.9 image for a few weeks as an emergency backup, but so far, everything seems to be working just fine.
This took longer than I expected, but I am happy with the final result. Red Hat is rock-solid, and the new RHEL 7.4 allows me to run DSX Desktop, Windows 10, and some other applications that were not available on our previous RHEL 6.9 build.
The Harvard Extension School is running a course focused on virtual law with a Second Life component. Rebecca Nesson (’Rebecca Berkman’ in Second Life) is teaching the class. The lectures, which look fascinating, are available to at-large participants on Berkman Island [SLURL: http://slurl.com/secondlife/Berkman/113/70/24].
You can attend the lectures in Second Life on Monday evenings from 8:00-10:00pm EST (5:00-7:00pm SL time). Videos of past lectures are linked on the course’s web site, where you can also find the syllabus, a wiki, and more.
The US version of The Office (which does an excellent job of being almost as funny as the BBC version) is no stranger to life online. It's fun to spot Kevin, Meredith, Creed, Roy, and Pam all on MySpace, and Dwight has a blog. This week they dipped into Second Life, the very same week as CSI: NY; it's all getting very mainstream.
Of course, the Office’s treatment of SL was as tongue-in-cheek as you’d expect…
Dwight:“Second Life is not a game. It is a Multi User Virtual Environment. It doesn’t have points or scores or winners or losers.”
Jim:“Oh, it has losers.”
Steve Nelson at Clear Ink, the team behind bringing the office into SL for the episode, has [written about the project] and carefully lists the locations and clothing used.
I watched this episode and loved how they were able to blend it in seamlessly, without it looking out of place or like an awkward reference.
Cisco Systems Inc. has been staging virtual meetings between developers and channel partners in Second Life for more than a year, but this invitation was a first for me. So a presentation announcing the winners of a networking technology innovation contest -- inside a Second Life simulation -- seemed like the place to be.
I'm probably an SL noob (for newbie) by most standards, but I've spent enough time there to know most of the ways to move and how to search out islands and events.
In all, I would say the Cisco event sparked my interest in the SL virtual meeting format, but my attention was focused more on making things in SL work smoothly than on the material presented.
I've had some interesting conversations with event coordinators looking for advice on setting up events in Second Life, which I take as a good sign that this is still gaining momentum.
We have some exciting webcasts in the upcoming weeks!
Smarter Enterprises Need Smarter Storage
In this [InformationWeek webcast], my IBM colleague Allen Marin will present a brief overview of IBM Smarter Storage for the enterprise with a focus on new high-end disk and Virtual Tape solutions.
Allen will take you through the recent enhancements [announced earlier this month], highlighting how the new capabilities can address the requirements of your mission-critical applications, as well as your evolving business analytics, and cloud initiatives.
Date: Wednesday, October 24, 2012 Time: 10:00 AM PDT / 10:00 AM Arizona / 1:00 PM EDT Duration: 60 Minutes
[Register now!] All registrants will get the independent Clipper Group Report - "When Infrastructure Really Matters - A Focus on High-End Storage" - free!
Smarter Storage for Midsize Businesses
Businesses of all sizes are getting buried in the avalanche of data. Data is coming in at faster rates and in greater volumes. The value of data is increasing. Old processes and technologies aren't working. Midsize businesses have the same issues managing the rapid growth of data as large enterprises, but they don't have the same size budget or staff. They need advanced capabilities at an affordable price that are easy to implement.
Speakers for this webcast include Brian Truskowski, General Manager, IBM System Storage and Networking; Ed Walsh, Vice President of Market and Strategy, IBM System Storage; and Tommy Rickard, IBM Director, UK Storage Development.
Date: Tuesday, November 6, 2012 Time: 8:00 AM PST / 9:00 AM Arizona / 11:00 AM EST Duration: 60 Minutes
[Register now!] Learn how new IBM Smarter Storage solutions can help midsize businesses tame the explosion of information and their IT budgets.
I hope you can find time in your busy schedule to participate in one or both of these webcasts.
Are you looking for new storage for 2014? Time to replace that old gear on your IT floor?
The decisions you make about your IT infrastructure affect everything -- from database and business analytics to cloud and virtualization. That's why it's more important than ever to choose wisely.
If you are currently running on storage from HP, HDS, EMC or one of IBM's many other competitors, you might want to take a fresh new look at IBM storage which...
performs faster with greater throughput and lower latency,...
and is easier to use, ...
AND costs less over the next three to five years!
Next week, on January 16, senior IBM executives will share news about breakthrough technologies, featuring Intel® processors, that enhance Smarter Computing servers and storage.
(This webcast will be available worldwide. I, myself, will be in Winnipeg, Canada, freezing my [tuque] off!)
In this webcast, you will learn how to improve decision support and data processing for your mission-critical applications, drive higher performance on analytics and increase agility and flexibility through scalable solutions.
I can't believe we got snow this week on Valentine's Day! It didn't last long on the ground here in Tucson, but there are still some white caps in our mountains. For those of you "trapped" by snow, or too much work, here are two upcoming events you can attend from your desk and computer!
IBM Oracle Virtual University 2012
Please join us for the fourth annual IBM Oracle Virtual University that runs "live" for 24 hours, then continues 'on-demand' replay through the remainder of 2012.
From: Tuesday, February 21, 6:00 am US Eastern Time EST (6:00 pm China Time)
To: Wednesday, February 22, 6:00 am EST
This is a great educational event for IBM and Business Partner sales & technical teams who sell IBM Oracle solutions or have Oracle solutions installed in their account. It is for anyone who is new to or interested in the IBM Oracle Alliance as well as experienced sales & technical people who need all the latest on the IBM/Oracle co-opetition relationship for 2012 and beyond.
This VIRTUAL on-line event will cover key topics around the IBM Oracle Alliance. I am one of the speakers and will cover IBM System Storage offerings as they relate to Oracle software.
This is a chance for sellers to hear an update on what's new, unique and available to sell in 2012. The goal of this session is to help enable you to sell more IBM products and services with Oracle solutions in 2012! Learn where to go for help to better understand these solutions, close more deals and reach your targets.
Even through economic challenges, storage requirements have continued to grow along with the information explosion.
Join us for this informative webcast and hear from Jon Toigo, CEO and Managing Principal of Toigo Partners, as he discusses six cutting-edge storage technologies that are ready for prime time and can help transform your data center.
Date: Tuesday, February 28
Time: 1:00 pm EST, 12:00 pm CST, 10:00 am PST
The featured speaker is fellow blogger Jon Toigo, CEO and Managing Principal, Toigo Partners, an outspoken technology consumer advocate and vendor watchdog whose articles, columns, and blog posts on [DrunkenData.com] are enjoyed by over a million readers per month.