Safe Harbor Statement: The information on IBM products is intended to outline IBM's general product direction and it should not be relied on in making a purchasing decision. The information on the new products is for informational purposes only and may not be incorporated into any contract. The information on IBM products is not a commitment, promise, or legal obligation to deliver any material, code, or functionality. The development, release, and timing of any features or functionality described for IBM products remains at IBM's sole discretion.
Tony Pearson is an active participant in local, regional, and industry-specific interests, and does not receive any special payments to mention them on this blog.
Tony Pearson receives part of the revenue proceeds from sales of books he has authored listed in the side panel.
Tony Pearson is a Master Inventor and Senior IT Specialist for the IBM System Storage product line at the
IBM Executive Briefing Center in Tucson, Arizona, and a featured contributor
to IBM's developerWorks. In 2011, Tony celebrated his 25th anniversary with IBM Storage on the same day as IBM's Centennial. He is
author of the Inside System Storage series of books. This blog is for the open exchange of ideas relating to storage and storage networking hardware, software and services. You can also follow him on Twitter @az990tony.
(Short URL for this blog: ibm.co/Pearson)
Earlier this year, IBM launched its [New Enterprise Data Center vision]. The average data center was built 10-15 years ago, at a time when the World Wide Web was still in its infancy, some companies were deploying their first storage area network (SAN) and email system, and if you asked anyone what "Google" was, they might tell you it was ["a one followed by a hundred zeros"]!
Full disclosure: Google, the company, just celebrated its [10th anniversary] yesterday, and IBM has partnered with Google on a variety of exciting projects. I am employed by IBM, and own stock in both companies.
In just the last five years, we saw a rapid growth in information, fueled by Web 2.0 social media, email, mobile hand-held devices, and the convergence of digital technologies that blurs the lines between communications, entertainment and business information. This explosion in information is not just "more of the same", but rather a dramatic shift from predominantly databases for online transaction processing to mostly unstructured content. IT departments are no longer just the "back office" recording financial transactions for accountants, but now also take on a more active "front office" role. For a growing number of industries, information technology plays a pivotal role in generating revenue, making smarter business decisions, and providing better customer service.
IBM felt a new IT model was needed to address this changing landscape, so IBM's New Enterprise Data Center vision has these five key strategic initiatives:
Highly virtualized resources
Business-driven Service Management
Green, Efficient, Optimized facilities
In February, IBM announced new products and features to support the first two initiatives, including the highly virtualized capability of the IBM z10 EC mainframe, and related business resiliency features of the [IBM System Storage DS8000 Turbo] disk system.
In May, IBM launched its Service Management strategic initiative at the Pulse 2008 conference. I was there in Orlando, Florida at the Swan and Dolphin resort to present to clients. You can read my three posts: [Day 1; Day 2 Main Tent; Day 2 Breakout sessions].
In June, IBM launched its fourth strategic initiative "Green, Efficient and Optimized Facilities" with [Project BigGreen 2.0], which included the Space-Efficient Volume (SEV) and Space-Efficient FlashCopy (SEFC) capabilities of the IBM System Storage SAN Volume Controller (SVC) 4.3 release. Fellow blogger and IBM master inventor Barry Whyte (BarryW) has three posts on his blog about this: [SVC 4.3.0 Overview; SEV and SEFC detail; Virtual Disk Mirroring and More].
Some have speculated that the IBM System Storage team seemed to be on vacation the past two months, with few press releases and little or no fanfare about our July and August announcements, and without responding directly to critics and FUD in the blogosphere. It was because we were holding them all for today's launch, taking our cue from a famous perfume commercial:
"If you want to capture someone's attention -- whisper."
My team and I were actually quite busy at the [IBM Tucson Executive Briefing Center]. In between doing our regular job talking to excited prospects and clients, we trained sales reps and IBM Business Partners, wrote certification exams, and updated marketing collateral. Fortunately, competitors stopped promoting their own products to discuss and demonstrate why they are so scared of what IBM is planning. The fear was well justified. Even a few journalists helped raise the word-of-mouth buzz and excitement level. A big kiss to Beth Pariseau for her article in [SearchStorage.com]!
(Last week we broke radio silence to promote our technology demonstration of 1 million IOPS using Solid State Disk, just to get the huge IBM marketing machine oiled up and ready for today.)
Today, IBM General Manager Andy Monshaw launched the fifth strategic initiative, [IBM Information Infrastructure], at the [IBM Storage and Storage Networking Symposium] in Montpellier, France. Montpellier is one of the six locations of our New Enterprise Data Center Leadership Centers launched today. The other five are Poughkeepsie, Gaithersburg, Dallas, Mainz and Boeblingen, with more planned for 2009.
Although IBM has been using the term "information infrastructure" for more than 30 years, it might be helpful to define it for you readers:
“An information infrastructure comprises the storage, networks, software, and servers integrated and optimized to securely deliver information to the business.”
In other words, it's all the "stuff" that delivers information from the magnetic surface recording of the disk or tape media to the eyes and ears of the end user. Everybody has an information infrastructure already; some are just more effective than others. For those of you not happy with yours, IBM has the products, services and expertise to help with your data center transformation.
IBM wants to help its clients deliver the right information to the right people at the right time, to get the most benefit from information, while controlling costs and mitigating risks. There might be more than a dozen ways to address the challenges involved, but IBM's Information Infrastructure strategic initiative focuses on four key solution areas:
Last, but not least, I would like to welcome to the blogosphere IBM's newest blogger, Moshe Yanai, formerly the father of the EMC Symmetrix and now leading the IBM XIV team. Already from his first post on his new [ThinkStorage blog], I can tell he is not going to pull any punches either.
Two weeks ago, I mentioned in my post [Pulse 2008 - Day 2 Breakout sessions] that Henk de Ruiter from ABN Amro bank presented his success story implementing Information Lifecycle Management (ILM) across his various data centers. I am no stranger to ABN Amro, having helped "ABN" and "Amro" banks merge their mainframe data in 1991. Henk has agreed to let me share with my readers more of this success story here on my blog:
Back in December 2005, Henk and his colleagues had come to visit the IBM Tucson Executive Briefing Center (EBC) to hear about IBM products and services. At the time, I was part of our "STG Lab Services" team that performed ILM assessments at client locations. I explained to ABN Amro that the ILM methodology does not require an all-IBM solution, and that ILM could even provide benefits with their current mix of storage, software and service providers. The ABN Amro team liked what I had to say, and my team was commissioned to perform ILM assessments at three of their data centers:
Sao Paulo (Brazil)
Chicago, IL (USA)
Each data center had its own management, its own decision making, and its own set of issues, so we structured each ILM assessment independently. When we presented our results, we showed what each data center could do better with their existing mixed bag of storage, software and service providers, and also showed how much better their life would be with IBM storage, software and services. They agreed to give IBM a chance to prove it, and so a new "Global Storage Study" was launched to take the recommendations from our three ILM studies, and flesh out the details to make a globally-integrated enterprise work for them. Once completed, it was renamed the "Global Storage Solution" (GSS).
Henk summarized the above with "I am glad to see Tony Pearson in the audience, who was instrumental in making this all happen." As with many client testimonials, he presented a few charts on who ABN Amro is today: the 12th largest bank worldwide, 8th largest in Europe. They operate in 53 countries and manage over a trillion euros in assets.
They have over 20 data centers, with about 7 PB of disk, and over 20 PB of tape, both growing at 50 to 70 percent CAGR. About 2/3 of their operations are now outsourced to IBM Global Services; the remaining 1/3 is non-IBM equipment managed by a different service provider.
ABN Amro deployed IBM TotalStorage Productivity Center, various IBM System Storage DS family disk systems, SAN Volume Controller (SVC), Tivoli Storage Manager (TSM), Tivoli Provisioning Manager (TPM), and several other products. Armed with these products, they performed the following:
Clean up. IBM uses the term "rationalization" for the assignment of business value, to avoid confusion with the term "classification", which many in IT relate to identifying ownership and read/write authorization levels. Often, in the initial phases of an ILM deployment, a portion of the data is determined to be eligible for clean up: either moved to a lower-cost tier or deleted immediately. ABN Amro and IBM set a goal to identify at least 20 percent of their data for clean up.
New tiers. Rather than traditional "storage tiers", which are often just Tier 1 for Fibre Channel disk and Tier 2 for SATA disk, ABN Amro and IBM came up with seven "information infrastructure tiers" that incorporate service levels, availability and protection status. They are:
High-performance, highly-available disk with remote replication
High-performance, highly-available disk (no remote replication)
Mid-performance, high-capacity disk with remote replication
Mid-performance, high-capacity disk (no remote replication)
Non-erasable, non-rewriteable (NENR) storage employing a blended disk and tape solution
Enterprise virtual tape library with remote replication and back-end physical tape
Mid-performance physical tape
These tiers are applied equally across their mainframe and distributed platforms. All of the tiers are priced per "primary GB", so any additional capacity required for replication or point-in-time copies, local or remote, is folded into the base price. ABN Amro felt a mission-critical application on Windows or UNIX deserves the same Tier 1 service level as a mission-critical mainframe application. Exactly!
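To make the "primary GB" pricing idea concrete, here is a minimal sketch of how copy capacity can be folded into a single rate on primary capacity. ABN Amro's actual rates and copy ratios are not public; every number below is purely illustrative.

```python
# Hypothetical illustration of "primary GB" pricing: capacity consumed by
# remote replicas and point-in-time copies is folded into the per-GB rate
# charged on primary capacity, so the client is billed on primary GB only.

def price_per_primary_gb(raw_cost_per_gb, primary_gb, replica_gb, snapshot_gb):
    """Spread the cost of all provisioned capacity over primary GB only."""
    total_gb = primary_gb + replica_gb + snapshot_gb
    return raw_cost_per_gb * total_gb / primary_gb

# Tier 1 style: full remote replica plus a 20% snapshot reserve
tier1_rate = price_per_primary_gb(0.50, primary_gb=1000, replica_gb=1000, snapshot_gb=200)
# Tier 2 style: no remote replication, same snapshot reserve
tier2_rate = price_per_primary_gb(0.50, primary_gb=1000, replica_gb=0, snapshot_gb=200)

print(round(tier1_rate, 2))  # 1.1
print(round(tier2_rate, 2))  # 0.6
```

The point of the model is that the application owner sees one number per tier, while the replication overhead stays the storage team's problem.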
Deployed storage virtualization for disk and tape. This involved the SAN Volume Controller and IBM TS7000 series library.
Implemented workflow automation. The key product here is IBM Tivoli Provisioning Manager.
Started an investigation of HSM on distributed platforms. This would be policy-based space management to migrate less frequently accessed data to the TSM pool for Windows or UNIX data.
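The clean-up and space-management activities above boil down to a policy engine that sorts data by last-access age. The sketch below is hypothetical: the thresholds, the "scratch" flag, and the file records are invented for illustration and are not ABN Amro's actual policy.

```python
import time

DAY = 86400  # seconds per day

def classify(files, now=None):
    """Assign each file an action based on last-access age (hypothetical policy):
    delete expired scratch data, migrate cold data to a lower-cost tier,
    and leave recently used data in place."""
    if now is None:
        now = time.time()
    actions = {"delete": [], "migrate": [], "keep": []}
    for f in files:
        age_days = (now - f["atime"]) / DAY
        if f.get("scratch") and age_days > 90:
            actions["delete"].append(f["path"])   # eligible for immediate clean up
        elif age_days > 30:
            actions["migrate"].append(f["path"])  # e.g. move to a SATA or tape tier
        else:
            actions["keep"].append(f["path"])
    return actions

now = 1_000_000_000
files = [
    {"path": "/data/report.db", "atime": now - 5 * DAY},
    {"path": "/data/old.csv",   "atime": now - 60 * DAY},
    {"path": "/tmp/build.log",  "atime": now - 120 * DAY, "scratch": True},
]
print(classify(files, now))
```

Real HSM products apply rules like these continuously and transparently, leaving a stub behind so applications can recall migrated data on demand.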
While the deployment is not yet complete, ABN Amro feels they have already realized business value:
Reduced cost by identifying data that should be stored on lower tiers
Simplified management, consolidated across all operating systems (mainframe, UNIX, Windows)
Increased utilization of existing storage resources
Reduced manual effort through policy-based automation, which can lead to fewer human errors and faster adaptability to new business opportunities
Standardized backup and other operational procedures
Henk and the rest of ABN Amro are quite pleased with the progress so far, although the recent takeover of ABN AMRO by a consortium of banks means that the model is so far implemented only in Europe. Further rollout depends on the storage strategy of the new owners. Nonetheless, I am glad that I was able to work with Henk, Jason, Barbara, Steve, Tom, Dennis, Craig and others to be part of this from the beginning and to see it roll out successfully over the years.
IBM is hosting a webcast about storage for SAP environments. Learn how integrated IBM infrastructure solutions, customized specifically for your SAP environments, can help lower your business costs, increase productivity in SAP development and test tasks, and improve resource utilization. This will include discussion of archive solutions with WebDAV, ArchiveLink and DR550; IBM Business Intelligence (BI) Accelerator; IBM support for SAP [Adaptive Computing]; and performance benchmark results. The session is intended for SAP and storage administrators, IT directors and managers.
Here are the details:
Date: Wednesday, June 18, 2008
Time: 11:00am EDT (8:00am for those of us in Arizona or California)
Continuing my summary of Pulse 2008, the premiere service management conference focusing on IBM Tivoli solutions, I attended and presented breakout sessions on Monday afternoon.
Tivoli Storage "State-of-the-Subgroup" update
Kelly Beavers, IBM director of Tivoli Storage, presented the first breakout for all of the Tivoli Storage subgroup. Tivoli has several subgroups, but Tivoli Storage leads all the others in revenues and profits. Tivoli Storage has the top-performing business partner channel of any subgroup in IBM's Software Group division. IBM is the world's #1 storage vendor (hardware, software and services), so this came as no surprise to most of the audience.
Looking at just the storage software segment, it is estimated that customers will spend $3.5 billion US dollars more in the year 2011 than they did last year in 2007. IBM is #2 or #3 in each of the four major categories: Data Protection, Replication, Infrastructure Management, and Resource Management. In each category, IBM is growing market share, often taking away share from the established leaders.
There was a lot of excitement over the FilesX acquisition. I am still trying to learn more about this, but what I have gathered so far is that it can:
Like turning a "knob", adjust the level of backup protection from traditional discrete scheduled backups, to more frequent snapshots, to continuous data protection (CDP). In the past, you often used separate products or features to do these three.
Perform "instantaneous restore" by doing a virtual mount of the backup copy. This gives the appearance that the restore is complete.
This year marks the 15th anniversary of IBM Tivoli Storage Manager (TSM), with over 20,000 customers. Also, this year marks the 6th year for IBM SAN Volume Controller, having sold over 12,000 SVC engines to over 4,000 customers.
Data Protection Strategies
Greg Tevis, IBM software architect for Tivoli Technical Strategy, and I presented this overview of data protection. We covered three key areas:
Protecting against unethical tampering with Non-erasable, Non-rewriteable (NENR) storage solutions
Protecting against unauthorized access with encryption on disk and tape
Protecting against unexpected loss or corruption with the seven "Business Continuity" tiers
There was so much interest in the first two topics that we only had about 9 minutes left to cover the third! Fortunately, Business Continuity will be covered in more detail throughout the week.
Henk de Ruiter from ABN Amro bank presented his success story implementing Information Lifecycle Management (ILM) across his various data centers using IBM systems, software and services.
Making your Disk Systems more Efficient and Flexible
I did not come up with the titles of these presentations. The team that did specifically chose to focus on the "business value" rather than the "products and services" being presented. In this session, Dave Merbach, IBM software architect, and I presented how SAN Volume Controller (SVC), TotalStorage Productivity Center, System Storage Productivity Center, Tivoli Provisioning Manager and Tivoli Storage Process Manager work to make your disk storage more efficient and flexible.
I attended the main tent sessions on Day 2 (Monday). The focus was on Visibility, Control and Automation.
Steve is IBM senior VP and Group Executive of the IBM Software Group, and presented some insightful statistics from the IBM Global Technology Outlook study, some recent IBM wins, and other nuggets of IT trivia:
In 2001, there were about 60 million transistors per human being. By 2010, this is estimated to increase to one billion per human
In 2005, there were about 1.3 billion RFID tags; by 2010 this is estimated to grow to over 30 billion
IBM helped the City of Stockholm, Sweden, reduce traffic congestion 20-25% using computer technology
Only about 25% of data is original; the remaining 75% is replicated
In 2007, there were approximately 281 Exabytes (EB) of data, expected to increase to 1800 EB by the year 2011
70 percent of unstructured data is user-created content, but 85 percent of this will be managed by enterprises
Only 20% of data is subject to compliance rules and standards, and about 30% subject to security applications
Human error is the primary reason for breaches, with 34% of organizations experiencing a major breach in 2006
10% of IT budget is energy costs (power and cooling), and this could rise to 50% in the next decade
30 to 60 percent of energy is wasted. During the next 5 years, people will spend as much on energy as they will on new hardware purchases.
Al Zollar is the General Manager of IBM Tivoli. He discussed the some 20 recent software acquisitions, including Encentuate and FilesX earlier this year.
"The time has come to fully industrialize operations" -- Al Zollar
What did Al mean by "industrialize"? This is the closed-loop approach of continuous improvement, including design, delivery and management.
Al used several examples from other industries:
Henry Ford used standardized parts and process automation. Assembly of an automobile went from 12 hours by master craftsmen to a new Model T rolling off the assembly line every 23 seconds.
Power generation was developed by Thomas Edison. A satellite picture showed the extent of the [Blackout of 2003 in Northeast US and Canada]. The time for the "smart grid" has arrived, making sensors and meters more intelligent. This allows non-essential IP-enabled appliances in our home or office to be turned off to reduce energy consumption.
[McCarran International Airport] integrated the management of 13,000 assets with IBM Tivoli Maximo Enterprise Asset Management (EAM) software, and was able to increase revenues through more accurate charge-back. Unlike traditional Enterprise Resource Planning (ERP) applications, EAM offers deep management of four areas: production equipment, facilities, transportation, and IT.
When compared to these other industries, management of IT is in its infancy. The expansion of [Web 2.0] and Service-Oriented Architecture [SOA] is driving this need. What people need is a "new enterprise data center" that IBM Tivoli software can help you manage across operational boundaries. IBM can integrate through open standards with management software from Cisco, Sun, Oracle, Microsoft, CA, HP, BMC Software, Alcatel-Lucent, and SAP. Together with our ecosystem of technology partners, IBM is meeting these challenges.
IBM clients have achieved return on investment from getting better control of their environment. This week there are client experience presentations from Sandia National Labs, Spirit AeroSystems, Bank of America, and BT Converged Communication Services.
Chris O'Connor used some of his staff as "actors" to show an incredible live demo of various Tivoli and Maximo products for the mythical launch of "Project Vitalize", the new online web store for a new "Aero Z bike" from the mythical VCA Bike and Motorcycle company.
Shoel Perelman played the role of "CIO". The CIO locked down all spending, and asked the IT staff to make the shift from bricks-and-mortar to web sales of this new product in 15 months. While the company and situation were mythical, all the products in the live demo are readily available. The CIO had three goals:
What do we have? Where is it? What's connected to what? Traditionally, these would be answered from lists in spreadsheets. The CIO had a goal to deploy IBM Tivoli Application Dependency Discovery Manager (TADDM), which discovered all hardware and software, with an easy-to-understand view of how each piece serves the business applications.
Each of the teams has processes, and needed them consistent and repeatable, tightly linked together. Time is often wasted on the phone coordinating IT changes. For this, the CIO had a goal to deploy Tivoli Change and Configuration Management Database (CCMDB) for "strict change control". The process dashboard is accessible to all teams, to see how all projects are progressing. There is also a compliance dashboard, which identifies all changes by role, clearly spelling out who can do what.
There is a lot of computerized machinery, manufacturing assets and robotics. The CIO set a goal to "do more with existing people", and needed to automate key processes. A sales rep wanted to add a new distributor to a key web portal; this was all done through their "service catalog". When they needed to deploy a new application, they were able to find servers with available capacity and adjust using automatic provisioning. Thanks to IBM, the IT staff no longer get paged at 3 AM, and fewer days are spent in the "war room". They now have confidence that the launch will be successful.
Ritika Gunnar played the role of "Operations manager". She highlighted five areas:
"Service viewer" dashboard with green/yellow/red indicators forall of their edge, application and datbase servers. This allowsher to get data 4-5 times faster and more accurate.
Tivoli Enterprise Portal eliminates bouncing around various products.
Tivoli Common Reporting for CPU utilization of all systems helps find excess capacity using IBM Tivoli Monitoring.
On average, 85 percent of problems are caused by IT changes to the environment. IBM can help find dependencies, so that changes in one area do not impact other areas unexpectedly
Process automation shows changes that have been completed, in progress, or overdue. She can see all steps in a task or change request. A "workflow" automates all the key steps that need to be taken.
Laura Knapp played the role of "Facilities manager". She wanted to see all processes that apply to her work using a role-based process dashboard. The advantage of using IBM is that it changes work habits, reduces overtime by 42 percent, and improves morale. The IT staff now works as a team, collaborates more, and jobs get done faster with fewer mistakes. Employees are online, accessing, monitoring and managing data quicker, in days not weeks.
IBM Tivoli Enterprise Portal (TEP) served as a common vehicle. She was able to pull up the floor plan online, displaying all of the managed assets and mapped features. With the temperature overlay from Maximo Spatial, she was able to review hot spots on the data center floor. Heat can cause servers to fail or shut down.
A power utilization chart at peak loads means they can now anticipate, predict and watch power consumption, and were able to justify replacement with newer, more energy-efficient equipment.
The CIO got back on stage, and explained the great success of the launch. They use web store usage tracking, security tools tracking all new registrations, and tracking of server and storage load. It now takes only hours, not weeks, to add new business partners and distributors. Tivoli Service Quality Assurance tools track all orders placed, processed, and shipped. Faster responsiveness is a competitive advantage. Their IT department is no longer seen as a stodgy group, but as a world-class organization.
The live demo showed how IBM can help clients with rapid decision making, speed and accuracy of change processes, and automation to take actions quickly. The result is a strong return on investment (ROI).
Liz Smith, IBM General Manager of Infrastructure Services, presented the results of an IBM survey of CEOs and CIOs asking questions like: What is the next big impact? Where are you investing? What will the new data center look like?
The five key traits they found for companies of the future:
They were hungry for change
Innovative beyond customer imagination
Disruptive by nature
Genuine, not just generous
The IT infrastructure must be secure, reliable, and flexible. Taking care of the environment is a corporate responsibility, not just a way to reduce costs.
The five entry points for IBM Service Management: Integrate, Industrialize, Discover, Monitor and Protect. IBM Service Management and compliance are critical for the Globally Integrated Enterprise, with repeatable, scalable and consistent processes that enable change through automated workflow. This reduces errors, risks and costs, and improves productivity. IBM has the talent, assets and experience to help any client get there.
Lance lives in Austin, TX, where IBM Tivoli is headquartered, so this made him a good choice as a keynote speaker. He is best known for winning seven "Tour de France" bicycle races in a row, but he instead gave an inspirational talk about how he survived cancer.
In 1996, Lance was diagnosed with cancer. Surprisingly, he said it was the greatest thing that happened to him, and gave him new perspective on his life, family and the sport of bicycling. Back then, there wasn't a WebMD, Google or other Web 2.0 social networking sites for Lance to better understand what he was going through, learn more about treatment options, or find others going through the same ordeal.
After his treatment, he was considered "damaged goods" by many of the leading European bicycle teams. So, he joined the US Postal Service team, not known for their wins, but often invited to sell TV rights to American audiences. Collaborating with his coaches and other members of his team, he revolutionized the sport, analyzed everything about the race, and built up morale. He won his first "yellow jersey" in 1999, and did so each year for a total of seven wins.
Lance formed the [Livestrong foundation] to help other cancer survivors. Nike came to him and proposed donating 5 million "rubber bracelets" colored yellow to match his seven yellow jerseys, with the name "Livestrong" embossed on them, that his foundation could then sell for one dollar apiece to raise funds. What some thought was a silly idea at first has started a movement. At the 2004 Olympics, many athletes from all nations and religious backgrounds wore these yellow bracelets to show solidarity with this cause. To date, the foundation has sold over 72 million yellow bracelets, and these have served to provide a symbol, a brand, a color identity, to his cause.
He explained that doctors have a standard speech for cancer survivors. As a patient, you can go out this doorway and never tell anyone, keeping the situation private. Or you can go out this other doorway and tell everybody your story. Lance chose the latter, and he felt it was the best decision he ever made. He wrote a book titled [It's Not About the Bike: My Journey Back to Life].
His call to action for the audience: find out what you can do to make a difference. A million non-governmental organizations [NGO] have started in the past 10 years. Don't just give cash; also give your time and passion.
My session was the first in the morning, at 8:30am, but I managed to pack the room full of people. A few looked like they had just rolled in from Brocade's special get-together in Casey's Irish Pub the night before. I presented how IBM's storage strategy for the information infrastructure fits into the greater corporate-wide themes. To liven things up, I gave out copies of my book [Inside System Storage: Volume I] to those who asked or answered the toughest questions.
Data Deduplication and IBM Tivoli Storage Manager (TSM)
IBM's Toby Marek compared and contrasted the various data deduplication technologies and products available, and how to deploy them as the repository for TSM workloads. She is a software engineer for our TSM software product, and gave a fair comparison between IBM System Storage N series Advanced Single Instance Storage (A-SIS), IBM Diligent, and other solutions out in the marketplace. If you are going to combine technologies, then it is best to dedupe first, then compress, and finally encrypt the data. She also explained the many clever ways that TSM does data reduction on the client side, greatly reducing the bandwidth traffic over the LAN as well as the disk and tape resources for storage. This includes progressive "incremental forever" backup for file selection, incremental backups for databases, and adaptive sub-file backup. Because of these data reduction techniques, you may not get as much benefit as deduplication vendors claim.
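The ordering advice (dedupe, then compress, then encrypt) is easy to demonstrate: content-hash deduplication only works while identical data is still byte-identical, and encryption deliberately makes identical inputs diverge. The sketch below is a toy illustration, not how TSM or any dedupe appliance actually works; the XOR "cipher" stands in for a real cipher with per-chunk keys or IVs.

```python
import hashlib
import os

def dedupe_ratio(chunks):
    """Fraction of chunks eliminated by content-hash deduplication."""
    unique = {hashlib.sha256(c).hexdigest() for c in chunks}
    return 1 - len(unique) / len(chunks)

def encrypt(chunk):
    """Stand-in for real encryption: XOR with a fresh random pad.
    Real ciphers with unique IVs likewise make identical inputs diverge."""
    pad = os.urandom(len(chunk))
    return bytes(a ^ b for a, b in zip(chunk, pad))

# Three identical "A" chunks and one "B" chunk, as a backup stream might contain
chunks = [b"A" * 4096, b"B" * 4096, b"A" * 4096, b"A" * 4096]

print(dedupe_ratio(chunks))                    # 0.5 -- half the chunks are duplicates
encrypted = [encrypt(c) for c in chunks]
print(dedupe_ratio(encrypted))                 # 0.0 -- no duplicates survive encryption
```

The same effect, in milder form, hits data that is compressed per-stream before deduplication, which is why dedupe goes first in the pipeline.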
The Business Value of Energy-Efficient Data Centers
Scott Barielle did a great job presenting the issues related to the green IT data center. He is part of the IBM "STG Lab Services" team that does energy efficiency studies for customers. It is not unusual for his team to find potential savings of up to 80 percent of the Watts consumed in a client's data center.
IBM has done a lot to make its products more energy efficient. For example, in the United States, most data centers are supplied three-phase 480V AC current, but this is often stepped down to 208V or 110V with power distribution units (PDUs). IBM's equipment allows for direct connection to this 480V, eliminating the step-down loss. This is available for the IBM System z mainframe, the IBM System Storage DS8000 disk system, and larger full-frame models of our POWER-based servers, and will probably be rolled out to some of our other offerings later this year. The end result saves 8 to 14 percent in energy costs.
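The 8 to 14 percent figure can be sanity-checked with back-of-envelope arithmetic. The PDU efficiency values and electricity rate below are illustrative assumptions of mine, not IBM's measurements:

```python
def annual_energy_cost(load_kw, distribution_efficiency, rate_per_kwh=0.10, hours=8760):
    """Cost of feeding a given IT load through a power distribution chain
    with the stated efficiency (1.0 = lossless)."""
    return load_kw / distribution_efficiency * hours * rate_per_kwh

# Illustrative: stepping 480V down to 208V through PDUs at ~90% efficiency
stepped = annual_energy_cost(100, distribution_efficiency=0.90)
# Direct 480V connection with near-lossless distribution
direct = annual_energy_cost(100, distribution_efficiency=0.99)

savings = 1 - direct / stepped
print(f"{savings:.0%}")  # roughly 9%, within the 8-14 percent range quoted
```

Actual savings depend on the real conversion losses in each stage, which is exactly what an energy efficiency study measures.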
Scott had some interesting statistics. Typical US data centers spend only about 9 percent of their IT budget on power and cooling costs. The majority of clients that engage IBM for an energy efficiency study are not trying to reduce their operational expenditures (OPEX), but have run out, or are close to running out, of the total kW rating of their current facility, and have been turned down by their upper management to spend the average $20 million USD needed to build a new one. The cost of electricity in the USA has risen very slowly over the past 35 years, and is more tied to fluctuations in natural gas prices than to oil prices. (A recent article in the Dallas News confirmed this: ["As electricity rates go up, natural gas' high prices, deregulation blamed"])
Cognos v8 - Delivering Operational Business Intelligence (BI) on Mainframe
Mike Biere, author of the book [Business Intelligence for the Enterprise], presented Cognos v8 and how it is being deployed for the IBM System z mainframe. Typically, customers do their BI processing on distributed systems, but 70 percent of the world's business data is on mainframes, so it makes sense to do your BI there as well. Cognos v8 runs on Linux for System z, connecting to z/OS via [HiperSockets].
There are a variety of other BI applications on the mainframe already, including DataQuant, AlphaBlox, IBI WebFocus and SAS Enterprise Business Intelligence. In addition to accessing traditional online transaction processing (OLTP) repositories like DB2, IMS and VSAM using the [IBM WebSphere Classic Federation Server], Cognos v8 can also read Lotus databases.
Business Intelligence is traditionally query, reporting and online analytics processing (OLAP) for the top 10 to 15 percent of the company, mostly executives and analysts, for activities like business planning, budgeting and forecasting. Cognos PowerPlay stores numerical data in an [OLAP cube] for faster processing. OLAP cubes are typically constructed with a batch cycle, using either "Extract, Transform, Load" [ETL] or "Change Data Capture" [CDC], which plays to the strengths of IBM System z mainframe batch processing capabilities. If you are not familiar with OLAP, Nigel Pendse has an article [What is OLAP?] for background information.
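At its core, an OLAP cube is just measures pre-aggregated over every combination of dimension values, so queries become lookups instead of scans. A toy sketch of building one from transaction rows follows; the data and dimension names are hypothetical, and real products like Cognos PowerPlay do far more (hierarchies, sparse storage, incremental refresh).

```python
from collections import defaultdict
from itertools import product

def build_cube(rows, dims, measure):
    """Pre-aggregate a measure over every combination of dimension values,
    using None as the 'all' roll-up for a dimension."""
    cube = defaultdict(float)
    for row in rows:
        # Each dimension contributes either its actual value or the roll-up
        keys = [(row[d], None) for d in dims]
        for combo in product(*keys):
            cube[combo] += row[measure]
    return dict(cube)

rows = [
    {"region": "EU", "year": 2007, "sales": 100.0},
    {"region": "EU", "year": 2008, "sales": 150.0},
    {"region": "US", "year": 2008, "sales": 200.0},
]
cube = build_cube(rows, dims=["region", "year"], measure="sales")

print(cube[("EU", None)])   # 250.0 -- EU across all years
print(cube[(None, 2008)])   # 350.0 -- 2008 across all regions
print(cube[(None, None)])   # 450.0 -- grand total
```

The batch nature of this build step, fed by ETL or CDC extracts, is what maps so naturally onto mainframe batch processing.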
Over the past five years, BI has increasingly been deployed for the rest of the company: knowledge workers tasked with day-to-day operations. This phenomenon is being called "Operational" Business Intelligence.
IBM's Glen Corneau, who is on the Advanced Technical Support team for AIX and System p, presented the IBM General Parallel File System (GPFS), which is available for AIX, Linux-x86 and Linux on POWER. Unfortunately, many of the questions were related to Scale Out File Services (SOFS), which my colleague Glenn Hechler was presenting in another room during this same time slot.
GPFS is now in its 11th release since its introduction in 1997. All of the IBM supercomputers on the [Top 500 list] use GPFS. The largest deployment of GPFS is 2241 nodes. A GPFS environment can support up to 256 file systems, and each file system can have up to 2 billion files across 2 PB of storage. GPFS supports "Direct I/O", making it a great candidate for Oracle RAC deployments. Oracle 10g automatically detects if it is using GPFS, and sets the appropriate DIO bits in the stream to take advantage of GPFS features.
Glen also covered the many new features of GPFS, such as the ability to place data on different tiers of storage, with policies to move data to lower tiers of storage, or delete it after a certain time period, all concepts we call Information Lifecycle Management. GPFS also supports access across multiple locations and offers a variety of choices for disaster recovery (DR) data replication.
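GPFS expresses such rules in its own SQL-like policy language; purely to illustrate the tiering logic (this is not GPFS syntax, and the file records and thresholds are made up), here is a sketch:

```python
# Hypothetical file records: (name, current pool, days since last access)
files = [
    ("report.pdf", "system", 5),
    ("old_log.txt", "system", 90),
    ("ancient.dat", "nearline", 400),
]

def apply_ilm_policy(file_records, migrate_after=30, delete_after=365):
    """Mimic two ILM rules: migrate cold files from the fast 'system'
    pool to a cheaper 'nearline' pool, and delete very old files."""
    actions = []
    for name, pool, age_days in file_records:
        if age_days > delete_after:
            actions.append(("DELETE", name))
        elif pool == "system" and age_days > migrate_after:
            actions.append(("MIGRATE", name, "nearline"))
    return actions

print(apply_ilm_policy(files))
# [('MIGRATE', 'old_log.txt', 'nearline'), ('DELETE', 'ancient.dat')]
```

The point is that placement and deletion become declarative rules evaluated over file metadata, rather than something an administrator does by hand.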
Perhaps the only problem with conferences like this is that it can be an overwhelming ["fire hose"] of information!
In explaining the word "archive" we came up with two separate Japanese words. One was "katazukeru", and the other was "shimau". If you are clearing the dinner plates from the table after your meal, for example, it could be done for two reasons. Both words mean "to put away", but the motivation that drives this activity changes the word usage. The first reason, katazukeru, is because the table is important: you need the table to be empty or less cluttered to use it for something else, perhaps play a card game, work on arts and crafts, or pay your bills. The second reason, shimau, is because the plates are important: perhaps they are your best tableware, used only for holidays or special occasions, and you don't want to risk having them broken. As it turns out, IBM supports both senses of the word archive. We offer "space management" when the space on the table (or disk or database) is more important, so older low-access data can be moved off to less expensive disk or tape. We also offer "data retention" where the data itself is valuable, and must be kept on WORM or non-erasable, non-rewritable storage to meet business or government regulatory compliance.
The process of archiving your data from primary disk to alternate storage media can satisfy both motivations.
IBM offers software specifically to help with this archival process. For email archive, IBM offers [IBM CommonStore] for Lotus Domino and Microsoft Exchange. For database archive, including support for various ERP and CRM applications, IBM offers [IBM Optim] from the acquisition of Princeton Softech.
The problems occur when companies, under the excuse of simplification or consolidation, feel they can just use their backups as archives. They are taking daily backups of their email repositories and databases, and keeping these for seven to ten years. But what happens when their legal e-discovery team needs to find all emails or database records related to a particular situation, employee, client or account? Good luck! Most backups are not indexed for this purpose, so storage admins are stuck restoring many different backups to temporary storage and combing through the files in hopes of finding the right data.
Backups are intended for operational recovery of data that is lost or corrupted as a result of hardware failures, application defects, or human error. Disk mirroring or remote replication might help with hardware failures, but any logical deletion or corruption of data is immediately duplicated, so it is not a complete solution. FlashCopy or Snapshot point-in-time copies are useful to go back a short time to recover from logical failures, but since they are usually on the same hardware as the original copies, they may not protect against hardware failures. And then there's tape: while many people malign tape as a backup storage choice, 71 percent of customers send backups to tape, according to a 2007 Forrester Research report.
Backups often aren't viable unless restored to the same hardware platform, with the same operating system and application software to make sense of the ones and zeros. For this reason, people typically only keep two to five backup versions, for no more than 30 days, to support operational recovery scenarios. If you make updates to your hardware, OS or application software, be sure to take fresh new backups, as the old backups may no longer apply.
Archives are different. Often, these are copies that have been "hardened" or "fossilized" so that they make sense even if the original hardware, OS or application software is unavailable. They might be indexed so that they can be searched, so that you only have to retrieve exactly the data you are looking for. Finally, they are often stored with "rendering tools" that are able to display the data using your standard web browser, eliminating the need to have a fully working application environment.
Take any backup you might have from five years ago and try to retrieve the information. Can you do it? This might be a real eye-opener. You might have inherited this backup-as-also-archive approach from someone else, and are trying to figure out what to do differently that makes more sense. Call IBM, we can help.
Rich Bourdeau has written a nice article on InfoStor titled [Software as a Service (SaaS) meets Storage]. Last year, IBM acquired Arsenal Digital, and he mentions both in this article. It is interesting how this has evolved over the years.
Rent warehouse space for tapes
I remember when various companies offered remote storage for tapes. These would be temperature- and humidity-controlled rooms, with access lists on who could bring tapes in, who could take tapes out, and so on. In the event of a disaster, someone would collect the appropriate tapes and take them to a recovery site location.
Rent online/nearline storage from a Storage Service Provider (SSP)
SSPs rented storage space on disk, or provided automated tape libraries that could be written to, with tapes being ejected and stored in temperature/humidity-controlled vaults. Electronic vaulting eliminates a lot of the issues with cartridge handling and transportation, and is more secure and faster. Rented disk space, based on a gigabytes-per-month rate, could be used for whatever the customer wanted. If it was used for backups or archive, then the customer had to have their own software, do their own processing at their own location, send the data to the remote storage as appropriate, and manage their own administration.
Backup-as-a-Service and Archive-as-a-Service
We are now seeing the SaaS model applied to mundane and routine storage management tasks. New providers can offer the software to send backups, the disk to write them to, and as needed the tape libraries and cartridges to roll over to when the disk space is full. Disk capacity can be sized so that the most recent backups are immediately accessible for fast recovery.
The same concept can be applied to archives. The key difference between a backup and an archive is that backups are version-based. You might keep three versions of a backup: the most recent, and two older copies, so that if something is wrong with the most recent copy, you can go back to an older one. Problems could come from undetected corruption of the data itself, or from the disk or tape media. An archive, on the other hand, is time-based. You want the data to be kept for a specific period of time, based on an event or a fixed period of years.
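That version-based versus time-based distinction can be sketched in a few lines of Python (the retention counts and periods here are illustrative, not anyone's actual policy):

```python
from datetime import date, timedelta

def prune_backups(backup_dates, keep_versions=3):
    """Backups are version-based: keep only the N most recent copies,
    regardless of how old they are."""
    return sorted(backup_dates, reverse=True)[:keep_versions]

def archive_expiry(event_date, retention_days=7 * 365):
    """Archives are time-based: data expires a fixed period after a
    triggering event, regardless of how many copies exist."""
    return event_date + timedelta(days=retention_days)

backups = [date(2008, 1, d) for d in (1, 2, 3, 4, 5)]
print(prune_backups(backups))              # keeps Jan 5, Jan 4, Jan 3
print(archive_expiry(date(2008, 1, 15)))   # roughly seven years later
```

A BaaS provider essentially automates the first function; an AaaS provider automates the second.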
Since BaaS and AaaS providers know what the data is, and have some idea of what the policies and usage patterns will be, they can optimize a storage solution that best meets service level agreements.
Many people have asked me if there was any logic with the IBM naming convention of IBM Systems branded servers. Here's your quick and easy cheat sheet:
System x -- "x" for cross-platform architecture. Technologies from our mainframe and UNIX servers were brought into chips that sit next to the Intel or AMD processors to provide a more reliable x86 server experience. For example, some models have a POWER processor-based Remote Supervisor Adapter (RSA).
System p -- "p" for POWER architecture.
System z -- "z" for Zero-downtime, zero-exposures. Our lawyers prefer "near-zero", but this is about as close as you get to ["six-nines" availability] in our industry, with the highest levels of security and encryption; no other vendor comes close, so you get the idea.
But what about the "i" for System i? Officially, it stands for "Integrated" in that it could integrate different applications running on different operating systems onto a [COMMON] platform. Options were available to insert Intel-based processor cards that ran Windows, or attach special cables that allowed separate System x servers running Windows to attach to a System i. Both allowed Windows applications to share the internal LAN and SAN inside the System i machine. Later, IBM allowed [AIX on System i] and [Linux on Power] operating systems to run as well.
From a storage perspective, we often joked that the "i" stood for "island", as most System i machines used internal disk, or attached externally to only a few selected models of disk from IBM and EMC that had special support for i5/OS using a special, non-standard 520-byte disk block size. This meant only our popular IBM System Storage DS6000 and DS8000 series disk systems were available. This block size requirement only applies to disk; for tape, i5/OS supports both IBM TS1120 and LTO tape systems. For the most part, System i machines stood separate from the mainframe, and from the rest of the Linux, UNIX and Windows distributed servers on the data center floor.
Often, when I am talking to customers, they ask when product xyz will be supported on System z or System i. I explain that IBM's strategy is not to make all storage devices connect via ESCON/FICON or support non-standard block sizes, but rather to get the servers to use the standard 512-byte block size, Fibre Channel and other standard protocols. (The old adage applies: if you can't get Mohamed to move to the mountain, get the mountain to move to Mohamed.)
On the System z mainframe, we are 60 percent there, allowing three of the five operating systems (z/VM, z/VSE and Linux) to access FCP-based disk and tape devices. (Four out of six if you include [OpenSolaris for the mainframe].) But what about System i? As the characters on the popular television show [LOST] would say: it's time to get off the island!
Last week, IBM announced the new [i5/OS V6R1 operating system] with features that will greatly improve the use of external storage on this platform. Check this out:
POWER6-based System i 570 model server
Our latest, most powerful POWER processor comes to the System i platform. The 570 model will be the first in the System i family of servers to make use of the new processing technology, using up to 16 (sixteen!) POWER6 processors (running at 4.7 GHz) in each machine. The advantage of the new processors is the increased Commercial Processing Workload (CPW) rating, 31 percent greater than the POWER5+ version and 72 percent greater than the POWER5 version. CPW is the "MIPS" or "TeraFlops" rating for comparing System i servers. Here is the [Announcement Letter].
Fibre Channel Adapter for System i hardware
That's right, these are [Smart IOAs], so an I/O Processor (IOP) is no longer required! You can even boot the Initial Program Load (IPL) directly from SAN-attached tape. This brings System i into the 21st century for Business Continuity options.
Virtual I/O Server (VIOS)
[Virtual I/O Server] has been around for System p machines, but is now available on System i as well. This allows multiple logical partitions (LPARs) to share resources like Ethernet cards and FCP host bus adapters. In the case of storage, the VIOS handles the 520-byte to 512-byte conversion, so that i5/OS systems can now read and write to standard FCP devices like the IBM System Storage DS4800 and DS4700 disk systems.
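The arithmetic behind that conversion is worth a quick look. Because 520 and 512 share a common factor of 8, the two sector sizes line up exactly every 33,280 bytes, so 64 of the 520-byte i5/OS sectors map cleanly onto 65 standard 512-byte sectors. (How VIOS actually packs them internally may differ; this just shows why a clean mapping exists.)

```python
from math import gcd

I5OS_SECTOR = 520  # 512 data bytes plus 8 bytes of i5/OS metadata
STD_SECTOR = 512   # standard FCP disk block size

# Smallest byte span where whole sectors of both sizes line up exactly
span = I5OS_SECTOR * STD_SECTOR // gcd(I5OS_SECTOR, STD_SECTOR)
print(span)                 # 33280 bytes
print(span // I5OS_SECTOR)  # 64 i5/OS sectors...
print(span // STD_SECTOR)   # ...fit exactly in 65 standard sectors
```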
IBM System Storage DS4000 series
Initially, we have certified the DS4700 and DS4800 disk systems to work with i5/OS, but more devices are in plan. This means that you can now share your DS4700 between i5/OS and your other Linux, UNIX and Windows servers, and take advantage of a mix of FC and SATA disk capacities, RAID6 protection, and so on.
To call [IBM PowerVM] the "VMware for the POWER architecture" would not do it quite justice. In combination with VIOS, IBM PowerVM is able to run a variety of AIX, Linux and i5/OS guest images.The "Live Partition Mobility" feature allows you to easily move guest images from one system to another, while they are running, just like VMotion for x86 machines.
And while we are on the topic of x86, PowerVM is also able to present a Linux-x86 emulation base to run x86-compiled applications. While many Linux applications could be re-compiled from source code for the POWER architecture "as is", others required perhaps 1-2 percent modification to port them over, and that was too much for some software development houses. Now, we can run most x86-compiled Linux application binaries in their original form on POWER architecture servers.
BladeCenter JS22 Express
The POWER6-based [JS22 Express blade] can run i5/OS, taking advantage of PowerVM and VIOS to access all of the BladeCenter resources. The BladeCenter lets you mix and match POWER and x86-based blades in the same chassis, providing the ultimate in flexibility.
Are you covering the business impact of the internet failure across Asia, the Middle East and North Africa? The outage has brought business in those regions to a standstill. This disaster shines a direct spotlight on the vulnerability of technology and serves as a reminder of the ever increasing importance of protecting business critical information.
Disaster recovery needs to be a critical element of every technology plan. We don't yet know the financial impact of this widespread internet failure, but the companies with disaster recovery plans in place were likely able to fail over their entire systems to servers based in other regions of the world.
When I first heard of this outage, I thought: so a few million people don't have access to Facebook and YouTube, what's the big deal? We in the U.S.A. are in the middle of a [Hollywood writer's strike] and don't have fresh new television sitcoms to watch! Yahoo News relays the typical government response: [Egypt asks to stop film, MP3 downloads during Internet outage], presumably so that real business can take priority over what little bandwidth is still operational. Fellow IBM blogger "Turbo" Todd Watson pokes fun at this in his post [Could Someone Please Get King Tutankhamun On The Phone?]. Like us suffering here in America, perhaps our brothers and sisters in Egypt and India may get re-acquainted with the joys of reading books.
However, the [Internet Traffic Report-Asia] shows how this impacted various locations including Shanghai, Mumbai, Tokyo, Tehran, and Singapore. In some cases, you have big delays in IP traffic; in other cases, complete packet loss, depending on where each country lies on the ["axis of evil"]. This is not something just affecting a few isolated areas; the impact is indeed worldwide. This would be a good time to talk about how computer signals are actually sent.
DWDM (Dense Wavelength Division Multiplexing) takes up to 80 independent signals, converts each to a different color of light, and sends all the colors down a single strand of glass fiber. At the receiving end, the colors are split off by a prism, and each color is converted back to its original electrical signal.
CWDM (Coarse Wavelength Division Multiplexing) is similar to DWDM, but only eight signals are sent over the glass fiber. This is generally cheaper, because you don't need highly tuned lasers.
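To put rough numbers on this (the 10 Gbit/s per-wavelength rate is my assumption for illustration; actual rates vary by equipment and era):

```python
def fiber_capacity_gbps(wavelengths, gbps_per_wavelength=10):
    """Aggregate capacity of one fiber strand: each wavelength carries
    an independent signal, so capacity simply multiplies."""
    return wavelengths * gbps_per_wavelength

print(fiber_capacity_gbps(80))  # DWDM, 80 colors: 800 Gbit/s per strand
print(fiber_capacity_gbps(8))   # CWDM, 8 colors: 80 Gbit/s per strand
```

That multiplication is also why cutting one undersea cable takes out so much capacity at once.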
Wikipedia has a good article on [Submarine Communications Cable], including a discussion on how repairs are made when they get damaged or broken. It is important to remember that lost connectivity doesn't mean lost data, just lack of access to the data. The data is still there, you just can't get to it right now. For some businesses, that could be disruptive to actual operations. In other cases, it means that backups or disk mirroring are suspended, so that you only have your local copies of data until connectivity is resumed.
When two cables in the Mediterranean were severed last week, it was put down to a mishap with a stray anchor.
Now a third cable has been cut, this time near Dubai. That, along with new evidence that ships' anchors are not to blame, has sparked theories about more sinister forces that could be at work.
For all the power of modern computing and satellites, most of the world's communications still rely on submarine cables to cross oceans.
It gets weirder. In his blog Rough Type, Nick Carr's [Who Cut the Cables?] reports that now a fourth cable has been cut, in a different location from the others. If the people cutting the cables are looking to see how much impact this would have, they will probably be disappointed. Nick Carr relates how resilient the whole infrastructure turned out to be:
Though India initially lost as much as half of its Internet capacity on Wednesday, traffic was quickly rerouted and by the weekend the country was reported to have regained 90% of its usual capacity. The outage also reveals that the effects of such outages are anything but neutral; they vary widely depending on the size and resources of the user.
Outsourcing firms, such as Infosys and Wipro, and US companies with significant back-office and research and development operations in India, such as IBM and Intel, said they were still trying to assess how their operations had been impacted, if at all.
Whether it is man-made or natural disaster, every business should have a business continuity plan. If you don't have one, or haven't evaluated it in a while, perhaps now is a good time to do that. IBM can help.
Continuing my theme of "Innovation that matters", I thought I would cover MapQuest and NeverLost.
When Shawn Callahan on Anecdote wrote [Our need for the knowledge worker is over], he was referring to the fact that we no longer need the term "knowledge worker", because practically everyone is a "knowledge worker" today. He asks "How does knowledge help us to work better?"
It is said that as much as 30 percent of a knowledge worker's time is spent looking for information to do their jobs. This could be information to make a decision, decide between several choices, take specific action, or schedule when these actions should take place. The logistics of planning a business trip, and actually navigating in unfamiliar surroundings, is a good example of this, and presents some unique challenges.
Before these technologies
Before these technologies, planning a trip involved finding someone who lived in or had been to the destination city, could recommend hotels and restaurants near the meeting facility, and could suggest approximate times it would take to drive from one place to another. I would bring a compass, and would shop for a city map, either before leaving or upon arrival.
On one trip to Raleigh, I asked a local IBMer who lived in Raleigh for a hotel recommendation. The hotel was nice, but involved a long 45-60 minute commute each day to the meeting facility. When I asked her why she suggested that particular hotel, she said it was because it was "close to the airport". I have since learned never to ask for the "best" of anything, as it is so subject to interpretation.
On another trip, I was travelling with a colleague in Germany. He asked how I knew which bus to take, and which bus stop to wait at. I pulled out my compass, and told him that based on the schedule, the bus that went in a specific direction must be the correct one. The entire bus load of people burst out laughing, that we fit the universal stereotype of men who refuse to ask for directions. This method works only in Germany, where timeliness is next to godliness. In other countries, time schedules are more of a suggestion.
Sometimes, maps of the destination city were not always easy to find. Now with the Internet and Google Earth, maps are available before leaving on the trip. (See my post on Inner Workings of Storage which discusses how Google Earth works.)
I like using MapQuest, available online at [mapquest.com], and have not yet looked into the similar systems from Google or Yahoo. I map out each leg of my trip that involves driving, walking or trains. These are often airport-to-hotel, hotel-to-meeting, meeting-to-airport. Having a feel for the time and distances between locations helps choose hotels and restaurants, when to leave, and so on.
I even use MapQuest in Tucson. Recently, a route I generated to visit a friend across town took into account construction on Highway I-10 that has been going on for a while, where 8 miles of on-ramps are closed, and routed me around this mess accordingly. This is one key advantage over a static map, either a paper map or one downloaded from Google Earth.
While MapQuest may not always choose the "best" route, it always finds "a route" that works, and generally works for me.
A few problems with a MapQuest print-out I have found are:
It is on paper, which could impact driving, as I have to look away from the road to look at the instructions.
If it can't find a specific address, it provides generic instructions, and often, this involves airports.
It often starts with "Head Northeast...", so unless you brought your compass, or can tell what direction you are pointing from the Sun, Moon or stars, you may end up leaving in the wrong direction.
Recently, I checkmarked the "Request NeverLost" box on my Hertz Gold profile, and now I seem to get NeverLost in nearly every rental. The system is based on the [Global Positioning System] set of satellites, complemented by CD-based street information and yellow pages data for the US and Canada, stored in the trunk.
The NeverLost system knows which way the car is oriented, can tell which direction you are driving, and tells you with voice prompts to be in the left lane or right lane, and when to make left and right turns. No need for a compass or any knowledge of which way is North, East, West or South.
I also like that it gives you three choices for a route: (a) Shortest time, (b) Most use of Highways, and (c) Least use of Highways. This came in handy when I was in Toronto last week. Apparently, the 407 Highway had recently implemented an Electronic Toll Road (ETR) which bills based on license plate. While this system is fine for residents, it is not designed for rental car companies. Hertz left a note in my car warning me NOT to use the 407 highway, or I would be charged an $8.50 penalty. I chose "Least use of Highways" and proceeded to tour the city of Toronto for 90 minutes from Pearson Airport to my hotel in Markham, a trip that would have taken only 20 minutes otherwise.
Once you enter your destination street address, it can estimate the distance to get there. This is not a quick process; since there is no keyboard, you have to enter each letter using up/down/left/right keys. You can enter the name of the street, hotel or restaurant. The "Sal Grosso" restaurant in Smyrna is at 1927 Powers Ferry Road, but NeverLost said that Powers Ferry only went from 2750-6350. I had to select 2750 and hope it was close enough.
In Dallas, I tried to find the "P. F. Chang's" restaurant, and you have to make sure that the periods and spaces are entered exactly. I ended up looking for restaurants in Grapevine, Texas, and then just going through the list of all that start with the letter "P".
Another issue is that sometimes it takes a while to find the satellites in the sky. I get the car started, I hit the enter button to get the NeverLost started, enter the address, and then it starts looking for satellites? Why doesn't it look for satellites while you spend 3-5 minutes entering the street address? In my case, I take out my MapQuest print-out, head in the right direction, and hope that NeverLost catches up eventually, in time to help me get to the final location.
It is not clear how often Hertz updates the CD-ROM that contains the street and yellow pages data. About 30-40 percent of the time, it can't find the street address I am looking for, and I have to be creative about how to get into the general area.
Part of the problem is that I have not read the entire instruction manual, and do not have time to learn it when I am in the car driving. I might have to put this on my reading to-do list before my next trip. Some of my colleagues have purchased their own GPS-based systems, like those from Garmin or Magellan, so that they always have one available, and they always know how to use it. This has the advantage that you can use it when walking around, or in your own car when you are home, as well.
Despite these few problems, I am impressed by the innovations involved to make this all happen. All of the mapping information was stored, transmitted, searched, and then plotted in a manner that provides the specific information you need to get the job done. For now, I will probably use a combination of these to plan and travel on my business trips. Wouldn't it be nice if other areas of your life had this kind of support?
Well, it is Halloween back in the USA. I am in Seoul, Korea this week, so it is already Thursday, November 1st here, but I thought I would comment on Colin Barker's piece in ZDNet titled [SNW offers the frights]. The article starts out with an oversimplification:
The storage industry is enjoying a boom currently thanks to the requirement for IT managers to keep everything. With the possibility of being sued any time by any company for no good reason at all, everyone is keeping everything, or at least all their data. Result? Loads and loads more kit being bought to the benefit of EMC, IBM, HP and every other supplier with any kind of storage product.
While it's true that IBM System Storage grew yet again in 3Q07, exceeding our own internal business model, I would not call this an overall "boom" for the storage industry. While companies are growing their "TB capacity" by 30-50 percent, this translates to only single-digit growth in terms of "dollar revenues". This is because we continue to make storage with a declining dollar-per-GB cost.
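A quick back-of-the-envelope shows why (the specific percentages here are my illustration, not reported figures): revenue is capacity shipped times price per GB, so strong capacity growth multiplied by a steep price decline nets out to low single digits.

```python
def revenue_growth(capacity_growth, price_decline):
    """Revenue = capacity sold x price per GB, so revenue growth is
    the product of the two trends, minus one."""
    return (1 + capacity_growth) * (1 - price_decline) - 1

# e.g. 40% more TB shipped, but ~27% cheaper per GB
g = revenue_growth(0.40, 0.27)
print(round(g * 100, 1))   # about 2.2 percent
```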
One should not confuse what people do with what people are required to do. I am not a lawyer, but most regulations pertaining to storage of information state that certain records need to be kept for a set amount of time, either a fixed period of years, or based on some event. For example, broker/dealers need to keep emails of their clients for six years after the client closes their brokerage account. After those six years, the records can be destroyed.
Unfortunately, many IT managers look at the laws and come up with the simplest solution: keep everything forever. While this might meet the regulators' audit requirements, it does expose their employer to subpoenas for data that should have been deleted, and may not be very cost-effective.
The alternative for many IT managers involves having to leave their comfort zone, talk to their legal counsel and the lines of business, and try to classify their data, determine a set of policies, and enact some form of enforcement. This is perhaps the "scary" part of the storage of information: it has grown outside the walls of IT, forcing IT managers to interact with the rest of the business to get their jobs done.
Compliance is the only game in town and that is most certainly where the money is.
Anytime an analyst tells you that something is the "only game in town", they are usually wrong. In this case, IBM has had great success in other areas that are not compliance-related. For example, digital video surveillance (DVS) is being used not only to help reduce shoplifting, but also to help identify patterns in customers perusing through aisles and window-shopping. Identifying what people are interested in has proven effective in moving product displays around to better attract buyers and motivate them to make purchases.
Take the keynote from Andy Monshaw, general manager of IBM storage, and thus a man who is very much in a position to know. He spent his allotted 30 minutes, or whatever, listing all the security, compliance, threats and related issues that are currently making the jobs of most IT managers a cause for concern. Now, there is an argument that suggests that it is absolutely the right thing to do to frighten IT managers into sorting out their issues. They need shaking up, say some. Especially analysts.
I helped develop the content of Andy's SNW presentation, working with his speech writers and graphic artists to make a consistent and coherent message fit in the 25 minutes he was given. The challenge with SNW is that we needed to make this presentation applicable across the entire storage industry, without sounding like an infomercial for IBM offerings.
Some people have compared the storage industry to the "insurance industry", claiming that backups, remote disk mirroring, continuous data protection and other storage-related features are costs that can be compared to the insurance you pay to protect your home, business, and other assets. You hope you never have to use it, and complain how much it costs, but when bad things happen, you hope it is the best money can buy.
Unlike Y2K, which was a one-time event that had a specific date of occurrence, the threats and risks mentioned by Andy in his presentation may never happen at all, or in other cases, may happen more than once, without knowing when or where. For the sake of your shareholders, and your stakeholders, it is best to be prepared for these possibilities.
The counter argument says that IT companies just smell the money.
Is this a counter argument? Can IBM not both help customers mitigate their risks, and at the same time, turn a profit? Trust me, you do not want to do business with any storage vendor that is not interested in making a profit. The better ones have incorporated addressing client's most pressing challenges into their strategy. I gave a quick summary of IBM's strategy last August in [Day 1 Storage Symposium].
Helping our clients mitigate risks is just one of IBM's core strengths. If you want to learn more, contact your local IBM Business Partner or storage rep.
To get beyond the simple statistics of vendor popularity, we looked at the number and combinations of vendors with which enterprises work. Many were customers of one or two storage providers, but the rest were customers of up to six storage providers. More than one-third were customers of systems vendors only, bypassing storage specialists.
Comparisons between solutions vendors and storage component vendors are not new. One could argue that this can be compared to supermarkets and specialty shops.
Supermarkets offer everything you need to prepare a meal. You can buy your meat, bread, cheese, and extras all with one-stop shopping. In a sense, IBM, HP, Sun and Dell are offering this to clients who prefer this approach. Not surprisingly, the two leaders in overall storage hardware, IBM and HP, are also the two best positioned to offer a complete set of software, services, servers and storage.
IBM and HP are also the leaders in tape. While Forrester reports that many large enterprises in North America prefer to buy disk from storage specialists, others have found that customers prefer to buy their tape from solution providers. Recently, Byte and Switch reported that LTO Hits New Milestones, where the LTO consortium (IBM, HP, and Quantum) have collectively shipped over 2 million LTO tape drives, and over 80 million LTO tape cartridges. Perhaps this is because tape is part of an overall backup, archive or space management solution, and customers trust a solution vendor over a storage specialist.
Where possible, IBM brings synergy between its servers and storage. For example, we just announced the IBM BladeCenter Boot Disk System, a 2U-high unit that supports up to 28 blade servers, ideal for applications running under Windows or Linux, and helping to reduce energy consumption for those interested in a "Green" data center.
Some people prefer buying their meat at the slaughterhouse, bread at the French pastry shop, and so on. Storage specialists focus on just storage, leaving the rest of the solution, like servers, to be purchased separately from someone else. Storage vendors like NetApp, EMC, HDS and others offer storage components to customers who like to do their own "system integration", or to those that are large enough to hire their own "systems integrator".
Storage specialists recognize that not everybody is a "specialty shop" shopper. HDS has done well selling its disk through solution vendors like HP and Sun. EMC sells its gear through solution vendor Dell.
Interestingly, I have met clients who prefer to buy IBM System Storage N series from IBM, because IBM is a solution vendor, and others who prefer to buy comparable NetApp equipment directly from NetApp, because NetApp is a storage component vendor.
I mostly buy my groceries at a supermarket, but have, on occasion, bought something from the local butcher, baker or candlestick maker. And if you are ever in Tucson, you might be able to find Mexican tamales sold by a complete stranger standing outside a Walgreens pharmacy, the ultimate extreme of specialization. You can get a dozen tamales for ten bucks, and in my experience they are usually quite good. Of course, if you get sick, or they don't taste right, you have no recourse, and will probably never see that stranger again to complain to. (And no, before I get flamed, I am not implying that any major vendor mentioned above is like this tamale vendor.)
Of course, nothing is starkly black and white, and comparisons like this are just meant to provide context and perspective. But if you are looking for a complete IT solution that works, from software and servers to storage and financing, come to the vendor you can trust: IBM.
The IBM Storage and Storage Networking Symposium concludes today. As is typical for many such conferences, it ended at noon so that people can catch airline flights.
TS1120 Tape Encryption - Customer Experiences
Jonathan Barney has implemented many deployments of tape encryption, and shared his experiences at two customer locations.
The first company had decided to implement its EKM servers on dedicated 64-bit Windows servers. It had three sites, one each in Chicago, Alpharetta, and New York City, each with two EKM servers. Each site had a single TS3500 tape library, which pointed to four EKM servers: two local and two remote.
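The "two local, two remote" ordering amounts to a simple failover list: try the nearest servers first, and fall back to the remote ones. Here is a toy sketch of that idea in Python; the host names are invented and this is not how the actual tape library firmware or EKM client is implemented.

```python
# Hypothetical sketch of a "two local, two remote" EKM failover order.
# Host names are made up for illustration.

def pick_ekm(servers, is_reachable):
    """Return the first reachable EKM server, trying them in listed order."""
    for host in servers:
        if is_reachable(host):
            return host
    raise RuntimeError("no EKM server reachable; encrypted tape I/O will fail")

# Each site would list its two local servers first, then the two remote ones.
chicago_order = ["ekm1.chi", "ekm2.chi", "ekm1.nyc", "ekm2.nyc"]
```

The point of the ordering is that normal operation never crosses the WAN, but a site-wide EKM outage still leaves the library able to fetch keys remotely.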
The clever trick was managing the keystore. They decided that EKM-1 was their trusted source, made all changes to that, and then copied it to the other five EKM servers. His team deployed one site at a time, which turned out to be OK, but he would not recommend it. It is better to design your complete solution, and make sure that all libraries can access all EKM servers.
This company decided to have a single key-label/key-pair for all three locations, but to change it every six months. You have to keep the old keys for as long as you have tapes encrypted with those keys, perhaps 10-20 years. The customer found the IBM encryption implementation "elegant", and it can be easily replicated to a fourth site if needed.
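The rotation policy described above, a new key label every six months with all old keys retained for the life of the tapes, can be modeled in a few lines. This is a toy illustration of the policy, not the real EKM keystore format, and the key labels are invented.

```python
# Toy model of the customer's rotation policy: a new key label every six
# months, with every old key retained so that older tapes remain readable.
import secrets

class Keystore:
    def __init__(self):
        self.keys = {}           # key label -> key material
        self.active_label = None

    def rotate(self, label):
        """Create a new key under `label` and make it the active one.
        Old keys are never deleted; tapes encrypted with them may need
        to be read 10-20 years from now."""
        self.keys[label] = secrets.token_bytes(32)
        self.active_label = label

ks = Keystore()
ks.rotate("CORP-2008H1")
ks.rotate("CORP-2008H2")   # six months later; H1 key is kept, not replaced
```

The design choice worth noting is that "rotate" only changes which key is used for new writes; deleting a retired key would silently render every tape written under it unreadable.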
The second company had both z/OS and Sun Solaris. Initially they planned to have both a hardware-based keystore on System z and a software-based keystore on Sun, but they realized that the System z version was so much more secure and reliable that it made no sense to have anything on the Sun Solaris platform.
On System z, they had two EKM images, and used VIPA to ensure load balancing from the library. Tapes written from z/OS used DFSMS Data Class to determine which tapes are encrypted and which aren't. All tapes written from Sun Solaris were encrypted and written to a separate logical library partition of the TS3500, which in turn contacted System z for the EKM to provide the keys to use for the encryption.
The "gotcha" for this case was that when they tested Disaster Recovery, they had torecover the two EKM servers first, before any other restores could take place, and thistook way too long. Instead, they developed a scaled-down 10-volume "rescue recovery" z/OS image that would contain the RACF database and all EKM related software to actas the keystore during a disaster recovery. Anytime they make updates, they only haveto dump 10 volumes to tape. Restore time is down to only 2 hours.
He gave this advice to deploy tape encryption:
Some third-party z/OS security products, like Computer Associates Top Secret or ACF2, require some PTFs to work with the EKM. The latest IBM RACF is good to go.
Getting IP support from IOS to OMVS requires an IPL.
At one customer, an OMVS monitoring program killed the EKM because it wasn't in their list of "acceptable Java programs". They updated the list and the EKM ran fine.
Do not update the EKM properties file while the EKM is running. The EKM keeps a lot of its state in memory, and when it is recycled, it copies this back to the properties file, reversing any changes you may have made. It is best to shut down the EKM, update the properties file, then start the EKM back up again. This is why you should always have at least two EKM servers for redundancy.
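The failure mode above is worth seeing concretely. Here is a toy daemon that, like the EKM, reads its properties file into memory at startup and writes that cache back out at shutdown; any edit made to the file while it is running gets silently reverted. This is a simplified model for illustration, not the real EKM code.

```python
# Toy model of a daemon that caches its config in memory and rewrites the
# file from that cache at shutdown -- the behavior described for the EKM.
import os
import tempfile

class CachingDaemon:
    def __init__(self, path):
        self.path = path
        with open(path) as f:
            self.cache = f.read()   # config lives in memory from here on

    def stop(self):
        with open(self.path, "w") as f:
            f.write(self.cache)     # clobbers any edits made while running

path = os.path.join(tempfile.mkdtemp(), "ekm.properties")
with open(path, "w") as f:
    f.write("port=3801\n")

daemon = CachingDaemon(path)
with open(path, "w") as f:          # edit the file while the daemon is up...
    f.write("port=9999\n")
daemon.stop()                       # ...and the shutdown quietly reverts it
```

After `stop()`, the file once again says `port=3801`, which is exactly why the safe sequence is stop, edit, then restart.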
TSM for Linux on System z
Randy Larson from our Tivoli group presented this session. There is a lot of interest in deploying IBM Tivoli Storage Manager backup and archive software on Linux for System z. Many customers have already invested in a mainframe infrastructure, may have TSM for z/OS or z/VM, and want the newer features and functions that are available for TSM on Linux.
TSM has special support for Lotus Domino, Oracle, DB2 and WebSphere Application Servers. TSM clients can send backup data to a TSM server internally via HiperSockets, a virtual LAN feature on the System z platform that uses shared memory to emulate a TCP/IP stack.
One of the big questions is whether to run Linux as a guest under z/VM, or natively in an LPAR. The general approach is to carve out an LPAR and run Linux natively until your server and storage administration staff have taken z/VM training classes. Once trained, they can easily move native LPAR images to z/VM guests. Unlike VMware, which takes a hefty 40% overhead on x86 platforms to manage guests, z/VM takes only 5-10% overhead.
For the TSM database and disk storage pools, Randy recommends FC/SCSI disk with an ext3 file system, combined with LVM2 into logical volumes. ECKD disk and reiserfs work too. Avoid the use of z/VM minidisks. Under LVM2, consider 32KB stripes for the TSM database, and 256KB stripes for the disk storage pools. For multipathing, use the failover rather than the multibus method. Read APAR IC45459 before you activate "directio".
TSM for Linux on z is very much like TSM on AIX or Windows, and not like TSM for z/OS. For tape, TSM for Linux on z does not support ESCON/FICON-attached tape; you need to use FC/SCSI-attached tape drives and tape libraries. TSM owns the library and drives it uses, so give it a logical library partition separate from z/OS. For Sun/StorageTek customers, TSM works with or without the Gresham Enterprise DistribuTape (EDT) software. Use the IBM-provided drivers for IBM tape. For non-IBM tape, TSM provides some drivers that you can use instead.
That wraps up my week. This was a great conference! If you missed it, look for the one in Montpellier, France this October. Check out the list of IBM Technical Conferences to find others that might interest you.
Wrapping up my week's discussion on Business Continuity: there has been a lot of interest in my opinion, stated earlier this week, that it is good to separate programs from data, that this simplifies the recovery process, and that the Windows operating system can fit in a partition as small as the 15.8GB solid state drive we just announced for BladeCenter. It worked for me, and I will use this post to show you how to get it done.
Disclaimer: This is based entirely on what I know and have experienced with my IBM Thinkpad T60 running Windows XP, and is meant as a guide. If you are running with different hardware or different operating system software, some steps may vary.
(Warning: Windows Vista apparently handles data, dual boot, and partitions differently. These steps may not work for Vista.)
For this project, I have a DVD/CD burner in my Ultra-Bay, a stack of blank CDs and DVDs, and a USB-attached 320GB external disk drive.
I like to back up the master boot record to one file, and then the rest of the C: drive to a series of 690MB compressed chunks. These can be directed to the USB-attached drive, and then later burned onto CD-ROM, or packed six files per DVD. Most USB-attached drives are formatted with the FAT32 file system, which does not support files larger than 4GB, so splitting these up into 690MB chunks keeps them well below that limit.
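The chunking idea above is easy to script. Here is a generic sketch in Python of splitting one large backup image into fixed-size pieces; it is not the tool I actually used, and the 690MB size is just a parameter you can change.

```python
# Illustrative sketch: split a large backup image into numbered chunks,
# each small enough to burn to CD and well under FAT32's file-size limit.

def split_file(src, chunk_bytes=690 * 1024 * 1024):
    """Split `src` into numbered pieces no larger than `chunk_bytes`,
    named src.000, src.001, ... Returns the list of piece names."""
    parts = []
    with open(src, "rb") as f:
        n = 0
        while True:
            data = f.read(chunk_bytes)
            if not data:
                break
            part = f"{src}.{n:03d}"
            with open(part, "wb") as out:
                out.write(data)
            parts.append(part)
            n += 1
    return parts
```

Reassembly is just concatenating the pieces back in numeric order (on Windows, `copy /b` does the same job).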
You can learn more about these commands here and here.
Step 1 - Defrag your C: drive
From Windows, right-click on your Recycle Bin and select "Empty Recycle Bin".
Click Start->Programs->Accessories->System Tools->Disk Defragmenter. Select the C: drive and push the Analyze button. You will see a bunch of red, blue and white vertical bars. If there are any green bars (unmovable files), we need to fix that. The following worked for me:
Right-click "My Computer" and select Properties. Select Advanced, then press "Settings" buttonunder Performance. Select Advanced tab and press the "Change" button under Virtual Memory.Select "No Paging File" and press the "Set" button. Virtual memory lets you have many programs open, moving memory back and forth between your RAM and hard disk.
Click Start->Control Panel->Performance and Maintenance->Power Options. On the Hibernate tab, make sure the "Enable Hibernation" box is un-checked. I don't use Hibernate, as it seems to take just as long to come back from Hibernation as it does to just boot Windows normally.
Reboot your system to Windows.
If all went well, Windows will have deleted both pagefile.sys and hiberfil.sys, the two most common unmovable files, freeing up about 2GB of space. You can run just fine without either of these features, but if you want them back, we will restore them in Step 6 below.
Go back to Disk Defragmenter, verify there are no green bars, and proceed by pressing the "Defragment" button. If there are still some green bars, you can proceed cautiously (you can always restore from your backup, right?), or seek professional help.
Step 2 - Resize your C: drive
When the defrag is done, we are ready to re-size your file system. This can be done with commercial software like Partition Magic. If you don't have this, you can use open source software: burn yourself the GParted LiveCD. This is another Linux LiveCD, and is similar to Partition Magic.
Either way, re-size the C: drive smaller. In theory, you can shrink it down to 15GB if this is a fresh install of Windows and there is no data on it. If you have lots of data, and the drive was nearly full, only re-size the C: drive smaller by 2GB. That is how much we freed up from the unmovable files, so that should be safe.
You could do Steps 3 and 4 while you are here, but I don't recommend it. Just re-size C:, press the "Apply" button, reboot into Windows, and verify everything starts correctly before going to the next step.
Step 3 - Create Extended Partition and Logical D: drive
A drive can only have FOUR partitions, either Primary (for programs) or Extended (for data). However, the Extended partition can act as a container for one or more logical partitions.
Get back into the Partition Magic or GParted program, and in the unused space freed up from re-sizing in the last step, create a new extended/logical partition. For now, just have one logical partition inside the extended one, but I have co-workers who have two logical partitions: D: for data, and E: for their e-mail from Lotus Notes. You can always add more logical partitions later.
I selected "NTFS" type for the D: drive. In years past, people chose the older FAT32 type, but this has some limitations, but allowed read/write capability from DOS, OS/2, and Linux.Windows XP can only format up to 32GB partitions of FAT32, and each file cannot be bigger than 2GB.I have files bigger than that. Linux can now read/write NTFS file systems directly, using the new NTFS-3Gdriver, so that is no longer an issue.
Step 4 - Format drive D: as NTFS
Just because you have told your partitioning program that D: is of NTFS type, you still have to put a file system on it.
Click Start->Control Panel->Performance and Maintenance->Computer Management. Under Storage, select Disk Management. Right-click your D: drive and choose Format. Make sure the "Perform Quick Format" box is un-checked, so that it performs a full (slower) format.
Step 5 - Move data from C: to D: drive
Create two directories, "D:\documents" and "D:\notes\data", either through Explorer, or in a command-line window with the "MKDIR documents notes\data" command (run from the D: drive).
Move the files from C:\notes\data to D:\notes\data, and any folders in your "My Documents" over to D:\documents.
(If you have more data than the size of the D: drive, copy over what you can, run another defrag, re-size your C: drive even smaller with Partition Magic or GParted, reboot, verify Windows is still working, re-size your D: drive bigger, and repeat the process until you have all of your data moved over.)
To inform Lotus Notes that all of your data is now on the D: drive, use NOTEPAD to edit notes.ini and change the Directory line to "Directory=D:\notes\data". If you have a special signature file, leave it in C:\notes directory.
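Editing notes.ini by hand in NOTEPAD works fine for one machine; if you had to do this across many machines, the change is easy to script. A minimal sketch (the file path and the surrounding notes.ini contents are illustrative; only the Directory= line matters here):

```python
# Illustrative sketch: rewrite the Directory= line in notes.ini so Lotus
# Notes looks for its data on the D: drive, leaving other settings alone.

def point_notes_at(ini_path, new_data_dir):
    """Replace the Directory= line in a notes.ini-style file."""
    with open(ini_path) as f:
        lines = f.readlines()
    with open(ini_path, "w") as f:
        for line in lines:
            if line.lower().startswith("directory="):
                f.write(f"Directory={new_data_dir}\n")
            else:
                f.write(line)   # every other setting passes through unchanged
```

For example, `point_notes_at(r"C:\notes\notes.ini", r"D:\notes\data")` would make the one-line change described above.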
Once all of your data has been moved over to D:\documents, right-click on "My Documents" and select Properties. Change the target to "D:\documents" and press the "Move" button. Now, whenever you select "My Documents", you will be on your D: drive instead.
Step 6 - Take A Fresh Backup
If you use IBM Tivoli Storage Manager, now would be a good time to re-evaluate your "dsm.opt" file, which lists what drives and sub-directories to back up. Take a backup, and verify your data is being backed up correctly.
With the USB-attached drive, back up both the C: and D: drives. I leave my USB drive back in Tucson, so for a backup copy while traveling, I go to IBM Rescue and Recovery and take a C:-only backup to DVD. Make sure the D: drive box is un-checked. Now, if I ever need to reinstall Windows because of file system corruption or a virus, I can do this from my one bootable CD plus two DVDs, which I can easily carry with me in my laptop bag, leaving all my data on the D: drive intact.
In the worst case, if I had to re-format the whole drive or get a replacement disk, I could restore C:, and then restore the few individual data files I need from IBM Tivoli Storage Manager or a small USB key/thumbdrive, delaying a full recovery until I return to Tucson.
Lastly, if you want, reactivate the "Virtual Memory" and "Hibernation" features that we disabled in Step 1.
As with Business Continuity in the data center, planning in this manner can help you get back "up and running" quickly in the event of a disaster.