Tony Pearson is a Master Inventor and Senior IT Architect for the IBM Storage product line at the
IBM Executive Briefing Center in Tucson, Arizona, and a featured contributor
to IBM's developerWorks. In 2016, Tony celebrates his 30th anniversary with IBM Storage. He is
author of the Inside System Storage series of books. This blog is for the open exchange of ideas relating to storage and storage networking hardware, software and services.
(Short URL for this blog: ibm.co/Pearson )
My books are available on Lulu.com! Order your copies today!
Safe Harbor Statement: The information on IBM products is intended to outline IBM's general product direction and it should not be relied on in making a purchasing decision. The information on the new products is for informational purposes only and may not be incorporated into any contract. The information on IBM products is not a commitment, promise, or legal obligation to deliver any material, code, or functionality. The development, release, and timing of any features or functionality described for IBM products remains at IBM's sole discretion.
Tony Pearson is an active participant in local, regional, and industry-specific interests, and does not receive any special payments to mention them on this blog.
Tony Pearson receives part of the revenue proceeds from sales of books he has authored listed in the side panel.
Tony Pearson is not a medical doctor, and this blog does not reference any IBM product or service that is intended for use in the diagnosis, treatment, cure, prevention or monitoring of a disease or medical condition, unless otherwise specified on individual posts.
For those in the US, last Friday, the day after Thanksgiving, marks the official start of the holiday shopping season. This has been called [Black Friday] because some stores open as early as 4 a.m., when it is still dark outside, to offer special discount prices. Some shoppers camp out in sleeping bags and lawn chairs in front of stores overnight to be the first to get in.
Not surprisingly, some folks don't care for this approach to shopping, and prefer instead to shop online. Since 2005, the Monday after Thanksgiving (yesterday) has been called [Cyber Monday]. The USA Today newspaper reports [Cyber Monday really clicks with customers]. Many of the major online shopping websites indicated a 37 percent increase in sales yesterday over last year's Cyber Monday.
On Deadline dispels the hype on both counts: [Cyber Monday: Don't Believe the Hype?], indicating that Black Friday is not the peak shopping day for bricks-and-mortar shops, and that Cyber Monday is not the busiest online shopping day of the year, either.
A flood of new video and other Web content could overwhelm the Internet by 2010 unless backbone providers invest up to US$137 billion in new capacity, more than double what service providers plan to invest, according to a study by Nemertes Research Group, an independent analysis firm. In North America alone, backbone investments of $42 billion to $55 billion will be needed in the next three to five years to keep up with demand, Nemertes said.
Internet users will create 161 exabytes of new data this year, and this exaflood is a positive development for Internet users and businesses, IIA says.
If the "161 Exabytes" figure sounds familiar, it is probably from the IDC Whitepaper [The Expanding Digital Universe], which estimated that the 161 Exabytes created, captured or replicated in 2006 will increase six-fold to 988 Exabytes by the year 2010. This is not just video captured for YouTube by internet users, but also corporate data captured by employees, and all of the many replicated copies. The IDC whitepaper was based on an earlier, often-cited 2003 University of California Berkeley study, [How Much Info?], which looked not only at magnetic storage (disk and tape), but also optical, film, print, and transmissions over the air like TV and radio.
A key difference was that while UC Berkeley focused on newly created information, the IDC study focused on digitized versions of this information, and included the added impact of replication. It is not unusual for a large corporate database to be replicated many times over. This is done for business continuity, disaster recovery, decision support systems, data mining, application testing, and IT administrator training. Companies often also make two or three copies of backups or archives on tape or optical media, to store them in separate locations.
Likewise, it should be no surprise that internet companies maintain multiple copies of data to improve performance. How fast a search engine can deliver a list of matches can be a competitive advantage. Content providers may offer the same information translated into several languages. Many people replicate their personal and corporate email onto their local hard drives, to improve access performance, as well as to work offline.
The big question is whether we can assume that an increased amount of information created, captured and replicated will have a direct linear relation to the growth of what is transmitted over the internet. Three fourths of U.S. internet users watched an average of 158 minutes of online video in May 2007; is this also expected to grow six-fold by 2010? That would be nearly sixteen hours a month at current video densities, or, more likely, it would be the same 158 minutes but of much higher quality video.
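The six-fold extrapolation above is simple to check with a back-of-the-envelope calculation on the figures already quoted in this post:

```python
# Back-of-the-envelope: extrapolating May 2007 online video viewing six-fold.
minutes_per_month_2007 = 158   # average minutes of online video watched, May 2007
growth_factor = 6              # projected six-fold growth by 2010

minutes_2010 = minutes_per_month_2007 * growth_factor
hours_2010 = minutes_2010 / 60

print(f"{minutes_2010} minutes/month = {hours_2010:.1f} hours/month")
# 948 minutes/month = 15.8 hours/month
```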
On the other hand, much of what is transmitted is never stored, or stored for only very short periods of time. Some of these transmissions are live broadcasts: you are either there to watch and listen when they happen, or you are not. Online video games are a good example. The internet can be used to allow multiple players to participate in real time, but much of this is never stored long-term. An interesting feature of the Xbox 360 is that it allows you to replay "highlight" videos of the game just played, but I do not know if these can be stored away or transferred to longer-term storage.
Of course, there will always be people who will save whatever they can get their hands on. Wired Magazine has an article [Downloading Is a Packrat's Dream], explaining that many [traditional packrats] are now also "digital packrats", and this might account for some of this growth. If you think you might be a digital packrat, Zen Habits offers a [3-step Cure].
In any case, the trends for both increased storage demand, and increased transmission bandwidth requirements, are definitely being felt. Hopefully, the infrastructure required will be there when needed.
I attended the main tent sessions on Day 2 (Monday). The focus was on Visibility, Control and Automation.
Steve is IBM senior VP and Group Executive of the IBM Software Group, and presented some insightful statistics from the IBM Global Technology Outlook study, some recent IBM wins, and other nuggets of IT trivia:
In 2001, there were about 60 million transistors per human being. By 2010, this is estimated to increase to one billion per human
In 2005, there were about 1.3 billion RFID tags; by 2010 this is estimated to grow to over 30 billion
IBM helped the City of Stockholm, Sweden, reduce traffic congestion 20-25% using computer technology
Only about 25% of data is original; the remaining 75% is replicated
In 2007, there were approximately 281 Exabytes (EB), expected to increase to 1800 EB by the year 2011
70 percent of unstructured data is user-created content, but 85 percent of this will be managed by enterprises
Only 20% of data is subject to compliance rules and standards, and about 30% subject to security applications
Human error is the primary reason for breaches, with 34% of organizations experiencing a major breach in 2006
10% of IT budget is energy costs (power and cooling), and this could rise to 50% in the next decade
30 to 60 percent of energy is wasted. During the next 5 years, people will spend as much on energy as they will on new hardware purchases.
Al Zollar is the General Manager of IBM Tivoli. He discussed the 20-some recent software acquisitions, including Encentuate and FilesX earlier this year.
"The time has come to fully industrialize operations" -- Al Zollar
What did Al mean by "industrialize"? This is the closed-loop approach of continuous improvement, including design, delivery and management.
Al used several examples from other industries:
Henry Ford used standardized parts and process automation. Assembly of an automobile went from 12 hours by master craftsmen to a new Model T rolling off the assembly line every 23 seconds.
Power generation was developed by Thomas Edison. A satellite picture showed the extent of the [Blackout of 2003 in Northeast US and Canada]. The time for the "smart grid" has arrived, making sensors and meters more intelligent. This allows non-essential IP-enabled appliances in our home or office to be turned off to reduce energy consumption.
[McCarran International Airport] integrated the management of 13,000 assets with IBM Tivoli Maximo Enterprise Asset Management (EAM) software, and was able to increase revenues through more accurate charge-back. Unlike traditional Enterprise Resource Planning (ERP) applications, EAM offers deep management of four areas: production equipment, facilities, transportation, and IT.
When compared to these other industries, management of IT is in its infancy. The expansion of [Web 2.0] and Service-Oriented Architecture [SOA] is driving this need. What people need is a "new enterprise data center" that IBM Tivoli software can help you manage across operational boundaries. IBM can integrate through open standards with management software from Cisco, Sun, Oracle, Microsoft, CA, HP, BMC Software, Alcatel-Lucent, and SAP. Together with our ecosystem of technology partners, IBM is meeting these challenges.
IBM clients have achieved return on investment from getting better control of their environment. This week there are client experience presentations from Sandia National Labs, Spirit AeroSystems, Bank of America, and BT Converged Communication Services.
Chris O'Connor used some of his staff as "actors" to show an incredible live demo of various Tivoli and Maximo products for the mythical launch of "Project Vitalize", the new online web store for a new "Aero Z bike" from the mythical VCA Bike and Motorcycle company.
Shoel Perelman played the role of "CIO". The CIO locked down all spending, and asked the IT staff to make the shift from bricks-and-mortar to web sales of this new product in 15 months. While the company and situation were mythical, all the products that were part of the live demo are readily available. The CIO had three goals:
What do we have? Where is it? What's connected to what? Traditionally, these would be answered from lists in spreadsheets. The CIO had a goal to deploy IBM Tivoli Application Dependency Discovery Manager (TADDM), which discovered all hardware and software, with an easy-to-understand view of how each piece serves the business applications.
Each of the teams has processes, and needed them consistent and repeatable, tightly linked together. Time is often wasted on the phone coordinating IT changes. For this, the CIO had a goal to deploy Tivoli Change and Configuration Management Database (CCMDB) for "strict change control". The process dashboard is accessible to all teams, to see how all projects are progressing. There is also a Compliance dashboard, which identifies all changes by role, clearly spelling out who can do what.
There is a lot of computerized machinery, manufacturing assets and robotics. The CIO set a goal to "do more with existing people", and needed to automate key processes. A sales rep wanted to add a new distributor to a key web portal; this was all done through their "service catalog". When they needed to deploy a new application, they were able to find servers with available capacity and adjust using automatic provisioning. Thanks to IBM, the IT staff no longer get paged at 3 a.m., and fewer days are spent in the "war room". They now have confidence that the launch will be successful.
Ritika Gunnar played the role of "Operations manager". She highlighted five areas:
A "service viewer" dashboard with green/yellow/red indicators for all of their edge, application and database servers. This allows her to get data 4-5 times faster and more accurately.
Tivoli Enterprise Portal eliminates bouncing around various products.
Tivoli Common Reporting for CPU utilization of all systems helps find excess capacity using IBM Tivoli Monitoring.
On average, 85 percent of problems are caused by IT changes to the environment. IBM can help find dependencies, so that changes in one area do not impact other areas unexpectedly
Process automation shows changes that have been completed, are in progress, or are overdue. She can see all steps in a task or change request. A "workflow" automates all the key steps that need to be taken.
Laura Knapp played the role of "Facilities manager". She wanted to see all processes that apply to her work using a role-based process dashboard. The advantage of using IBM is that it changes work habits, reduces overtime by 42 percent, and improves morale. The IT staff now works as a team, collaborates more, and jobs get done faster with fewer mistakes. Employees are online, accessing, monitoring and managing data quicker, in days, not weeks.
IBM Tivoli Enterprise Portal (TEP) served as a common vehicle. She was able to pull up the floor plan online, displaying all of the managed assets and mapped features. With the temperature overlay from Maximo Spatial, she was able to review hot spots on the data center floor. Heat can cause servers to fail or shut down.
A power utilization chart at peak loads means they can now anticipate, predict and watch power consumption, and they were able to justify replacement with newer, more energy-efficient equipment.
The CIO got back on stage, and explained the great success of the launch. They use webstore usage tracking, security tools tracking all new registrations, and tracking of server and storage load. It now takes only hours, not weeks, to add new business partners and distributors. Tivoli Service Quality Assurance tools track all orders placed, processed, and shipped. Faster responsiveness is a competitive advantage. Their IT department is no longer seen as a stodgy group, but as a world-class organization.
The live demo showed how IBM can help clients with rapid decision making, speed and accuracy of change processes, and automation to take actions quickly. The result is a strong return on investment (ROI).
Liz Smith, IBM General Manager of Infrastructure Services, presented the results of an IBM survey of CEOs and CIOs asking questions like: What is the next big impact? Where are you investing? What will the new data center look like?
The five key traits they found for companies of the future:
They were hungry for change
Innovative beyond customer imagination
Disruptive by nature
Genuine, not just generous
The IT infrastructure must be secure, reliable, and flexible. Taking care of the environment is a corporate responsibility, not just a way to reduce costs.
The five entry points for IBM Service Management: Integrate, Industrialize, Discover, Monitor and Protect. IBM Service Management and compliance are critical for the Globally Integrated Enterprise, with repeatable, scalable and consistent processes that enable change through an automated workflow. This reduces errors, risks and costs, and improves productivity. IBM has the talent, assets and experience to help any client get there.
Lance lives in Austin, TX, where IBM Tivoli is headquartered, so this made him a good choice as a keynote speaker. He is best known for winning seven "Tour de France" bicycle races in a row, but instead he gave an inspirational talk about how he survived cancer.
In 1996, Lance was diagnosed with cancer. Surprisingly, he said it was the greatest thing that happened to him, giving him new perspective on his life, family and the sport of bicycling. Back then, there was no WebMD, Google or other Web 2.0 social networking sites for Lance to better understand what he was going through, learn more about treatment options, or find others going through the same ordeal.
After his treatment, he was considered "damaged goods" by many of the leading European bicycle teams. So, he joined the US Postal Service team, not known for its wins, but often invited to sell TV rights to American audiences. Collaborating with his coaches and other members of his team, he revolutionized the sport, analyzed everything about the race, and built up morale. He won his first "yellow jersey" in 1999, and did so each year for a total of seven wins.
Lance formed the [Livestrong foundation] to help other cancer survivors. Nike came to him and proposed donating 5 million yellow "rubber bracelets", colored to match his seven yellow jerseys and embossed with the name "Livestrong", that his foundation could then sell for one dollar apiece to raise funds. What some thought was a silly idea at first has started a movement. At the 2004 Olympics, many athletes from all nations and religious backgrounds wore these yellow bracelets to show solidarity with this cause. To date, the foundation has sold over 72 million yellow bracelets, and these have served to provide a symbol, a brand, a color identity, for his cause.
He explained that doctors have a standard speech for cancer survivors. As a patient, you can go out this doorway and never tell anyone, keeping the situation private. Or you can go out this other doorway and tell everybody your story. Lance chose the latter, and he felt it was the best decision he ever made. He wrote a book titled [It's Not About the Bike: My Journey Back to Life].
His call to action for the audience: find out what you can do to make a difference. A million non-governmental organizations [NGOs] have started in the past 10 years. Don't just give cash; also give your time and passion.
I am always amused by the manner in which the IT industry tries to solve problems. Take, for example, the process of backups. The simplest approach is to back up everything, and keep "n" versions of that. Simple enough for a small customer who has only a handful of machines, but it does not scale well. In my post [Times a Million], I coined the phrase "laptop mentality", referring to people's inability to think through solutions at large scale.
Apparently, I am not alone. Steve Duplessie (ESG) wrote in his post [Random Thoughts]:
"I may even get to stop yelling at people to stop doing full backups every week on non-changing data (which is 80%+) just because that's how they used to do it. They won't have a choice. You can't back up 5X your current data the way you do (or don't) today."
Hu Yoshida (HDS) does a great job explaining that there are three ways to perform deduplication for backups:
Pre-processing. Have the backup software not backup unchanged data.
Inline processing. Have an index to filter the output of the backup as it sends data to storage.
Post-processing. Have the receiving storage detect duplicates and handle them accordingly.
"A full backup of a 1TB database tablespace is taken on day one. The next day another full backup is taken, and only 2GB of that backup has any changes.
Using traditional full backup approaches after 2 nights, the backup capacity required is 2 x 1TB = 2TB
One method of calculating de-duplication ratios could yield a low ratio:
Total de-duplicated backup capacity used = 1TB + 2GB = 1.002TB
If the de-duplication ratio compares the amount of total physical storage used to the total amount that would have been used by traditional backup methods, the ratio = 2TB / 1.002TB = approximately 2:1
Another method of calculating de-duplication ratios could yield a high ratio:
Total de-duplicated backup capacity used still = 1.002TB
If the de-duplication ratio compares the amount of data stored in the most recent (second) backup to the amount that would have been used by traditional backup methods, the ratio 1TB / 2GB = 1000GB / 2GB = 500:1"
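The two ratio methods quoted above can be reproduced with a few lines of arithmetic (a sketch of the example's math only, not any vendor's actual reporting tool):

```python
TB = 1000  # working in GB, with decimal units as in the example above

full_backup = 1 * TB        # day-one full backup, in GB
changed_data = 2            # GB changed before the second backup

traditional = 2 * full_backup              # two full backups retained
deduplicated = full_backup + changed_data  # full backup plus the delta only

# Method 1: total physical capacity used vs. traditional full backups
ratio_low = traditional / deduplicated     # 2000/1002, roughly 2:1

# Method 2: size of the most recent backup vs. what it actually consumed
ratio_high = full_backup / changed_data    # 1000/2 = 500:1

print(f"low: {ratio_low:.2f}:1, high: {ratio_high:.0f}:1")
```

Both numbers describe the same system; the moral is to ask how a vendor's quoted ratio was computed.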
While IBM also offers deduplication in the IBM System Storage N series disk systems, I find that for backup, it is often more effective to apply best practices via IBM Tivoli Storage Manager (TSM). Let's take a look at some:
Exclude Operating System files
Why take full backups of your operating system every day? Yes, deduplication will find a lot to reduce from this, but best practices would exclude these files. TSM has an include/exclude list, and the default version excludes all the operating system files that would be recovered from "bare machine recovery" or "new system install" procedures. Often, if the replacement machine has different gear inside, your OS backups aren't what you need, and a fresh OS install may determine this and install different drivers or different settings.
Exclude Application programs
Again, yes: if there are several machines running the same application, you probably have an opportunity for deduplication. However, unless you match these up with the appropriate registry entries or settings buried down in the operating system, recovering just the application program files may render an unusable system. Applications are best installed from a common source, either "pushed" through software distribution or "pulled" from an application installation space.
If you have TB-sized databases, and are only doing full backups daily to protect them, have I got a solution for you. IBM and others have software that is "application-aware" and "database-aware" enough to determine what has changed since the last backup and copy only that delta. Taking advantage of the TSM Application Programming Interface (API) allows both IBM and third-party tools to take these delta backups correctly.
Which leaves us with user files, which are often unique enough from the files of other users that they would not benefit from file-level deduplication. Backing up changed data only, as TSM does with its patented ["progressive incremental backup"] method, generally gets most of the benefits described by deduplication, without having to purchase storage hardware features.
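The principle behind progressive incremental backup can be sketched in a few lines: remember what has already been backed up, and send only files that are new or changed since the previous run. (This is a simplified illustration of the concept only; TSM's actual implementation tracks file versions in its server-side database.)

```python
import hashlib

def progressive_incremental(files, catalog):
    """Return the files to send this run: new or changed since the last backup.

    files:   dict of path -> file content (bytes)
    catalog: dict of path -> content hash from prior runs (updated in place)
    """
    to_send = []
    for path, content in files.items():
        digest = hashlib.sha256(content).hexdigest()
        if catalog.get(path) != digest:   # new file, or content changed
            to_send.append(path)
            catalog[path] = digest
    return to_send

# First run backs up everything; the second run sends only the changed file.
catalog = {}
day1 = {"/home/a.txt": b"hello", "/home/b.txt": b"world"}
print(progressive_incremental(day1, catalog))   # both files

day2 = {"/home/a.txt": b"hello", "/home/b.txt": b"world, updated"}
print(progressive_incremental(day2, catalog))   # only /home/b.txt
```

Note that no full backup is ever taken again after the first one, which is exactly why weekly fulls of non-changing data are wasted effort.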
Of course, if two or more users have identical files, the question might be why these are not stored on a common file share. NAS file share repositories can greatly reduce each user keeping their own set of duplicates. It is interesting that some block-oriented deduplication, such as that found in the IBM System Storage N series, can get some benefit because some user files are often derivatives of other files, and there might be some 4 KB blocks of data in common.
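Block-oriented deduplication of the kind described can be illustrated by hashing fixed 4 KB chunks: two files that are derivatives of each other share most of their blocks, so only the differing blocks consume new space. (A minimal sketch of the principle; real implementations such as A-SIS also handle hash verification and reference counting.)

```python
import hashlib

CHUNK = 4096  # 4 KB fixed-size blocks

def store(data, pool):
    """Split data into 4 KB chunks; store each unique chunk once in the pool."""
    refs = []
    for i in range(0, len(data), CHUNK):
        chunk = data[i:i + CHUNK]
        digest = hashlib.sha256(chunk).hexdigest()
        pool.setdefault(digest, chunk)   # only previously unseen chunks use space
        refs.append(digest)              # the file is just a list of references
    return refs

pool = {}
original = b"".join(bytes([i]) * CHUNK for i in range(10))  # ten distinct 4 KB blocks
derivative = original[:-CHUNK] + b"\xff" * CHUNK            # same file, last block edited

store(original, pool)
store(derivative, pool)

# Ten unique chunks from the original, plus one new chunk from the edit.
print(len(pool))   # 11
```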
Last November, I visited a customer in Canada. All of their problems were a direct result of taking full backups every weekend. It put a strain on their network; it used up too many disk and tape resources; and it took too long to complete. They asked about virtual tape libraries, deduplication, and anything else that could help them. The answer was simple: switch to IBM Tivoli Storage Manager and apply best practices.
One of the differences between IBM and the other storage vendors is that IBM is also in the business of middleware, application-aware backup software, and advanced copy services. This allows IBM to put together solutions that address specific challenges for our clients.
IBM has written a whitepaper on a clever VSS Snapshot Backup for Exchange using IBM Tivoli Storage Manager and the point-in-time copy capabilities of IBM System Storage disk systems.
A problem in the past was that each vendor's point-in-time copy method had its own unique proprietary interface. Microsoft developed Volume Shadow Copy Services (VSS) as a common front-end interface to resolve this concern. IBM Tivoli Storage Manager for Mail can invoke standard VSS interfaces, and this in turn can invoke FlashCopy on the IBM System Storage SAN Volume Controller, DS8000 series, or DS6000 series disk system.
You might be thinking: wouldn't it have been less effort to just have TSM for Mail invoke IBM proprietary interfaces, rather than putting full VSS support into TSM for Mail, and then full VSS support into IBM's various disk systems? Perhaps, but IBM doesn't decide to do things because it is the cheapest way; we focus on what is the right way. In this case, customers now have more choices: they can use TSM for Mail with IBM or non-IBM disk systems that support the VSS interface, and IBM disk systems can be employed for other VSS snapshot uses.
Of course, we would like our clients to consider both TSM and IBM System Storage disk systems for a combined solution,not because they are required to make the solution work, but because both are best-of-breed, and whitepapers likethis show how they can provide synergy working together.
BladeCenter servers come in many flavors, including blades with Intel, AMD and POWER chipsets, and can be configured in Grid and SuperComputer configurations. Up to 14 blade servers can fit into a single 7U-high chassis, making this twice as dense as standard 1U-high rack-mounted servers.
System x, the new "IBM Systems" name for our popular xSeries product line, supports Intel and AMD chipsets. These come in both rack-mounted and tower configurations, and are also ideal for clustered and SuperComputer configurations. [Read More]
Continuing this week's series on Pulse 2009 video, we have a double header. Bob Dalton discusses our entry-level IBM System Storage [DS3000] and midrange IBM System Storage [DS4000] disk systems, followed by Dan Thompson discussing [IBM Tivoli Storage Manager FastBack] software.
IBM Tivoli Storage Manager FastBack is the result of IBM's [acquisition of FilesX], a company in Israel that developed software to back up servers at remote branch offices running the Microsoft Windows operating system.
Seth Godin has an interesting post titled Times a Million. He recounts how many people determine the fuel savings of higher-mileage cars to be only $300-$900 per year, and that this is not enough to motivate the purchase of a more efficient vehicle, such as a hybrid or electric car. Of course, if everyone drove more efficient vehicles, the savings "times a million" would benefit everyone and the world's ecology.
When I discuss storage-related concepts, many executives mistakenly relate them to the one area of information technology they know best: their laptop. Let's take a look at some examples:
Information Lifecycle Management
Information Lifecycle Management (ILM) includes classifying data by business value, and then using this to determine placement, movement or deletion. If you think about the amount of time and effort needed to review the files on your individual laptop, and to manually select and move or delete data, versus the benefits for the individual laptop owner, you would dismiss the concept. Most administrative tasks are done manually on laptops, because automated software is either unavailable or too expensive to justify for a single owner.
In medium and large size enterprises, automated software to help classify, move and delete data makes a lot of sense. Executives who decide that ILM is not for their data center, based on their experiences with their laptop, are losing out on the "times a million" effect.
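At data-center scale, an ILM policy can be as simple as rules mapping a file's business value and age to a storage tier. A hypothetical sketch (the tier names, thresholds, and retention classes are invented for illustration, not any IBM product's policy language):

```python
from datetime import date, timedelta

def ilm_tier(last_accessed, retention_class, today=date(2008, 6, 1)):
    """Hypothetical ILM placement rule: classify data by business value and age."""
    age = today - last_accessed
    if retention_class == "expired":
        return "delete"            # no remaining business value
    if age < timedelta(days=30):
        return "tier1-disk"        # hot data stays on fast disk
    if age < timedelta(days=365):
        return "tier2-sata"        # cooler data moves to cheaper disk
    return "tape-archive"          # cold data migrates to tape

print(ilm_tier(date(2008, 5, 20), "active"))   # tier1-disk
print(ilm_tier(date(2006, 1, 1), "active"))    # tape-archive
```

Run automatically against millions of files, rules like these deliver the "times a million" payoff that would never justify manual effort on a single laptop.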
Energy Efficiency
Laptops have various controls to minimize the use of battery power, and these controls are equally available when plugged in. Many users don't bother turning off the features and functions they don't need when plugged in, because they feel the cost savings would only amount to pennies per day.
Times a million, energy savings do add up, and options to reduce the amount used per server, per TB of data stored, not only save millions of dollars per year, but can also postpone the need to build a new data center, or upgrade the electrical systems in your existing data center.
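The "times a million" effect on energy is easy to quantify. A hypothetical calculation (the per-server wattage, server count, and electricity rate below are illustrative assumptions, not IBM figures):

```python
# Illustrative assumptions: modest per-server savings scaled to a large shop.
watts_saved_per_server = 50     # e.g. from enabling power-management features
servers = 10_000                # servers in a large data center
hours_per_year = 24 * 365
usd_per_kwh = 0.10              # assumed electricity rate

kwh_saved = watts_saved_per_server * servers * hours_per_year / 1000
annual_savings = kwh_saved * usd_per_kwh

print(f"${annual_savings:,.0f} per year")   # pennies per laptop, real money at scale
```

The same savings that round to pennies on one machine become hundreds of thousands of dollars, and avoided kilowatts of facility load, across the data center.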
Backup and Disaster Recovery planning
I am not surprised at how many laptops do not have adequate backup and disaster recovery plans. When executives think in terms of the time and effort to back up their data, often crudely copying key files to CD-ROM or USB key, and worry about the management of those copies, which copies are the latest, and when those copies can be destroyed, they might reject deploying appropriate backup policies for others.
Times a million, the collected data stored on laptops could easily be half of your company's emails and intellectual property. Products like IBM Tivoli Storage Manager can manage a large number of clients with a few administrators, keeping track of how many copies to keep, and how long to keep them.
So, next time you are looking at technology or solutions for your data center, don't suffer from "Laptop Mentality". Focus instead on the data center as a whole.
My session was the first in the morning, at 8:30am, but I managed to pack the room full of people. A few looked like they had just rolled in from Brocade's special get-together in Casey's Irish Pub the night before. I presented how IBM's storage strategy for the information infrastructure fits into the greater corporate-wide themes. To liven things up, I gave out copies of my book [Inside System Storage: Volume I] to those who asked or answered the toughest questions.
Data Deduplication and IBM Tivoli Storage Manager (TSM)
IBM's Toby Marek compared and contrasted the various data deduplication technologies and products available, and how to deploy them as the repository for TSM workloads. She is a software engineer for our TSM software product, and gave a fair comparison between IBM System Storage N series Advanced Single Instance Storage (A-SIS), IBM Diligent, and other solutions out in the marketplace. If you are going to combine technologies, then it is best to dedupe first, then compress, and finally encrypt the data. She also explained the many clever ways that TSM does data reduction at the client side, which greatly reduces bandwidth traffic over the LAN, as well as reducing disk and tape resources for storage. This includes progressive "incremental forever" backup for file selection, incremental backups for databases, and adaptive sub-file backup. Because of these data reduction techniques, you may not get as much benefit as deduplication vendors claim.
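The "dedupe first, then compress, then encrypt" ordering is easy to demonstrate: encryption randomizes content, so identical data encrypted with different keys no longer matches (defeating deduplication) and the high-entropy ciphertext no longer compresses. A small sketch, using zlib for compression and a toy XOR keystream as a stand-in for encryption (NOT real cryptography, just enough to show the effect):

```python
import hashlib
import os
import zlib

def toy_encrypt(data, keystream):
    """Toy stand-in for encryption: XOR with a random keystream (NOT real crypto)."""
    return bytes(a ^ b for a, b in zip(data, keystream))

block = b"the same backup data " * 100   # highly redundant, like unchanged backups

# Right order: identical copies dedupe to one pool entry, which is then
# compressed (and would be encrypted last, after compression).
pool = {hashlib.sha256(block).hexdigest(): zlib.compress(block)}

# Wrong order: encrypting first makes two copies of the same data differ,
# so deduplication finds no match, and the ciphertext barely compresses.
enc1 = toy_encrypt(block, os.urandom(len(block)))
enc2 = toy_encrypt(block, os.urandom(len(block)))
print(enc1 == enc2)                                           # False: no dedup
print(len(zlib.compress(enc1)) > len(zlib.compress(block)))   # True: poor compression
```

The same logic explains why client-side encryption and appliance-side deduplication need careful sequencing in a real TSM deployment.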
The Business Value of Energy-Efficient Data Centers
Scott Barielle did a great job presenting the issues related to the green IT data center. He is part of IBM's "STG Lab Services" team that does energy efficiency studies for customers. It is not unusual for his team to find potential savings of up to 80 percent of the watts consumed in a client's data center.
IBM has done a lot to make its products more energy efficient. For example, in the United States, most data centers are supplied three-phase 480V AC current, but this is often stepped down to 208V or 110V with power distribution units (PDUs). IBM's equipment allows for direct connection to this 480V, eliminating the step-down loss. This is available for the IBM System z mainframe, the IBM System Storage DS8000 disk system, and larger full-frame models of our POWER-based servers, and will probably be rolled out to some of our other offerings later this year. The end result saves 8 to 14 percent in energy costs.
Scott had some interesting statistics. Typical US data centers spend only about 9 percent of their IT budget on power and cooling costs. The majority of clients that engage IBM for an energy efficiency study are not trying to reduce their operational expenditures (OPEX); rather, they have run out, or are close to running out, of the total kW rating of their current facility, and have been turned down by upper management to spend the average $20 million USD needed to build a new one. The cost of electricity in the USA has risen very slowly over the past 35 years, and is more tied to the fluctuations of natural gas than to oil prices. (A recent article in the Dallas News confirmed this: ["As electricity rates go up, natural gas' high prices, deregulation blamed"])
Cognos v8 - Delivering Operational Business Intelligence (BI) on Mainframe
Mike Biere, author of the book [Business Intelligence for the Enterprise], presented Cognos v8 and how it is being deployed for the IBM System z mainframe. Typically, customers do their BI processing on distributed systems, but 70 percent of the world's business data is on mainframes, so it makes sense to do your BI there as well. Cognos v8 runs on Linux for System z, connecting to z/OS via [HiperSockets].
There are a variety of other BI applications on the mainframe already, including DataQuant, AlphaBlox, IBI WebFocus and SAS Enterprise Business Intelligence. In addition to accessing traditional online transaction processing (OLTP) repositories like DB2, IMS and VSAM, using the [IBM WebSphere Classic Federation Server], Cognos v8 can also read Lotus databases.
Business Intelligence is traditionally query, reporting and online analytical processing (OLAP) for the top 10 to 15 percent of the company, mostly executives and analysts, for activities like business planning, budgeting and forecasting. Cognos PowerPlay stores numerical data in an [OLAP cube] for faster processing. OLAP cubes are typically constructed in a batch cycle, using either "Extract, Transform, Load" [ETL] or "Change Data Capture" [CDC], which plays to the strength of the IBM System z mainframe's batch processing capabilities. If you are not familiar with OLAP, Nigel Pendse has an article, [What is OLAP?], for background information.
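To make the OLAP cube idea concrete, here is a minimal sketch in Python (hypothetical sales data and a toy build_cube helper of my own; real products like Cognos PowerPlay do far more). The trick is pre-aggregating a measure for every combination of dimension values, using None as the "all" rollup:

```python
from collections import defaultdict

# Hypothetical transaction rows: (region, product, quarter, revenue)
transactions = [
    ("East", "Widgets", "Q1", 100.0),
    ("East", "Widgets", "Q2", 150.0),
    ("West", "Gadgets", "Q1", 200.0),
    ("East", "Gadgets", "Q1", 50.0),
]

def build_cube(rows):
    """Aggregate revenue for every combination of the three dimensions,
    using None as the 'all' rollup value, like a tiny OLAP cube."""
    cube = defaultdict(float)
    for region, product, quarter, revenue in rows:
        # Roll the measure up into every subset of the dimensions
        for r in (region, None):
            for p in (product, None):
                for q in (quarter, None):
                    cube[(r, p, q)] += revenue
    return cube

cube = build_cube(transactions)
print(cube[("East", None, None)])   # total East revenue: 300.0
print(cube[(None, None, "Q1")])     # total Q1 revenue: 350.0
```

Once the cube is built (in a batch cycle, as the ETL/CDC discussion above suggests), any query that slices by these dimensions is a single lookup instead of a scan over the raw transactions.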
Over the past five years, BI has been deployed more and more for the rest of the company, the knowledge workers tasked with day-to-day operations. This phenomenon is being called "Operational" Business Intelligence.
IBM's Glen Corneau, who is on the Advanced Technical Support team for AIX and System p, presented the IBM General Parallel File System (GPFS), which is available for AIX, Linux-x86 and Linux on POWER. Unfortunately, many of the questions were related to Scale Out File Services (SOFS), which my colleague Glenn Hechler was presenting in another room during this same time slot.
GPFS is now in its 11th release since its introduction in 1997. All of the IBM supercomputers on the [Top 500 list] use GPFS. The largest deployment of GPFS is 2241 nodes. A GPFS environment can support up to 256 file systems, and each file system can have up to 2 billion files across 2 PB of storage. GPFS supports "Direct I/O", making it a great candidate for Oracle RAC deployments. Oracle 10g automatically detects if it is using GPFS, and sets the appropriate DIO bits in the stream to take advantage of GPFS features.
Glen also covered the many new features of GPFS, such as the ability to place data on different tiers of storage, with policies to move data to lower tiers, or delete it after a certain time period, all concepts we call Information Lifecycle Management. GPFS also supports access across multiple locations and offers a variety of choices for disaster recovery (DR) data replication.
Perhaps the only problem with conferences like this is that it can be an overwhelming["fire hose"] of information!
Earlier this year, IBM launched its [New Enterprise Data Center vision]. The average data center was built 10-15 years ago, at a time when the World Wide Web was still in its infancy, some companies were deploying their first storage area network (SAN) and email system, and if you asked anyone what "Google" was, they might tell you it was ["a one followed by a hundred zeros"]!
Full disclosure: Google, the company, just celebrated its [10th anniversary] yesterday, and IBM has partnered with Google on a variety of exciting projects. I am employed by IBM, and own stock in both companies.
In just the last five years, we saw rapid growth in information, fueled by Web 2.0 social media, email, mobile hand-held devices, and the convergence of digital technologies that blurs the lines between communications, entertainment and business information. This explosion in information is not just "more of the same", but rather a dramatic shift from predominantly databases for online transaction processing to mostly unstructured content. IT departments are no longer just the "back office" recording financial transactions for accountants, but now also take on a more active "front office" role. For a growing number of industries, information technology plays a pivotal role in generating revenue, making smarter business decisions, and providing better customer service.
IBM felt a new IT model was needed to address this changing landscape, so IBM's New Enterprise Data Center vision has these five key strategic initiatives:
Highly virtualized resources
Business resiliency
Business-driven Service Management
Green, Efficient, Optimized facilities
Information Infrastructure
In February, IBM announced new products and features to support the first two initiatives, including the highly virtualized capability of the IBM z10 EC mainframe, and related business resiliency features of the [IBM System Storage DS8000 Turbo] disk system.
In May, IBM launched its Service Management strategic initiative at the Pulse 2008 conference. I was there in Orlando, Florida at the Swan and Dolphin resort to present to clients. You can read my three posts: [Day 1; Day 2 Main Tent; Day 2 Breakout sessions].
In June, IBM launched its fourth strategic initiative, "Green, Efficient and Optimized Facilities", with [Project Big Green 2.0], which included the Space-Efficient Volume (SEV) and Space-Efficient FlashCopy (SEFC) capabilities of the IBM System Storage SAN Volume Controller (SVC) 4.3 release. Fellow blogger and IBM Master Inventor Barry Whyte (BarryW) has three posts on his blog about this: [SVC 4.3.0 Overview; SEV and SEFC detail; Virtual Disk Mirroring and More]
Some have speculated that the IBM System Storage team seemed to be on vacation the past two months, with few press releases, little or no fanfare about our July and August announcements, and no direct response to critics and FUD in the blogosphere. It was because we were holding them all for today's launch, taking our cue from a famous perfume commercial:
"If you want to capture someone's attention -- whisper."
My team and I were actually quite busy at the [IBM Tucson Executive Briefing Center]. In between doing our regular job talking to excited prospects and clients, we trained sales reps and IBM Business Partners, wrote certification exams, and updated marketing collateral. Fortunately, competitors stopped promoting their own products to discuss and demonstrate why they are so scared of what IBM is planning. The fear was well justified. Even a few journalists helped raise the word-of-mouth buzz and excitement level. A big kiss to Beth Pariseau for her article in [SearchStorage.com]!
(Last week we broke radio silence to promote our technology demonstration of 1 million IOPS using Solid State Disk, just to get the huge IBM marketing machine oiled up and ready for today.)
Today, IBM General Manager Andy Monshaw launched the fifth strategic initiative, [IBM Information Infrastructure], at the [IBM Storage and Storage Networking Symposium] in Montpellier, France. Montpellier is one of the six locations of our New Enterprise Data Center Leadership Centers launched today. The other five are Poughkeepsie, Gaithersburg, Dallas, Mainz and Boeblingen, with more planned for 2009.
Although IBM has been using the term "information infrastructure" for more than 30 years, it might be helpful to define it for you readers:
“An information infrastructure comprises the storage, networks, software, and servers integrated and optimized to securely deliver information to the business.”
In other words, it's all the "stuff" that delivers information from the magnetic surface recording of the disk or tape media to the eyes and ears of the end user. Everybody has an information infrastructure already; some are just more effective than others. For those of you not happy with yours, IBM has the products, services and expertise to help with your data center transformation.
IBM wants to help its clients deliver the right information to the right people at the right time, to get the most benefit from information while controlling costs and mitigating risks. There might be more than a dozen ways to address the challenges involved, but IBM's Information Infrastructure strategic initiative focuses on four key solution areas:
Last, but not least, I would like to welcome to the blogosphere IBM's newest blogger, Moshe Yanai, formerly the father of the EMC Symmetrix and now leading the IBM XIV team. Already from his first post on his new [ThinkStorage blog], I can tell he is not going to pull any punches either.
"IBM announced that Northwest Radiology Network has gone live with a new virtualized enterprise of IBM servers and storage to support its growing medical imaging needs, giving its four locations an enterprise-class infrastructure which enables its doctors to recover medical image reports faster for analysis and enables remote 24x7 access to its medical image report system.
Founded in 1967, Northwest Radiology (NWR) is ranked as one of the largest physician groups in the Indianapolis, Indiana area. With 180 employees who offer the Central Indiana community comprehensive inpatient and outpatient imaging services such as mammography, ultrasonography, CT scans, PET-CT scans, bone density scans and MRIs – the Network had a dramatic need to develop a centralized infrastructure where large amounts of data could be stored and shared. A new data center would benefit the company’s clientele; which includes area hospitals and doctor’s offices serving thousands of patients each year.
Storing more than ten thousand medical imaging reports and radiographic images each month for doctors to analyze, the Network realized it had single points of failure and at one point a critical report server failed. Northwest Radiology turned to IBM and IBM Business Partner Software Information Systems (SIS) for a more efficient solution to prevent any possible downtime in the future.
SIS recommended and installed a virtualized infrastructure with IBM servers and storage as the heart of Northwest Radiology’s Indianapolis data center. By April 2007, Northwest Radiology replaced eight servers and direct attached storage with just two IBM System x3650 servers connected to an IBM System Storage DS3400. Today, the new servers run 15 virtual servers to ensure the availability of their services 24x7. When the business needs it, a new server can be provisioned in just minutes. With a Fibre Channel on the SAN Disk, the DS3400 not only increased performance but also met NWR’s requirement to not have one single point of failure. With three TB of storage capacity, they can meet the demands of increased business well into the future. The systems are also now easily managed from a remote site."
“Uptime is paramount in our business. We selected IBM based on the reliability and flexibility of IBM System x servers and the IBM System Storage DS3400,” said Marty Buening, IT Director, Northwest Radiology Network. “The virtualized infrastructure and the SAN storage array that SIS and IBM brought to the table is improving our service and giving our doctors and staff peace of mind knowing each patient’s medical imaging reports are always available.”
Second, we have [Iowa Health System], a large enterprise with over 19,000 employees, managing four million patients and hundreds of TBs of data.
Here is a 4-minute video on IBM TV from the good folks at Iowa Health System discussing the IBM Grid Medical Archive Solution (GMAS) as part of their information infrastructure for their Picture Archiving and Communication Systems (PACS) application.
In both cases, IBM technology was able to provide remote access to medical information, making images and patient records available to more doctors, specialists and radiologists. Last January, in my post[Five in Five], IBM had predicted that remote access to healthcare would have an impact over the next five years.
Whether you are a small company or a large one, IBM probably has the right solution for you.
IBM is hosting a webcast about storage for SAP environments. Learn how integrated IBM infrastructure solutions, specifically customized for your SAP environments, can help lower your business costs, increase productivity in SAP development and test tasks, and improve resource utilization. This will include discussion of archive solutions with WebDAV, ArchiveLink and DR550; IBM Business Intelligence (BI) Accelerator; IBM support for SAP [Adaptive Computing]; and performance benchmark results. The session is intended for SAP and storage administrators, IT directors and managers.
Here are the details:
Date: Wednesday, June 18, 2008
Time: 11:00am EDT (8:00am for those of us in Arizona or California)
Wrapping up my week's discussion on Business Continuity, I've had lots of interest in my opinion, stated earlier this week, that it is good to separate programs from data, that this simplifies the recovery process, and that the Windows operating system can fit in a partition as small as the 15.8GB solid state drive we just announced for BladeCenter. It worked for me, and I will use this post to show you how to get it done.
Disclaimer: This is based entirely on what I know and have experienced with my IBM Thinkpad T60 running Windows XP, and is meant as a guide. If you are running with different hardware or different operating system software, some steps may vary.
(Warning: Windows Vista apparently handles data, Dual Boot, and Partitions differently. These steps may not work for Vista.)
For this project, I have a DVD/CD burner in my Ultra-Bay, a stack of black CDs and DVDs, and a USB-attached 320GB external disk drive.
I like to back up the master boot record to one file, and then the rest of the C: drive to a series of 690MB compressed chunks. These can be directed to the USB-attached drive, and then later burned onto CD-ROM, or packed 6 files per DVD. Most USB-attached drives are formatted with the FAT32 file system, which doesn't support any file greater than 4GB, so splitting these up into 690MB chunks keeps them well below that limit.
You can learn more about these commands here and here.
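The split-into-chunks idea itself is simple enough to sketch in a few lines of Python (a hypothetical split_file helper of my own, not part of any IBM backup tool; the 690MB size comes from the discussion above):

```python
import os

CHUNK_SIZE = 690 * 1024 * 1024  # 690MB, comfortably under FAT32's file-size limit

def split_file(path, chunk_size=CHUNK_SIZE):
    """Split a large backup image into numbered chunks small enough
    to store on a FAT32 drive or burn 6-per-DVD. Returns chunk paths."""
    chunks = []
    with open(path, "rb") as src:
        index = 0
        while True:
            data = src.read(chunk_size)
            if not data:
                break  # reached end of the source image
            chunk_path = f"{path}.{index:03d}"
            with open(chunk_path, "wb") as out:
                out.write(data)
            chunks.append(chunk_path)
            index += 1
    return chunks
```

Reassembly is just concatenating the chunks back together in numeric order, for example with `copy /b` on Windows or `cat` on Linux.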
Step 1 - Defrag your C: drive
From Windows, right-click on your Recycle Bin and select "Empty Recycle Bin".
Click Start->Programs->Accessories->System Tools->Disk Defragmenter. Select the C: drive and push the Analyze button. You will see a bunch of red, blue and white vertical bars. If there are any green bars, we need to fix that. The following worked for me:
Right-click "My Computer" and select Properties. Select Advanced, then press the "Settings" button under Performance. Select the Advanced tab and press the "Change" button under Virtual Memory. Select "No Paging File" and press the "Set" button. Virtual memory lets you have many programs open, moving memory back and forth between your RAM and hard disk.
Click Start->Control Panel->Performance and Maintenance->Power Options. On the Hibernate tab, make sure the "Enable Hibernation" box is un-checked. I don't use Hibernate, as it seems like it takes just as long to come back from Hibernation as it does to just boot Windows normally.
Reboot your system to Windows.
If all went well, Windows will have deleted both pagefile.sys and hiberfil.sys, the two most common unmovable files, freeing up 2GB of space. You can run just fine without either of these features, but if you want them back, we will put them back in Step 6 below.
Go back to Disk Defragmenter, verify there are no green bars, and proceed by pressing the "Defragment" button. If there are still some green bars, you can proceed cautiously (you can always restore from your backup, right?), or seek professional help.
Step 2 - Resize your C: drive
When the defrag is done, we are ready to re-size your file system. This can be done with commercial software like Partition Magic. If you don't have this, you can use open source software: burn yourself the Gparted LiveCD. This is another Linux LiveCD, and is similar to Partition Magic.
Either way, re-size the C: drive smaller. In theory, you can shrink it down to 15GB if this is a fresh install of Windows and there is no data on it. If you have lots of data, and the drive was nearly full, only resize the C: drive smaller by 2GB. That is how much we freed up from the unmovable files, so that should be safe.
You could combine this step with Steps 3 and 4 while you are here, but I don't recommend it. Just re-size C:, press the "Apply" button, reboot into Windows, and verify everything starts correctly before going to the next step.
Step 3 - Create Extended Partition and Logical D: drive
A disk can only have FOUR primary partitions, one of which can be an Extended partition. The Extended partition can act as a container for one or more logical partitions, which is where our data will go.
Get back into the Partition Magic or Gparted program, and in the unused space freed up from re-sizing in the last step, create a new extended/logical partition. For now, just have one logical partition inside the extended one, but I have co-workers who have two logical partitions: D: for data, and E: for their e-mail from Lotus Notes. You can always add more logical partitions later.
I selected the "NTFS" type for the D: drive. In years past, people chose the older FAT32 type, which allowed read/write capability from DOS, OS/2, and Linux, but has some limitations: Windows XP can only format FAT32 partitions up to 32GB, and each file cannot be bigger than 4GB. I have files bigger than that. Linux can now read/write NTFS file systems directly, using the new NTFS-3G driver, so that is no longer an issue.
Step 4 - Format drive D: as NTFS
Just because you have told your partitioning program that D: is NTFS type, you still have to put a file system on it.
Click Start->Control Panel->Performance and Maintenance->Computer Management. Under Storage, select Disk Management. Right-click your D: drive and choose Format. Make sure the "Perform Quick Format" box is un-checked, so that it performs a full format, checking the drive as it goes.
Step 5 - Move data from C: to D: drive
Create two directories, "D:\documents" and "D:\notes\data", either through Explorer, or in a command-line window with the "MKDIR documents notes\data" command.
Move files from c:\notes\data to d:\notes\data, and any folder in your "My Documents" over to d:\documents.
(If you have more data than the size of the D: drive, copy over what you can, run another defrag, resize your C: drive even smaller with Partition Magic or Gparted, reboot, verify Windows is still working, resize your D: bigger, and repeat the process until you have all of your data moved over.)
To inform Lotus Notes that all of your data is now on the D: drive, use NOTEPAD to edit notes.ini and change the Directory line to "Directory=D:\notes\data". If you have a special signature file, leave it in C:\notes directory.
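After the edit, the relevant line in notes.ini would look something like this (all other settings omitted; the D:\notes\data path follows the layout used in these steps):

```ini
Directory=D:\notes\data
```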
Once all of your data is moved over to D:\documents, right-click on "My Documents" and select Properties. Change the target to "D:\documents" and press the "Move" button. Now, whenever you select "My Documents", you will be on your D: drive instead.
Step 6 - Take A Fresh Backup
If you use IBM Tivoli Storage Manager, now would be a good time to re-evaluate your "dsm.opt" file that lists what drives and sub-directories to back up. Take a backup, and verify your data is being backed up correctly.
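For example, a minimal dsm.opt for this new layout might include a DOMAIN statement naming both drives. This is only a sketch based on the Windows backup-archive client options I am familiar with; check the manual for your client version:

```
* dsm.opt -- incremental backups should now cover both drives
DOMAIN C: D:
```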
With the USB-attached drive, back up both the C: and D: drives. I leave my USB drive back in Tucson. For a backup copy while traveling, go to IBM Rescue and Recovery and take a C:-only backup to DVD. Make sure the D: drive box is un-checked. Now, if I ever need to reinstall Windows, because of file system corruption or a virus, I can do this from my one bootable CD plus 2 DVDs, which I can easily carry with me in my laptop bag, leaving all my data on the D: drive intact.
In the worst case, if I had to re-format the whole drive or get a replacement disk, I can restore C: and then restore the few individual data files I need from IBM Tivoli Storage Manager, or a small USB key/thumb drive, delaying a full recovery until I return to Tucson.
Lastly, if you want, reactivate "Virtual Memory" and "Hibernation" features that we disabled in Step 1.
As with Business Continuity in the data center, planning in this manner can help you get back "up and running"quickly in the event of a disaster.