Continuing this week's discussion on IBM announcements, today I'll cover our integrated systems.
The problem with spreading out these announcements across several days' worth of blog posts is that others beat you to the punch. Fellow blogger Richard Swain (IBM) has his post [Move that File], and TechTarget's Dave Raffo has an article titled ["IBM SONAS gains policy-driven tiering, gateway to IBM XIV Storage System"].
By combining multiple components into a single "integrated system", IBM can offer blended disk-and-tape storage solutions. These provide the best of both worlds: high-speed access from disk, with the lower cost and greater energy efficiency of tape. According to a study by the Clipper Group, tape can be 23 times less expensive than disk over a five-year total cost of ownership (TCO).
The two we introduced recently were the [IBM Information Archive] and the Scale-Out Network Attached Storage (SONAS). This week, IBM announced enhancements in the SONAS v1.1.1 release. SONAS is the productized version of IBM's Scale-Out File Services (SoFS), which I discussed in my posts [Area Rugs versus Wall-to-Wall Carpeting] and [More details about IBM's Clustered Scalable NAS].
- ILM and HSM data movement
I have covered Information Lifecycle Management (ILM) several times on this blog, including my posts [ILM for my iPod], [Times a Million], and [Using ILM to Save Trees], to name a few.
I've also covered Hierarchical Storage Management (HSM), for example in my post [Seven Tiers of Storage at ABN Amro], as well as my role as lead architect for DFSMS on z/OS in general, and DFSMShsm in particular.
However, some explanation might be warranted on the use of these two terms with regard to SONAS. In this case, ILM refers to policy-based file placement, movement and expiration across internal disk pools. This is actually a GPFS feature that has existed for some time, and was tested to work in this new configuration. Files can be individually placed on either SAS (15K RPM) or SATA (7200 RPM) drives. Policies can be written to move them from SAS to SATA based on size, age, and days non-referenced.
HSM is also a form of ILM, in that it moves data from SONAS disk to external storage pools managed by IBM Tivoli Storage Manager. A small stub is left behind in the GPFS file system indicating the file has been "migrated". Any reference to read or update this file will cause the file to be "recalled" back from TSM to SONAS for processing. The external storage pools can be disk, tape or any other media supported by TSM. Some estimate that as much as 60 to 80 percent of files on NAS have low reference and should be stored on tape instead of disk, and now SONAS with HSM makes that possible.
This distinction allows the ILM movement to be done internally, within GPFS, and the HSM movement to be done externally, via TSM. Both take advantage of the GPFS high-speed policy engine, which can process 10 million files per node, running in parallel across all interface nodes. Note that TSM is not required for ILM movement. In effect, SONAS brings the policy-based management features of DFSMS on the z/OS mainframe to all the other operating systems that access SONAS.
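To give a flavor of what such policies look like, here is a sketch in the SQL-like GPFS policy rule language. The pool names, day thresholds, and external-pool script path are hypothetical illustrations of the concept, not the exact SONAS defaults; consult the GPFS documentation for the precise syntax supported in a given release:

```
/* Place new files on the fast SAS pool by default */
RULE 'placement' SET POOL 'sas15k'

/* ILM: move files not referenced in 30 days from SAS down to SATA */
RULE 'ageout' MIGRATE FROM POOL 'sas15k' TO POOL 'sata'
  WHERE (DAYS(CURRENT_TIMESTAMP) - DAYS(ACCESS_TIME)) > 30

/* HSM: define an external pool managed by TSM, and migrate files
   untouched for 90 days out to it, leaving a stub behind in GPFS */
RULE 'hsmdef' EXTERNAL POOL 'hsm' EXEC '/opt/tivoli/tsm/client/hsm/bin/hsmControl'
RULE 'totape' MIGRATE FROM POOL 'sata' TO POOL 'hsm'
  WHERE (DAYS(CURRENT_TIMESTAMP) - DAYS(ACCESS_TIME)) > 90
```

The first two rules are the "internal" ILM movement handled entirely by GPFS; the last two are the "external" HSM movement handed off to TSM.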
- HTTP and NIS support
In addition to NFS v2, NFS v3, and CIFS, the SONAS v1.1.1 adds the HTTP protocol. Over time, IBM plans to add more protocols in subsequent releases. Let me know which protocols you are interested in, so I can pass that along to the architects designing future releases!
SONAS v1.1.1 also adds support for Network Information Service (NIS), a client/server based model for user administration. In SONAS, NIS is used for netgroup and ID mapping only. Authentication is done via Active Directory, LDAP or Samba PDC.
- Asynchronous Replication
SONAS already had synchronous replication, which was limited in distance. Now, SONAS v1.1.1 provides asynchronous replication, using rsync, at the file level. This is done over a Wide Area Network (WAN) to any other SONAS at any distance.
- Hardware enhancements
Interface nodes can now be configured with either 64GB or 128GB of cache. Storage now supports both 450GB and 600GB SAS (15K RPM) drives, and both 1TB and 2TB SATA (7200 RPM) drives. However, at this time, an entire 60-drive drawer must be all one type of SAS or all one type of SATA. I have been pushing the architects to allow each 10-pack RAID rank to be independently selectable. For now, a storage pod can have 240 drives, 60 of each drive type, to provide four different tiers of storage. You can have up to 30 storage pods per SONAS, for a total of 7200 drives.
An alternative to internal drawers of disk is a new "Gateway" iRPQ that allows the two storage nodes of a SONAS storage pod to connect via Fibre Channel to one or two XIV disk systems. You cannot mix and match: a storage pod is either all internal disk or all external XIV. A SONAS gateway combined with external XIV is referred to as a "Smart Business Storage Cloud" (SBSC), which can be configured off premises and managed by third-party personnel so your IT staff can focus on other things.
See the Announcement Letters for the SONAS [hardware] and [software] for more details.
For those who are wondering how this positions against IBM's other NAS solution, the IBM System Storage N series, the rule of thumb is simple. If your capacity needs can be satisfied with a single N series box per location, use that. If not, consider SONAS instead. For those with non-IBM NAS filers that realize now that SONAS is a better approach, IBM offers migration services.
Both the Information Archive and the SONAS can be accessed from z/OS or Linux on System z mainframe, from "IBM i", AIX and Linux on POWER systems, all x86-based operating systems that run on System x servers, as well as any non-IBM server that has a supported NAS client.
technorati tags: IBM, Announcements, SONAS, SoFS, Information+Archive, Richard Swain, TechTarget, ILM, HSM, storage tiers, GPFS, TSM, HTTP, NIS, NAS, iRPQ, XIV, SBSC, z/OS, Linux, AIX
Well, today's Tuesday, and you know what that means... IBM Announcements!
This week, IBM has their huge 3Q Launch. This comes on top of the [2Q results] IBM released yesterday. You can read how the rest of the company did, but it is good to see that IBM grew in both revenue and market share for storage!
As with any IBM launch of this magnitude, there are so many enhancements, I will spread them across several posts.
- IBM System Storage TS7610 ProtecTIER® Deduplication Appliance Express
The TS7610 is a smaller appliance than the TS7650 we introduced last year, taking up only 3U of rack space (2U for the appliance itself, and a 1U slide rail to simplify maintenance). This is designed for smaller deployments, such as midsized businesses between 100 and 1000 employees that back up 3TB of data per week or less. The unit relies on RAID-protected SATA drives. Thanks to the same ProtecTIER data deduplication we have on the TS7650, the TS7610 can hold up to 135TB of backup data on just 5.4TB of disk capacity, with in-line data ingest performance of 80 MB/sec. This little Virtual Tape Library (VTL) emulates up to four TS3500 libraries, with 64 LTO-3 drives and over 8000 virtual tapes. See the [Announcement letter] for details.
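Those capacity figures work out to roughly a 25:1 deduplication ratio. Here is a quick back-of-the-envelope check; the helper function is just for illustration, using the numbers from the announcement:

```python
def dedup_ratio(logical_tb: float, physical_tb: float) -> float:
    """How many TB of backup data fit per TB of physical disk."""
    return logical_tb / physical_tb

# 135 TB of backup data stored on 5.4 TB of physical capacity
ratio = dedup_ratio(135, 5.4)
print(f"{ratio:.0f}:1")  # 25:1
```

Your actual ratio will vary with the data; backup streams with lots of repeated full backups deduplicate far better than unique data.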
The [ProtecTIER Entry Edition] offers a hub-and-spoke approach to replication. You can have up to twelve (12) TS7610 boxes (the "spokes") replicate to a central VTL (the "hub"). This can be ideal for protecting remote office or branch office deployments.
Josh Krischer wrote a nice [7-page summary] on this.
- IBM System Storage N series
IBM doubles the storage capacity of the N3300 and N3400 models by utilizing 2TB hard disk drives, maximizes customer satisfaction through Partner Select Bundles (software bundles) for all of the N3000 series (N3300, N3400, N3600), and offers Application and Server Packs (software bundles) for N3400 models.
For the high-end, IBM introduces an enhanced Performance Acceleration Module (PAM II) bundle for N7900 Gateway. This bundle includes two 512GB Solid State Drive PAM II adapters, two dual-port 10GbE TOE network interface cards (NIC), and various software features.
See the [Announcement letter] for details.
- IBM System Storage DS4000 and DS5000 series
The DS5020 and EXP520 join their bigger siblings, the DS5100 and DS5300, in supporting Solid State Drives (SSD), available in 73GB and 300GB capacities. A new air filter bezel is also available for these systems when used in dusty environments. See the [Announcement letter] for details.
For my friends down in Brazil, a new 2.8 meter power cord that supports 220-250 volts is now available for all DS4000 and DS5000 series disk systems. Obrigado pelo seu negócio! (Thank you for your business!)
- IBM Tivoli Storage FlashCopy Manager v2.2
I covered this latest release in my post [FlashCopy Manager v2.2] but the marketing team felt we should include it with this launch to get added exposure and visibility.
I'll try to get to the rest in separate posts over the rest of this week.
technorati tags: IBM, 2Q Results, ProtecTIER, deduplication, TS7610, VTL, N3400, PAM, DS5020, Brazil, power cord, FlashCopy, FlashCopy Manager
This week, July 26-30, 2010, I am in Washington DC for the annual [2010 System Storage Technical University]. As with last year, we have joined forces with the System x team. Since we are in Washington DC this time, IBM added a "Federal Track" to focus on government challenges and solutions. So, basically, attendees get the option to attend three conferences for one low price.
This conference was previously called the "Symposium", but IBM changed the name to "Technical University" to emphasize the technical nature of the conference. No marketing puffery like "Journey to the Private Cloud" here! Instead, this is bona fide technical training, qualifying attendees to count this towards their Continuing Professional Education (CPE).
(Note to my readers: The blogosphere is like a playground. In the center are four-year-olds throwing sand into each other's faces, while mature adults sit on benches watching the action, jumping in only as needed. For example, fellow blogger Chuck Hollis (EMC) got sand in his face for promising to resign if EMC ever offered a tacky storage guarantee, and then [failed to follow through on his promise] when it happened.
Several of my readers asked me to respond to another EMC blogger's latest [fistful of sand].
A few months ago, fellow blogger Barry Burke (EMC) committed to [stick to facts] in posts on his Storage Anarchist blog. That didn't last long! BarryB apparently has fallen in line with EMC's over-promise-then-under-deliver approach. Unfortunately, I will be busy covering the conference and IBM's robust portfolio of offerings, so won't have time to address BarryB's stinking pile of rumor and hearsay until next week or later. I am sorry to disappoint.)
This conference is designed to help IT professionals make their business and IT infrastructure more dynamic and, in the process, help reduce costs, mitigate risks, and improve service. This technical conference event is geared to IT and Business Managers, Data Center Managers, Project Managers, System Programmers, Server and Storage Administrators, Database Administrators, Business Continuity and Capacity Planners, IBM Business Partners and other IT Professionals. This week will offer over 300 different sessions and hands-on labs, certification exams, and a Solutions Center.
For those who want a quick stroll through memory lane, here are my posts from past events:
- 2007 Storage Symposium: [Day 1]
- 2009 Storage Symposium: [Day 2-Server Virtualization, Day 3-Extraordinary Networks, Day 5-Meet the Experts]
In keeping with IBM's leadership in Social Media, the IBM Systems Lab Services and Training team running this event has its own [Facebook Fan Page] and [blog]. IBM Technical University has a Twitter account [@ibmtechconfs] and hashtag #ibmtechu. You can also follow me on Twitter [@az990tony].
technorati tags: IBM, Technical University, Federal, System Storage, System x, Washington DC, CPE, EMC, Facebook, Twitter
Continuing this week's coverage of IBM's 3Q announcements, today it's all about storage for our mainframe clients.
- IBM System Storage DS8700
IBM is the leader in high-end disk attached to mainframes, with the IBM DS8700 being our latest model in a long series of successful products in this space. Here are some key features:
- Full Disk Encryption (FDE), which I mentioned in my post [Different Meanings of the word "Protect"]. FDE drives are special 15K RPM Fibre Channel drives that include their own encryption chip, so the IBM DS8700 can encrypt data at rest without impacting read or write performance. The encryption keys are managed by IBM Tivoli Key Lifecycle Manager (TKLM).
- Easy Tier, which I covered in my post [DS8700 Easy Tier Sub Lun Automatic Migration], offers what EMC promised but has yet to deliver: the ability for CKD volumes and FBA LUNs to straddle the fence between Solid State Drives (SSD) and spinning disk. For example, a 54GB CKD volume could have 4GB on SSD and the remaining 50GB on spinning drives. The hottest extents are moved automatically up to SSD, and the coldest moved down to spinning disk. To learn more about Easy Tier, watch my [7-minute video] on the IBM [Virtual Briefing Center].
- z/OS Distributed Data Backup (zDDB), announced this week, provides the ability for a program running on z/OS to back up data written by distributed operating systems like Windows or UNIX and stored in FBA format. In the past, backing up FBA LUNs involved a program like the IBM Tivoli Storage Manager client reading the data natively and sending it over an Ethernet LAN to a TSM server, which could run on the mainframe and use mainframe resources. This feature eliminates the Ethernet traffic by allowing a z/OS program to read the FBA blocks through standard FICON channels, and then write them to z/OS disk or tape resources. Here is the [Announcement Letter] for more details.
One program that takes advantage of this new zDDB feature already is Innovation's [FDRSOS], which I pronounce "fudder sauce". If you are an existing FDRSOS customer, now is a good time to get rid of any EMC or HDS disk and replace with the new IBM DS8700 system.
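To illustrate the Easy Tier sub-LUN behavior described above, here is a rough sketch in Python of the underlying idea: rank a volume's extents by access heat and keep only the hottest on SSD. The heat statistics, extent size, and function name are all hypothetical; the actual DS8700 algorithm is considerably more sophisticated than this.

```python
def place_extents(extent_heat, ssd_capacity_gb, extent_gb=1):
    """Assign the hottest extents of a volume to SSD until SSD space runs out.

    extent_heat: one access count per extent (hypothetical statistics).
    Returns (ssd_extents, hdd_extents) as sorted lists of extent indices.
    """
    # Rank extent indices from hottest to coldest
    ranked = sorted(range(len(extent_heat)),
                    key=lambda i: extent_heat[i], reverse=True)
    ssd_slots = int(ssd_capacity_gb // extent_gb)
    ssd = sorted(ranked[:ssd_slots])   # hottest extents go to SSD
    hdd = sorted(ranked[ssd_slots:])   # the rest stay on spinning disk
    return ssd, hdd

# A toy 8-extent volume with 4 GB of SSD available:
# the four hottest extents (indices 1, 3, 5, 7) land on SSD
heat = [5, 90, 12, 88, 3, 70, 1, 60]
ssd, hdd = place_extents(heat, ssd_capacity_gb=4)
print(ssd)  # [1, 3, 5, 7]
```

The point of doing this at extent granularity rather than per-volume is that a single volume can straddle both tiers, as in the 54GB CKD example above.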
- IBM System Storage TS7680 ProtecTIER Deduplication Gateway for System z
When it comes to virtual tape libraries that attach to mainframes, the two main players are IBM TS7700 series and Oracle StorageTek Virtual Storage Manager (VSM). However, mainframe clients with StorageTek equipment are growing frustrated over Oracle's lack of commitment for mainframe-attachable storage. To make matters worse, Oracle recently missed a key delivery date for their latest enterprise tape drive.
Unfortunately, neither of these offers deduplication of the data. IBM solved this with the IBM TS7680. I covered the initial announcement six months ago in my post [TS7680 ProtecTIER Deduplication for the mainframe].
What's new this week is that IBM now supports native IP-based asynchronous replication of virtual tapes at distance, from one TS7680 to another TS7680. This replaces the method of replication using the back end disk features. The problem with using disk replication is that all the virtual tapes will be copied over. Instead, the ProtecTIER administrator can decide which subset of virtual tapes should be replicated to the remote site, and that can reduce both storage requirements as well as bandwidth costs. See the [Announcement Letter] for more details.
These new solutions will work with existing mainframes, as well as the new IBM [zEnterprise mainframe models] announced this week.
technorati tags: IBM, DS8700, FDE, Easy+Tier, zDDB, SSD, TS7680, Deduplication, VTL, Oracle, Sun, StorageTek, STK, VSM, zEnterprise
Continuing my week in Washington DC for the annual [2010 System Storage Technical University], I presented a session on Storage for the Green Data Center, and attended a System x session on Greening the Data Center. Since they were related, I thought I would cover both in this post.
- Storage for the Green Data Center
I presented this topic in four general categories:
- Drivers and Metrics - I explained the three key drivers for consuming less energy, and the two key metrics: Power Usage Effectiveness (PUE) and Data Center Infrastructure Efficiency (DCiE).
- Storage Technologies - I compared the four key storage media types: Solid State Drives (SSD), high-speed (15K RPM) FC and SAS hard disk, slower (7200 RPM) SATA disk, and tape. I had comparison slides showing how IBM disk is more energy efficient than the competition; for example, the DS8700 consumes less energy than an EMC Symmetrix when compared with the exact same number and type of physical drives. Likewise, IBM LTO-5 and TS1130 tape drives consume less energy than comparable HP or Oracle/Sun tape drives.
- Integrated Systems - IBM combines multiple storage tiers in a set of integrated systems managed by smart software. For example, the IBM DS8700 offers [Easy Tier] to offer smart data placement and movement across Solid-State drives and spinning disk. I also covered several blended disk-and-tape solutions, such as the Information Archive and SONAS.
- Actions and Next Steps - I wrapped up the talk with actions that data center managers can take to be more energy efficient, from deploying the IBM Rear Door Heat Exchanger to improving the management of their data.
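To make the two metrics from the first bullet concrete: PUE is total facility power divided by IT equipment power, and DCiE is simply its reciprocal expressed as a percentage. A quick sketch, with the 1300/1000 kW figures invented purely for illustration:

```python
def pue(total_facility_kw: float, it_equipment_kw: float) -> float:
    """Power Usage Effectiveness: total facility power over IT equipment power."""
    return total_facility_kw / it_equipment_kw

def dcie(total_facility_kw: float, it_equipment_kw: float) -> float:
    """Data Center infrastructure Efficiency: reciprocal of PUE, as a percentage."""
    return it_equipment_kw / total_facility_kw * 100

# A hypothetical data center drawing 1300 kW overall to run 1000 kW of IT gear
print(pue(1300, 1000))             # 1.3
print(f"{dcie(1300, 1000):.0f}%")  # 77%
```

A PUE of 1.0 would mean every watt entering the building reaches IT equipment; the further above 1.0, the more is going to cooling, power distribution losses, and lighting.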
- Greening of the Data Center
Janet Beaver, IBM Senior Manager of Americas Group facilities for Infrastructure and Facilities, presented on IBM's success in becoming more energy efficient. The price of electricity has gone up 10 percent per year, and in some locations, 30 percent. For every 1 Watt used by IT equipment, there are an additional 27 Watts for power, cooling and other uses to keep the IT equipment comfortable. At IBM, data centers represent only 6 percent of total floor space, but 45 percent of all energy consumption. Janet covered two specific data centers, Boulder and Raleigh.
At Boulder, IBM keeps 48 hours reserve of gasoline (to generate electricity in case of outage from the power company) and 48 hours of chilled water. Many power outages are less than 10 minutes, which can easily be handled by the UPS systems. At least 25 percent of the Computer Room Air Conditioners (CRAC) are also on UPS as well, so that there is some cooling during those minutes, within the ASHRAE guidelines of 72-80 degrees Fahrenheit. Since gasoline gets stale, IBM runs the generators once a month, which serves as a monthly test of the system, and clears out the lines to make room for fresh fuel.
The IBM Boulder data center is the largest in the company: 300,000 square feet (the equivalent of five football fields)! Because of its location in Colorado, IBM enjoys "free cooling" using outside air 63 percent of the year, resulting in a PUE rating of 1.3. Electricity there costs only 4.5 US cents per kWh. The center also uses 1 million kWh per year of wind energy.
The Raleigh data center is only 100,000 square feet, with a PUE rating of 1.4. The Raleigh area enjoys 44 percent "free cooling", and electricity there costs 5.7 US cents per kWh. The Leadership in Energy and Environmental Design [LEED] program has been updated to certify data centers. The IBM Boulder data center has achieved LEED Silver certification, and the IBM Raleigh data center has LEED Gold certification.
Free cooling, electricity costs, and disaster susceptibility are just three of the 25 criteria IBM uses to locate its data centers. In addition to the 7 data centers it manages for its own operations and 5 data centers for web hosting, IBM manages over 400 data centers for other clients.
It seems that Green IT initiatives are more important to the storage-oriented attendees than the x86-oriented folks. I suspect that is because many System x servers are deployed in small and medium businesses that do not have data centers, per se.
technorati tags: IBM, Technical University, Green Data Center, PUE, DCiE, Free Cooling, ASHRAE, LEED, SSD, Disk, Tape, SONAS, Archive
Continuing my week in Washington DC for the annual [2010 System Storage Technical University], here is my quick recap of the keynote sessions presented Monday morning. Marlin Maddy, Worldwide Technical Events Executive for IBM Systems Lab Services and Training, served as emcee.
- Jim Northington
Jim Northington, IBM System x Business Line Executive, covered the IT industry's "Love/Hate Relationship" with x86 platform. Many of the physical limitations that were previously a pain on this platform are now addressed, through a combination of IBM's new innovative eX5 architecture and virtualization technologies.
Jim also presented the [IBM CloudBurst] solution. IBM CloudBurst is one of the many "Integrated Systems" designed to help simplify deployment. Based on IBM BladeCenter, the IBM CloudBurst is basically a Private Cloud rack for those that are ready to deploy in their own data center.
Jim feels that server virtualization on x86 platforms is still in its infancy. IBM calls it the 70/30 rule: 70 percent of x86 workloads are running virtualized on 30 percent of the physical servers.
- Maria Azua
Maria Azua, IBM Vice President of Cloud Computing Enablement, presented on Cloud Computing. Technology is being adopted at faster rates. It took 40 years for radio to get 60 million listeners, 20 years for 60 million television viewers, 3 years to get 60 million surfers on the Internet, but it only took 4 months to get 60 million players on Farmville!
Maria covered various aspects of Cloud Computing: virtualization images, service catalog, provisioning elasticity, management and billing services, and virtual networks. With Cloud Computing, the combination of virtualization technologies, standardization, and automation can reduce costs and improve flexibility.
We've seen this happen before. Telcos transitioned from human operators to automated digital switches. Manufacturers went from having small teams of craftsmen to assembly lines of robots. Banks went from long lines of bank tellers to short lines at the ATM.
Maria said that companies are faced with three practical choices:
- Do-it-Yourself, buy the servers, storage and switches and connect everything together.
- Purchase pre-installed "integrated systems" to simplify deployment.
- Subscribe to Cloud computing, allowing a service provider do all this for you.
In countries where network access is not ubiquitous, IBM has developed tools for the cloud that work in "offline" mode. IBM has also developed or modified tools to run better in the cloud. Launching a computer instance from the cloud from the service catalog is so easy to do, your 5-year-old child can do this!
Want to see Cloud Computing in action? Check out [Innovation.ed.gov], which is run in the IBM cloud, for the US Department of Education's website to foster innovation.
Whether you adopt public, private or a hybrid cloud computing approach, Maria suggests you take time to plan, test your applications for standardization, examine all risks, and explore new workloads that might be good candidates. Otherwise, moving to the cloud might just mean "More mess for less". Maria provided a list of applications that IBM considers good fit for Cloud Computing today.
I heard several audience members indicate that this is the first time someone finally explained Cloud Computing to them in a way that made sense!
technorati tags: IBM, Technical University, eX5, CloudBurst, x86, Maria Azua, cloud computing, Department of Education, private cloud, public cloud, hybrid cloud
Well, I'm back from my adventure. For those who did not follow my tweets, here is a quick recap. Not counting the day we flew from Tucson to Minneapolis, or the day we flew from Memphis back to Tucson, Mo and I spent nine days on the road, covering 1549 miles, or roughly two thirds of the Mississippi River.
- Starting in Minneapolis, MN - roller coaster rides at the [Mall of America], the [SPAM museum in Austin, MN], the windmill farm in Southeastern Minnesota, and drove along the river from Red Wing down to Reads Landing. Stayed in a nice B&B called [The River Nest].
- The [National Eagle Center], Buena Vista Park at Alma, WI, wine tasting at the [Danzinger Vineyards], saw the paintings at the [Minnesota Marine Art Museum], tasty "walnut balls" at the [Historic Trempealeau Hotel], the [world's largest six pack] and the [Shrine of our Lady of Guadalupe] in La Crosse, WI. Stayed at a motel in Prairie du Chien, WI.
- Villa Louis in Prairie du Chien, pictures in front of the Lady Luck pink elephant in Marquette, IA. Cheese curds at Pikes Peak just south of McGregor, where the Wisconsin River merges into the Mississippi River, wine tasting at the [Double L vineyards], lunch at [Breitbach's in Balltown], rode the [Fenelon Place Elevator] in Dubuque, walked through the [Grotto at Dickeyville, WI], deep-fried chicken livers at [Kalmes General Store] in St. Donatus, IA. Stayed at a hotel in Clinton, IA.
- Celebrated Fourth of July at the [Wide River Winery] just north of Clinton, IA. Saw "The Last Airbender" at the local cinema.
- Buffalo Bill Cody museum was closed on Monday, ate my first loose-meat sandwich lunch at Maid-Rite in Moline, IL, the button museum, aka [Muscatine History and Industry Center], was also closed on Monday, took pictures in the corn fields at Oquawka, IA, ate smoked carp from [Quality Fisheries, in Niota, IA], ate raisin pie at the Maid-Rite in Quincy, IL. Stayed in a hotel in Hannibal, MO - home of Mark Twain.
- Took the Mark Twain paddleboat tour up and down Mississippi river to see Jackson island, almost drove car into the river at Winfield, MO where the Ferry was supposed to be, ate one of everything on the menu at [Fast Eddy's Bon-Air], rode up to the top of the [Gateway Arch] in St. Louis. We stayed in a hotel in downtown St. Louis, MO.
- Ate donuts at World's Fair Donuts and frozen custard called "concrete" at [Ted Drewes'] in St. Louis. Popeye museum in Chester, IL, ate dinner at Dixie BBQ in Jonesboro, and took pictures of the huge Superman statue in Metropolis, IL. Stayed in a hotel in Paducah, KY.
- Read the murals on the flood walls and toured the [National Quilt Museum] in Paducah, KY. Lunch at Nicky's BBQ just north of Clinton, KY, stopped for photos at Reelfoot Lake in Tennessee. Stayed in a hotel in Memphis, TN.
- Tour of [Graceland Mansion], home of Elvis Presley, and [Mud Island], ate dinner at Gus' World Famous Hot & Spicy Chicken, all in Memphis, TN.
Well, now I have a lot of unread emails and blogs to get through! My next trip is the [IBM System Storage Technical University] in Washington DC, July 26-30.
Continuing my week in Washington DC for the annual [2010 System Storage Technical University], here is my quick recap of the keynote sessions presented Monday morning. Marlin Maddy, Worldwide Technical Events Executive for IBM Systems Lab Services and Training, served as emcee.
- Roland Hagan
Roland Hagan, IBM Vice President for the IBM System x server platform, presented on how IBM is redefining the x86 computing experience. More than 50 percent of all servers are x86 based. These x86 servers are easy to acquire, enjoy a large application base, and can take advantage of a readily available skilled workforce for administration. The problem is that 85 percent of x86 processing power remains idle, energy costs are 8 times what they were 12 years ago, and management costs are now 70 percent of the IT budget.
IBM has the number one market share for scalable x86 servers. Roland covered the newly announced eX5 architecture that has been deployed in both rack-optimized models as well as IBM BladeCenter blade servers. These can offer 2x the memory capacity as competitive offerings, which is important for today's server virtualization, database and analytics workloads. This includes 40 and 80 DIMM models of blades, and 64 to 96 DIMM models of rack-optimized systems. IBM also announced eXFlash, internal Solid State Drives accessible at bus speeds. FlexNode allows a 4-node system to dynamically change to 2 separate 2-node systems.
By 2013, analysts estimate that 69 percent of x86 workloads will be virtualized, and that 22 percent of servers will be running some form of hypervisor software. By 2015, this grows to 78 percent of x86 workloads being virtualized, and 29 percent of servers running hypervisor.
- Doug Balog
Doug Balog, IBM Vice President and Disk Storage Business Line Executive, presented how the growth of information results in a "perfect storm" for the storage industry. Storage admins are focused on managing storage growth and the related costs and complexity, proper forecasting and capacity planning, and backup administration. IBM's strategy is to help clients in the following areas:
- Storage Efficiency - getting the most use out of the resources you invest
- Service Delivery - ensuring that information gets to the right people at the right time, simplify reporting and provisioning
- Data Protection - protecting data against unethical tampering, unauthorized access, and unexpected loss and corruption
He wrapped up his talk covering the success of DS8700 and XIV. In fact, 60 percent of XIV sales are to EMC customers. The TCO of an XIV is less than half the TCO of a comparable EMC VMAX disk system.
- Dave McQueeney
Dave McQueeney, IBM Vice President for Strategy and CTO for US Federal, covered how IBM's Smarter Planet vision for smarter cities, smarter healthcare, smarter energy grid and smarter traffic are being adopted by the public sector. Almost every data center in US Federal government is out of power, floor space and/or cooling capability. An estimated 80 percent of US Federal government IT budgets are spent on maintenance and ongoing operations, leaving very little left over for the big transformational projects that President Barack Obama wants to accomplish.
Who has the most active Online Transaction Processing (OLTP)? You might guess a big bank, but it is the US Department of Homeland Security (DHS), with a system processing 600 million transactions per day. Another government agency is #2, and the top banking application comes in at #3. The IBM mainframe solved problems 10 to 15 years ago that distributed systems are only now encountering. Worldwide, more than 80 percent of banks use mainframes to handle their financial transactions.
IBM's recent POWER7 set of servers are proving successful in the field. For example, Allianz was able to consolidate 60 servers to 1. Running DB2 on POWER7 server is 38 percent less expensive than Oracle on x86 Nehalem processors. For Java, running JVM on POWER7 is 73 percent better than JVM on x86 Nehalem.
The US federal government ingests a large amount of data, with huge 10-20 PB data warehouses. In fact, the number of gigabytes received every year by the US federal government alone exceeds the capacity of all the disk drives produced by every drive manufacturer. This means that all data must be processed through "data reduction" or it is gone forever.
- Clod Barrera
The last keynote for Monday was given by Clod Barrera, IBM Distinguished Engineer and Chief Technical Strategist for System Storage. He started out shocking the audience with his view that the "disk drive industry is a train wreck". While R&D in disk drives enjoyed a healthy improvement curve up to about 2004, it has now slowed down, getting more difficult and more expensive to improve performance and capacity of disk drives. The rest of his presentation was organized around three themes:
- Integrated Stacks - while newcomers like Oracle/Sun and the VCE coalition are promoting the benefits of integrated stacks, IBM has been doing this for the past five decades. New advancements in server and storage virtualization provide exciting new opportunities.
- Integrated Systems - solutions like IBM Information Archive and SONAS, and new features like Easy Tier that help adopt SSD transparently. As it gets harder and harder to scale-up, IBM has moved to innovative scale-out architectures.
- Integrated Data Center Management - companies are now realizing that management and governance are critical success factors, and that these need to be integrated across traditional IT and private, public and hybrid cloud computing.
This was a great, inspiring start to what looks like an awesome week!
technorati tags: IBM, Technical University, Marlin Maddy, Roland Hagen, Doug Balog, Dave McQueeney, Clod Barrera, x86, eX5, FlexNode, Barack Obama, DHS, OLTP, DB2, POWER7, Oracle, JVM, Intel, Nehalem
Some of my favorite debates in the blogosphere concern the future of things. On his blog The Bigger Truth, fellow blogger Steve Duplessie (ESG) gives his thoughts on [Why the Cloud will Vaporize]. This was countered by TechTarget's Joseph Faran with his response, [Why Cloud Computing is Here to Stay]. Chris Mellor on The Register covers [HDS's pay-per-use private cloud storage] and [Nirvanix's hybrid cloud taster] offerings. Fellow blogger Alex McDonald has a hilarious send-up, poking fun at EMC's latest in their series of commercial failures, [Atmos Online, The Jezhov Of The Cloud].
Of course, EMC isn't the first, and won't be the last, vendor to [hear the sirens] of Cloud Computing and crash their ships on rocky shores. Just because you manufacture hardware or write software does not guarantee your success as a Cloud service provider.
(FTC disclaimer: I work for IBM. IBM is a successful public cloud service provider; it also offers products that can be used to deploy a private, hybrid or community cloud, and provides technology to other cloud service providers.)
An amusing excerpt from Steve Duplessie's post:
"Side Note: There is no such thing as a private cloud. A private cloud is called IT. We don’t need more terms for the same stuff."
I have to agree that when vendors like EMC say "Journey to the Private Cloud", skeptics hear "How to keep your IT administrator job by sticking with a traditional IT approach". Butchers, bakers, candlestick makers and the specialty shop "arms dealers" of Cloud Computing IT equipment may not want to see their market shrink down to a dozen or so service providers, and drum up the fear that "Public Cloud" deployments will "disintermediate" the IT staff.
But does that mean the use of the term "Private Cloud" should be discontinued? The US National Institute of Standards and Technology [NIST] offers a cloud model composed of five essential characteristics, three service models, and four deployment models. Here's an excerpt:
Five essential characteristics:
- On-demand self-service
- Broad network access
- Resource pooling
- Rapid elasticity
- Measured service
Three service models:
- Cloud Software as a Service (SaaS)
- Cloud Platform as a Service (PaaS)
- Cloud Infrastructure as a Service (IaaS)
Four deployment models:
- Private cloud
- Community cloud
- Public cloud
- Hybrid cloud
Like traditional IT, a private cloud infrastructure is operated solely for an organization, so I can see how many might consider the term unnecessary. However, unlike traditional IT, a private cloud may be managed by the organization or a third party, and may exist on premises or off premises.
How many traditional IT departments meet the five essential characteristics above? Instead of "on-demand self-service", many IT departments have complicated and lengthy procurement and change control procedures. A few might have "measured service" with a charge-back scheme, and a few others prefer a "show-back" approach instead, showing end users or managers how much IT resource is being consumed without assigning a monetary figure or other penalty. Rapid elasticity? Giving back a resource you no longer need can be just as painful, because re-purposing that equipment follows the same complicated and lengthy change control procedures.
Last December, I wrote a post covering a conference session by the US Defense Information Systems Agency (DISA) on their [Rapid Access Computing Environment].
Just like the term "intranet" refers to a private network that employs Internet standards and technologies, I feel the term "private cloud" is useful, representing an infrastructure that meets the above criteria, employing Public Cloud standards and technologies, that can distinguish itself from traditional IT in key ways that provide business value.
What I do hope "vaporizes" is all the hype, and all the misuse of the Cloud terminology out there.
technorati tags: IBM, Cloud Computing, Private Cloud, Public Cloud, ESG, DISA, RACE
I've been so busy with travel and transitioning to my new laptop that I finally now have a chance to catch my breath.
I saw this great article by Nathan Willis on how to [Spring Clean your Photo Collection]. Since I took over 1100 pictures on my last vacation down the Great River Road, this seemed like a good weekend project. For more about my vacation, see my posts [Eight States in Eight Days], and [More Like Seven States in Nine Days].
I use two Cloud-Computing based photo-sharing services, [KodakGallery.com] and [Flickr.com], which serve two completely different purposes.
- Kodak Gallery
Formerly, this was Ofoto, which was acquired by Kodak. I started using this service back in 2002, and uploaded over 12,000 photos over the past 8 years. I was able to share all my photos with my friends and family, and they could simply order whichever prints they wanted and have them shipped directly to them. They offer impressively professional photo-based products, like calendars and coffee table books, that you can produce from your own photos.
Sadly, the fine folks at Kodak Gallery decided they did not want my business anymore, and purged my 36GB of files from their system. To be fair, they did hint at their financial problems with an "Archive CD" offering, which would have allowed me to get a set of CDs or DVDs holding the high-resolution versions of all my uploaded photos. This would have cost $150 or so, and there was no option to get just the "delta" of photos uploaded since your last archive, so it would have cost me $150 every year or so to get an updated "backup" of my files. It seemed expensive and unnecessary at the time, given that I was sure Kodak was not going out of business anytime soon, and that they took their own backups of all the photos people put in their charge.
The problem is that Kodak Gallery was a free service, subsidized by people ordering physical prints and other products. As such, I got lots of email from Kodak every month, offering me free shipping, special promotions, and seasonal discounts. There was so much that I had all email from them automatically routed to a separate sub-folder, which I would never look at unless I was about to make a purchase and needed to find the best coupon code or free shipping option currently offered. This also had the unintended consequence that I missed the following series of notes:
- Important: From the Gallery's General Manager (April 17)
- Second notice: Our storage policy has changed (April 24)
- Final notice: Your stored photos may be deleted (May 8)
- We don't want to delete your photos (May 22)
All the notes mentioned the new "Storage Policy", here is a quick excerpt:
"The fact is, we store billions of photos for our 75 million members. The quality storage service the Gallery provides is significant in terms of our business costs.
So that we can provide the highest level of service, we're now asking all Gallery customers to make an annual nominal purchase in exchange for photo storage. We've modified our Terms of Service policy accordingly: if your Gallery photo storage equals 2 gigabytes or less, we're asking you to spend $4.99 annually; if more than 2 gigabytes, $19.99 annually.*
One last thought: We value and appreciate your business, and we want to continue our relationship with you in a spirit of mutual support and benefit. That's always been the Kodak way."
Since they got no response from me, and saw no purchase activity, my 36GB of files were deleted on June 17. I discovered all of this last weekend during my "Spring Cleaning", when I contacted Kodak to find out where my files were. I asked if I could at least get a final set of "Archive CDs", but they told me my photos were purged completely.
I understand the economy is in a recession, and many free cloud-based services are losing money and going under. I can understand that, faced with tough choices, Kodak opted to switch from a free service to a fee-based service.
Albert Einstein defined Insanity as "doing the same thing over and over again and expecting different results." In general, if I am trying to get a hold of someone, and email isn't working, then I try something different, try them by phone, try them by snail mail, and so on. With the deluge of emails, people sometimes declare "email bankruptcy" by deleting everything in their inbox after coming back from vacation, or implement filters to automatically route mail to separate folders. I think it is unrealistic to expect that everybody reads every piece of email that you send them.
I would have liked Kodak to have done at least one of the following, given that I had been such a long-time customer, and that over the years they had earned hundreds of dollars in revenue from purchases of photos I uploaded to their website, not just by me, but by my friends and family:
- Sent me a letter after not receiving any response to the first three notices. They sent me promotional materials and offers for 20 percent discounts, so they had my correct snail mail address on file. With 75 million users, it would have cost $33 million USD (at roughly 44 cents per first-class stamp) to send snail mail letters to everyone, but for the subset of power users with more than 2GB of files, a snail mail letter might have gotten more of the $19.99 purchases they needed to stay in business.
- Called me on the phone. Yes, they also had my phone number in their database.
- Gone ahead and charged the credit card on file $19.99 without a purchase, and given me a credit towards a future purchase. Something like: "You have not purchased anything in the last 12 months, so we charged your credit card, per our Terms of Service, but you can use this as a credit towards a purchase in the next 60 days."
On the plus side, my "Spring Cleaning" project was done. You can't organize what you don't have anymore.
- Flickr from Yahoo
I started using Flickr back in 2008 to hold photos and graphics for this blog. Flickr hosts each photo at various sizes that I can reference directly with HTML tags. Clicking on a photo in the blog takes you to Flickr's service, where you can see the larger-resolution version. The "Lotus Connections" blog that I have on IBM DeveloperWorks only offers 24MB of photo space, so Flickr was a nice alternative.
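For readers curious what that looks like in practice, the embed pattern described above boils down to an image tag wrapped in a link back to the photo's Flickr page. This is a minimal sketch; the user name, photo ID, and farm/server values below are hypothetical placeholders, not real links:

```html
<!-- Hypothetical example: the image itself is served from Flickr's
     static farm, while clicking it opens the photo's page on Flickr,
     where the larger sizes are available. -->
<a href="http://www.flickr.com/photos/example-user/1234567890/">
  <img src="http://farm5.static.flickr.com/4001/1234567890_abc123_m.jpg"
       width="240" alt="Great River Road" />
</a>
```

The `_m` suffix in the filename selects one of the smaller sizes Flickr generates automatically, which keeps the blog page light while the full resolution stays one click away.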
Unfortunately, Flickr had adopted a new policy that only the most recent 200 pictures on a free account would be visible, and I had already reached 170 photos. Rather than start deleting photos from my older blog posts, I opted to upgrade to a "Flickr Pro" account, for a fee of only $24.99 per year.
Hopefully, by paying an annual fee and choosing a successful and profitable Cloud-Computing company, I won't experience another traumatic loss. However, it does remind me that it is my responsibility to keep my own copies of these photos, just in case.
Fortunately, many "photo product" providers are connected to Flickr. For example, my publisher [<a href="http://www.lulu.com/">Lulu.com</a>] can access my Flickr photos to make photo-based coffee table books. As for my last eight years of memories that were lost, I will just have to treat it as if my house burned down. Rebuild and move on.
technorati tags: Spring Cleaning, photography, Kodak, Kodak Gallery, Flickr, Yahoo, Cloud Computing, Photo Sharing