On Tuesday, I covered much of the Feb 26 announcements, but left the IBM System Storage DS8000 for today so that it can have its own special focus.
Many of the enhancements relate to z/OS Global Mirror, which we formerly called eXtended Remote Copy or "XRC", not to be confused with our "regular" Global Mirror that applies to all data. For those not familiar with z/OS Global Mirror, here is how it works. The production mainframe writes updates to the DS8000, and the DS8000 keeps track of these in cache until a "reader" can pull them over to the secondary location. The "reader" is called the System Data Mover (SDM), which runs in its own address space under the z/OS operating system. Thanks to some work my team did several years ago, z/OS Global Mirror was able to extend beyond z/OS volumes and include Linux on System z data. Linux on System z can use the "Compatible Disk Layout" (CDL) format (now the default), which meets all the requirements to be included in the copy session.
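Conceptually, the SDM drains timestamped updates from the primary and applies them at the secondary in consistent batches. Here is a minimal Python sketch of that idea; the class and field names are my own invention for illustration, not the actual SDM interfaces:

```python
from dataclasses import dataclass

@dataclass
class Update:
    timestamp: float   # write time stamped at the primary
    volume: str
    data: bytes

def form_consistency_group(pending, cutoff):
    """Take every update stamped at or before the cutoff time.

    Applying whole groups in timestamp order keeps the secondary
    crash-consistent: it always looks like the primary did at some
    instant in the recent past, which is why the RPO is measured
    in seconds.
    """
    group = sorted((u for u in pending if u.timestamp <= cutoff),
                   key=lambda u: u.timestamp)
    remaining = [u for u in pending if u.timestamp > cutoff]
    return group, remaining

pending = [Update(3.0, "VOL002", b"c"),
           Update(1.0, "VOL001", b"a"),
           Update(2.0, "VOL001", b"b")]
group, remaining = form_consistency_group(pending, cutoff=2.5)
# group holds the two oldest updates, in order; the third waits
# for the next consistency group.
```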
IBM has over 300 deployments of z/OS Global Mirror, mostly banks, brokerages and insurance companies. The feature can keep tens of thousands of volumes in one big "consistency group" and asynchronously mirror them to any distance on the planet, with the secondary copy recovery point objective (RPO) only a few seconds behind the primary.
- Extended Distance FICON
Extended Distance FICON is an enhancement to the industry-standard FICON architecture (FC-SB-3) that can help avoid degradation of performance at extended distances by implementing a new protocol for "persistent" Information Unit (IU) pacing. This deals with the number of packets in flight between servers and storage separated by long distances, and can keep a link fully utilized at 4 Gbps FICON up to 50 kilometers. This is particularly important for the z/OS Global Mirror "reader", the System Data Mover (SDM). By keeping many "reads" in flight, this enhancement can help reduce the need for spoofing or channel-extender equipment, or allow you to choose lower-cost channel extenders based on "frame-forwarding" technology. All of this helps reduce your total cost of ownership (TCO) for a complete end-to-end solution.
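A quick back-of-the-envelope calculation shows why pacing matters at distance. To keep a 4 Gbps link full over 50 km, you need roughly a bandwidth-delay product's worth of data in flight (the sketch below assumes light travels at about 2x10^8 m/s in fiber, the usual rule of thumb):

```python
link_gbps = 4            # FICON link speed
distance_km = 50         # one-way distance
fiber_speed = 2e8        # approx. speed of light in fiber, m/s

rtt_s = 2 * distance_km * 1000 / fiber_speed      # round-trip time
bytes_per_s = link_gbps * 1e9 / 8
in_flight = bytes_per_s * rtt_s                   # bandwidth-delay product

print(f"RTT: {rtt_s * 1000:.2f} ms")                      # 0.50 ms
print(f"Data in flight to fill the link: {in_flight:.0f} bytes")  # 250000 bytes
```

With a small IU pacing window, the channel stalls waiting for acknowledgements long before a quarter-megabyte is outstanding; persistent pacing lets enough IUs stay in flight to cover that window.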
This feature will be available in March as a no-charge update to the DS8000 microcode. For more details, see the [IBM Press Release]
- z/OS Global Mirror process offload to zIIP processors
To understand this one, you need to understand the different "specialty engines" available on the System z.
On distributed systems where you run a single application on a single piece of server hardware, you might pay "per server", "per processor" or lately "per core" for dual-core and quad-core processors. Software vendors were looking for a way to charge smaller companies less, and larger companies more. However, you might end up paying the same whether you use a 1 GHz Intel or 4 GHz Intel processor, even though the latter can do four times more work per unit time.
The mainframe has a few processors for hundreds or thousands of business applications. In the beginning, all engines on a mainframe were general-purpose "Central Processor" or CP engines. Based on their cycle rate, IBM was able to publish the number of Million Instructions per Second (MIPS) that a machine with a given number of CP engines can do. With the introduction of side co-processors, this was changed to "Millions of Service Units" or MSU. Software licensing can charge per MSU, and this allows applications running in as little as one percent of a processor to get appropriately charged.
One of the first specialty engines was the IFL, the "Integrated Facility for Linux". This was a CP designated to only run z/VM and Linux on the mainframe. You could "buy" an IFL on your mainframe much cheaper than a CP, and none of your z/OS application software would count it in the MSU calculations because z/OS can't run on the IFL. This made it very practical to run new Linux workloads.
In 2004, IBM introduced "z Application Assist Processor" (zAAP) engines to run Java, and in 2006, the "z Integrated Information Processor" (zIIP) engines to run database and background data movement activities. By not having these counted in the MSU number for business applications, it greatly reduced the cost for mainframe software.
Tuesday's announcement is that the SDM "reader" will now run on a zIIP engine, reducing the costs for applications that run on that machine. Note that the CP, IFL, zAAP and zIIP engines are all identical cores. The z10 EC has up to 64 of these (16 quad-core processors) and you can designate any core as any of these engine types.
- Faster z/OS Global Mirror Incremental Resync
One way to set up 3-site disaster recovery protection is to have your production synchronously mirrored to a second site nearby, and at the same time asynchronously mirrored to a remote location. On the System z, you can have site "A" using synchronous IBM System Storage Metro Mirror over to nearby site "B", and also have site "A" sending data over to site "C" using z/OS Global Mirror. This is called "Metro z/OS Global Mirror", or "MzGM" for short.
In the past, if the disk in site A failed, you would switch over to site B, and then send all the data all over again. This is because site B was not tracking what the SDM reader had or had not yet processed. With Tuesday's announcement, IBM has developed an "incremental resync", where site B figures out what the incremental delta is to connect to the z/OS Global Mirror at site "C". This is 95% faster than sending all the data over.
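The general technique behind an incremental resync is a change-tracking bitmap: the new primary records which tracks changed after the swap, so only those tracks need to be re-sent. A hedged Python sketch follows; the real DS8000 mechanism is far more involved, and these names are purely illustrative:

```python
class ChangeTracker:
    """Track dirty tracks with a bitmap instead of re-sending everything."""

    def __init__(self, total_tracks):
        self.dirty = [False] * total_tracks

    def record_write(self, track):
        # Site B marks each track it writes after taking over production.
        self.dirty[track] = True

    def tracks_to_resync(self):
        # Only the dirty tracks form the incremental delta sent to site C.
        return [t for t, d in enumerate(self.dirty) if d]

tracker = ChangeTracker(total_tracks=1_000_000)
for t in (42, 99, 42):          # a few writes after the swap to site B
    tracker.record_write(t)

delta = tracker.tracks_to_resync()
# Only 2 of 1,000,000 tracks go back over the wire.
```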
- IBM Basic HyperSwap for z/OS
What if you are sending all of your data from one location to another, and one disk system fails? Do you declare a disaster and switch over entirely? With HyperSwap, you only switch over the disk systems, but leave the rest of the servers alone. In the past, this involved hiring IBM Global Technology Services to implement a Geographically Dispersed Parallel Sysplex (GDPS) with software that monitors the situation and updates the z/OS operating system when a HyperSwap has occurred. All application I/O that was writing to the primary location is automatically re-routed to the disks at the secondary location. HyperSwap can do this for all the disk systems involved, allowing applications at the primary location to continue running uninterrupted.
HyperSwap is a very popular feature, but not everyone has implemented the advanced GDPS capabilities. To address this, IBM now offers "Basic HyperSwap", which will actually be shipped as IBM TotalStorage Productivity Center for Replication Basic Edition for System z. This will run in a z/OS address space, and use either the DB2 RDBMS you already have, or provide you the Apache Derby database for those few out there who don't have DB2 on their mainframe already.
Update: There has been some confusion on this last point, so let me explain the key differences between the different levels of service:
- Basic HyperSwap: single-site high availability for the disk systems only
- GDPS/PPRC HyperSwap Manager: single- or multi-site high availability for the disk systems, plus some entry-level disaster recovery capability
- GDPS/PPRC: highly automated end-to-end disaster recovery solution for servers, storage and networks
I apologize to all my colleagues who thought I implied that Basic HyperSwap was a full replacement for the more full-function GDPS service offerings.
- Extended Address Volumes (EAV)
Up until now, the largest volume you could have was only 54 GB in size, and many customers are still using 3 GB and 9 GB volume sizes. Now, IBM will introduce 223 GB volumes. You can have any kind of data set on these volumes, but only VSAM data sets can reside on cylinders beyond the first 65,280. That is because many applications still think that 65,280 is the largest cylinder number you can have.
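Those sizes fall out of the 3390 disk geometry: each cylinder holds 15 tracks of 56,664 bytes. The 262,668-cylinder count below is the commonly published figure for the new EAV size, not something stated in the announcement text above, so treat it as my assumption. You can sanity-check the capacities like this:

```python
BYTES_PER_TRACK = 56_664      # 3390 track capacity
TRACKS_PER_CYL = 15

def capacity_gb(cylinders):
    """Decimal gigabytes for a 3390 volume of the given cylinder count."""
    return cylinders * TRACKS_PER_CYL * BYTES_PER_TRACK / 1e9

print(f"{capacity_gb(65_280):.1f} GB")    # 55.5 GB -- the old ~54 GB ceiling
print(f"{capacity_gb(262_668):.1f} GB")   # 223.3 GB -- the new EAV size
```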
This is important because a mainframe, or a set of mainframes clustered together, can only have about 60,000 disk volumes total. The 60,000 is actually the Unit Control Block (UCB) limit, and besides disk volumes, you can have "virtual" PAVs that serve as an alias to existing volumes to provide concurrent access.
Aside from the first item, the Extended Distance FICON, the other enhancements are "preview announcements" which means that IBM has not yet worked out the final details of price, packaging or delivery date. In many cases, the work is done, has been tested in our labs, or running beta in select client locations, but for completeness I am required to make the following disclaimer:
All statements regarding IBM's plans, directions, and intent are subject to change or withdrawal without notice. Availability, prices, ordering information, and terms and conditions will be provided when the product is announced for general availability.
technorati tags: IBM, z10 EC, DS8000, z/OS Global Mirror, XRC, SDM, CDL, RPO, FICON, dual-core, quad-core, Intel, MIPS, MSU, zAAP, IFL, zIIP, Hyperswap, DB2, Apache, Derby, UCB, VSAM, EAV
Yesterday, I asked if you were prepared for the future? The future is now. Today, IBM announced its ["New Enterprise Data Center"] vision and strategy which spans software, hardware and services in dealing with the latest challenges that our clients are faced with today, or will face sooner or later this century.
Here's an excerpt:
Align IT with business goals
These changes demand that IT improve cost and service delivery, manage escalating complexity, and better secure the enterprise. And aligning IT more closely with the business becomes a primary goal. The new enterprise data center is an evolutionary new model for efficient IT delivery that helps provide the freedom to drive business innovation. Through a service oriented model, IT will be able to better manage costs, improve operational performance and resiliency, and more quickly respond to business needs. This approach will deliver dynamic and seamless access to IT services and resources, improving both productivity and satisfaction.
IBM's Vision for the New Enterprise Data Center
The new enterprise data center can improve the integration of people, process, and technology in your business to help you improve efficiency and effectiveness. As you implement a new enterprise data center strategy, your infrastructure becomes open, efficient, and easy to manage. And your IT staff can move from a focus on fixing IT problems to solving business challenges. Ultimately your processes become standardized and efficient, focused on business needs rather than technology.
A lot was announced today, so I will give a quick recap now, and cover specific areas over the rest of the week.
- IBM System z10 Enterprise Class
IBM introduces its most powerful mainframe. Before you think "Wait, that's a mainframe, that doesn't apply to me", stop to consider all that IBM has done to make the mainframe an "open system" without sacrificing security or availability:
- Open standard connectivity, including TCP/IP and now 6 Gbps InfiniBand and 10 Gbps Ethernet.
- Unix System Services. Yes, z/OS is certified to provide UNIX interfaces for today's applications.
- HFS and zFS file systems that can be mounted, shared, and used by traditional z/OS applications and JCL.
- Linux and Java. Many of today's largest websites are run on mainframes behind the scenes.
- Extreme bandwidth. The z10 EC handles up to 336 FICON channels (4 Gbps) for large data processing workloads.
The z10 EC is as powerful as 1,500 x86 (such as Intel or AMD) servers, but consumes 85 percent less floor space and 85 percent less energy. (They should put a "green" stripe down the front of this box just to remind everyone how energy efficient this server really is!) For more on the z10 EC, see the [Press Release].
- Enhanced IBM System Storage DS8000
With the XIV acquisition taking the role as the best place to put unstructured files for Web 2.0 applications, the IBM DS8000 can focus on its core strength, managing databases and online transactions for the mainframe. There's enough here to justify its own post, so I will cover this later.
- IT Service Management Center for z (ITSMCz)
Trust me, I don't make up these acronyms. IT Service Management refers to the policies and procedures for managing an IT environment, such as following the best practices documented in the IT Infrastructure Library (ITIL). In the past, IBM tools have focused on Linux, UNIX and Windows on distributed servers, but today ITSMCz brings all of that to the mainframe! (Or perhaps it is more correct to say it "brings the mainframe to all that"!)
- IT Transformation & Optimization - Infrastructure Strategy and Planning services
I don't make up the names of our service offerings either. However, one thing is clear: it is time for people to re-evaluate their current data center, and come up with a new plan. The average data center is 15 years old. According to Gartner Group, more than 70 percent of the world's "Global 1000" organizations will have to make significant modifications to their data centers in the next five years. IBM can help, and is rolling out a new set of services specifically to help clients make this transition, to better align their IT to their business strategies.
- Economic Stimulus Package
IBM borrowed this idea from the U.S. government. IBM Global Financing is offering special terms and rates for new equipment installed by December 31 this year.
Want to learn more? Read this 15-page [IBM's Vision].
technorati tags: IBM, New Enterprise Data Center, vision, strategy, z10 EC, mainframe, Enterprise Class, Jim Stallings, Linux, UNIX, Windows, z/OS, ITSMCz, Gartner, DS8000, infrastructure, services, economic stimulus package
Wrapping up my week on the Feb 12 announcements, I will finish off talking about the new Half-High (HH) LTO4 drives available for our TS3100 and TS3200 tape libraries.
Small and medium sized business (SMB) clients are looking for small, affordable tape systems. Tape is inherently green, using orders of magnitude less energy than disk, and is very scalable by simply purchasing more tape cartridges.
When IBM first announced them, the TS3100 supported one drive with 24 cartridges, and the TS3200 (see picture at left) supported two drives and 48 cartridges. Unlike disk, which quotes raw capacity and then lowers it to indicate usable capacity in RAID configurations, tape is just the opposite. LTO4 cartridges have 800 GB raw capacity, but with an average of 2:1 compression, can hold a usable 1.6 TB of data. LTO4 also supports WORM cartridges for non-erasable, non-rewriteable (NENR) types of data, and offers encryption capability.
As a follow-on to our HH LTO3 drives, IBM is the first major storage vendor to offer the new HH LTO4 drives in entry-level automation, which directly attach via 3 Gbps SAS connections to your host servers. The HH models allow you to have two drives in the TS3100, and four drives in the TS3200.
You can mix and match LTO3 and LTO4. Why would anyone do that? Well, the Linear Tape Open [LTO] consortium--made up of technology provider companies IBM, HP and Quantum--decided to support N-2 generation read, and N-1 generation read/write. So, an LTO3 drive can read LTO1 cartridges, and read/write LTO2 and LTO3 cartridges. The LTO4 drive can read LTO2 cartridges, and read/write LTO3 and LTO4 cartridges. For SMB customers that still have some LTO1 cartridges they might want to read some day, mixing LTO3 and LTO4 is a viable combination.
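That N-2 read / N-1 read-write rule is easy to capture as a small function (a helper of my own, not anything from the LTO specification itself):

```python
def lto_compat(drive_gen, cart_gen):
    """What an LTO drive of one generation can do with a cartridge.

    Per the consortium rule: read back two generations (N-2),
    read/write back one generation (N-1) and the current one.
    """
    if drive_gen - 1 <= cart_gen <= drive_gen:
        return "read/write"
    if cart_gen == drive_gen - 2:
        return "read only"
    return "incompatible"

assert lto_compat(4, 2) == "read only"      # LTO4 drive, LTO2 cartridge
assert lto_compat(4, 3) == "read/write"
assert lto_compat(3, 1) == "read only"      # why mixing in LTO3 helps LTO1 owners
assert lto_compat(4, 1) == "incompatible"
```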
Of course, IBM still offers full-high (FH) versions of LTO3 and LTO4, which offer a bit faster acceleration, back-hitch and rewind times than their HH counterparts, and also offer additional attachment choices of LVD Ultra160 SCSI and 4 Gbps Fibre Channel as well.
So, for SMB customers that are simply using their tape for backup and archive, and probably not driving maximum rated speeds, having twice as many slower drives might be just the right fit.
For more information on IBM's Feb. 12 announcement, see the [IBM Press Release].
technorati tags: IBM, HH, LTO3, LTO4, TS3100, TS3200, SMB, WORM, NENR, FH, LVD, Ultra160, SCSI
Yesterday, I promised I would cover other products from the Feb 12 announcement. Today I will focus on the IBM SAN768B director. Some people are confused on the differences between switches and directors. I find there are three key differences:
- Directors are designed for 24x7 operation, highly available with no single points of failure or repair. Generally, all components in directors are redundant and hot-swappable, including Control Processors. In switches, some components are redundant and hot-swappable (such as fans and power supplies), but not the “motherboard” or controller. Often you have to take down a switch to make firmware or major hardware changes or upgrades.
- Directors are designed to take in "blades" with different features, port counts, or protocol capabilities. You can add or remove blades while the system is up and running. Switches have a fixed number of ports. (A Small Form-factor Pluggable optical transceiver [SFP] is the component that turns electric pulses into light pulses, and vice versa. You plug the SFP into the switch, and then the fiber optic cable is plugged into the SFP.)
With switches, you often start with a base number of active ports, and then can enable the rest of the ports as you need them.
- Directors have hundreds of ports. Switches tend to have 64 ports or fewer.
Last year, Brocade acquired McDATA. Both were OEMs for IBM, and IBM distinguished that in the naming convention. The IBM SAN***B name was used to denote products manufactured for IBM by Brocade, and a SAN***M name was used to denote products manufactured by McDATA.
At that time, Brocade and McDATA equipment did not mix very well on the same fabric, so IBM retained the naming convention so that you as a customer knew what it worked with.
Brocade has now released new levels of both operating systems--Brocade's FOS and McDATA's EOS--and their respective fabric managers--Brocade Fabric Manager (FM) and McDATA's Enterprise Fabric Connectivity Manager (EFCM)--so that they have full interoperability.
Brocade's goal is to enhance EFCM to be a common software management platform for all of their products going forward.
IBM used the maximum port count in the name to provide some clue as to the size of the switch or director. The SAN16B-2 and the SAN32B-3 are switches that have a maximum of 16 and 32 ports, respectively. The SAN256B supports a maximum of eight blades of your choosing. Two different types were supported for FC ports, a 16-port blade and a 32-port blade. If all eight were 32-port blades, then the maximum was 256 ports, hence the name. But then Brocade began offering 48-port blades. Should IBM change the name? No, it decided to leave it the SAN256B even though it can now have a maximum of 384 ports.
Not to confuse anyone, the SAN768B also has a maximum of 384 ports, in the same 14U dimensions, but with a special twist. Normally, to connect two directors together you use up ports from each, in what are called "inter-switch links" (ISL). These are ports you are taking away from availability to the servers and storage controllers. The SAN768B offers a new alternative called "inter-chassis links" (ICL). Each SAN768B has two processing blades, and each has two ICL ports, so with just four two-meter (2m) cables, you get the equivalent of 128 FC 8 Gbps ISL links without using 128 individual ports on each side. That is like giving you 256 ports back for use with servers and storage!
Since IBM directors require 240-volt power, the IBM TotalStorage SAN Cabinet C36 includes power distribution units (PDUs). PDUs are just glorified power strips, but a new intelligent PDU (iPDU) option introduces additional intelligence to monitor energy consumption for customers looking to measure, and perhaps charge back, energy consumption to the rest of the business. You can stack two SAN768B directors in one cabinet, one on top of the other, and connected via ICLs, they would look like one huge 768-port backbone.
As a backbone for your data center, the SAN768B is positioned for two emerging technologies:
- 8 Gbps Fibre Channel (FC)
The SAN768B is powerful enough to have 32-port blades run full speed on all ports off-blade without oversubscription. Oversubscription is an emotional topic.
Normally, blades (like switches) can handle all traffic at full speed without delays provided the in-bound and out-bound ports involved are all on the same blade. In a director, however, if you need to communicate from a port on one blade to a port on a different blade, it is possible that off-blade traffic might be constrained or delayed in its transit across the backplane.
On the SAN768B, both the 16-port and 32-port blades can run at full 8 Gbps speed, and the 48-port blade is exposed to oversubscription only if you have more than 32 ports running at full 8 Gbps transferring data off-blade concurrently.
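In other words, each blade effectively gets about 32 ports' worth of 8 Gbps off-blade bandwidth (my inference from the statement above, not a published backplane spec), so the worst-case oversubscription on a 48-port blade works out like this:

```python
PORT_GBPS = 8
OFFBLADE_PORT_BUDGET = 32   # assumed off-blade capacity, in full-speed ports

def oversubscription(ports_active):
    """Ratio of demanded off-blade bandwidth to available backplane bandwidth."""
    demand = ports_active * PORT_GBPS
    capacity = OFFBLADE_PORT_BUDGET * PORT_GBPS
    return max(1.0, demand / capacity)

print(oversubscription(32))   # 1.0 -> line rate, no oversubscription
print(oversubscription(48))   # 1.5 -> worst case on a fully driven 48-port blade
```

In practice you rarely hit the worst case, since much traffic stays on-blade and few workloads drive every port at line rate simultaneously.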
The new 8 Gbps SFPs support auto-negotiation at N-1 and N-2 generation link speeds. This means that they will automatically slow down when communicating with 4 Gbps and 2 Gbps devices, but they cannot communicate with 1 Gbps devices. If you are still using 1 Gbps devices in your data center, you will need to use 4 Gbps SFPs (which also support 2 Gbps and 1 Gbps link speeds) to communicate with those older devices.
- Fibre Channel over Ethernet (FCoE)
Wikipedia has a good summary of [FCoE].
Basically, this new technology enables transport of Fibre Channel packets over 10 Gbps Ethernet links. This 10 Gbps Ethernet can also be used to carry traditional iSCSI and TCP/IP traffic. FCoE introduces new extensions to provide Fibre Channel characteristics, like being lossless, and offering consistent performance. The ANSI T11 team is driving FCoE as an open standard, and at the moment it is not fully baked. I suggest you don't buy any FCoE equipment prematurely, as pre-standard devices or host bus adapters could get you burned later when the standard is finalized.
The idea is that FCoE blades can be installed in a SAN768B along with traditional FC blades, allowing routing of traffic between traditional FC and new FCoE ports. Those who have invested in FCIP for long distance replication will be able to continue using either FC or FCoE inputs.
One of the big drivers of FCoE is IBM BladeCenter. Currently, most BladeCenter blades support both Ethernet and FC connectivity and are connected to both Ethernet and FC switches on the back of each BladeCenter chassis. With FCoE, we have the potential to run both FC and IP traffic across simpler all-Ethernet blades, connecting through all-Ethernet switches on the backs of each chassis.
For more information on the IBM SAN768B, see the [IBM Press Release]. For more details on Brocade's strategy, here is an 8-page white paper on their [Data Center Fabric] vision.
technorati tags: IBM, SAN768B, SAN, switch, director, backbone, SFP, Brocade, McDATA, FOS, EOS, FM, EFCM, blade, ISL, ICL, FC, FCP, FCIP, FCoE, BladeCenter, Ethernet, 8Gbps, 10GbE, Data Center Fabric
It's Tuesday, and you know what that means-- IBM makes its announcements.
Today, IBM announced a variety of storage offerings, but I am going to focus this post on just the new DR550 models. The DR550 is the leading disk-and-tape solution for storing non-erasable, non-rewriteable (NENR) data. This type of data, often called fixed-content or compliance data, was previously written to Write Once, Read Many (WORM) optical media. However, optical technology has not advanced as fast as magnetic recording, so disk and tape have taken over this role. While there are still a few laws on the books that mandate "optical media" as the storage solution, new regulations like SEC 17a-4 and Sarbanes-Oxley (SOX) allow for NENR solutions based on magnetic disk or tape instead.
As we had done for the IBM SAN Volume Controller (SVC), the DR550 was based on "off the shelf" components. The File System Gateway (FSG) was based on a System x server, and the DR550 hardware was based on a System p server and DS4000 disk arrays, with "hardened" versions of AIX, DS4000 Storage Manager, and IBM Tivoli Storage Manager (TSM), the last of which we renamed the IBM System Storage Archive Manager (SSAM).
The DR550 is Ethernet-based, so it can be used with all IBM server platforms, from System x and BladeCenter, to System i and System p, and even System z mainframe customers, as well as non-IBM platforms from Sun, HP and others. There are two ways to get data stored onto the DR550:
- Sending archive objects via the SSAM archive API. This is an API based on the XBSA open standard that many applications have coded to.
- Writing files via standard CIFS and NFS protocols through the File System Gateway (FSG), an optional priced feature that you can have incorporated into the DR550.
Generally, business applications like SAP or Microsoft Exchange don't do this directly; rather, you have an "archive management application" that acts as the go-between broker. IBM offers IBM Content Manager, IBM CommonStore for eMail (Exchange and Lotus Domino), and IBM CommonStore for SAP. IBM also recently acquired FileNet and Princeton Softech, which provide additional support. Third-party products like Zantaz and Symantec KVS Enterprise Vault have also passed System Storage Proven certification for the DR550. These go-between applications understand the underlying storage structure of their respective applications, and can apply policies to extract database rows, individual emails, or other attachments, as appropriate, and either move or copy them into the DR550.
The DR550 has built-in support to move data from disk to tape, through policy-based automation behind the scenes. This is the key differentiator from disk-only solutions. Rather than filling up an EMC Centera, and watching it sit there idle burning energy for five to seven years, or however long you are required to keep the data, you can instead use the disk for the most recent month's worth of data on a DR550. The DR550 attaches to tape drives or libraries, not just IBM TS1120 or LTO based models, but hundreds of systems from other vendors as well. You can combine this with either rewriteable or WORM tape cartridge media, depending on your circumstances. This can be directly cabled, or through a SAN fabric environment. Storing the bulk of this rarely-referenced data on tape makes the DR550 substantially more affordable and more green than disk-only alternatives.
Let's take a look at the specific models:
- IBM System Storage DR550 DR1
The DR1 machine-type-model replaces the "DR550 Express" for small and medium size business workloads. This is a single System p server with anywhere from 1 to 36 TB of raw disk capacity in a nice lockable 25U cabinet (see picture at left). On the original DR550 Express, the 25U cabinet was optional, but so many people opted for it that we made it a standard feature. You can add the File System Gateway, which is a System x server running Linux, with NFS and CIFS protocols converted to SSAM API calls.
- IBM System Storage DR550 DR2
The DR2 machine-type-model replaces the larger "DR550" for enterprise workloads. This can be either a single or dual node System p configuration, anywhere from 6 to 168 TB in raw disk capacity, in a lockable 36U cabinet. This also allows for an optional File System Gateway, and in the case of the dual node configuration, you can have two System p servers, and two System x servers with two Ethernet and two SAN switches for complete redundancy.
Common Information Model (CIM) and SMI-S interfaces have been added so that IBM Director can provide a "single pane of glass" to manage all of the components of the DR550.
The system is based on high-capacity 750 GB SATA drives, installed in half-drawer (eight drives, 6 TB) and full-drawer (16 drives, 12 TB) increments. Your choices will be 7+P RAID5 or 6+P+Q RAID6. Here is an Intel article that explains [RAID6 P+Q]. In the future, as new disk technologies are introduced, the DR550 supports moving the disk data from old to new seamlessly, without disrupting enforcement of the data retention policies.
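For the curious, here is a minimal illustration of how P+Q parity works: P is a plain XOR across the data drives, while Q is a weighted sum in the Galois field GF(2^8), which is what lets RAID6 survive two simultaneous drive failures. This is a textbook sketch, not the DS4000's actual implementation:

```python
def gf_mul(a, b):
    """Multiply two bytes in GF(2^8) with the polynomial 0x11D (as used by RAID6)."""
    p = 0
    for _ in range(8):
        if b & 1:
            p ^= a
        carry = a & 0x80
        a = (a << 1) & 0xFF
        if carry:
            a ^= 0x1D
        b >>= 1
    return p

def pq_parity(stripe):
    """Compute P (XOR) and Q (GF-weighted XOR) over one byte per data drive."""
    P, Q = 0, 0
    g = 1                      # successive powers of 2 in GF(2^8)
    for d in stripe:
        P ^= d
        Q ^= gf_mul(g, d)
        g = gf_mul(g, 2)
    return P, Q

stripe = [0x11, 0x22, 0x33, 0x44, 0x55, 0x66]   # six data drives (6+P+Q)
P, Q = pq_parity(stripe)

# Losing one data drive: rebuild it from P and the surviving drives.
lost = 2
rebuilt = P
for i, d in enumerate(stripe):
    if i != lost:
        rebuilt ^= d
assert rebuilt == stripe[lost]
```

Recovering from two lost data drives requires solving a small pair of equations using both P and Q; the Intel article linked above walks through that algebra.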
For more information, here is a [6-page brochure] that has specifications for both the DR1 and DR2 models.
Previous posts about the DR550: [DR550 File System Gateway | What happened to CAS? | Optimizing Data Retention and Archiving | Blocks, Files and Content-Addressable Storage | Dilemma over future storage formats | Storage Predictions for 2007]
I'll cover some of the other announcements in later posts this week. If you can't wait, you can go read the [IBM Press Release].
technorati tags: IBM, DR550, Express, DR1, DR2, SSAM, TSM, FSG, NFS, CIFS, NENR, WORM, fixed-content, compliance, SEC, SOX, SVC, XBSA, API, SAP, CommonStore, Microsoft Exchange, Lotus Domino, FileNet, Princeton Softech, Zantaz, EnterpriseVault, EMC, Centera, AIX, Linux, cabinet, RAID5, SATA, RAID6, P+Q, CAS
When times are tough, people revert back to their "default programming", and companies search for their "core strengths". The Redwoods Group calls this the [Native Language Theory]. Here's an excerpt:
A young carpenter immigrates to the United States from Italy, unable to speak a word of English. Upon arrival, he moves into a small apartment by himself and begins looking for a job in construction. With some luck and a lot of hard work, he quickly lands a job at a local construction site. Over the coming weeks he learns how to say “hello” and “goodbye” to his English-only coworkers. As time goes on, he is able to learn more complex phrases and commands and is now able to begin taking on jobs that better match his level of expertise.
Several years after the carpenter moved to the US, he now speaks fluent English and has started a family with an American woman and now speaks only English on the job site and at home. One afternoon, while hammering at the framing of a new home, the carpenter strikes his thumb. In what language does he curse? Italian, of course.
We believe that this story illustrates the nature of reacting to difficult, stressful, and, yes, painful situations by reverting to what you know best. This is the reason that coaches ask their players to make certain actions “instinctual” – simply, when times get tough, we fall back on our native language.
Last September, in my post [Supermarkets and Specialty Shops], I mentioned how Forrester Research identified two kinds of IT vendors selling storage. On one side were the "information infrastructure" companies (IBM, HP, Sun, and Dell) that focus on providing one-stop shopping for clients that want all parts of an IT solution, including servers, storage, software and services. These I compared to "supermarkets".
On the other side were the storage component vendors (EMC, HDS, NetApp, and many others) that focus on specific storage components. These I compared to "specialty shops", like butchers, bakers and candlestick makers. These often appeal to customers with IT staffs big enough to have the skills to do their own system integration. The key difference seems to be that the supermarkets are client-focused, and the specialty shops are technology-focused, and different people prefer to do business with one side or the other. This came in handy last November to explain Dell's acquisition of EqualLogic and discuss [IBM Entry-Level iSCSI offerings].
Some recent news seems to fit this model, in relation to the Native Language Theory.
Several argued that EMC was in the process of shifting sides, from disk specialty shop over to an everything-but-servers supermarket. Certainly many of its acquisitions in software, services, and VMware would support the notion that perhaps they are going through an identity crisis. The immediate beneficiary was HDS, the #2 disk specialty shop, which passed up EMC with innovative features in its USP-V disk system.
However, times are tough, especially in the U.S. economy that many storage vendors are focused on. EMC appears to have found its native language, going back to its roots in solid state storage systems that it started with back in 1979. This week EMC announced [Symmetrix DMX-4 support of Flash drives]. Several bloggers review the technology involved:
Overall, a smart move for EMC to go back to its technology-focused disk specialty shop mode and go head-to-head against the HDS threat. With Web 2.0 workloads moving off these monolithic solutions and onto [clustered storage more appropriate for "cloud computing"], large enterprise-class disk systems like the IBM System Storage DS8000 and EMC DMX-4 can shift focus to what they do best: online transaction processing (OLTP) and large databases. However, I noticed the EMC press release mentions EMC as an "information infrastructure" company, so perhaps they still haven't resolved their identity crisis.
(For the record, IBM shipped [Flash drive-based storage last year], and announced [larger drive models] this week. As we have learned from last year, terms like "First" or "Leader" in corporate press releases should not always be taken literally.)
- Sun Microsystems
After Sun acquired the StorageTek specialty shop, they too had a bit of an identity crisis. Fortunately, they realized their core strengths were on the "supermarket" side, moved storage in with servers in their latest restructuring, changed their NYSE symbol from SUNW to JAVA, and reset their focus on providing end-to-end solutions like IBM. For example, fellow blogger Taylor Allis from Sun mentions their latest in "clustered storage" in his post [IBM Buys XIV - Good Move].
Last August, in my post [Fundamental Changes for Green Data Centers], I mentioned that IBM consolidated 3900 rack-optimized servers onto 33 mainframes, and that this was part of our announcement that [since 1997, IBM has consolidated its strategic worldwide data centers from 155 to seven]. I noticed in Nick Carr's Rough Type blog post [The Network is the Data Center] that HP and Sun have followed suit:
In an ironic twist, some of today's leading manufacturers of server computers are also among the companies moving most aggressively to reduce their need for servers and other hardware components. Hewlett-Packard, for instance, is in the midst of a project to slash the number of data centers it operates from 85 to 6 and to cut the number of servers it uses by 30 percent. Now, Sun Microsystems is upping the stakes. Brian Cinque, the data center architect in Sun's IT department, says the company's goal is to close down all its internal data centers by 2015. "Did I just say 0 data centers?" he writes on his blog. "Yes! Our goal is to reduce our entire data center presence by 2015."
While Nick feels this is ironic for Sun, known for UNIX servers based on their SPARC chip technology, I don't. Sun has shifted from being technology-focused to being client-focused. This is where the marketplace is going, and the supermarket vendors, being client-focused, are best positioned to adapt to this new world. In a sense, Sun found its roots. Nick summarizes this as: "The network, to spin the old Sun slogan, becomes the data center."
So, each move seems to strengthen their respective identities back to their origins, or at least help them communicate that to the market.
technorati tags: core strengths, native language, Forrester Research, supermarket, specialty shops, IBM, HP, Sun, Dell, information infrastructure, client-focused, technology-focused, EqualLogic, EMC, HDS, NetApp, USP-V, DMX-4, Flash, disk, drive, systems, Java, Taylor Allis, UNIX, SPARC, Nick Carr
So here we are in January, named after the two-faced Roman god Janus, who in their mythology was the god of gates and doors, and beginnings and endings.
-- Roger von Oech, [Our "Janus-Like" Powers]
Well, it's 2008, which could mark the end of RAID5 and the beginning of a new disk storage architecture. IBM starts the year with exciting news, acquiring new disk technology from a small start-up called XIV, led by former EMCer Moshe Yanai. Moshe was publicly ousted in 2001 from his position as EMC's VP of engineering, and formed his own company. It didn't take long for EMC bloggers to poke fun at this. Mark Twomey, in his StorageZilla blog, had mentioned XIV before back in August, in [XIV], and again today in [IBM Buys XIV].
The following is an excerpt from the [IBM Press Release]:
To address the new requirements associated with next generation digital content, IBM chose XIV and its NEXTRA™ architecture for its ability to scale dynamically, heal itself in the event of failure, and self-tune for optimum performance, all while eliminating the significant management burden typically associated with rapid growth environments. The architecture also is designed to automatically optimize resource utilization of all the components within the system, which can allow for easier management and configuration and improved performance and data availability.
"We are pleased to become a significant part of the IBM family, allowing for our unique storage architecture, our engineers and our storage industry experience to be part of IBM's overall storage business," said Moshe Yanai, chairman, XIV. "We believe the level of technological innovation achieved by our development team is unparalleled in the storage industry. Combining our storage architectural advancements with IBM's world-wide research, sales, service, manufacturing, and distribution capabilities will provide us with the ability to have these technologies tackle the emerging Web 2.0 technology needs and reach every corner of the world."
The NEXTRA architecture has been in production for more than two years, with more than four petabytes of capacity being used by customers today.
Current disk arrays were designed for online transaction processing (OLTP) databases. The focus was on using the fastest, most expensive 10K and 15K RPM Fibre Channel drives, with clever caching algorithms for quick, small updates of large relational databases. However, the world is changing, and people now are looking for storage designed for digital media, archives, and other Web 2.0 applications.
One problem that the NEXTRA architecture addresses is RAID rebuild. In a standard RAID5 6+P+S configuration of 146GB 10K RPM drives, the loss of one disk drive module (DDM) was recovered by reconstructing the data from parity of the other drives onto the spare drive. The process took 46 minutes or longer, depending on how busy the system was doing other things. During this time, if a second drive in the same rank fails, all 876GB of data are lost. Double-drive failures are rare, but unpleasant when they happen, and hopefully you have a backup on tape to recover the data from. Moving to slower, less expensive SATA drives made this situation worse. The drives have higher capacity, but run at slower speeds. When a SATA drive fails in a RAID5 array, it could take several hours to rebuild, and that is more time exposure for a second drive failure. A rebuild for a 750GB SATA drive would take five hours or more, with 4.5TB of data at risk during the process if a second drive failure occurs.
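The exposure arithmetic above is simple to sketch. This is illustrative only, using the figures quoted in this post:

```python
def raid5_exposure_gb(drive_gb, data_drives=6):
    """Usable data (GB) at risk in a degraded 6+P+S RAID5 rank:
    if a second drive fails before the rebuild completes, the
    data spread across all six data drives is lost."""
    return drive_gb * data_drives

# 146GB FC drives: 6 x 146GB = 876GB exposed for a ~46-minute rebuild
print(raid5_exposure_gb(146))   # 876

# 750GB SATA drives: 6 x 750GB = 4500GB (4.5TB) exposed for 5+ hours
print(raid5_exposure_gb(750))   # 4500
```

The point is that moving to bigger, slower drives grows both the amount of exposed data and the length of the exposure window.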
The NEXTRA architecture doesn't use traditional RAID ranks or spare DDMs. Instead, data is carved up into 1MB objects, and each object is stored on two physically separate drives. In the event of a DDM loss, all the data is readable from the second copies that are spread across hundreds of drives. New copies are made on the empty disk space of the remaining system. This process can be done for a lost 750GB drive in under 20 minutes. A double-drive failure would only lose those few objects that were on both drives, so perhaps 1 to 2 percent of the total data stored on that logical volume.
Losing 1 to 2 percent of data might be devastating to a large relational database, as this could compromise access to the database's entire internal structure. However, this box was designed for unstructured content, like medical images, music, videos, Web pages, and other discrete files. In the event of a double-drive failure, individual files could be recovered, for example with IBM Tivoli Storage Manager backup software.
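To see why scattering mirrored 1MB objects across many drives limits double-failure damage, here is a small simulation. It assumes an idealized uniform-random placement of each object's two copies; the actual NEXTRA placement policy and drive counts are not detailed here, so the drive count and resulting fraction are only illustrative:

```python
import random

def fraction_lost_on_double_failure(num_drives=120, num_objects=100_000, seed=42):
    """Fraction of objects losing BOTH copies when two specific drives
    fail, assuming each object is mirrored on two distinct drives
    chosen uniformly at random."""
    rng = random.Random(seed)
    failed = {0, 1}  # by symmetry, any particular pair of drives
    lost = sum(
        1 for _ in range(num_objects)
        if set(rng.sample(range(num_drives), 2)) <= failed
    )
    return lost / num_objects

# Unlike a RAID5 rank, where a double failure loses the whole rank,
# only objects whose two copies both landed on the failed pair are lost.
print(fraction_lost_on_double_failure())
```

Under this idealized model the expected lost fraction is 2/(n*(n-1)) for n drives; whatever the real placement policy, the damage is limited to a small slice of the data rather than everything in the rank.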
IBM will continue to offer high-speed disk arrays like the IBM System Storage DS8000 and DS4800 for OLTP applications, and offer NEXTRA for this new surge in digital content of unstructured data. Recognizing this trend, disk drive module manufacturers will phase out 10K RPM drives, and focus on 15K RPM for OLTP, and low-speed SATA for everything else.
Update: This blog post was focused on the version of the XIV box available as of January 2008 that was built by XIV prior to the IBM acquisition. IBM has since made a major revision, made available August 2008, that addresses a variety of workloads, including database, OLTP, and email, as well as digital content and unstructured files. Contact your IBM or IBM Business Partner for the latest details!
Bottom line, IBM continues to celebrate the new year, while the EMC folks in Hopkinton, MA will continue to nurse their hangovers. Now that's a good way to start the new year!
technorati tags: Janus, two-faced, Roman god, Roger Von Oech, IBM, RAID5, XIV, EMC, Moshe Yanai, Mark Twomey, StorageZilla, NEXTRA, double-drive failure, rebuild, DDM, HDD, digital content, unstructured data
Today, IBM announced a software/server/storage combo that out-performed both HP and Sun. Here is an excerpt from the [IBM Press Release]:
IBM today announced that its recently introduced E7100 Balanced Warehouse(TM), consisting of the IBM POWER6(TM) processor-based System p(TM) 570 server, the IBM System Storage(TM) DS4800 and DB2(R) Warehouse 9.5, is already lapping the field in performance. The new data warehousing solution is now ranked number one in both performance and in price/performance in the TPC-H benchmark:
- 2x speed-up over HP system with Oracle 10g and equal number of cores;
- 3.17x speed-up over Sun with Oracle 10g and 38 percent price advantage;
- A new world record by loading 10 terabytes (TB) data at six TB per hour (TB/hr).
"These latest benchmark results further prove IBM's strength and leadership in the business intelligence arena," said Scott Handy, vice president of marketing and strategy, IBM Power Systems. "The E7100 Balanced Warehouse is a complete data warehousing solution comprised of pre-tested, scalable and fully integrated system and storage components, designed to get customers up and running quickly to get to the real benefit of unprecedented business insight and intellect."
For those not familiar with the [IBM Balanced Warehouse], it is the productized version of DB2's ["Balanced Configuration Unit" or BCU] reference configuration. The IBM Balanced Warehouse presents a pre-tested, pre-configured solution for Business Intelligence (BI) applications. These come in the form of "building blocks" that can be combined to get to the size you need, with incremental growth as your business expands. Each building block matches the server's processors and memory with the appropriate I/O bus, cabling, and capacity of the disk system, resulting in optimal performance.
IBM DB2 software is designed to allow you to combine multiple building blocks into a single system image. This greatly simplifies your data warehouse deployment, and can help ensure success. For example, for a 50TB deployment, you can take a base 2TB building block, add 24 more, each with 2TB of disk capacity, and have a completely balanced environment. IBM clients have built systems over 300TB in this manner with these building blocks.
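The building-block sizing above is straightforward to express. Here is a tiny sketch; the helper name and the 2TB default are my own for illustration, not an IBM tool:

```python
import math

def building_blocks_needed(target_tb, block_tb=2):
    """Number of identical building blocks needed to reach a target
    usable capacity, rounding up to whole blocks."""
    return math.ceil(target_tb / block_tb)

# The 50TB example above: one base block plus 24 more = 25 blocks
print(building_blocks_needed(50))   # 25
```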
The IBM Balanced Warehouse is offered in several configurations:
The [C-class models] are designed for SMB customers, employing an IBM System x server with internal or direct attached EXP3000 disk.
The [D-class models] are the next step up, offering department-level data marts and data warehouse for larger deployments, employing an IBM System x server with EXP3000 or System Storage DS3400 entry level disk.
The [E-class models] represent our top-of-the-line configurations for our largest enterprise deployments. The [E6000] runs Linux on an IBM System x server with System Storage DS4800 disk. The [E7000] runs AIX on an IBM System p575 server with DS4800 disk. The new [E7100] mentioned above runs AIX on a POWER6-based IBM System p570 with DS4800 disk.
As I have mentioned before, in my post [Supermarkets and Specialty Shops], companies are looking for complete solutions, preferably from a single vendor like IBM, HP and Sun, rather than buying piece-part components from different vendors and hoping the combined ["Frankenstein"] configuration meets business requirements.
The DS4800 is an obvious choice for this solution, providing an excellent balance of cost and performance, in a modular packaging that is ideal for the incremental growth design inherent in the IBM Balanced Warehouse philosophy. To learn more about this disk system, see the official [DS4800 website] for details, descriptions and specifications.
technorati tags: IBM, HP, Sun, Balanced Warehouse, balanced, configuration, unit, BCU, Oracle, 10g, EXP3000, DS3400, DS4800, disk, storage, system, datamart, data, warehouse, Business Intelligence, BI, Frankenstein, supermarket, specialty shop, E6000, E7000, E7100
It's official! My "blook" Inside System Storage - Volume I is now available.
This blog-based book, or "blook", comprises the first twelve months of posts from this Inside System Storage blog, 165 posts in all, from September 1, 2006 to August 31, 2007. Foreword by Jennifer Jones. 404 pages. Topics covered include:
- IT storage and storage networking concepts
- IBM strategy, hardware, software and services
- Disk systems, Tape systems, and storage networking
- Storage and infrastructure management software
- Second Life, Facebook, and other Web 2.0 platforms
- IBM’s many alliances, partners and competitors
- How IT storage impacts society and industry
You can choose between hardcover (with dust jacket) or paperback versions:
This is not the first time I've been published. I have authored articles for storage industry magazines, written large sections of IBM publications and manuals, submitted presentations and whitepapers to conference proceedings, and even had a short story published with illustrations by the famous cartoonist [Ted Rall].
But I can say this is my first blook, and as far as I can tell, the first blook from IBM's many bloggers on DeveloperWorks, and the first blook about the IT storage industry. I got the idea when I saw [Lulu Publishing] run a "blook" contest. The Lulu Blooker Prize is the world's first literary prize devoted to "blooks"--books based on blogs or other websites, including webcomics. The [Lulu Blooker Blog] lists past years' winners. Lulu is one of the new innovative "print-on-demand" publishers. Rather than printing hundreds or thousands of books in advance, as other publishers require, Lulu doesn't print them until you order them.
I considered cute titles like A Year of Living Dangerously, or An Engineer in Marketing La-La Land, or Around the World in 165 Posts, but settled on a title that closely matched the name of the blog.
In addition to my blog posts, I provide additional insights and behind-the-scenes commentary. If you go to the Lulu website above, you can preview an entire chapter before purchase. I have added a hefty 56-page Glossary of Acronyms and Terms (GOAT) with over 900 storage-related terms defined, which also doubles as an index back to the post (or posts) that use or further explain each term.
So who might be interested in this blook?
- Business Partners and Sales Reps looking to give a nice gift to their best clients and colleagues
- Managers looking to reward early-tenure employees and retain the best talent
- IT specialists and technicians wanting a marketing perspective of the storage industry
- Mentors interested in providing motivation and encouragement to their proteges
- Educators looking to provide books for their classroom or library collection
- Authors looking to write a blook themselves, to see how to format and structure a finished product
- Marketing personnel that want to better understand Web 2.0, Second Life and social networking
- Analysts and journalists looking to understand how storage impacts the IT industry, and society overall
- College graduates and others interested in a career as a storage administrator
And yes, according to Lulu, if you order soon, you can have it by December 25.
technorati tags: IBM, blook, Volume I, Jennifer Jones, system, storage, strategy, hardware, software, services, disk, tape, networking, SAN, secondlife, Web2.0, facebook, Lulu, publishing, Blooker Prize, articles, magazines, proceedings, Ted Rall, insights, glossary, early-tenure, mentors, library, classroom, administrator, print, publish, on demand
Well, it's Tuesday, which means it's time to look at recent announcements. While I was on vacation last week, IBM made a lot of storage announcements October 23. Josh Krischer gives his summary on WikiBon [October 2007 Review]. Austin Modine of The Register went so far as to say that [IBM goes crazy with storage system updates].
- IBM System Storage DS8000 series
This is "Release 3" software/microcode upgrades on our existing "Turbo" hardware.
- IBM FlashCopy SE -- Here "SE" stands for Space Efficient. Rather than allocating a full 100% of the space for the FlashCopy destination, you can set aside just a fraction, and this will hold all the changed blocks, similar to what IBM already offers on the DS4000 series.
- Dynamic Volume Expansion -- In the past, if you needed more space for a LUN, you had to carve out a new one elsewhere, and then copy the data over from the old to the new, leaving the old LUN around to be re-used or left stranded. With this enhancement, you can just upgrade the LUN in place, making it bigger as needed, similar to what IBM already offers on the DS4000 series and SAN Volume Controller. This applies to CKD volumes for the System z mainframe users out there as well.
- Storage Pool Striping -- striping volumes across RAID ranks to eliminate or reduce hot-spots, and provide better load balancing. Many used SAN Volume Controller in front of the DS8000 to do this, but now you can do it natively in the DS8000 itself.
- z/OS Global Mirror Multiple Reader -- for System z customers, "z/OS Global Mirror" is the new name for XRC. This enhancement improves the throughput of sending updates to the remote disaster recovery location.
- DS Storage Manager enhancements -- the element manager software has been enhanced, and is pre-installed on the new IBM System Storage Productivity Center, which I will talk about below.
- Intermix of DS8000 machine types -- this is especially useful to allow new frames to have co-terminating warranties with the base units. In other words, as you expand your system, you can ensure that the entire chunk of iron runs out of warranty all at the same time, to simplify your decision-making process to upgrade or contract for extended service.
See the [DS8000 Announcement Letter] for more details.
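The idea behind FlashCopy SE above, setting aside only a fraction of the source capacity to hold changed blocks, is the classic copy-on-write snapshot pattern. Here is a toy model of that general pattern; it sketches the concept only, not the DS8000 implementation, and all names in it are my own:

```python
class SpaceEfficientSnapshot:
    """Toy copy-on-write model: only blocks changed after the
    point-in-time copy consume repository space."""

    def __init__(self, source):
        self.source = source      # live volume: block number -> data
        self.repository = {}      # pre-change copies of overwritten blocks

    def write(self, block, data):
        # First overwrite of a block preserves its original contents
        if block not in self.repository:
            self.repository[block] = self.source.get(block)
        self.source[block] = data

    def read_snapshot(self, block):
        # Snapshot view: repository copy if the block changed, else live data
        if block in self.repository:
            return self.repository[block]
        return self.source.get(block)

volume = {0: "A", 1: "B", 2: "C"}
snap = SpaceEfficientSnapshot(volume)
snap.write(1, "B2")                    # live volume changes...
print(snap.read_snapshot(1))           # 'B' -- snapshot still sees old data
print(len(snap.repository))            # 1 -- only the changed block uses space
```

The repository only grows with the number of changed blocks, which is why a fraction of the full capacity is usually enough for a short-lived copy.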
- IBM System Storage SAN Volume Controller 4.2.1
BarryW summarizes the updates in the [Announced SVC 4.2.1] release. If you have problems with the link he provides in his post, here is the [SVC 4.2.1 Announcement Letter].
- IBM System Storage Productivity Center
One of the biggest complaints about IBM TotalStorage Productivity Center is that it is software that needs to be installed on its own server, and that this installation process can take a day or two. Why wait? Now you can have a hardware console with the DS8000 Storage Manager software, SVC Admin Console software, and IBM TotalStorage Productivity Center "Basic Edition" pre-installed. Here are the key features:
- Pre-installed and tested console
- DS8000 R3 GUI integration
- Cohabitation of SVC 4.2.1 GUI and CIMOM
- Automated device discovery
- Asset and capacity reporting, including tape library support
- Device configuration
- Advanced topology viewer
See the [System Storage Productivity Center - Announcement Letter] for more details. "Basic Edition" can be upgraded to "Standard Edition" to get full functionality.
- IBM System Storage N series
Our "Release 9" applies across the board, from N3000 to N5000 to N7000 series models, including new host bus adapters, and the new Data ONTAP 7.2.4 release level.
The Virtual File Manager (VFM) was announced as one of our latest [Storage Virtualization Solutions]. VFM provides a global namespace that aggregates the file systems from Linux, UNIX, and Windows file servers, as well as N series storage, into a consolidated environment.
See the [N series Announcement Letter] for more details.
- IBM TS7520 Virtualization Engine
IBM's virtual tape library (VTL) for the distributed systems platform has been enhanced to provide:
- Up to 12TB of disk cache, using 750GB SATA disk.
- F05 Tape Frames installed as TS7520 base units through a 32-port Fibre Channel switch
- Support for LTO generation 4 tape drives, both as virtual tape drives and as physical tape drives within IBM automated tape libraries attached to the TS7520. This allows you to use the encryption capabilities of LTO4.
See the [TS7520 Announcement Letter] for more details.
- IBM System Storage DS6000 series
The latest 300GB 15K RPM drives are now supported.
- IBM System Storage DS4000 series
This was a software/microcode upgrade release for the DS4000 series.
- RAID-6 support on the DS4700 Express models and the DS4200 Model 7V
- Support for greater than 2 TB volumes on selected supported operating systems
- 8K cache block size
- DS4000 Storage Manager v10.10 increases the number of FlashCopy tasks, remote mirror pairs, and storage partitions.
See the [DS4000 Announcement Letter] for more details.
- IBM System Storage DS3000 series
The DS3000 series now supports SATA disk, and can be attached to AIX and Linux on System p servers. This applies to the DS3200, DS3300 and DS3400 models. See the [DS3000 Announcement Letter] for more details.
- IBM System Storage TS2240
These are LTO4 Half-High drives, which can support encryption. See the [TS2240 Announcement Letter] for details.
Perhaps Austin is right, we might have gone crazy announcing all of this at once.
technorati tags: Josh Krischer, Austin Modine, IBM, DS8000, Turbo, FlashCopy, SE, space efficient, dynamic volume expansion, DVE, striping, z/OS Global Mirror, XRC, System Storage, Productivity Center, TotalStorage, Basic Edition, topology viewer, ONTAP, VFM, global namespace, TS7520, Virtualization Engine, virtual tape library, VTL, F05, SATA, LTO, LTO4, LTO-4, DS6000, DS4000, RAID6, RAID-6, AIX, Linux, System p, servers, TS2240, half-high, drives