The smart people at the University of Pittsburgh manage five campuses and over 33,000 students, and needed an enterprise storage solution that would give them three key benefits. Of course, they turned to IBM, the number one overall storage hardware vendor, to deliver.
Here is what Jinx Walton, Director of Computing Services and Systems Development at the University of Pittsburgh, had to say about it...
"The University of Pittsburgh supports large enterprise systems, and the number and complexity of new systems continue to grow. To effectively manage these systems it was necessary to identify an enterprise storage solution that would leverage our existing investments in storage, make allocation of storage flexible and responsive to project needs, provide centralized management, and offer the reliability and stability we require. The integrated IBM storage solution met these requirements"
You can read the details in the official IBM press release.
ESG analyst Tony Asaro talks about the many small storage startups having a Billion Dollar Impact on the storage system industry. Tony has counted over 50 storage system vendors now in the marketplace. Is it really that many? Most of the time, the media focus only on the top seven major players, but I agree that big players like IBM should take trends like this among small startups seriously.
EMC blogger Chuck Hollis suggests that this trend might be the start of a squeeze play, in which the top players and new upstarts squeeze out the middle players like Sun and HDS, in his post Desperate Times In Storage Land?
(His statement that IDC and Gartner have listed EMC as number one in "almost all" market segments is perhaps a bit misleading. IBM is number one in overall storage hardware, and also leads in tape drives, tape libraries, tape virtualization, and, for that matter, disk virtualization. I don't know whether IDC or Gartner counts EMC Disk Library in the "tape virtualization" category, or whether either analyst treats "cache-based" and "switch-based" disk virtualization as separate categories. Perhaps Chuck should have qualified this to say "almost all of the market segments that EMC does business in," which of course is better than the other vendors in the middle.)
This time around, Chuck pokes fun at HDS, IBM, Sun, NetApp and HP, much like "that guy" who skewers our favorite South Park characters Cartman, Kenny, Stan and Kyle in this Comedy Central MMORPG parody video. (And no, I am not suggesting Chuck looks anything like the cartoon character or his corresponding avatar.)
Perhaps putting me in the same not-
Often, when looking at disk storage it is easy to focus on comparisons to other disk storage, but disruptive technologies cross boundaries. Already we have seen Flash Memory drives on the IBM BladeCenter, replacing traditional disk drives internal to each blade server. They are smaller than regular disk drives, but big enough to hold the operating system to boot from.
The New York Times has an article by John Markoff, Redefining the Architecture of Memory, that talks about IBM's research on "Racetrack Memory". The article is a good read, but here are some interesting excerpts:
Now, if an idea that Stuart S. P. Parkin is kicking around in an I.B.M. lab here is on the money, electronic devices could hold 10 to 100 times the data in the same amount of space.
This technology has the potential to break some of the physical limitations that currently worry disk drive designers. I look forward to seeing how this plays out.
This Doonesbury cartoon about Second Life reminded me about our September 20 event.
Registration for the "Meet the Storage Experts" event in Second Life will close this week fornext week's September 20 event. All IBMers, clients and IBM Business Partners are welcome to attend. We will focus this time on DS3000 and N series disk systems, tape systems,and IBM storage networking gear.
If you miss this one, we plan to have another in November!
The Storage Architect writes in his post:
Array-based replication does have drawbacks; all externalised storage becomes dependent on the virtualising array. This makes replacement potentially complex. To date, HDS have not provided tools to seamlessly migrate away from one USP to another (as far as I am aware). In addition, there's the problem of "all your eggs in one basket"; any issue with the array (e.g. physical intervention like fire, loss of power, microcode bug etc) could result in loss of access to all of your data. Consider the upgrade scenario of moving to a higher level of code; if all data was virtualised through one array, you would want to be darn sure that both the upgrade process and the new code are going to work seamlessly...
I would argue that the IBM System Storage SAN Volume Controller (SVC) is more like the HDS USP, and less like the Invista. Both SVC and USP provide a common look and feel to the application server, both add their own cache in front of external disk, and both provide a consistent set of copy services.
IBM designed the SVC so that upgrades can occur non-disruptively. You can replace the hardware nodes, one node at a time, while the SVC system is up and running, without disruption to reading and writing data on virtual disk. You can upgrade the software, one node at a time, while the SVC system is up and running, without disruption to reading and writing data on virtual disk. You can upgrade the firmware on the managed disk arrays behind the SVC, again, without disruption to reading and writing data on virtual disk.
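The one-node-at-a-time approach described above can be sketched in a few lines. This is a hypothetical illustration, not IBM's actual upgrade tooling: the `Node` and `Cluster` names are invented, and the real SVC process is driven by IBM's own management software. The point it demonstrates is that I/O to virtual disks can keep flowing as long as the surviving nodes stay online while each node is upgraded in turn.

```python
# Hypothetical sketch of a rolling, non-disruptive code upgrade across a
# cluster of nodes, in the style of an SVC upgrade. All names are
# invented for illustration.

class Node:
    def __init__(self, name, version):
        self.name = name
        self.version = version
        self.online = True

class Cluster:
    def __init__(self, nodes):
        self.nodes = nodes

    def io_available(self):
        # I/O to virtual disks continues as long as at least one node
        # remains online to serve the path.
        return any(n.online for n in self.nodes)

    def rolling_upgrade(self, new_version):
        for node in self.nodes:
            node.online = False          # take one node out of service
            assert self.io_available()   # surviving nodes keep serving I/O
            node.version = new_version   # apply the new code level
            node.online = True           # restore it before touching the next

cluster = Cluster([Node("node1", "4.1"), Node("node2", "4.1")])
cluster.rolling_upgrade("4.2")
print(all(n.version == "4.2" and n.online for n in cluster.nodes))  # True
```

The same pattern covers hardware node replacement and firmware upgrades on the managed disk arrays: only one element is ever out of service at a time, so reads and writes never stop.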
More importantly, SVC has the ultimate "un-do" feature, called "image mode". If for any reason you want to take a virtual disk out of SVC management, you migrate it over to an "image mode" LUN, and then disconnect it from SVC. The "image mode" LUN can then be used directly, with all the file system data intact.
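Conceptually, image mode works because the virtual disk's blocks end up mapped one-to-one, in order, onto a single backing LUN. Here is a minimal sketch of that idea; the class and method names are invented for illustration and do not reflect SVC's actual interfaces:

```python
# Hypothetical sketch of "image mode": migrate a virtual disk so its
# blocks sit one-to-one on a single backing LUN, which can then be
# detached from the virtualizer and used directly.

class Lun:
    def __init__(self):
        self.blocks = []

class VirtualDisk:
    def __init__(self, data_blocks):
        # In striped mode, blocks may be spread across many managed disks.
        self.blocks = data_blocks
        self.image_mode = False

    def migrate_to_image_mode(self, target_lun):
        # Copy every block, preserving order, onto one target LUN.
        target_lun.blocks = list(self.blocks)
        self.image_mode = True
        return target_lun

vdisk = VirtualDisk(["b0", "b1", "b2"])
lun = vdisk.migrate_to_image_mode(Lun())
# After disconnecting from the virtualizer, the LUN holds the file
# system data intact, in its original layout.
print(lun.blocks == ["b0", "b1", "b2"])  # True
```

Because the on-LUN layout after migration is identical to what a directly attached host would have written, the disconnect step requires no data conversion at all.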
I define "virtualization" as technology that makes one set of resources look and feel like a different set of resources with more desirable characteristics. For SVC, the more desirable characteristics include choice of multi-pathing driver, consistent copy services, improved performance, etc. For EMC Invista, the question is "more desirable for whom?" EMC Invista seems more designed to meet EMC's needs, not its customers. EMC profits greatly from its EMC PowerPath multi-pathing driver, and from its SRDF copy services, so it appears to have designed a virtualization offering that:
A post from Dan over at Architectures of Control explains the anti-social nature of public benches. City planners, in an effort to discourage homeless people from sleeping on benches in parks or on sidewalks, design benches so uncomfortable that nobody uses them. These include benches made of metal that are too hot or too cold during certain months, benches slanted at an angle that dumps you on the ground if you lie down, and benches with dividers so that you must sit upright to use them.
This is not a disparagement of split-path, switch-based designs. Rather, EMC's specific implementation appears designed to continue vendor lock-in for its multi-pathing driver, continue vendor lock-in for its copy services when used with EMC disk, and provide only slightly improved data migration capability for heterogeneous storage environments. Other switch-based solutions, such as those from Incipient or StoreAge, had different goals in mind.
Sadly, my IBM colleague BarryW and I have probably spent more words discussing Invista than all eleven EMC bloggers combined this year. While everyone in the industry is impressed by how often EMC can sell "me, too" products with an incredibly large marketing budget, EMC appears not to have set aside similar funds for Invista.
If a customer could design the ideal "storage virtualization" solution, one that provides the characteristics they desire most from storage resources, it would look nothing like Invista. While there are pros and cons between IBM's SVC and HDS's TagmaStore offerings, the reason both IBM and HDS are the market leaders in storage virtualization is that both companies are trying to provide value to the customer, just in different ways and with different implementations.