RichardSwain | Tags: tsm, tivoli, snapshots, rto, data, wine, nas, sonas, rpo, hsm, ltfs, backup, recovery, ibm, protection
How does one judge a glass of wine? There are a few basic tests: how it looks, smells, and tastes. But as the wine is poured, you may or may not know that it is made up of different varieties of grapes. A producer sits down and experiments with different percentages of grapes, and this allows some creativity in making a better glass of wine for the consumer. Of course there are many more factors that play into this process, but it's by and large the same no matter what wine you enjoy. You enjoy the wine as a whole, a combination of things put together for you, without your having to know or even understand all that went into making that glass of wine.
When we talk to clients about their data backup strategy, we find a process very similar to wine making. The end user rarely knows all that goes into creating a backup of their data and protecting it for them. They just enjoy the knowledge that their data is safe and will be there if they need to access it. But what we see in the making of the backup is a blend of technologies and a creative element that lets administrators work around constraints like budget and manpower.
As data evolves, we are seeing multiple layers of protection, and the criticality of the data determines the recovery point, the recovery time, and the retention period. Backup technology usually means more than doing a bunch of incrementals and then a full off to disk pools and then tape. There are many different levels of protection that we can use.
Snapshots seem to be more common today than five years ago. They allow for a clean and consistent recovery point of a database or file system. But snapshots are used for more than just a quick backup: with writable copies we can quickly set up copies for test and dev environments, and also rapidly deploy virtual images for desktops or servers. Snapshots usually live on the same disk set the data is sitting on, and can be moved around via a vault technology or a mirror to another site. This can be used for long-term storage if needed, but typically snapshots are used for quick recoveries of less than 7 days. Snapshots are also vulnerable to data corruption: if a software bug corrupts data on the storage system, that can affect the snapshots and mirrors.
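To make the copy-on-write idea concrete, here is a minimal, purely illustrative Python sketch (real storage systems track block pointers in on-disk metadata, not Python dicts): a snapshot is just a frozen set of pointers, which is why it is near-instant and why it gives you a clean point to roll back to.

```python
# Toy copy-on-write snapshot model -- illustrative only, not any
# vendor's actual implementation.

class Volume:
    def __init__(self):
        self.blocks = {}        # block number -> data
        self.snapshots = []     # frozen views of past states

    def write(self, block_no, data):
        self.blocks[block_no] = data

    def snapshot(self):
        # A snapshot freezes the current block pointers; no data is
        # copied, so it is near-instant and initially consumes no space.
        view = dict(self.blocks)
        self.snapshots.append(view)
        return view

    def restore(self, snap):
        self.blocks = dict(snap)

vol = Volume()
vol.write(0, b"orders-v1")
snap = vol.snapshot()
vol.write(0, b"orders-v2-bad")   # a later write diverges from the snapshot
vol.restore(snap)                # quick recovery to the clean point
print(vol.blocks[0])             # b'orders-v1'
```

Note that the snapshot lives in the same structure as the live volume, which is exactly why corruption of the storage system itself can take the snapshots down with it.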
Backups are more traditional: the file system is scanned for changes, and those changes are sent off to a device where the data is stored until needed. In the past it has taken more time to back up file systems, and as storage has gotten larger, those backup times grow longer. The technology has tried to keep up by adding larger backup servers and more tape drives, allowing for more incoming streams. Now, with the idea of using spinning disk for tape pools, we can back up a little quicker, as disk can write data faster than tape. Many things have evolved out of this technology, for example the Linear Tape File System (LTFS) and Hierarchical Storage Management (HSM).
When clients are looking for strategies to protect their data, they will use a combination of these techniques, and a mixture of both disk and tape, to fully protect their environment. Depending on the data type, you may want to use just snapshots, because the data changes rapidly and you do not need to restore from a week or a year ago. Snapshots are really useful in this case, and so is mirroring, or even Metro Mirroring if the RTO is small enough. There are other factors, such as Sarbanes-Oxley, that will require longer-term recovery methods like backups.
Just like a great wine, there are fewer rules today and more room for creativity in designing data protection. And just like wine, there are many consultants who will help you find a good balance of technology to match levels of protection with data. Spend the time looking at your protection schemes and see if there are better ways of balancing this equation. Maybe, with the right planning, you will be able to enjoy a glass of wine instead of spending time recovering from a disaster.
RichardSwain | Tags: pearson, x3650, nseries, sonas, tony, ibm, storage, r1.2, nas
May 9th has been a target on my calendar for some time now. Inside of IBM, we have been waiting for this day to come so we could talk about the new things being released in the storage platform. It almost feels like Christmas morning with a bunch of new presents under the tree. Each gift has inside something that is either really cool or something very useful. The only difference is your Aunt Matilda and her little dog are not coming over for brunch.
Under the IBM tree today is a slew of presents for almost the entire storage platform. I will concentrate on just the IBM NAS ones but if you are interested in knowing what is going on elsewhere, you can find more information at the main website.
SONAS must have been a good boy because there are plenty of gifts for him under the tree this morning. Not only did he find presents under the tree but there were a few little things in his stocking. Here is what Santa brought:
This SONAS release is labeled R1.2 and can be obtained by contacting the technical advisor assigned to you.
Santa was also at the N series house and dropped off a few gifts. A new N6270 to replace the N6070. This new system is in line with the N6200 series with larger amounts of RAM and processors. Just like the smaller N6240, there is an expansion controller where customers can add more PCI control cards like HBAs, 10GbE or even FCoE. A new disk shelf was also released which uses the smaller 2.5 inch drives with improved back end performance.
And over at the Real Time Compression house they got new support for EMC Celerra.
Over all a very busy time of year for IBM (and Santa) as these were just a fraction of the Storage announcements today. Also today is the IBM Storage Executive Summit in New York City. My friend and fellow blogger Tony Pearson is covering this great event and will be updating his blog and twitter feed. If you were not able to make it to NYC for the event, feel free to tweet him your questions @az990tony You can also send questions to our IBM Storage feed at @ibmstorage
I was on my way down to Miami today and was talking to the gentleman sitting next to me about storage technology, and the conversation turned to how everyone is scrambling to be in the cloud business. He had heard multiple vendors come in and start talking about cloud technology and how it was going to save him money, time, and effort. This gentleman worked for a retail chain that has multiple district offices throughout the eastern US and headquarters in Atlanta. He has multiple technologies all helping him keep the business running, but nothing planned; as the company grew, they simply cookie-cut the previous installation and planted it into the new office. Each office would also replicate back to HQ, which was the main repository for backups/restores. I would guess there are thousands of companies out there with similar setups.
So instead of going into how he could leverage cloud storage technology, I asked him what were his problems and listened. They basically came down to this:
1. Multiple independent islands of storage that are aging, causing his support contracts to go up.
2. Backups take way too long, and systems are slowing down as they get closer to 'capacity'.
3. Future growth was expensive, as every time they added capacity, they had to add entire systems.
Now, they were not cutting-edge technology leaders, nor did they want to be, but he needed a way to solve some of these traditional storage problems. He didn't want to go out and buy a new large system that would take forever to bring in and, while it might solve his problems, would bring in even more issues. What he needed was less overhead and more throughput.
We sat there for a while thinking, not saying much, until I offered this tidbit: "So what does cloud mean to you?" After a nice laugh, he stated that he really didn't know, and the more he read, the 'cloudier' the answer became.
There are many interpretations of what cloud really is, and they differ between storage vendors. If there is a true definition of what cloud storage could be, I think it could be defined using NAS technology. NAS lends itself to being a kinder and gentler protocol set, and the need is growing by leaps and bounds. Our traditional way of adding more systems and creating more independent silos works for smaller environments, but it does not scale when clients want large disk pools of storage under one umbrella. There are ways of making volumes span into large pools, but the underlying storage is still made up of smaller components that are typically active/active or active/passive nodes; even the best load balancing will not help if you are overloading that system.
There are ways to find a balance between the same old way and going out and dropping tons of cash on huge storage gear. Find a system that will grow and scale as your storage needs do. Think of ways to keep everything under one umbrella (a single namespace, for example), and also try to solve the issues you are having today with real technology, not workarounds.
With NAS technology, we will always be at the mercy of the backup target, whether it's disk or tape. Whether we are taking snapshots or NDMP backups, we have to write out to some target to have a restore point. That is your basic backup/restore strategy, so why not consider using different types of disks to create tiers and offload data to slower pools as it gets 'older'? A few vendors have said there is no need for tiering, mainly because their systems can't take advantage of it, and therefore they shun those who do. ILM tiering not only helps you achieve higher utilization rates on the storage, it puts the data that is accessed most frequently on faster disk and moves the rest away to make more room. Why pay for fast disk if the data on it is not being accessed frequently?
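A toy age-based tiering policy looks something like this (the tier names and age thresholds here are hypothetical; real ILM engines also weigh access frequency and move data in the background):

```python
# Toy ILM placement policy: (tier name, maximum data age in days).
# None means "catch-all" for anything older.
TIERS = [("ssd", 7), ("fast_sas", 30), ("nearline", None)]

def place(age_days):
    """Return the tier a file of the given age should live on."""
    for tier, max_age in TIERS:
        if max_age is None or age_days <= max_age:
            return tier

print(place(1))    # ssd
print(place(14))   # fast_sas
print(place(120))  # nearline
```

The point of the sketch is the economics: only the youngest, hottest data pays for fast disk, and everything else drifts down to cheaper pools.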
Future expansion has always been tough for administrators; they tend to overbuy on controller size and skimp on the disk. Systems like SONAS from IBM allow you to grow in storage capacity and server throughput independently. If a customer needs more storage but doesn't need the additional throughput, why force him to add more controllers? SONAS systems can scale up to 30 storage servers and 14.4 PB of spinning disk, all under one namespace. No more having multiple nodes with their own names, where this storage is called Accounting1, Accounting2, etc.; the nodes are simply storage, and everyone gets the benefit of all of them, not just one system.
By the time we had gone through all of this, our flight was landing. It was a great talk, and both of us gained a different perspective on how cloud is perceived. If any of you want more information about the IBM Cloud strategy or SONAS, go to the following links:
SONAS by IBM
RichardSwain | Tags: nseries, nas, lsi, marriage, ibm, ds5000, xiv, ds4000, sql, exchange, netapp, engenio, onstor, ds3000
When I first started working at IBM, we had a few NAS storage devices: the NAS 100, NAS 200, NAS 300(G), and NAS 500. The NAS 100 was a 1U server appliance that ran Windows 2000, as did the NAS 200, all built on IBM hardware. The NAS 500 was an AIX system, also from IBM stock. They were traditional NAS-type systems, and IBM sold them as "let us build the system for you so you don't have to." Somewhat limited in functionality, but they did the job they were designed to do: serve NAS data.
That same year, IBM decided to partner with a company that was doing some really interesting things in the storage market. Network Appliance had just started gaining steam with their Data Ontap code (6.something, if I remember correctly) and offered what the IBM systems lacked: unified protocols from a single architecture, and integration into other products like Exchange and SQL using their cool snapshot technology. It took some time to get up to speed on the new Netapp technology, with snap-this and snap-that, but soon we were all talking about waffles and aggrs.
Throughout the years, the product set grew and so did the hardware offerings. We kept up with the releases, and for the most part a 20-60 day lag in releasing new software was OK for most IBM customers. We partnered with the sales and support teams to help grow the N series customer base and keep them happy. As with any partnership there were bumps along the way, and at times it seemed there were two parents telling each other they agreed to disagree. All in all, the N series system has been very successful at IBM.
But as the years progressed, new technology like XIV, Real Time Compression, and TSM FlashCopy Manager has filled some of the voids previously filled by N series in the IBM portfolio. As with many companies, there are products that overlap, and N series overlaps over half of the product line at IBM Storage. Positioning became harder as sales teams questioned when to sell N series and when to sell something "blue". We quickly learned that customers really liked what N series brought to the table and how flexible the solution could be.
Now with the news of Netapp purchasing Engenio, I wonder how the relationship between IBM and Netapp will survive. IBM also rebrands the Engenio products as the IBM DS 3k, 4k, and 5k. I guess the bigger question is what Netapp will do with that product line. If history is any indicator, they will simply keep things as they are for some time and slowly move the customers over to a Data Ontap product. The other question is how long IBM will keep sending money over to Netapp for products that we sell and support.
The old adage of faster, smaller, cheaper has been revived in the N series product line. This week (officially) IBM released the information around the highly anticipated OEM re-brand of Netapp's FAS 2040; the N3400. This system has a small 2U form factor but delivers higher performance than its beefier brother the N3600. If you want to see a full comparison of the three boxes, click here for more information.
IBM has three systems that round out the entry-level, or departmental, storage platform: the N3600, the N3300, and now the N3400. All three are based on internal drives, with expansion to a few shelves as needed. The N3600 comes with 20 internal drives, while the smaller N3300 and N3400 come with only 12 internal disks and can expand to a maximum capacity of 136 TB. Two controllers allow administrators to have a high-availability solution at low cost. This makes the system more attractive, as it also supports FCP, iSCSI, CIFS, and NFS, all from one platform.
The N3400 does have a few things I want to point out:
All of these help set this box up for an important role within your datacenter. If you compare this system with other storage systems in the market, you will find the new N3400 is well stacked and can compete even with larger mid-tier systems. This box is ideal for our SMB clients who really need an all-in-one system with the horsepower to keep up with a growing company. The system is a long way from the first entry-level system IBM rolled out, the N3700. If the two were compared, the N3700 would be a 'Happy Meal' and the N3400 would be a super-sized 2 lb Angus burger with fries and a shake, maybe even an apple pie.
This new system is considered ideal for Windows consolidation and virtual environments alike. With the additional ports, the system gains a longer life span, as the new EXN 3000 SAS shelves are becoming the standard for the N series product line. The system, on the other hand, does not support 10 Gbps cards or FCoE as the N3600 does. But as all N series systems run the same Data Ontap code, this robust system uses the same commands and interface, and is built on the same technology, as the N60x0 and N7x00 lines.
Overall, this is an enhanced refresh of the existing N3300, with more ability to scale with current technologies. The performance will exceed the N3600's, which begs the question of the need for the N3300/N3600 systems. I suspect that as Data Ontap 8 becomes generally available from Netapp, there will be more entry-level storage devices released.
For more information on the N3400 and all other N series related information, follow this link or contact your local IBM Storage Rep.
IBM Storwize® V7000 Unified stores up to five times more unstructured data in the same space with integrated Real-time Compression
Today IBM announced an enhancement to its compression: not only block data on the V7000, but now also file data on the V7000 Unified. The V7000 first gained compression back in the summer, with a big announcement surrounding "Smarter Storage". This optimization uses the same code and engine that was purchased from a company named Storwize a few years ago.
IBM initially kept selling the compression appliance that Storwize was first known for in the market. It uses LZ compression with RACE (the Random Access Compression Engine) to provide optimized real-time compression without performance degradation, slowing data growth and reducing the amount of storage to be managed, powered, and cooled.
The compression does not require compressing or decompressing entire files to access a data block. The engine compresses and decompresses the relevant data blocks "on the fly". As data is written, the RACE engine compresses it into a smaller chunk, and it is 100% transparent to systems, storage, and applications.
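The general idea of block-level compression, as opposed to whole-file compression, can be sketched in a few lines of Python using zlib (this is only an illustration of the concept; the actual RACE engine uses its own algorithms and on-disk layout):

```python
import zlib

BLOCK = 4096  # assumed block size, for illustration only

def compress_blocks(data):
    """Compress each block independently, so any one block can be read
    back later without decompressing the rest of the file."""
    return [zlib.compress(data[i:i + BLOCK])
            for i in range(0, len(data), BLOCK)]

def read_block(compressed, block_no):
    """Random access: decompress only the requested block."""
    return zlib.decompress(compressed[block_no])

data = (b"A" * BLOCK) + (b"B" * BLOCK)
packed = compress_blocks(data)
print(read_block(packed, 1) == b"B" * BLOCK)  # True
```

Because each block stands on its own, a one-block read never pays the cost of inflating the whole file, which is what makes this approach viable in the data path.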
The V7000 Unified can now deliver a larger compressed platform than any other mid-range platform. With compression percentages around 75%, a system that maxed out at 2.8 PB (960 drives x 3 TB each) can now handle up to 5 PB of storage.
Each V7000 Unified with code base 6.4 has the option of turning on a 45 day trial of the compression software. After setting the license to “45” then you can add new compressed volumes on the system. You can also compress data on virtualized storage arrays.
Compression has been part of NAS for a very long time; we have seen compression of files from JPEGs to office documents. But the best part is that the end user never has to worry about which files need to be zipped or compressed. Everything that comes through the V7000 Unified can be compressed inline before the data is written to disk.
A couple of other improvements IBM announced: the addition of an integrated LDAP server to the V7000 Unified, which now allows customers to use both local authentication and external authentication servers to control access to data, and the ability to upgrade a V7000 to a V7000 Unified in the field. If you currently own a V7000 but need to add file access to the system, IBM will sell you the two file modules and corresponding software to upgrade your system. Mind you, there is a list of requirements that will need to be met, so check with your local storage engineer for more information. And finally, we now have support for a 4-way cluster on the V7000 Unified. This allows more disks to be provisioned and can compete with some of the other mid-range storage platforms in the market.
This all together makes a nice round of improvements that will make life easier for IBM customers. As the V7000 platform matures it looks like IBM is putting their money where their mouth is and making storage smarter and more efficient. More to come on this platform as I suspect we will see bigger things down the road.
Labor Day has come and gone, and there isn't another holiday between now and Thanksgiving. The only consolation is the hope that your favorite football team (both American football and what we call soccer) has a great weekend match and you get to celebrate with the beverage of your choice.
During your work week, which can and sometimes does include weekends, all you hear is that there is no more money to do the things you have to do to keep the business running. If you have kept up with squeezing more out of your systems with virtualization, that's great, but your network is now overtaxed. The staff who used to take care of certain aspects of the day-to-day running of your data center have been let go, and their jobs have been 'given' to you with no thought of compensating you for the extra tasks.
The Earth is warming, the weather is out of control, and the price of gas is so high that you decide to bike to work to help save the planet. You spend more time on the road commuting and look like you need a shower when you get to work after dodging traffic all morning. Your coffee is priced higher now because the coffee house wants to use Fair Trade coffee from farmers in a country you have never been to. And your dog is on anti-depressants because you are not home as much, and he can't go out in the yard because of the killer bees migrating north from Mexico.
Our lives seem to be getting more complicated, and it's nice when we find things that not only help us but are easy to use. When you come across these items, they make such an impression that you want to tell others about your good fortune. I came across a solution that was so easy to use, and whose value was so great, that at first I didn't believe the whole story.
About a year ago, I was asked to help out on the Storwize/Real Time Compression (RTC) team as it transitioned into the IBM portfolio. I met with the engineers and sales people, and all had wonderful things to say about the technology. I listened, but was hesitant to drink all of the Kool-Aid they were pouring.
A year later, I am very much a believer in the RTC technology and think it really could be a game changer in the market. If you keep up with IDC, Gartner, and the other analysts, they all point to compression of data as one of the bigger levers for handling future growth. There are a lot of vendors that claim they can compress data, but it's not all done the same way.
One of the things that stood out from day one was the idea of using LZ compression in real time instead of deduplication. Coming from an N series (Netapp) background, I understood how deduplication works and where it is useful. But compression is a different ball game: now we can shrink data that isn't an exact duplicate of what came before. Given that Netapp has issues with block sizes and offsets, this is exactly what is needed in the market.
The next question I always get, and one I had myself, was "That's great, you can compress data with the best of them, but what's the overhead?". I waited a long time to see what the performance numbers were going to be and found an astonishing outcome: the RTC appliance improved the performance of the overall solution. It helps by adding cache and processing power to the serving of data, but it also improves performance because the system has less data to process.
For example, if a system has to save 100 GB of data with no compression, then all of the data has to be laid out on disk: the spindles, cache, CPUs, and I/O ports all have to work to save 100 GB of data. But if we get 2:1 or 3:1 compression ratios, then all of those components work less. No longer are they saving 100 GB of data, but 50 GB or about 33 GB. This allows the system to process more data and have cycles to respond quicker to I/O requests (i.e., lower latency).
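The arithmetic is simple enough to sketch (a hypothetical helper, just to show how the ratio shrinks the work for every component in the data path):

```python
def physical_gb(logical_gb, ratio):
    # At an N:1 compression ratio, the disks, cache, CPUs, and I/O
    # ports only have to move 1/N of the logical data.
    return logical_gb / ratio

print(physical_gb(100, 2))  # 50.0
print(physical_gb(100, 3))  # ~33.3
```

Every gigabyte the compression engine removes is a gigabyte the rest of the stack never has to touch, which is where the latency win comes from.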
So the final thing is always the question of how hard this is to install. Is there a long wait, or do you need five IBM technicians to install it? All I have to say is: it's easy. So easy that there is a good YouTube video that goes through the entire process, from unpacking to racking to compressing data. I think the video speaks for itself:
So if you are back at work today and find your life swirling around you like a hurricane, stop and be reassured that there are a few things out there that can still make your life a little easier. It doesn't make the killer bees go away, but maybe it will give you peace of mind that your storage won't run out in the near future.
In answer to your requests for IBM N series demos, Andrew Grimes will be delivering a demo on Thursday, March 11th. This Introduction to IBM N series will be followed by a brief and informative demonstration of how N series delivers storage efficiency with disaster recovery solutions. This is your opportunity to demonstrate N series features and ease-of-use to your customers and prospects, plus get some assistance in closing business this quarter. All attendees who fill out the post-event survey will be entered into a drawing for a free Apple iPod.
WHEN: Thursday, March 11th, 10-11:30am CST.
PRESENTED BY: Andrew Grimes
Click here to REGISTER TODAY!
The topics that will be discussed during this N series presentation are:
1. Simplifying Data Management
2. Storage Efficiency
3. Protecting mission critical business applications (Oracle, Exchange, SQL, VMware & SAP) better than our competitors
4. Most importantly, see how we recover these applications in a matter of minutes!
Today IBM is releasing two new N6200 systems that will be a huge improvement over the existing N6000 systems. The two new systems will show a bump in capacity and performance, and more flexibility. For a very crowded midrange market, this new N series product set will bridge the gap between entry-level and enterprise-class systems.
The first thing that stands out to me is the footprint of the new system. The older N6000 requires 6U for an HA pair. The new N6200 is half the size, occupying only 3U for an HA pair, or for a single node with an I/O expansion module that provides an additional four PCIe slots. Another configuration is two controllers with two expansion modules in a total of 6U (equal to the older N6000 systems) but with a total of 12 PCIe slots (vs. 8 on the older N6000).
We recommend using the two slots built into the controller for high-performance 10GbE and/or 8 Gb FC adapters. The additional slots in the expansion module can be used for Flash Cache and other disk connectivity.
The on-board hardware is getting a face lift as well. While the new system sports a 10GbE port, it is used mainly for the interconnect and nothing else. This was one of the disappointments I have with this system, but I understand this is how Netapp will accomplish scale-out clustering.
FC ports were kept at 4 Gbps, but there are two new SAS ports with matching ACP (alternate control path) ports used for offloading some of the traffic from the data path.
One of the unsung updates was to the NVRAM. Instead of using the same memory as in the past, we now see improved memory using something called Asynchronous DRAM Refresh (ADR). This is a new self-refresh mode in the Intel chipset that allows a portion of main memory to be backed by an on-board battery. This gives the NVRAM the same high bandwidth as main memory and also simplifies the design of the motherboard.
This, along with the introduction of new Intel processors, gives the new N6200 systems a bump in performance. The SPECsfs benchmark on the highest N6200 system showed 101,183 ops at a 1.66 ms ORT, compared to the N6060's 60,507 ops at a 1.58 ms ORT, an improvement of about 70% in SFS throughput.
IBM is introducing the IBM System Storage N6210 and the IBM System Storage N6240. These new systems replace the IBM N3600 and N6040, respectively. GA is scheduled for January 28, 2011 (N6240) and February 25, 2011 (N6210). Here is the slide deck published with the release.
Now available is the IBM System Storage N series with VMware vSphere
Redbooks are a great way to learn a new technology, or a reference for configuration. I have used them for years, not just for storage but for x series servers and for software like TSM. The people who write these books spend a great deal of time putting them together, and I believe most of them are written by volunteers.
This is the third edition of this Redbook and if you have read this before here are some of the changes:
-Latest N series model and feature information.
-Updated the IBM Redbook to reflect VMware vSphere 4.1 environments
-Information for Virtual Storage Console 2.x has been added
This book on N series and VMware opens with an introduction to both the N series systems and VMware vSphere. There are sections on installing the systems, deploying the LUNs, and recovery. After going through this Redbook, you will have a better understanding of a complete and protected VMware system. If you need help sizing your hardware, there is a section for you. If you are looking to test how to run VMs over NFS, it's in there too!
One of the biggest issues with virtual systems is making sure you have proper alignment between the guest's file system blocks and the storage array. Misalignment will negatively impact the system by a factor of roughly 2 on most random reads/writes, as two back-end blocks are required for each request. To avoid this costly mistake, or to correct VMs you have already set up, a section in the book called "Partition alignment" walks you through the entire process of setting the alignment correctly or fixing older systems.
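The alignment check itself is trivial; the cost of getting it wrong is not. A quick sketch, assuming a 4 KiB array block size for illustration:

```python
ARRAY_BLOCK = 4096  # assumed storage-array block size, in bytes

def is_aligned(partition_start_bytes, block=ARRAY_BLOCK):
    """A partition whose start offset is not a multiple of the array's
    block size forces two back-end blocks per guest I/O."""
    return partition_start_bytes % block == 0

# Classic misaligned case: legacy MBR partitions start at sector 63
# (512-byte sectors), which lands mid-block on the array.
print(is_aligned(63 * 512))    # False -> roughly 2x back-end I/O
print(is_aligned(2048 * 512))  # True  -> modern 1 MiB alignment
```

This is why modern partitioning tools default to a 1 MiB starting offset: it is a multiple of every common array block size.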
Another area I will point out is the use of deduplication, compression, and cloning to drive the efficiency of the storage higher. These software features allow customers to store more systems on the storage array than traditional provisioning would. There is also coverage of using snapshots for cloning, mirrors for Site Recovery Manager, and long-term storage (aka SnapVault). At the end of the book are some example scripts one might use for snapshots in hot backup modes.
Whether you are a seasoned veteran or a newbie to the VMware scene, this is a great guide that will help you set up your vSphere environment from start to finish. The information is there; use the search feature, or sit down on a Friday with a highlighter, whichever fits your style, and learn a little about using an N series system with VMware.
Here is the link to this Redbook:
IBM N series and VMware vSphere
For more information on Redbooks go here!
I keep hearing how great our compression appliance really is and how quick and easy it is to setup. I did some asking around the office and was sent this video. It does look simple and I wish other products had this type of instructional video. If you want more information about RTC, check out the IBM RTC site here. Enjoy the video and if you like this and have a suggestion for another one let me know!
IBM has been working to enhance the way we do business from day one. From clocks to typewriters, mainframes, PCs, software, storage... the idea behind our innovation is to make it easier for our clients to do business. Now we are taking it one step further to help our clients make the world better.
If you have been watching standard TV, YouTube, or Hulu, you have probably seen a commercial for the IBM Smarter Planet initiative. These great adverts keep up the IBM tradition of marketing our message to the masses. They describe how IBM is making our world better by applying technology across many disciplines: healthcare, traffic, food, etc. If you dig a little deeper than the catchy ads, you see a real movement not only to 'save the planet' but to make our lives better.
One of the ways IBM is making our planet better is by increasing the utilization of our systems. Today's average commodity server rarely uses more than 6% of its available capacity. This holds true for our storage systems as well. We find storage systems bound by traditional technologies that keep us from keeping up with demand.
Looking at how this relates to my own work, I see how both SONAS and N series fit this mantra. These technologies allow clients to conserve energy by decreasing the amount of storage needed for typical installations.
N series software allows a client to oversubscribe a system by cloning volumes without adding additional space. This software, called FlexClone, allows clients to use products like VMware, Xen or Hyper-V to create zero-space copies of an original image. Zero consumption means the original blocks stay locked to the original image, and any new changes are written to free space as deltas. In a traditional storage system, 100 VMs built from a 10GB image would consume 1TB. With FlexClone, the only space needed for all 101 VMs (the original plus 100 clones) would be just the original 10GB plus the deltas, lowering both the OPEX and CAPEX for this system.
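The space math above is easy to sketch out. Using the numbers from the example (a 10GB base image and 100 clones), full copies grow linearly with the VM count, while zero-space clones share the base image and only consume space for per-VM deltas:

```python
# Back-of-the-envelope comparison: full image copies vs. zero-space clones.
# The 10GB / 100-VM figures come from the example above; the delta size
# per VM is an assumed illustrative parameter.

def full_copy_gb(image_gb, copies):
    """Space for traditional full copies: each VM duplicates the image."""
    return image_gb * copies

def clone_gb(image_gb, delta_gb_per_vm, copies):
    """Space for zero-space clones: one shared base image plus deltas."""
    return image_gb + delta_gb_per_vm * copies

print(full_copy_gb(10, 100))   # 1000 GB (~1 TB) for 100 full copies
print(clone_gb(10, 0, 100))    # 10 GB while the clones are unchanged
print(clone_gb(10, 2, 100))    # 210 GB once each clone diverges by 2 GB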
The IBM Scale Out NAS (SONAS) system is gaining steam as private cloud adoption increases in the business market. Not only are research universities and high-performance computing labs seeing the benefits, so are mid-market and enterprise business leaders. Typical storage systems are not utilized to their full potential because of the purpose of the system or how it was integrated into the data center.
With a SONAS system, we no longer have to think about how the system will be provisioned, as all of the equipment can respond to requests from multiple parts of the business. If you have five systems that provide storage to your business and one of them is struggling to keep up with demand, the only way to keep up with requests is to move data off by hand to the other systems. This is time consuming and could introduce mistakes and possible data loss. SONAS allows clients to be flexible in a dynamic, on-demand business environment. No longer will you have one system slowing down productivity, as all of the storage in a SONAS system can be distributed throughout the entire cluster behind a single client interface. This will increase your efficiency rates and lower the number of systems in your data center, lowering environmental costs and CAPEX/OPEX.
There are other storage systems that can increase utilization: Information Archive moves older data off to low-cost, slower disk, allowing you to store more on primary, faster disks. XIV keeps data spread throughout the entire system to protect against failures without traditional RAID overhead. We at IBM are constantly looking for ways to increase the utilization of our systems.
IBM is working hard to build a smarter planet that not only helps our clients but helps the human race. Whether through smarter storage systems, servers, software or consulting, IBM is working hard to bring this vision to realization. Take a look at your systems and take stock of their utilization. Could they be doing more for you? Find out more about the IBM Smarter Planet initiative here.
This week, I am at SNW in San Jose, CA. If you have never heard of the conference, it's all about storage and networking, and it pulls in all of the big vendors to put on labs, lectures and a vendor hall. People come from all over the world to this event to learn what is new and how to do things better.
One thing that I love doing at these events is talking to customers and potential customers about IBM storage technology solutions. Often we find the conversations are less about products than about the technology in them that fixes some sort of issue in the data center. I think this is best seen when you come into the IBM booth. There is no hardware to see, no blinking lights, no cables to yank. We have something better: people who know the solutions to your issues.
If you ask any of the IBMers who work these events, they will tell you it's a love-hate relationship. The hours are long and you stand on your feet for 4-8 hours. The best part is talking about IBM solutions and finding out what people are doing in the field. This is the best way to help drive innovation: listening to the customer. IBM has programs that send our developers into the field to listen to customers, and this is just one example of that.
Another event at SNW this year was a gathering of the storage social media moguls. This is a vendor-neutral event and is open to everyone. It goes by the hashtag #storagebeers, and these gatherings have been happening all over the world. Last night was the largest storagebeers to date, and it was a who's who of this community. Even better than meeting the people you see on Twitter or those who write blogs was putting all of the vendor fighting behind us: just a group of people who work in the storage industry talking about whatever was on their minds. If you find yourself at an event like SNW or VMworld, check to see if there is a #storagebeers and go meet some really cool people.
If you are at SNW and want to come by for a chat, you will find me at the IBM Booth today between 11am and 3pm. I would love to spend some time learning about what you are doing in the data center.
RichardSwain 060000VQ8G Tags:  nas ibm "tom trainer" sonas cloud chris_mellor ddn storage 9,184 Views
I just read the blogs from Chris Mellor of The Register and Tom Trainer of Network Computing and thought about how insightful these two outsiders are about the inner workings of IBM.
First off, yes, IBM is no longer selling the DCS9900, a rebranded DDN OEM system in the very large IBM storage portfolio. There is no question that this product is no longer available after October 15.
Second, the DCS3700 is already part of our portfolio and is an OEM box from NetApp/Engenio/LSI. The density of this system is the same as the DCS9900's, which makes the DCS3700 a sensible replacement for the DCS9900.
Third, Tom’s blog calling SONAS a monolithic NAS system is very skewed. SONAS is very flexible in the way we can scale both storage and throughput without one affecting the other. With most “scale out” systems, you have to scale both in order to keep up with demand. SONAS uses some of the best technology on the market, with a huge amount of throughput.
His statement about IBM dropping DDN from SONAS is untrue and goes to show how little research Tom put into writing this blog. I am sure Tom is looking to write an unbiased blog for Network Computing, but maybe those days at HDS still influence his tendency to look at an announcement letter and extrapolate about other products.
Finally, if HDS thought BlueArc was so great, why didn’t they buy the company back when they could have gotten it for a better deal? Has the product changed that much since 2006? I wish HDS only the best in dealing with the transition and getting that product under the HDS umbrella.
If you do your homework and base your assumptions on facts instead of conjecture, you will find SONAS is a solid platform in the enterprise NAS market. SONAS has proven it can be the market leader with a low cost-to-performance ratio, and it will only get better as time goes on.