I sneaked in a little early to listen to the speakers practice. Al Zollar is a hoot!
After the general session today, I wanted to update everyone. What a great release session!
IBM released a new Data Ontap version last Friday, along with some other minor releases, but more about those later. Data Ontap 8 7-mode is the first release of a new 64-bit architecture that allows N series customers to take advantage of larger aggregates.
A little history. About 8 years ago, Netapp purchased a company named Spinnaker for their 64-bit code, global namespace and some other odds and ends. For the most part, Netapp had been re-branding this code as their GX platform, letting customers who wanted that feature set purchase it separately from the Data Ontap base. GX was not a heavy seller, as it was complicated to install and much pricier than the standard product, so Netapp decided to merge the two code streams into one. At first glance this sounds like a good idea. The Data Ontap code definitely had some limitations (small aggregate sizes, limited growth and no global namespace), but merging the two streams was harder than Netapp imagined. This showed in Netapp promising a release of the new merged code for years before a release was finally available for testing.
There were many bugs (as is common with RC code), but Netapp worked through the majority of them to produce a stepping-stone release of the merged code called 7-mode. The developers used bits and pieces of the GX code to get the 64-bit architecture, allowing customers to build larger aggregates, up to 100TB in size. This was really important because the 2TB SATA drives were coming, and the old limitation of 16TB per aggregate would have killed performance on the system. With only eight 2TB drives in an aggregate, throughput would be limited to about 400 IOPS per 16TB of drive space, not a good ratio at all. Therefore, a larger aggregate size allows up to fifty 2TB drives, achieving a more respectable 2500 IOPS per aggregate.
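As a quick sanity check on those numbers, here is the back-of-the-envelope math. The roughly 50 IOPS per 2TB SATA drive is my assumption to match the figures above; real per-drive numbers vary with workload.

```python
# Rough aggregate IOPS: spindle count x per-drive IOPS.
# ~50 IOPS per 2TB SATA drive is an assumed figure for illustration.
IOPS_PER_SATA_DRIVE = 50

def aggregate_iops(drive_count):
    return drive_count * IOPS_PER_SATA_DRIVE

print(aggregate_iops(8))   # ~400 IOPS from a 16TB (8 x 2TB) 32-bit aggregate
print(aggregate_iops(50))  # ~2500 IOPS from a 100TB 64-bit aggregate
```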
Now that 7-mode is available, there are some upsides and some downsides. First, as stated above, aggregate sizes have increased tremendously. Allowing more data disks in the aggregate increases the number of IOPS the filer can pool. On the downside, you cannot simply flip a switch and convert an aggregate created in the old 32-bit code to a new 64-bit aggregate. Customers will have to create a new aggregate after upgrading to the 7-mode version of Data Ontap 8 and then migrate with some restore method (think DR restore from backup) onto the new space. You cannot mirror the two, as SnapMirror can only mirror between like-for-like aggregates (32-bit to 32-bit and 64-bit to 64-bit). No big deal if you are a new customer or if the filer is a new addition to the filer farm, but for existing customers I believe this will be a lot tougher. If you do not have the drive space to create a new aggregate of 100TB or less, you will have to either wait to buy more disks or do a manual backup (not a snapshot), destroy the existing aggregate, build a new aggregate on the 64-bit code, then restore. This, plus the fact that this is the first release of the new code family, is why customers will not adopt the new code very quickly.
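To make the constraint concrete, here is a small conceptual sketch (my own illustration, not any Data Ontap API) of the like-for-like rule and the two migration paths described above:

```python
# Conceptual sketch only; not an actual Data Ontap interface.

def snapmirror_allowed(src_format, dst_format):
    """SnapMirror is like-for-like: 32-bit to 32-bit, or 64-bit to 64-bit."""
    return src_format == dst_format

def migration_path(have_spare_disks):
    """The two upgrade paths for existing 32-bit aggregates."""
    if have_spare_disks:
        return "create a new 64-bit aggregate, then restore data onto it"
    return "backup externally, destroy the aggregate, recreate as 64-bit, restore"

print(snapmirror_allowed("32-bit", "64-bit"))   # False: why a plain mirror fails
print(migration_path(have_spare_disks=False))
```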
There are also some other gotchas, like no support for Performance Acceleration Modules (PAM II) and no real interoperability between the two code bases, among others. When I was an administrator, I hated having to read the release notes for the fine-print gotchas, but in this case I encourage everyone to read the notes thoroughly and perhaps engage your local IBM Storage engineer to help you assess whether you are a good candidate to upgrade or not.
The fact that this is a stepping stone to the full code line does help customers who need to move to the 64-bit architecture today without slowing down Netapp's development team. They are working on the next release of Data Ontap 8, called cluster mode. This will be the code that allows customers to cluster more than one pair of systems under one global namespace. I suspect it will be a great addition to the Data Ontap code line and will give Netapp more traction in the larger enterprise business.
There were also some firmware releases for the EXN3000 shelf on Friday. For more information on what was released, visit the IBM support page at www.ibm.com.
Well, the last two days have been crazy, with really good sessions, lots of networking and great discussions throughout the entire conference. The sessions have been well attended and people are asking great questions. For the most part, I hear that everyone is learning from the sessions; I just hope they don't get overloaded with so much information.
Today I presented on PAM II technology for the N series system. We discussed the need for large read-cache systems and how it's not only the size of the disks that is driving this need, but also the business asking for lower return times on data. During this session, a question was brought up about the new acquisition of Storwize and how that would affect the NAS solutions at IBM.
Here is IBM VP of Storage, Doug Balog, talking about the product.
I think it's going to be a good product to put in front of our NAS systems, and it will drive the heavy read-cache systems like PAM II and the huge amounts of cache in the SONAS systems. Speaking of Storwize, I wanted to give everyone a little more information about this product and maybe why IBM purchased them. They provide real-time compression technology that reduces storage needs by compressing the data as it is written. Their engine, the Random Access Compression Engine (RACE), is a compression algorithm that does the conversion with minimal overhead. The Storwize appliance will work with popular NAS systems, including IBM N series and SONAS, as well as non-IBM NAS systems from EMC, HP, NetApp and others. Storwize real-time compression can provide added value to clients already using data deduplication, thin provisioning and other storage efficiency technologies.
For more information on the StorWize acquisition, go here for the press release.
I am at the IBM Storage University this week, hoping to spread the good word about NAS technology at IBM. The opening session was awesome, and SONAS was mentioned a couple of times as part of the IBM Storage strategy. Listen below to a few remarks (short clip) from IBM VP of Storage Doug Balog.
My session on NAS technology was well attended and people asked thoughtful questions. We talked about the N series and a couple of new features we have been adding throughout the year. Then we talked about the SONAS platform, which I think is one of the hottest topics being discussed here this week. I also worked in the solution center where all of the vendors set up booths. Even Netapp, a platinum sponsor, came with a very large booth this year, right at the door. I didn't get a chance to talk to that team afterward, but I hope they were able to speak to a lot of people here about N series.
I had a ton of people coming by and asking about SONAS, and not just what it is, but how it can help them.
Today there are some great sessions that I am hoping to attend. One is an N series client from IBM talking about managing the largest AGFA PACS solution in the Americas. Then there is my session on ILM/HSM in the SONAS system. I am hoping we will have a great turnout for that! There are so many sessions that I want to attend, I need to clone myself so that I can get to them all.
Just a quick note on two new Solution Briefs that were posted to the IBM SONAS front page. These briefs have important information on using Symantec's Endpoint Protection and NetBackup to protect data on the SONAS system. As always, the devil is in the details, but this gives SONAS clients a look into how we can utilize these two solutions with the very powerful SONAS system.
SONAS with Symantec EndPoint
SONAS with Symantec NetBackup
I am also headed up to the IBM Storage University to present on NAS technology at IBM. If you get a chance to stop by and see one of my sessions, please come up and give me feedback on the blog (and other things). I will also be in the Solution Center at the SONAS booth.
Here is a YouTube video of a Windows Explorer VSS and IBM SONAS snapshot demonstration by Roshan Ratnayak.
Move that File! You know that show where people are moved out of their old house, an army of contractors comes in and builds a new house, then the people come back and are astonished at their new home. I was watching an older episode the other night and realized how much this improves a family's mobility, productivity and state of mind. While their old house was OK and provided somewhat of a shelter, the new house was 100x better.
I think of SONAS in the same way. There are many ways to do NAS technology. Some take time to develop and build, but others are just as effective with little to no planning. I was talking to a client the other day and his response to NAS was to put NFS servers in all of their locations. It's cheap and something they can repeat like a cookie cutter many times over. What he was not taking into his planning was administering all of those islands of storage, and how much he was spending on data sitting on expensive disk. If he were able to consolidate those servers and have a way of moving data around, and eventually off to the greenest storage medium out there (tape), how much money and time would that save him? He didn't have an answer, but we are working on a plan for him today.
IBM announced yesterday that SONAS version 1.1.1 will now support ILM tiering with GPFS and moving data off to tape using Tivoli Storage Manager (TSM) HSM. These two work in concert with the policy manager on the SONAS system to move data in and out of pools based on metadata properties. As discussed in previous posts, SONAS separates the metadata, which allows the scan engine to pass the needed data on to the ILM or TSM agents. These agents then move data between the pools, allowing the client to free up space on valuable spinning disks.
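Conceptually, a policy-driven migration pass looks something like the sketch below. This is purely illustrative Python; SONAS expresses these rules in the GPFS policy language, and the pool paths and 90-day threshold here are made-up examples.

```python
import os
import shutil
import time

# Hypothetical pool locations; real SONAS pools are GPFS constructs,
# not plain directories.
FAST_POOL = "/sonas/pool_fast"
NEARLINE_POOL = "/sonas/pool_nearline"
AGE_THRESHOLD_DAYS = 90  # example policy: migrate files untouched for 90 days

def migrate_cold_files(src_pool, dst_pool, age_days):
    """Scan file metadata (access time) and move cold files to a slower pool."""
    cutoff = time.time() - age_days * 86400
    for dirpath, _dirs, files in os.walk(src_pool):
        for name in files:
            path = os.path.join(dirpath, name)
            if os.stat(path).st_atime < cutoff:  # not accessed since cutoff
                dest = os.path.join(dst_pool, os.path.relpath(path, src_pool))
                os.makedirs(os.path.dirname(dest), exist_ok=True)
                shutil.move(path, dest)

migrate_cold_files(FAST_POOL, NEARLINE_POOL, AGE_THRESHOLD_DAYS)
```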
If you are one of the people who says tape and tiering are not needed, then think about putting data that hasn't been touched on a medium that costs $0.03 per GB. It's not that your storage isn't good, or that you don't need tiers for high performance, but what if the only data on the system were data that was actively being used, and not my old spreadsheet from 2009?
Along with the ILM announcement, IBM released the following with version 1.1.1:
If you want more information on the IBM Storage release announcements click here.
There is always a part of the business that gets overlooked, and usually it's the people in the trenches making things work and keeping those machines going. I recently had the pleasure of spending some time with three great IBM CEs here in the Raleigh, NC area. I was impressed with their professionalism and thoroughness while working on the SONAS upgrade. They made sure everything was installed, cabled and labeled according to the documentation. It is one thing to have a great product and lots of features, but it is even more important to have people who can service the system and do it with the highest level of craftsmanship. Thanks, guys and gals, for everything you do to help make our job easier!
Today, I helped our local Client Engineers install a couple of new nodes and some more storage into a local SONAS system. This was exciting for me, as I love working with the hardware and software and it keeps up my keyboard skills. This client is bringing on more demand and needs both horsepower (interface nodes) and storage to accommodate a new business line. I was amazed at how easy the system is to upgrade, and now their little starter rack is almost full.
We added two interface nodes (IBM System x3650 M2 servers) and two 60-disk shelves to the unit. Once the disks are online and presented up to the interface nodes, they can start creating shares for the new operation. As they need more storage or more interface nodes, another rack will be put in, and the same process of pooling the resources together will repeat.
The idea of having multiple interface nodes and storage pools is to have no single points of failure. In traditional storage, if a controller goes down, its partner has to pick up the entire workload for the downed hardware. Not so in SONAS: if a node goes down, the work is evenly spread across all of the other nodes in the system. This is why we do not have a problem with losing CIFS connections when systems go down.
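A little arithmetic (with made-up node counts) shows why spreading the failover load matters:

```python
# Illustrative only; assumes the failed node's work redistributes evenly,
# which is the SONAS behavior described above.

def extra_load_per_survivor(total_nodes):
    """Percent load increase on each surviving node when one node fails."""
    return 100.0 / (total_nodes - 1)

print(extra_load_per_survivor(2))  # 100.0: a traditional pair's survivor doubles its work
print(extra_load_per_survivor(6))  # 20.0: each node in a 6-node cluster absorbs far less
```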
The addition of new storage is also interesting, as we are tripling the amount of storage the base system had originally with two 4U shelves. These shelves are highly dense, top-loading containers using either SAS or SATA disks. In this instance, we were installing 120 2TB SATA drives, a total of 240TB in 8U of space. Not too shabby.
At the end of the day, I was pleased to see that IBM is moving forward with smarter storage systems. If you look at the entire portfolio, systems like the XIV grid, the auto-tiering on the DS8700 and SVC virtualization are all helping our goal of a Smarter Planet. Look for some more pictures and maybe a video on Monday.
IBM has been working to enhance the way we do business from day one. From clocks to typewriters, mainframes, PCs, software and storage, the idea behind our innovation is to make it easier for our clients to do their business. Now we are taking it one step further to help our clients make the world better.
If you have been watching standard TV, YouTube or Hulu, you have probably seen a commercial for the IBM Smarter Planet initiative. These great adverts keep up the tradition of IBM marketing our message to the masses. They describe how IBM is making our world better by using technology across many disciplines: healthcare, traffic, food, etc. If you dig a little deeper than the catchy ads, you see a real movement not only to 'save the planet' but to make our lives better.
One of the ways IBM is making our planet better is by increasing the utilization of our systems. Today's average commodity server rarely uses more than 6% of its available capacity. This holds true for our storage systems as well. We find storage systems are bound by traditional technologies that keep us from keeping up with demand.
Looking at how this relates to my own work, I see how both SONAS and N series fit this mantra. These technologies allow clients to conserve energy by decreasing the amount of storage needed for typical installations.
N series software allows a client to oversubscribe a system by cloning volumes without adding additional space. This software, called FlexClone, allows clients to use products like VMware, Xen or Hyper-V to create zero-space copies of the original image. This zero consumption keeps the original blocks locked to the original image, and any new changes are added to free space as a delta. In a traditional storage system, a 10GB image would consume 1TB for just 100 VMs. With FlexClone, the only space needed for all 101 VMs would be just 10GB, lowering both the OPEX and CAPEX for the system.
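The savings in that example fall straight out of the arithmetic (assuming the per-VM deltas stay negligible):

```python
# Worked example from the paragraph above; deltas assumed to be near zero.
image_gb = 10
clone_count = 100

traditional_gb = image_gb * clone_count  # a full 10GB copy per VM
flexclone_gb = image_gb                  # clones share the original blocks

print(traditional_gb)  # 1000 GB (~1 TB) for 100 full copies
print(flexclone_gb)    # ~10 GB for the original plus 100 clones
```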
The IBM Scale Out NAS system (SONAS) is gaining steam as private cloud adoption has increased in the business market. Not only are research universities and high-performance computing labs seeing the benefits, so are mid-market and enterprise business leaders. Typical storage systems are not utilized to their full potential because of the purpose of the system or how it was integrated into the data center.
With a SONAS system, we no longer have to think about how the system will be provisioned, as all of the equipment can respond to requests from multiple parts of the business. If you have five systems that provide storage to your business and one of those systems is struggling to keep up with demand, the only way to keep up with the requests is to move data off by hand to the other systems. This is time consuming and could introduce mistakes and possible data loss. SONAS allows clients to be flexible in a dynamic, on-demand business environment. No longer will you have one system slowing down productivity, as all of the storage in a SONAS system can be distributed across the entire set of interface nodes. This will increase your efficiency rates and lower the required number of systems in your data center, lowering environmental costs and CAPEX/OPEX.
There are other storage systems that can increase utilization. Information Archive moves older data off to low-cost, slower disk, allowing you to store more on primary, faster disks. XIV keeps data spread throughout the entire system in case of a failure, with no traditional RAID overhead. We at IBM are constantly looking for ways to increase the utilization of our systems.
IBM is working hard to build a smarter planet that not only helps our clients, but helps the human race. Whether through smarter storage systems, servers, software or consulting, IBM is working hard to bring this vision to realization. Take a look at your systems and take stock of their utilization. Could they be doing more for you? Find out more about the IBM Smarter Planet initiative here.
I was working with a company today that had five storage vendors supplying gear for their data center. They were interested in SONAS not only for the scalability but for the idea that we could consolidate their storage footprint from five vendors to one without having to sacrifice performance. Now, most vendors would say, sure, we can rip and replace all of the storage there with our own, but can they do it at the same scale (both horizontal and vertical), with a high-performance engine like GPFS, one global namespace and one single management tool? The customer was really impressed by our solution and is very interested in how we can help him go from 40+ racks of storage down to 13 and increase his efficiency.
The idea is to use multiple tiers: metadata on the fastest disk, currently accessed data on medium-speed disk, and near-archive data on slow, fat disk. Currently they have no way of moving data between their five different systems. This causes them to keep running into situations where they are buying expensive disk to run applications and databases while stagnant or less important data sits on the existing fast disks. SONAS will allow them to create pools of storage (tiers) and policies to move the data between the pools. Then, with the HSM integration, the data can be archived off to tape until it is needed down the road.
From a cost perspective, the size and speed of each disk tier match its purpose, without anyone having to move the data manually. For IT departments that have huge growth in storage and are not hiring more admins, this makes sense. They can set the policies up front and not have to worry about some old file sitting on an expensive SAS disk. The other savings is in support contracts. The reduction from five support contracts to one will save customers money and gives them one place to get all of their support.
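To put rough numbers on that, here is a sketch using the $0.03/GB tape figure from earlier and a made-up price for fast SAS disk; the 80% cold-data share is also an assumption.

```python
# All prices and ratios are illustrative assumptions, not quotes.
COST_PER_GB = {"fast_sas": 3.00, "tape": 0.03}

data_gb = 100_000        # 100TB footprint
cold_fraction = 0.8      # assume 80% of the data is rarely touched

all_on_sas = data_gb * COST_PER_GB["fast_sas"]
tiered = (data_gb * (1 - cold_fraction) * COST_PER_GB["fast_sas"]
          + data_gb * cold_fraction * COST_PER_GB["tape"])

print(all_on_sas)  # 300000.0: everything on fast disk
print(tiered)      # 62400.0: hot data on SAS, cold data archived to tape
```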
I cannot wait to start work on this account, as it looks like we will be putting in a great system and helping a client save money.
and emerging markets; expanding its customer base in storage in particular has been a big
You can find the full write-up here:
I was on my way down to Miami today, talking to the gentleman sitting next to me about storage technology, and the conversation turned to how everyone is scrambling to be in the cloud business. He had heard multiple vendors come in and start talking about cloud technology and how it was going to save him money, time and effort. This gentleman worked for a retail chain that has multiple district offices throughout the eastern US and headquarters in Atlanta. He has multiple technologies all helping him keep the business running, but nothing was planned; as the company grew, they simply cookie-cuttered the previous installation and planted it into the new office. Each office would also replicate back to HQ, which was the main repository for backups and restores. I would guess there are thousands of companies out there with similar setups.
So instead of going into how he could leverage cloud storage technology, I asked him what his problems were and listened. They basically came down to this:
1. Multiple independent islands of storage that are aging, causing his support contracts to go up.
2. Backups take way too long, and systems are slowing down as they get closer to 'capacity'.
3. Future growth was expensive, as every time they added new capacity, they had to add entire systems.
Now, they were not cutting-edge technology leaders, nor did they want to be, but he needed a way to solve some of these traditional storage problems. He didn't want to go out and buy a new large system that would take forever to get in and, while it might solve his problems, would bring in even more issues. What he needed was less overhead and more throughput.
We sat there for a while thinking, not saying much, until I offered this tidbit: "So what does cloud mean to you?" After a nice laugh, he stated that he really didn't know, and the more he read, the more 'cloudy' the answer became.
There are many interpretations of what cloud really is, and it differs between storage vendors. If there is a true definition of what cloud storage could be, I think it could be built on NAS technology. NAS tends to be a kinder, gentler protocol set, and the need for it is growing by leaps and bounds. Our traditional way of adding more systems and creating more independent silos works for smaller environments, but it does not scale when clients want large pools of storage under one umbrella. There are ways of making volumes span into large pools, but the underlying storage is still made up of smaller components that are typically active/active or active/passive nodes, and even the best load balancing will not help if you are overloading the system.
There are ways to find a balance between the same old way and going out and dropping tons of cash on huge storage gear. Find a system that will grow and scale as your storage needs do. Think of ways to keep everything under one umbrella (one namespace, for example) and also try to solve the issues you are having today with real technology, not workarounds.
With NAS technology, we will always be at the mercy of the backup target, whether it's disk or tape. Whether we are taking snapshots or NDMP backups, we have to write out to some target to have a restore point. That is the basic backup/restore strategy, so why not consider using different types of disk to create tiers and offload data to slower pools as it gets 'older'? A few vendors have said there is no need for tiering, mainly because their systems can't take advantage of it, and therefore they shun those who do. ILM tiering not only helps you achieve higher utilization rates on the storage, it puts the data that is accessed most frequently on faster disk and moves the rest away to make more room. Why pay for fast disk if the data on it is not being accessed frequently?
Future expansion has always been tough for administrators; they tend to overbuy on controller size and skimp on the disk. Systems like SONAS from IBM allow you to grow both storage capacity and server throughput, independently. If a customer needs more storage but doesn't need the additional throughput, why force him to add more controllers? SONAS systems can scale up to 30 storage servers and 14.4PB of spinning disk, all under one namespace. No more having multiple nodes with their own names, where this storage is called Accounting1, Accounting2 and so on. It is all just storage, and everyone gets the benefit of all of the nodes, not just one system.
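The independent-scaling point is easy to see with a toy model; the per-pod capacity and per-node throughput below are assumptions for illustration, not official SONAS specifications.

```python
# Toy model of independent scaling; numbers are illustrative assumptions.
POD_CAPACITY_TB = 120        # e.g. one 60-disk pod of 2TB drives
NODE_THROUGHPUT_MBS = 1000   # assumed throughput per interface node

def cluster_totals(pods, interface_nodes):
    return pods * POD_CAPACITY_TB, interface_nodes * NODE_THROUGHPUT_MBS

# Need more capacity? Add pods without touching the interface tier:
print(cluster_totals(pods=4, interface_nodes=2))  # (480 TB, 2000 MB/s)
print(cluster_totals(pods=8, interface_nodes=2))  # (960 TB, 2000 MB/s)
```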
By the time we had gone through all of this, our flight was landing. It was a great talk, and both of us gained a different perspective on how cloud is perceived. If any of you want more information about the IBM Cloud strategy or SONAS, go to the following links:
SONAS by IBM
This weekend I was working on moving some of my winter and spring/summer clothes in and out of my closet and into containers. Last fall I purchased a few sealed plastic containers so I could put my short-sleeve golf shirts and some of my shorts away. Here in North Carolina we can get a mild day, and it is nice to have a short-sleeve shirt to wear. On those days I would go back to the containers and dig through the nicely folded items until I found the shirt I wanted. Sometimes I had to go through multiple containers because I had forgotten which one I had put it in a few months ago. This weekend when I pulled out the containers, they were a mess; nothing was folded, and it took me more time trying to figure out what was what, as it was all mixed up. I then wondered: what if I bought a bigger container, and instead of using multiple ones, I could use one large container to store all of my winter clothes? What would the issues be? Would I have enough space to store the container? Would there be some way of indexing the clothing inside to quickly find what I was looking for? Was there a way to put some clothes that I would need in case of a cool day in a separate container, just in case?
There seemed to be more issues with using one larger container than I thought. It would be easy to dump all the clothes into the larger bin and claim victory, but that would not help me down the road. I needed a system, something to help me consolidate efficiently while still giving me access to those things I used on occasion. I also had to keep in mind the space I was going to use in my storage area. I didn't want to buy one large container and not be able to fit it in the space I had already allocated. I needed a flexible system, maybe a few labeled boxes I could get into quickly if I needed something inside.
Take a look at some of the noise storage vendors are making about data storage consolidation. Most of them are telling you they can come in, take your smaller boxes and dump the data into one big box. While that helps save space and might keep you from administering multiple storage devices, you need to look at the downside of having just one big pool of disks. A large storage system that replaces multiple smaller systems will need more cache and processor power to handle the same load as before. And if you want to move data around to different tiers of disk or tape, can you achieve that with the new system?
I started down the road of buying the biggest container I could find but decided against it, as it would be too much trouble to find things. Your data storage systems need to be flexible enough to have multiple storage pools, so that data can move off faster disk and make room for data that is accessed more frequently. This not only gives your clients better response times on files they frequently use, it tells you how much 'real' data people are using in your data center. The other issue I had was that I needed some type of labeling system, an index to tell me where the shirts were and where my ski jacket was. Your data is much the same: you need to keep up with where data lives in the storage system. As our storage systems get larger, we need to capture the file metadata easily and keep it in a table so we can run queries against it.
Then there is the part of moving my clothes around that I hated the most: the purge. I went through and found the shirts that had been worn too many times or no longer fit the way they did when I bought them. I packed these in a cheap cardboard box and took them to a donation center. This is the same as getting rid of old data in your system. Old data that is not being accessed is costing you money. You not only have to pay the environmental cost of keeping those bits spinning, it is taking up room where new data could reside, costing you money to expand. True archiving and purging of data will be needed for any system, large or small. Make sure you find a system that is easy to work with and automates this process based on policy.
In the end, if you are looking at consolidating your data storage, there are multiple things you will need to find out about a system. Just because a bigger container can replace multiple smaller containers does not mean it will give you the flexibility to meet your changing needs. For more information on a better way to consolidate your storage platform and move your data, check out the information on SONAS and TSM.
Sorry Bill, there is a new question burning in our minds today. There seems to be a lot of buzz lately about tiering your data storage: who can and who cannot, why and how. But not a lot of people are talking about when to tier your storage. Netapp has indicated they are not as concerned with a tiering approach, and the same is true for the IBM XIV product. Others, like 3PAR and IBM's SONAS, have it built in for clients to move data from one pool to the next. But how does one gauge this old standard of giving the best to the most demanding and the least to the dregs of our storage footprint?
In answer to your requests for IBM N series demos, Andrew Grimes will be delivering a demo on Thursday, March 11th. This Introduction to IBM N series will be followed by a brief and informative demonstration of how N series delivers storage efficiency with disaster recovery solutions. This is your opportunity to demonstrate N series features and ease-of-use to your customers and prospects, plus get some assistance in closing business this quarter. All attendees who fill out the post-event survey will be entered into a drawing for a free Apple iPod.
WHEN: Thursday, March 11th, 10-11:30am CST.
PRESENTED BY: Andrew Grimes
Click here to REGISTER TODAY!
The topics that will be discussed during this N series presentation are:
1. Simplifying Data Management
2. Storage Efficiency
3. Protecting mission critical business applications (Oracle, Exchange, SQL, VMware & SAP) better than our competitors
4. Most importantly, see how we recover these applications in a matter of minutes!
The old adage of faster, smaller, cheaper has been revived in the N series product line. This week IBM officially released the information around the highly anticipated OEM re-brand of Netapp's FAS2040: the N3400. This system has a small 2U form factor but delivers higher performance than its beefier brother, the N3600. If you want to see a full comparison of the three boxes, click here for more information.
IBM has three systems that round out the entry-level, or departmental, storage platform: the N3600, the N3300 and now the N3400. All three are based on internal drives, with expansion to a few shelves as needed. The N3600 comes with 20 internal drives, while the smaller N3300 and N3400 come with only 12 internal disks; the N3400 can expand to a maximum capacity of 136TB. Two controllers give administrators a high-availability solution at low cost. This makes the system even more attractive, as it also supports FCP, iSCSI, CIFS and NFS, all from one platform.
The N3400 does have a few things I want to point out:
All of these help set this box up for an important role within your datacenter. If you compare this system with other storage systems in the market, you will find the new N3400 is well stacked and can compete even with larger mid-tier systems. This box is ideal for our SMB clients who really need an all-in-one system with the horsepower to keep up with a growing company. The system is a long way from the first entry-level system IBM rolled out, the N3700. If the two were compared, the N3700 would be a 'Happy Meal' and the N3400 would be a super-sized 2lb Angus burger with fries and a shake, maybe even an apple pie.
This new system is considered ideal for both Windows consolidation and virtual environments alike. With the additional ports, the system gains a longer life span, as the new EXN3000 SAS shelves are becoming the standard for the N series product line. The system, on the other hand, does not support 10Gbps cards or FCoE as the N3600 does. But since all N series systems run the same Data Ontap code, this robust system uses the same commands and interface and is built on the same technology as the N6000 and N7000 lines.
Overall, this is an enhanced refresh of the existing N3300 with more ability to scale with current technologies. The performance will be better than the N3600's, which raises the question of whether the N3300/N3600 systems are still needed. I suspect that as Data Ontap 8 becomes generally available from Netapp, there will be more entry-level storage devices released.
For more information on the N3400 and all other N series related information, follow this link or contact your local IBM Storage Rep.
There are few times that I look at what a company markets as the 'next big thing' in the storage world and get the same reaction I got when I started learning about the SONAS product. There are already some technical details in the announcement and in Tony's blog from a few days ago, so I won't go into those today, but I will go over how this product really makes a paradigm shift in the NAS storage world.
Traditionally, NAS storage is looked at as the little brother to the bigger SAN systems. SAN systems tend to be the athletes of the storage high school, with their matching letter jackets and oversized girth. All the while, NAS was the band geeks: some frail and thin and some oversized, but always in large numbers and not very organized. NAS technology was born from the need to share data across the company, and as the amount of information grew, so did the servers, network bandwidth and backups. SAN storage is still the big guy on campus, but the people who track trends for our industry say NAS has become just as important as the large databases, ERP systems and the like.
If you look at how we have stored NAS data, it has been on single file systems with local disk drives shared out over a single 10/100Mb network. As storage systems became more advanced, we saw people using clustering, snapshots, thin provisioning, deduplication and replication to help keep our companies communicating. When we needed more throughput or more storage, we added a server or added disks, which created islands of unshared power.
Look at 2009, and one of the hottest buzzwords in the storage market was cloud computing: having a large source of power in one place to pull resources from without having to provision new equipment. We also saw more and more clients looking at NAS protocols, as Ethernet can support faster speeds than traditional Fibre Channel. A huge number of you have been looking at, or moving, your virtual environments to NFS to help cut down on administration overhead and to take advantage of CNA technology.
With higher demand for NAS technology comes the burden of scaling at the same rate that storage, network and throughput needs increase. Older NAS systems allowed clients to increase the amount of storage, but once you reached the maximum the system allowed, you had to purchase another clustered system. This creates multiple islands of storage pools that have to be managed, provisioned and backed up. Not a great solution for companies that are growing and have fewer administrators to do the work.
Now IBM has a product that allows our NAS clients to grow and scale as their companies grow. SONAS is a highly scalable NAS that works like a cloud. The underlying technology, GPFS, is the same found in some of the fastest computers in the world. SONAS scales in both storage and throughput by adding storage pods (60 SATA or SAS disks) or interface nodes (x3650 servers) like Lego blocks. All of this is managed by a central command module that gives a client full control over the entire system, no matter how much storage or how many servers are in it.
So the "Next big thing" in my opinion is here today and IBM is using the best of the best of IBM research for it's clients. The SONAS solution is designed from the ground up as a true blue NAS storage solution. Look for future SONAS blogs on GPFS, creating an ILM strategy and more.