May 9th has been a target on my calendar for some time now. Inside IBM, we have been waiting for this day to come so we could talk about the new things being released in the storage platform. It almost feels like Christmas morning with a bunch of new presents under the tree. Each gift holds something that is either really cool or very useful. The only difference is your Aunt Matilda and her little dog are not coming over for brunch.
Under the IBM tree today is a slew of presents for almost the
entire storage platform. I will
concentrate on just the IBM NAS ones but if you are interested in knowing what
is going on elsewhere, you can find more information at the main website.
SONAS must have been a good boy because there are plenty of
gifts for him under the tree this morning. Not only did he find presents under the tree
but there were a few little things in his stocking. Here is what Santa brought:
The first gift is a hardware update on the X3650 nodes. Just like before, the SONAS system uses this impressive workhorse, but now it uses the more powerful M3 class with a six-core 2.66GHz Intel Xeon processor. It has 24GB of DDR3 RAM with the option to increase to a total of 144GB of DDR3 RAM per interface node. Also new with the X3650 is the option to include a second processor, doubling the core count to 12 total per node.
Also under the tree is new back-end support: in addition to XIV, SONAS now supports the SVC and V7000 as disk subsystems. This is a huge gift because SONAS can now sit in front of tons of other storage through the virtualization of the SVC code. V7000 support is also interesting, as that platform runs the virtualization code from SVC but also supports its own drive architecture, including solid state drives.
In the same category as sweaters, SONAS gets a smaller rack extender. In the past, IBM used a 16-inch extender to accommodate the large 60-drive disk enclosure. That has now been trimmed down to only 8 inches, and to zero for the gateway model and the RXC rack that houses only interface nodes.
SONAS also gets a file system upgrade to GPFS 3.4 PTF4. This provides a significant performance improvement over the R1.1.1x release; the updated file system handles small-file and random I/O much more efficiently. With this update we now use dedicated manager nodes instead of interface nodes, gaining more flexibility in how we track data in cache.
Other gifts SONAS received were new support for NDMP, anti-virus support, use of both 10GbE ports on the same CNA, and some power updates for EU countries. Along with all of that, there is a new performance monitoring package called Perfcol that collects more information for analysis.
This SONAS release is labeled R1.2 and can be obtained by
contacting the technical advisor assigned to you.
Santa was also at the N series house and dropped off a few gifts: a new N6270 to replace the N6070. This new system is in line with the N6200 series, with larger amounts of RAM and more processing power. Just like the smaller N6240, there is an expansion controller where customers can add more PCI cards like HBAs, 10GbE, or even FCoE. A new disk shelf was also released which uses the smaller 2.5-inch drives with an improved back end.
And over at the Real Time Compression house they got new
support for EMC Celerra.
Overall, a very busy time of year for IBM (and Santa), as these were just a fraction of the Storage announcements today. Also today is the IBM Storage Executive Summit in New York City. My friend and fellow blogger Tony Pearson is covering this great event and will be updating his blog and Twitter feed. If you were not able to make it to NYC for the event, feel free to tweet him your questions @az990tony. You can also send questions to our IBM Storage feed at @ibmstorage.
Well, the last two days have been crazy, with really good sessions, lots of networking with tons of people, and great discussions throughout the entire conference. The sessions have been well attended and people are asking great questions. For the most part, I hear that everyone is learning from the sessions, though I hope they don't get overloaded with so much information. Today I presented on PAM II technology for the N series system. We discussed the need for large read cache systems and how it's not only the size of the disks driving this need, but also the business asking for faster access to data. During this session, a question was brought up about the new acquisition of Storwize and how that would affect the NAS solutions at IBM.
Here is IBM VP of Storage, Doug Balog, talking about the product.
I think it's going to be a good product to put in front of our NAS systems, and it will drive the heavy read cache systems like PAM II and the huge amounts of cache in the SONAS systems. Speaking of Storwize, I wanted to give everyone a little more information about this product and why IBM may have purchased them. They provide real-time compression technology that reduces storage needs by compressing the data as it is written. They have an engine called the Random Access Compression Engine (RACE), a compression algorithm that does the conversion with no noticeable overhead. The Storwize appliance will work with popular NAS systems, including IBM N series and SONAS, as well as non-IBM NAS systems from EMC, HP, NetApp and others. Storwize real-time compression can provide added value to clients already using data deduplication, thin provisioning and other storage efficiency technologies.
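To give a feel for what inline, lossless compression buys you, here is a minimal Python sketch using the standard zlib library. This is only an illustration of the general technique; RACE's internals are proprietary and nothing below is Storwize code.

```python
import zlib

# Typical NAS payloads (documents, logs, home directories) are highly
# repetitive, which is what makes real-time compression pay off.
data = b"quarterly sales report for the northeast region\n" * 1000

compressed = zlib.compress(data, level=6)  # compress on the write path
restored = zlib.decompress(compressed)     # decompress on the read path

assert restored == data  # lossless: the original bytes come back intact
print(f"original:   {len(data):>7} bytes")
print(f"compressed: {len(compressed):>7} bytes")
print(f"ratio:      {len(data) / len(compressed):.1f}:1")
```

Real data will not compress as dramatically as a repeated string, but the round-trip shape (compress on write, decompress on read, application none the wiser) is the same idea an inline appliance applies on the wire.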
When I first started working at IBM, we had a handful of NAS storage devices: the NAS 100, NAS 200, NAS 300(G) and the NAS 500. The NAS 100 was a 1U server appliance that ran Windows 2000, as did the NAS 200, all built on IBM hardware. The NAS 500 ran on an AIX system, also from IBM stock. They were traditional NAS-type systems, and IBM sold them as a 'let us build the system for you so you don't have to' offering. Somewhat limited in functionality, but they did the job they were designed to do: serve NAS data.
That same year, IBM decided to partner with a company that was doing some things in the storage market that looked really interesting. Network Appliance had just started gaining steam with their Data Ontap code (6.something, if I remember correctly) and offered what the IBM systems lacked: unified protocols from a single architecture and integration into other products like Exchange and SQL using their cool snapshot technology. It took some time to get up to speed on the new Netapp technology with snap-this and snap-that, but soon we were all talking about waffles and aggrs.
Throughout the years, the product set grew and so did the hardware offering. We kept up with the releases, and for the most part a 20-60 day lag in releasing new software was fine for most IBM customers. We partnered with the sales teams and support teams to help grow the N series customer base and to keep customers happy. As with any partnership, there were bumps along the way, and at times it seemed like two parents agreeing to disagree. All in all, the N series system has been very successful at IBM.
But as the years progressed, new technology like XIV, Real Time Compression, TSM FlashCopy Manager, etc., filled some of the voids previously filled by N series in the IBM portfolio. As with many companies, there are products that overlap, and N series overlaps with over half of the product line at IBM Storage. Positioning became harder as sales teams questioned when to sell N series and when to sell something "blue". We quickly learned that customers really liked what N series brought to the table and how flexible the solution could be.
Now, with the news of Netapp purchasing Engenio, I wonder how the relationship between IBM and Netapp will fare. IBM also rebrands the Engenio products as the IBM DS3k, 4k and 5k. I guess the bigger question is: what will Netapp do with that product line? If history is any indicator, they will simply keep things as they are for some time and slowly move the customers over to a Data Ontap product. The other question is how long IBM will keep sending money over to Netapp for products that we sell and support.
There is a demo coming up on January 20th that will show the integration of N series and VMware. The long-awaited Virtual Storage Console and Rapid Cloning will be the highlights of the demo. So what is VSC? It is N series software that enables administrators to manage and monitor the storage-side attributes of ESX/ESXi hosts. VSC functions as a plugin to vCenter and uses APIs to set and retrieve information from the array.
VSC adds a tab into vCenter and enables the following:
View Status of Storage Controllers
View Status of physical hosts, including versions and overall status
Check for the proper configuration of ESX settings as it applies to:
HBA driver timeouts
Provide the ability to set the appropriate timeouts on multiple ESX hosts simultaneously with a single mouse click
Launch FilerView from within VSC for storage provisioning
Provide access to mbrtools (mbrscan, mbralign, mbrcreate) to identify and correct partition alignment issues
Ability to set credentials to access storage controllers
Ability to collect diagnostics from the ESX hosts, FC switches and Storage controllers
Netapp, for some reason, has removed the SVC from their interoperability list of storage subsystems supported under a V series. The development team at Netapp has for months not kept up the development and testing for SVC support (and other storage platforms). This was never more evident than when the Storwize V7000, which runs the same code base as the SVC system, was announced last year and Netapp refused to offer any support for the product. The lack of support probably comes from the V series team feeling threatened by the virtualization power of the SVC code. The two systems do have some similar capabilities, but we find them in different parts of the data center. The V series / Gateway is more of a front end to another storage system: it treats the LUNs presented to it as disks and then presents another protocol out to hosts or clients. SVC is more of a virtualization engine for all the storage, allowing customers to move data around in pools that can cross storage subsystems without the end user knowing.
With all this said, IBM has stepped up and is continuing support for the N series and Netapp models in front of the SVC or the Storwize V7000. As my fellow IBM blogger "The Storage Buddhist" points out, the place to go for that support is not Netapp, but IBM. I stole this chart from his blog to show the levels of code and models supported.
IBM released a new Data Ontap version last Friday, along with some other minor releases (more about those later). Data Ontap 8 7-Mode is the first release of a new 64-bit architecture that will allow N series customers to take advantage of larger aggregates.

A little history. About 8 years ago, Netapp purchased a company named Spinnaker for their 64-bit code, global namespace, and some other odds and ends. For the most part, Netapp rebranded this code as their GX platform, letting customers who wanted that feature set purchase it alongside the Data Ontap base. GX was not a heavy seller, as it was complicated to install and much more expensive, so Netapp decided to merge the two code streams into one. At first glance this sounds like a good idea. The Data Ontap code definitely had some limitations (small aggregate sizes, limited growth and no global namespace), but merging the two streams was harder than Netapp imagined. This showed in Netapp promising a release of the merged code for years before a release was finally available for testing. There were many bugs (as RC code can have), but Netapp worked through the majority of them to produce a stepping-stone release of the merged code called 7-Mode.

The developers used bits and pieces of the GX code to get the 64-bit architecture, allowing customers to build larger aggregates, up to 100TB in size. This was really important because 2TB SATA drives were coming, and the old 16TB aggregate limit would have killed performance. With only eight 2TB drives in an aggregate, throughput would be limited to roughly 400 IOPS per 16TB of drive space, not a good ratio at all. Having a larger aggregate size allows up to fifty 2TB drives, achieving a more respectable 2500 IOPS per aggregate.

Now that 7-Mode is available, there are some upsides and some downsides. First, as stated above, the aggregate sizes have increased tremendously; allowing more data disks in the aggregate increases the IOPS the filer can pool. On the downside, you cannot simply flip a switch and convert an aggregate created in the old 32-bit code to a new 64-bit aggregate. Customers will have to create a new aggregate after upgrading to the 7-Mode version of Data Ontap 8 and then migrate with some restore method (think DR restore from backup) onto the new space. You cannot mirror the two, as SnapMirror only mirrors between like-for-like aggregates (32-bit to 32-bit and 64-bit to 64-bit). No big deal if you are a new customer or if the filer is a new addition to the filer farm, but for existing customers I believe this will be a lot tougher. If you do not have the drive space to create a new aggregate of 100TB or less, you will have to either wait to buy more disks, or do a manual backup (not a snapshot), destroy the existing aggregate, build a new aggregate on the 64-bit code, and then restore. This, and the fact that this is the first release of the new code family, is why customers will not adopt the new code very quickly. There are also some other gotchas, like no support for Performance Accelerator cards (PAM II), no real interoperability between the two code bases, and more.
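To make the spindle math above concrete, here is a quick back-of-the-envelope sketch in Python. The ~50 IOPS per 7.2K RPM SATA spindle figure is my assumption, a common rule-of-thumb planning number that matches the ratios in this post; real numbers vary by workload and drive.

```python
# Assumption: ~50 random IOPS per 7.2K RPM SATA spindle (planning figure).
IOPS_PER_SATA_SPINDLE = 50
DRIVE_TB = 2  # the 2TB SATA drives discussed above

def aggregate_estimate(aggr_tb):
    """Rough drive count and IOPS ceiling for an aggregate of the given size."""
    drives = aggr_tb // DRIVE_TB
    return drives, drives * IOPS_PER_SATA_SPINDLE

# Old 32-bit limit: a 16TB aggregate
drives, iops = aggregate_estimate(16)
print(f"32-bit, 16TB:  {drives} drives -> ~{iops} IOPS")   # 8 drives, ~400 IOPS

# New 64-bit limit: a 100TB aggregate
drives, iops = aggregate_estimate(100)
print(f"64-bit, 100TB: {drives} drives -> ~{iops} IOPS")   # 50 drives, ~2500 IOPS
```

Same drive type either way; the larger aggregate simply pools more spindles behind one container, which is where the 400-versus-2500 IOPS difference comes from.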
When I was an administrator, I hated having to read the release notes for the 'fine print gotchas', but in this case I encourage everyone to read the notes thoroughly and perhaps engage your local IBM Storage engineer to help you assess whether you are a good candidate to upgrade. The fact that this is a stepping stone to the full code line does help customers that need to move to the 64-bit architecture today without slowing down Netapp's development team. They are working on the next release of Data Ontap 8, called Cluster-Mode. This will be the code that allows customers to cluster more than one pair of systems under one global namespace. I suspect it will be a great addition to the Data Ontap code line and will give Netapp more traction in larger enterprise business. There were also some firmware releases for the EXN3000 shelf on Friday. For more information on what was released, visit the support page at www.ibm.com.
The old adage of faster, smaller, cheaper has been revived in the N series product line. This week IBM (officially) released the information around the highly anticipated OEM rebrand of Netapp's FAS2040: the N3400. This system has a small 2U form factor but delivers higher performance than its beefier brother, the N3600. If you want to see a full comparison of the three boxes, click here for more information.
IBM has three systems that round out the entry-level or departmental storage platform: the N3600, the N3300 and now the N3400. All three are based on internal drives with expansion to a few shelves as needed. The N3600 comes with 20 internal drives, while the smaller N3300 and N3400 come with only 12 internal disks and can expand to a maximum capacity of 136TB. There are two controllers that give administrators a high availability solution at low cost. This makes the system even more attractive, as it also supports FCP, iSCSI, CIFS and NFS, all from one platform.
The N3400 does have a few things I want to point out:
8GB of RAM (2x the amount in the N3600 and 4x the amount of the N3300)
512MB of NVRAM
2 integrated SAS ports and 8 total 1Gbps Ethernet ports
PCI-e port for expansion
All of these help set this box up for an important role within your datacenter. If you compare this system with other storage systems in the market, you will find the new N3400 is well stacked and can compete even with larger mid-tier systems. This box is ideal for our SMB clients who need an all-in-one system with the horsepower to keep up with a growing company. The system is a long way from the first entry-level system IBM rolled out, the N3700. If the two were compared, the N3700 would be a 'Happy Meal' and the N3400 would be a super-sized 2lb Angus burger with fries and a shake, maybe even an apple pie.
This new system is considered ideal for both Windows consolidation and virtual environments alike. With the additional ports, the system gains a longer life span as the new EXN3000 SAS shelves become the standard for the N series product line. The system, on the other hand, does not support 10Gbps cards or FCoE as the N3600 does. But since all N series systems run the same Data Ontap code, this system uses the same commands and interface, and is built on the same technology, as the larger N6000 and N7000 lines.
Overall, this is an enhanced refresh of the existing N3300 with more ability to scale with current technologies. The performance will be better than the N3600's, which begs the question of whether the N3300/N3600 systems are still needed. I suspect that as Data Ontap 8 becomes generally available from Netapp, there will be more entry-level storage devices released.
For more information on the N3400 and all other N series topics, follow this link or contact your local IBM Storage rep.
Every year IBM puts on a conference for all of our clients, business partners and strategic partners.
This conference has both Storage and X series sessions along with key note speakers from the top management at IBM.
People come from all over the world to this conference looking for the
'how to' answers and what's to come with the product lines. There is also a solution center that houses all of the products along with our sponsors. This year our top platinum sponsors are Cisco, Intel and Netapp. Other sponsors include Brocade, Emulex, Fusion-io, VMware, Red Hat and SUSE.
I plan to be working in the solution center at the SONAS booth talking
to clients about the benefits of SONAS and how it fits into their
environments. If you want to stop by, here are the hours I will be there:
Monday, July 18th: Solution Center Open 5:30 PM – 7:30 PM (w/ Networking Drinks)
Tuesday, July 19th: Sponsor/Exhibitor Only Lunch 11:15 AM – 11:45 AM; Solution Center Open 11:45 AM – 1:00 PM (Coffee & Dessert served in the Solution Center); Solution Center Open 5:30 PM – 7:30 PM (w/ Networking Drinks)
Wednesday, July 20th: Sponsor/Exhibitor Only Lunch 11:15 AM – 11:45 AM; Solution Center Open 11:45 AM – 1:00 PM (Coffee & Dessert served in the Solution Center)
I also will be presenting a few sessions on NAS technology here at the conference. Most of my sessions will be a look at what IBM is doing with SONAS, N series and Real Time Compression.
I have a NAS 101 class that I really love doing because there are so many people that have a misconception of what NAS is today. In my N series update session we will be talking about the latest releases, the N6270 and the EXN3500, as well as a peek at the R23 release coming in a few weeks. The other two sessions I am doing are a little off the topic of NAS: one on social media and one on using www.ibm.com for help.
Tony Pearson, John Sing and Ian Wright will be joining me on a panel to
discuss the roles we play in social media and what each of us thinks of
the future of social media. The support session is something a client suggested to me out of their frustration with finding documents on our support pages. Here is a list of the sessions and times at which I will be presenting:
7/18 - 1:00, sSN14, Storage Networking (NAS/SAN): NAS 101: An Introduction to Network Attached Storage
7/19 - 10:30, sSN15, Storage Networking (NAS/SAN): NAS @ IBM
7/19 - 1:00, sSN18, Storage Networking (NAS/SAN): IBM N series: What's New?
7/20 - 1:00, sGE10, General: Tips and Tricks on Searching for Support Answers on ibm.com
7/20 - 5:30, sGE61, General: Using Social Media in System Storage
7/21 - 10:30, sSN18, Storage Networking (NAS/SAN): IBM N series: What's New?
7/21 - 2:30, sSN15, Storage Networking (NAS/SAN): NAS @ IBM
If you are at the conference, feel free to come to any of my sessions. I would love to hear from you about the IBMNAS blog or any of my social media outlets. We are using a conference hashtag all week if you want to follow what is going on via Twitter.
I am at the IBM Storage University this week, hoping to spread the good word about NAS technology at IBM. The opening session was awesome, and SONAS was mentioned a couple of times as part of the IBM Storage strategy. Listen below to a few remarks (short clip) from IBM VP of Storage Doug Balog.
My session on NAS technology was well attended and people asked thoughtful questions. We talked about the N series and a couple of new features we have been adding throughout the year. Then we talked about the SONAS platform, which I think is one of the hottest topics being discussed here this week. I also worked in the solution center where all of the vendors set up booths. Even Netapp, a platinum sponsor, came with a very large booth this year, right at the door. I didn't get a chance to talk to that team afterward, but I hope they were able to speak to a lot of people here about N series.
I had a ton of people coming by and asking about SONAS, and not just what it is, but how it can help them.
Today there are some great sessions that I am hoping to attend. One is an N series client of IBM talking about managing the largest AGFA PACS solution in the Americas. Then there is my session on ILM/HSM in the SONAS system. I am hoping we will have a great turnout for that! There are so many sessions that I want to attend, I would need to clone myself to get to them all.
In answer to your requests for IBM N series demos, Andrew Grimes will be delivering a demo on Thursday, March 11th. This Introduction to IBM N series will be followed by a brief and informative demonstration of how N series delivers storage efficiency with disaster recovery solutions. This is your opportunity to demonstrate N series features and ease-of-use to your customers and prospects, plus get some assistance in closing business this quarter. All attendees who fill out the post-event survey will be entered into a drawing for a free Apple iPod.
The topics that will be discussed during this N series presentation are:
1. Simplifying data management
2. Storage efficiency
3. Protecting mission-critical business applications (Oracle, Exchange, SQL, VMware & SAP) better than our competitors
4. Most importantly, see how we recover these applications in a matter of minutes!
Now available is the IBM System Storage N series with VMware vSphere Redbook.
Redbooks are a great way of learning a new technology or a reference for configuration. I have used them for years, not just for storage but for X series servers and for software like TSM. The people that write these books spend a great deal of time putting them together, and I believe most of them are written by volunteers.
This is the third edition of this Redbook and if you have
read this before here are some of the changes:
- Latest N series model and feature information
- Updated the IBM Redbook to reflect VMware vSphere 4.1
- Information for Virtual Storage Console 2.x has been added
This book on N series and VMware opens with an introduction to both the N series systems and VMware vSphere. There are sections on installing the systems, deploying the LUNs, and recovery. After going through this Redbook, you will have a better understanding of a complete and protected VMware system. If you need help sizing your hardware, there is a section for you. If you are looking to test running VMs over NFS, it's in there too!
One of the biggest issues with virtual systems is making sure you have proper alignment between the guest file system blocks and the storage array blocks. Misalignment will negatively impact the system by roughly a factor of two on most random reads and writes, because two array blocks must be touched to satisfy one request. To avoid this costly mistake, or to correct VMs you have already set up, a section in the book called 'Partition alignment' walks you through the entire process of setting the alignment correctly or fixing older systems.
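For a feel of what 'aligned' actually means, here is a minimal Python sketch of the arithmetic behind a check like mbrscan's (this is my illustration, not the mbrtools code): a partition is aligned when its starting byte offset is a multiple of the array's block size.

```python
SECTOR_BYTES = 512        # traditional disk sector size
WAFL_BLOCK_BYTES = 4096   # N series WAFL works in 4KB blocks

def is_aligned(start_sector):
    """Aligned if the partition's byte offset is a multiple of the array block size."""
    return (start_sector * SECTOR_BYTES) % WAFL_BLOCK_BYTES == 0

# Classic offender: old MBR partitioning starts the first partition at
# sector 63 (byte offset 32256), so every guest 4KB block straddles two
# array blocks and random I/O costs roughly double.
print(is_aligned(63))    # False -> misaligned
print(is_aligned(64))    # True  -> aligned (offset 32768)
print(is_aligned(2048))  # True  -> the 1MB offset newer OS installers use
```

Tools like mbralign fix the problem by shifting the partition to an aligned starting offset.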
Another area I will point out is the use of deduplication, compression and cloning to drive storage efficiency higher. These software features allow customers to store more systems on the storage array than if they used traditional hard drives. The book also covers how to use snapshots for cloning, mirrors for Site Recovery Manager, and long-term storage via SnapVault. At the end of the book are some examples of scripts one might use for snapshots in hot backup modes.
Whether you are a seasoned veteran or a newbie to the VMware scene, this is a great guide that will help you from start to finish in setting up your vSphere environment. The information is there: use the search feature, or sit down on a Friday with a highlighter, whichever fits your style, and learn a little about using an N series system.
I had the pleasure of presenting at the IBM Technical Conference (aka STG-U) this past week. I was asked to speak about NAS technology basics and how the world is moving to more and more NAS platforms. Typically I get to present on some type of product: SONAS, N series, and the like. This was very different, as I got the chance to go deeper into the technology without talking too much about products. The session name I used was NAS 101: An Introduction to NAS Technology. The idea was to help educate our technical teams about the history of NAS, how NAS works, some pitfalls, and then NAS at IBM.
There is so much surrounding NAS, and boiling it all down to a 1-hour-15-minute presentation is pretty difficult. The other challenge is keeping the information relevant to the range of knowledge in the room. I had everyone from very skilled storage engineers to people just getting into the business, and I hope the information I presented was relevant at all levels.
I wanted to post my slide deck here, so if you have a need, or want me to come and help teach what NAS is all about, feel free to contact me.
Sorry Bill, there is a new question burning in our minds today: to tier or not to tier. There seems to be a lot of buzz lately about tiering your data storage, about who can and who cannot, why and how, but not a lot of people are talking about when to tier your storage. Netapp has indicated they are not as concerned with a tiering approach, and the same is true for the IBM XIV product. Others like 3PAR and IBM's SONAS have it built in for clients to move data from one pool to the next. But how does one gauge this old standard of giving the best to the most demanding and the least to the dregs of our storage footprint?
Tiering can be based on the performance needed by the client/application, or on the length of time and frequency of use. Some vendors will say they treat all data the same and can shift resources to the busier areas of the subsystem; others let you create pools of storage that dedicate cycles to a given application. The main difference is what happens when a system is oversubscribed: without pools, do you have a guarantee that your application will always get the performance it needs? Archival tiering lets you move data that is accessed less frequently to lower-cost (large and slow) disk and then to tape. The movement from pool to pool is based on rules or policies set by the administrator around date or time. This is a bigger issue with NAS data than SAN due to the nature of NAS files.
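As a concrete (and entirely hypothetical) example of such a rule, here is a small Python sketch of an age-based policy. The tier names and thresholds are made up for illustration; this is not any vendor's actual policy engine.

```python
import os
import time

# Hypothetical policy: fresh data on fast disk, cooler data on SATA,
# anything untouched for six months headed for tape.
TIER_RULES = [
    (30,  "tier1-fast-disk"),  # accessed within the last 30 days
    (180, "tier2-sata"),       # accessed within the last 6 months
]
ARCHIVE_TIER = "tier3-tape"

def pick_tier(path):
    """Choose a tier based on days since the file was last accessed."""
    age_days = (time.time() - os.stat(path).st_atime) / 86400
    for max_age_days, tier in TIER_RULES:
        if age_days <= max_age_days:
            return tier
    return ARCHIVE_TIER

# Classify every file in the current directory.
for name in sorted(os.listdir(".")):
    if os.path.isfile(name):
        print(f"{name}: {pick_tier(name)}")
```

A real policy engine (SONAS, for instance, drives its ILM through GPFS policies) evaluates rules like these across millions of files and moves the data transparently, but the decision logic has the same shape.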
One indication of when to tier is the size of your storage system. Is it worth creating three tiers of storage for a 5-10 TB system? Probably not; there are simple, tried-and-true ways of isolating storage for higher performance. If your storage doesn't have built-in tiering, you can isolate drives to increase the performance available to an application. You can also use larger amounts of cache, like the N series PAM cards, which can decrease latency and improve application performance by adding additional read cache.
A larger system of 100 TB and up would be ideal for performance-based tiering. As your storage grows, there is data that needs to be on fast disk and data that can live elsewhere. Think of your storage as a tool chest of wrenches, screwdrivers and the like. As you get more tools, you will want to keep the ones used most frequently in the top drawer where you can get to them quickly, like the trusty screwdriver that does both Phillips and flat-head screws. The tools used less often can sit in the drawers below, sorted by size or frequency of use.
Tiering data may be important to you as you build out your system, and maybe you need to implement it on day one. With the growth in digital media, whether you are taking pictures for a marketing campaign or producing a new digital movie, we will see data storage grow tenfold in the coming years. I suspect tiering will be needed most for projects like these, as their data platforms scale out quickly while the smaller storage units get used as secondary units in field offices or retail stores. Either way, you will need to evaluate whether to tier or not to tier based on your storage needs today and in the near future. Would Shakespeare believe in tiering? Only if it sold more tickets for his plays, maybe...
There is an ancient proverb that says, "When you have only two pennies left in the world, buy a loaf of bread with one, and a lily with the other." There is some wisdom in this old saying that we can still apply to today's IT budget and strategy. If you have been keeping up with the news, you know companies are starting to invest again in their IT hardware and software. This may be the turn out of some of the hardest times in the hardware business. But what are customers really buying and planning to buy with their dollars? What is my bread and what is my lily today?

The bread represents nourishment of the body. We have to eat in order to keep going; without it, we starve and eventually die. This is the basic part of a business IT strategy: what do you have to do to keep the lights on? I have this conversation with IT planners all the time. People love to do the newest and greatest, but have a smaller understanding of, or take for granted, the things they have to do to keep the business going.

The lily is a beautiful and majestic flower. Dating as far back as 1580 B.C., when images of lilies were discovered in a villa in Crete, these majestic flowers have long held a role in ancient mythology. Derived from the Greek word "leiron" (generally assumed to refer to the white Madonna lily), the lily was so revered by the Greeks that they believed it sprouted from the milk of Hera, the queen of the gods.
The storage market is evolving with the help of cloud storage, unified platforms and consolidation. IT planners and CIOs are dealing with a new way of putting value on these terms, offering their business units a chargeback model based not only on data consumption but on throughput and retention. The smarter businesses are seeing that running multiple storage platforms with trapped efficiency does not work in today's data center. Storage has to be big, wide and easy to use. Long gone are the days when 10-25 TB was a big deal. We now see systems that start at those levels and grow to enormous proportions. Networks are becoming faster and even consolidated, with 10/20 Gbps driving protocols like FCoE and iSCSI. Backups are being replaced by better replication schemes that have quality-of-service levels and automated failover.
NAS storage can take advantage of these technologies, which can also help you keep the lights on. Most businesses have some form of NAS storage to help employees share documents, spreadsheets, images, and whatnot. There is a movement from traditional block-based systems to unstructured data sets on NAS, and it is pushing the market and vendors to come up with better NAS products. Companies like Amazon, Facebook and Twitter all push vendors to think about how they do storage.
So how are you planning your IT spending? Are you going to spend more on the things you have to have, or on the things that look nice? I suspect in most cases there will be an 80/20 bread-to-lily split. But how you classify what is needed versus what is 'nice to have' in your IT department will change as your business changes this year. Businesses are putting more demand on IT with fewer resources. Even though there is evidence businesses have been spending more on hardware recently, the resources (admins) are still not there. The only way companies will succeed with such high demand on storage, without the resources, is to have simple, scalable storage that allows a single admin to manage multiple petabytes.
IBM is working to help customers achieve this type of new IT department. Cloud is one way, either public or private, but so is the basic system level: interfaces that are less complicated, like the V7000 or XIV, let admins move easily without much training, while SONAS offers large scale-out NAS where capacity and throughput can be scaled independently.
This year, take time to figure out what is needed and what would just be cool to have in your department. Technology will always change, even if it's a change back to what we had 20 years ago (mainframe virtualization). Keep in mind that what looks like a lily today may be a loaf soon; where do you want to be when the business needs it?