Do you expect more out of your storage? IBM thinks you should and is
putting its money where its mouth is. In the past it has gone under
different names like STG University and Storage Symposium, but now IBM
has revamped its premier storage conference. The big announcement came
today with much fanfare that included a new website, some videos and a
bunch of hype on Twitter. With a three-part conference for executives, gear
heads and business partners, there is something for everyone. But what
will be different from years past? I think IBM looked at how
other vendors use conferences to help pump up their customer base
(VMWorld, EMCwhatever) and decided to put some hype in the conference.
Think of this as a great place to go and network, learn and have a good
time. The conference will be in Orlando and there will be tons of time
to sit in classrooms and learn about the latest technologies, but there
will also be sessions where IBM will be pulling in our top execs and analysts
to tell you where IBM is going in the storage world.
Executive Edge will feature speakers including Jeff Jonas, Aviad
Offer and IT finance expert Calvin Braunstein. This track will take
executives through new announcements, deep dives on technical platforms,
one-on-one sessions with IBM execs and some great entertainment. This
is a new feature of the conference, as in the past it was geared more
towards the technical teams.
Of course, seats at the Executive Edge will be
limited, so talk to your local storage sales person to get a chance to be
a part of this special event. There will be time to bring in your team
and have special sessions and round tables with the IBM engineers who
can help you find your way down this path of crazy storage growth. And
there is a golf course on site, which I have heard is very nice. Bring
your clubs or rent them; I am sure there will be plenty of us out there,
so find a partner and have a good time.
More importantly, IBM is
making the effort to step up the event and have it on par with other
IBM conferences like Pulse. The technical portion will have over 250
sessions on storage-related topics. You will also get road-map
information from the product teams as well as a chance to become a
certified technician. One area that has been expanding is our hands-on
labs, and this year we will have the biggest one yet. You will be able
to come into the labs, actually see our storage systems and have a
chance to 'test drive' them.
Early bird registration is open now
and you can sign up today. The conference will be in sunny Orlando,
Florida at the Waldorf Astoria and Hilton Orlando at Bonnet Creek. The
event starts on June 4th and runs to the 8th. You can follow the
conference on Twitter @IBMEdge and use the hashtag #ibmedge. For the conference website, go here.
Now available is the IBM System Storage N series with VMware Redbook
Redbooks are a great way of learning a new technology or a
reference for configuration. I have used
them for years, not just in storage but for X series servers and for software
like TSM. The people that write the
books spend a great deal of time putting them together, and I believe most of
them are written by volunteers.
This is the third edition of this Redbook, and if you have
read it before, here are some of the changes:
- Latest N series model and feature information
- Updated to reflect VMware vSphere 4.1
- Added information for Virtual Storage Console 2.x
This book on N series and VMware goes through an introduction
to both the N series systems and VMware vSphere. There are sections on installing the systems,
deploying the LUNs and recovery. After
going through this Redbook, you will have a better understanding of a complete
and protected VMware system. If you need
help with how to size your hardware, there is a section for you. If you are
looking to test how to run VMs over NFS, it's in there too!
One of the biggest issues with virtual systems is making
sure you have proper alignment between the system block and the storage
array. Misalignment will negatively impact the
system by a factor of 2 in most random reads/writes, as two blocks will be
required for one request. To avoid this
costly mistake, or to correct VMs you have already set up, a section in the book
called Partition Alignment walks you through the entire process of correctly
setting the alignment or fixing older systems.
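To make that "factor of 2" concrete, here is a small illustrative sketch (my own, not from the Redbook); the 4 KiB array block size and the offsets are assumptions for the example.

```python
# Hypothetical illustration of partition alignment (not from the Redbook).
# Assumes a 4 KiB array block size; real systems vary.

BLOCK = 4096  # storage array block size in bytes (assumed)

def blocks_touched(offset: int, length: int, block: int = BLOCK) -> int:
    """How many array blocks a request starting at `offset` touches."""
    first = offset // block
    last = (offset + length - 1) // block
    return last - first + 1

# An aligned 4 KiB guest I/O maps to exactly one array block:
print(blocks_touched(offset=8192, length=4096))        # 1
# The same I/O from a partition that starts 512 bytes off alignment
# straddles two blocks, so the array does twice the work:
print(blocks_touched(offset=8192 + 512, length=4096))  # 2
```

Spread across a whole random workload, that second case is exactly the doubling the book warns about.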
Another area that I will point out is the use of
deduplication, compression and cloning to drive the efficiency of the storage higher. These software features allow customers to
store more systems on the storage array than if they used traditional hard
drives. The book also covers how to use
snapshots for cloning, mirrors for Site Recovery Manager, and long-term storage
(aka SnapVault). At the end of the book
are some examples of scripts one might use for snapshots in hot backup modes.
Whether you are a seasoned veteran or a newbie to the VMware scene,
this is a great guide that will help you from start to finish in setting up your
vSphere environment. The information is
there; use the search feature or sit down on a Friday with a highlighter,
whichever fits your style, and learn a little about using an N series system.
I just read the blogs from Chris Mellor at the Register
and Tom Trainer at Network Computing, and thought how insightful these two
outsiders are about the inner workings of IBM.
First off, yes, IBM is no longer selling the DCS9900, a rebranded DDN
OEM system in the very large IBM storage portfolio. There is no question that this product is no
longer available after the October 15 date.
Second, the DCS3700 is already part of our portfolio and is
an OEM box from Netapp/Engenio/LSI. The density of this system is the same as the
DCS9900, so it makes sense to use the DCS3700 as a replacement for the DCS9900.
Third, Tom’s blog describing SONAS as monolithic NAS storage
is very skewed. SONAS is very flexible
in the way we can scale both storage and throughput without affecting either
variable. With most “scale out” systems, you
have to scale both in order to keep up with demand. SONAS uses some of the best technology on the
market, with a huge amount of throughput.
His statement about IBM dropping DDN from SONAS is untrue
and goes to show how much research Tom put into writing this blog. I am sure Tom is looking to write a
non-biased blog for Network Computing, but maybe those days at HDS are still
influencing his ability to look at an announcement letter and make
extrapolations about other products.
Finally, if HDS thought BlueArc was so great, why didn’t they
buy them back when they could have gotten the company for a better deal? Has the product changed THAT much since
2006? I wish HDS only the best in
dealing with the transition and getting that product under the HDS umbrella.
If you do your homework and base your assumptions on facts
instead of conjecture, you will find SONAS is a solid platform in the enterprise
NAS market. SONAS has proven it can be
the market leader with a low cost-to-performance ratio, and it will only get better
as time goes on.
Labor Day has come and gone, and so have all of the holidays
between now and Thanksgiving. This is
only augmented by the hope that your favorite football team (both American football
and what we call soccer) has a great weekend match and you get to celebrate
with the beverage of your choice.
During your work week, which can and sometimes does include
weekends, all you hear is that there is no more money to do the things you have to do to keep
the business running. If you have kept
up with squeezing more out of your systems with virtualization, that’s great, but
your network is now overtaxed. The staff
that used to take care of certain aspects of the day-to-day running of your
data center has been let go, and their job has been ‘given’ to you with no
thought of compensating you for the extra tasks.
The Earth is warming, the weather is out of control, and the
price of gas is so high that you decide to bike to work to help save the
planet. You spend more time on the road
commuting and look like you need a shower when you get to work after dodging
traffic all morning. Your coffee is
priced higher now because the coffee house wants to use Fair Trade coffee from
farmers in a country you have never been to. And your dog is on anti-depressant meds because
you are not home as much and he can’t go out in the yard because of the killer bees
migrating north from Mexico.
Our lives seem to be getting more complicated, and it’s nice
when we find things that not only help us but are easy to use. When you
come across these items, they make such an impression that you want to tell others
about your great fortune. I came across a
solution that was so easy to use, and so valuable, that at first I
didn’t believe the whole story.
About a year ago, I was asked to help out on the Storwize/Real-time
Compression (RTC) team as it transitioned into the IBM portfolio. I met with the engineers and sales people, and
all had wonderful things to say about the technology. I listened but was hesitant
to drink all of the Kool-Aid they were pouring.
A year later, I am very much a believer in the RTC technology
and think it really could be a game changer in the market. If you keep up with IDC, Gartner and the other
analysts, they all point to compression of data as one of the bigger
levers for handling future growth. There are a lot of vendors that claim they can
compress data, but it’s not all done the same.
One of the things that stood out from day one is the idea of
using LZ compression in real time to compress data instead of deduplication. Coming from an N series (*Netapp) background, I
understood how deduplication works and where it is useful. But this was compression, which is a different
ball game. Now we are able to shrink the
storage footprint of data that isn’t exactly the same as before, something
deduplication can’t do. Given that Netapp has issues with block size
and offsets, this is exactly what is needed in the market.
The next question I always get, and one I had myself, was: “That’s
great, you can compress data with the best of them, but what’s the overhead?” I waited a long time to see what the
performance numbers were going to be and found an astonishing outcome. The RTC appliance made a performance
improvement on the overall solution. It
helps by adding cache and processing power to the serving of data, but it
also improves the performance of the system because there is less data to process.
For example, if a system has to save 100GB of data with no
compression, then all of the data has to be laid out on the disk; the spindles,
cache, CPUs and I/O ports all have to work harder to save 100GB
of data. But if we get 2:1 or 3:1 compression ratios,
then all of the components have to work less. No longer are they working to save 100GB of
data but 50GB or 33GB of data. This
allows the system to process more data and have cycles left over to respond quicker to
I/O requests (i.e., lower latency).
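As a back-of-the-envelope sketch (my own arithmetic, reusing the 100GB example above), the physical data every component in the I/O path must handle shrinks in proportion to the ratio:

```python
# Illustrative arithmetic only: physical bytes the back end handles
# at a given compression ratio, mirroring the 100GB example.

def physical_gb(logical_gb: float, ratio: float) -> float:
    """GB that disks, cache, CPUs and I/O ports actually have to move."""
    return logical_gb / ratio

for ratio in (1, 2, 3):
    print(f"{ratio}:1 -> {physical_gb(100, ratio):.0f}GB hits the back end")
# prints 100GB, 50GB and 33GB respectively
```

The freed-up cycles are where the latency improvement comes from: the components finish each request sooner and can take the next one.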
So the final thing is always the question of how hard this is
to install. Is there a period of
time that you have to wait, or do you need five IBM technicians to install it? All I have to say is it’s easy. So easy that there is a good YouTube video
that goes through the entire process, from unpacking to racking to compressing
data. I think the video speaks for itself.
So if you are back at work today and find your life swirling
around you like a hurricane, stop and be reassured there are a few things out
there that can still make your life a little easier. It doesn’t make the killer bees go away, but
maybe it will give you peace of mind that your storage doesn’t run out.
Last week at the IBM Technical Conference, I was able to
spend some time with a couple of friends discussing technology. It is always interesting to hear their take
on where the storage market is going and what lies ahead in the future. As my Netapp pal and I were chatting about the
messaging around unified architecture, we both noted that unified from one
perspective is disjointed from another.
IBM and Netapp have been using the term unified for their NAS/SAN devices for about 5 years now. The
idea is to share a common code base on the same hardware to increase the
functionality and usability of that storage. Other vendors have gone similar routes using
multiple code bases and/or hardware, but I see that as a NAS gateway in front of a
SAN storage system.
This has been very successful in data centers both large and
small. But the idea of how we manage
storage is changing. Virtualization is
changing the idea of how and even where our data may be stored. The term cloud is something of a marketing
term, but I like the term Storage Utility better. Utility companies such as electric, water,
sewer and even cable provide a product to their consumers, and storage utility
vendors could do the same.
Most people are not concerned about the process companies take
to make water drinkable or how electricity is generated, as long as it is safe,
reliable and easy for them to consume. Storage
as a Utility is no different; it is only when the storage is offline or hacked
by outsiders that the consumers are concerned. There are laws that govern utilities, and the FTC has put some privacy laws together to help consumers, but I believe we can
take it a little further (a blog for another time).
As our data moves from traditional spinning drives in
our data center to a storage utility, we will need some type of bridge that
will ease the pain of transition. The
main reason people do not adopt new technology is that the transition is
often too painful and the benefit of the new technology is less than the need to
move. Whether it is a software package
that helps move data or a hardware device, it will have to give access to both file-based
data and object-based data. This
will allow users to read the files as needed no matter what their connectivity
or location. It could also be used to
help drive efficiencies up by allowing data to move from file-based (high
cost) to object-based (lower cost) environments.
Today there are some vendors who have early versions of this
type of unified solution. They are bridging
the gap between what we have today in private data centers and the future of
public utility storage. We are very
early in the transition, but with this type of technology we will be able to
adapt and provide a better way of storing data. Will it still be called a unified solution?
Only the marketing people can tell us that.
Every year IBM puts on a conference for all of our clients, business partners and strategic partners.
This conference has both Storage and X series sessions, along with keynote speakers from the top management at IBM.
People come from all over the world to this conference looking for the
'how to' answers and what's to come with the product lines. There is also a solution center that houses all of the products along with our sponsors. This year our top platinum sponsors are Cisco, Intel and Netapp. Other sponsors include Brocade, Emulex, Fusion-io, VMWare, Red Hat and SUSE.
I plan to be working in the solution center at the SONAS booth, talking
to clients about the benefits of SONAS and how it fits into their
environments. If you want to stop by, here are the hours that I will be there:
Monday, July 18th
Solution Center Open 5:30 PM – 7:30 PM (w/ Networking Drinks)
Tuesday, July 19th
Sponsor / Exhibitor Only Lunch 11:15 AM – 11:45 AM
Solution Center Open 11:45 AM – 1:00 PM (Coffee & Dessert served in the Solution Center)
Solution Center Open 5:30 PM – 7:30 PM (w/ Networking Drinks)
Wednesday, July 20th
Sponsor / Exhibitor Only Lunch 11:15 AM – 11:45 AM
Solution Center Open 11:45 AM – 1:00 PM (Coffee & Dessert served in the Solution Center)
I will also be presenting a few sessions on NAS technology at the conference. Most of my sessions will be a look at what IBM is doing with SONAS, N series and Real-time Compression.
I have a NAS 101 class that I really love doing, because there are so
many people that have a misconception of what NAS is today. In my N
series update session we will be talking about the latest release of the
N6270 and the EXP 3500, as well as a peek at the R23 release coming in a
few weeks. The other two sessions I am doing are a little off the topic of NAS, covering social media and using www.ibm.com for help.
Tony Pearson, John Sing and Ian Wright will be joining me on a panel to
discuss the roles we play in social media and what each of us thinks of
the future of social media. The support session is something that
a client suggested to me out of their frustration with how to find
documents on our support pages. Here is a list of sessions and times that I will be presenting:
7/18 - 1:00 sSN14 Storage Networking (NAS - SAN) NAS 101: An Introduction to Network Attached Storage
7/19 - 10:30 sSN15 Storage Networking (NAS - SAN) NAS @ IBM
7/19 - 1:00 sSN18 Storage Networking (NAS - SAN) IBM N series: What's New?
7/20 - 1:00 sGE10 General Tips and Tricks on Searching for Support Answers on ibm.com
7/20 - 5:30 sGE61 General Using Social Media in System Storage
7/21 - 10:30 sSN18 Storage Networking (NAS - SAN) IBM N series: What's New?
7/21 - 2:30 sSN15 Storage Networking (NAS - SAN) NAS @ IBM
If you are at the conference, feel free to come to any of my sessions; I
would love to hear from you about the IBMNAS blog or any of my social
media outlets. We are using a conference hashtag
all week if you want to follow what is going on via Twitter.
May 9th has been a target on my calendar for some
time now. Inside IBM, we have been
waiting for this day to come so we could talk about the new things being
released in the storage platform. It
almost feels like Christmas morning with a bunch of new presents under the
tree. Each gift has something inside
that is either really cool or very useful. The only difference is your Aunt Matilda and
her little dog are not coming over for brunch.
Under the IBM tree today is a slew of presents for almost the
entire storage platform. I will
concentrate on just the IBM NAS ones, but if you are interested in knowing what
is going on elsewhere, you can find more information at the main website.
SONAS must have been a good boy, because there are plenty of
gifts for him under the tree this morning. Not only did he find presents under the tree,
but there were a few little things in his stocking. Here is what Santa brought:
- A hardware update on the X3650 nodes. Just like before, the SONAS system uses
this impressive workhorse, but now it uses the more powerful M3 class with a
six-core 2.66GHz Intel Xeon processor. It has 24GB of DDR3 RAM with the option
to increase to a total of 144GB of DDR3 RAM per interface node. Also new with the X3650 is the option to
include a second processor to double the number of cores to 12 total per
interface node.
- Also under the tree is new disk subsystem support: not only XIV, but now SONAS supports the
SVC and V7000 as disk subsystems. This
is a huge gift, because now SONAS can support tons of other storage under
the awesome virtualization of the SVC code. V7000 support is also interesting, as that
platform has the virtualization code from SVC but also supports its own
drive architecture, including solid state drives.
- In the same category as sweaters, SONAS gets a smaller rack extender. In the past IBM has used a 16-inch
extender in order to accommodate the large 60-drive disk enclosure. That
has now been trimmed down to only 8 inches, and to 0 for the gateway model and the
RXC rack that houses only interface nodes.
- SONAS gets a new file system upgrade to GPFS 3.4 PTF4. This will provide a significant performance
improvement over the R1.1.1x release. The updated file system handles small
files and random I/O a lot more efficiently. With this update we now use the role of
manager nodes instead of interface nodes to gain more flexibility in how
we track data in cache.
- Other gifts SONAS received were new support for NDMP, anti-virus support, use of
both 10GbE ports on the same CNA, and some power updates for the EU countries.
And along with all of that, there
is a new performance monitoring package called Perfcol that collects more
information for analysis.
This SONAS release is labeled R1.2 and can be obtained by
contacting the technical advisor assigned to you.
Santa was also at the N series house and dropped off a few
gifts. A new N6270 replaces the
N6070. This new system is in line with
the N6200 series, with larger amounts of RAM and more processors. Just like the smaller N6240, there is an
expansion controller where customers can add more PCI control cards like HBAs,
10GbE or even FCoE. A new disk shelf was
also released, which uses the smaller 2.5-inch drives and an improved back end.
And over at the Real Time Compression house they got new
support for EMC Celerra.
Overall, a very busy time of year for IBM (and Santa), as
these were just a fraction of the storage announcements today. Also today is the IBM Storage Executive
Summit in New York City. My friend and
fellow blogger Tony Pearson is covering this great event and will be updating
his blog and Twitter feed. If you were
not able to make it to NYC for the event, feel free to tweet him your questions
@az990tony. You can also send questions
to our IBM Storage feed at @ibmstorage.
I had the pleasure of presenting at the IBM Technical Conference (aka STG-U) this past week. I was asked to speak about NAS technology basics and how the world is moving to more and more NAS platforms. Typically I get to present on some type of product: SONAS, N series, and the like. This was very different, as I got the chance to go deeper into the technology without talking too much about products. The session name I used was NAS 101: An Introduction to NAS Technology. The idea was to help educate our technical teams about the history of NAS, how NAS works, some pitfalls and then NAS at IBM.
There is so much surrounding NAS, and boiling all of that down to a 1 hour 15 minute presentation is pretty difficult. The other challenge is trying to keep the information relevant to the amount of knowledge everyone has in the session. I had people ranging from very skilled storage engineers to people who were just getting into the business. I hope the information I presented was relevant at all levels.
I wanted to post my slide deck here, so if you have a need or want me to
come and help teach what NAS is all about, feel free to contact me.
This week, I am at SNW in San Jose, CA. If you have never heard of the conference, it's
all about storage and networking, and it pulls in all of the big vendors to put on
labs, lectures and a vendor hall. People
come from all over the world to this event to learn what is new and how to do things.
One thing that I love doing at these events is talking to
customers and potential customers about IBM storage technology solutions. Often we find the conversations are not
about products as much as the technology in them that fixes some sort of issue in
the data center. I think this is best
seen when you come into the IBM booth. There is no hardware with blinking lights to see or
cables to yank. We have something better:
people who know the solutions to your issues.
If you ask any of the IBMers that work these events, they
always say it’s a love-hate relationship. The hours are long and you stand on your feet for
4-8 hours. The best part is talking about
IBM solutions and finding out what people are doing in the field. This is the best way to help drive innovation:
listening to the customer. IBM has
programs that send our developers into the field to listen to customers, and
this is just one example of that.
Another event at SNW this year was a gathering of the
storage social media moguls. This is a
non-vendor-specific event and is open to everyone. It is associated with the hashtag
#storagebeers, and these gatherings have been going on all over the world. Last night was the largest storagebeers to
date, and it was a who's who of this community. But even better than meeting the people
that you see on Twitter or those who write blogs was the idea of putting all
of the vendor fighting behind us and just being a group of people who work in the
storage industry, talking about whatever was on their mind. If you find yourself at an event like SNW or
VMWorld, check to see if there is a #storagebeers and go meet some
really cool people.
If you are at SNW and want to come by for a chat, you will find me at the IBM Booth today between 11am and 3pm. I would love to spend some time learning about what you are doing in the data center.
My friend and colleague Ian Wright has put together an awesome YouTube video of the V7000 with the FlashCopy Manager software. Ian has made several videos of the V7000, including a tour of the GUI, how to do things, and now this. Ian says in an email to me earlier: "The video starts out with a restore of an accidentally deleted email (but not a restore of the spam that was deleted) and goes on to show recovering an accidentally deleted database. Both are actions that I think should resonate with customers using these applications."
I thought this was an awesome example of the V7000 and the Rapid Application Storage solution that was released a few months back. Please take a few minutes to go through the video and give Ian some feedback.
When I first started working at IBM, we had a couple of NAS storage devices: the NAS 100, NAS 300(G) and NAS 500. The NAS 100 was a 1U server appliance that used Windows 2000, and so did the NAS 200 device, all built on IBM hardware. The NAS 500 was on an AIX system, also from IBM stock. They were traditional NAS-type systems, and IBM sold them as "let us build the system for you so you don't have to." They were somewhat limited in functionality but did the job they were designed to do: serve NAS data.
That same year, IBM decided to partner with a company that was doing some things in the storage market that looked really interesting. Network Appliance had just started gaining steam with their Data Ontap code (6.something, if I remember correctly) and had broken a barrier that IBM systems couldn't: unified protocols from a single architecture and integration into other products like Exchange and SQL using their cool snapshot technology. It took some time to get up to speed on the new Netapp technology, with snap-this and snap-that, but soon we were all talking about waffles and aggrs.
Throughout the years, the product set grew and so did the hardware offerings. We kept up with the releases, and for the most part a 20-60 day lag in the release of new software was OK for most IBM customers. We partnered with the sales teams and support teams to help grow the N series customer base and to keep them happy. As with any partnership, there are bumps along the way, and there seemed to be two parents telling each other they agree to disagree. All in all, the N series system has been very successful at IBM.
But as the years progressed, new technology like XIV, Real-time Compression, TSM FlashCopy Manager, etc., filled some of the voids previously filled by N series in the IBM portfolio. As with many companies, there are products that overlap, and N series overlaps over half of the product line at IBM Storage. Positioning became harder as sales teams questioned when to sell N series and when to sell something "blue". We quickly learned that customers really liked what N series brought to the table and how flexible the solution could be.
Now with the news of Netapp purchasing Engenio, I wonder how the relationship between IBM and Netapp will survive. IBM also rebrands the Engenio products as the IBM DS 3k, 4k and 5k. I guess the bigger question is what Netapp will now do with that product line. If history is any indicator, they will simply keep things as they are for some time and slowly move the customers over to a Data Ontap product. The other question is how long IBM will keep sending money over to Netapp for products that we sell and support.
If you haven't heard (get out from under that rock), IBM is turning 100 this year, and the company is having an awesome time celebrating our longevity. From technical advances like the Apollo program to blazing trails in race and gender equality, IBM has done, and IS doing, the job for all of the world. The company has changed in so many ways and has had to adapt in ways only IBMers can, but we have survived and thrived.
Find more information about our centennial celebration here.
Here is a great 100 second video of all the cool and great things IBM has done over the last 100 years.
How does one judge a glass of wine? There are a few tests; how it looks, smells and tastes are the basic three. But as the wine is poured, you may or may not know that your wine is made up of different varieties of grapes. A producer sits down and experiments with different percentages of grapes, and this allows some creativity in making a better glass of wine for the consumer. Of course there are many more factors that play into this process, but it's by and large the same no matter what wine you enjoy. You enjoy the wine as a whole, a combination of things put together for you, without you having to know or even understand all that went into making that glass of wine.
When we talk to clients about their data backup strategy, we find a process very similar to that of wine making. The end user rarely knows all that goes into creating a backup of their data and protecting it for them. They just enjoy the knowledge that their data is safe and will be there if they need to access it. But what we see in the making of the backup is a blend of technologies and a creative element that allows administrators to work around constraints like budget and man power.
As data evolves, we are seeing multiple layers of protection, and the criticality of the data will determine the recovery point and recovery time, as well as the retention period. Backup technologies usually mean more than doing a bunch of incrementals and then a full off to disk pools and then tape. There are many different levels of protection that we can use.
Snapshots seem to be more common today than 5 years ago. They allow for a clean and consistent recovery point of a database or file system. But snapshots are used for more than just a quick backup; with writable copies we can quickly set up copies for test and dev environments and also rapidly deploy virtual images for desktops or servers. Snapshots usually sit on the same disk set the data is sitting on, and they can be moved around via a vault technology or a mirror to another site. This can be used for long-term storage if needed, but typically snapshots are used for quick recoveries of less than 7 days. Snapshots are also vulnerable to data corruption: if a software bug comes in and corrupts data on the storage system, that can affect the snapshots and mirrors.
Backups are more traditional: the file system is scanned for changes, and then those changes are sent off to a device where the data is stored until needed. In the past it has taken more time to back up file systems, and as storage has gotten larger, those backup times grow longer. The technology has tried to keep up by adding larger backup servers and more tape drives, allowing for more streams coming in. Now with the idea of using spinning disk for tape pools, we can back up a little quicker, as disk can write data faster than tape. Many things have evolved out of this technology, for example the Linear Tape File System and Hierarchical Storage Management.
When clients are looking for strategies for protecting their data, they will use a combination of these techniques, and a mixture of both disk and tape, to fully protect their environment. Depending on the data type, you may want to just use snapshots, as the data changes rapidly and you do not need to restore from a week or a year ago. Snapshots are really useful in that case, and so is mirroring, or even metro mirroring if the RTO is small enough. There are other factors, such as Sarbanes-Oxley, that will require longer-term recovery methods like backups.
Just like with a great wine, there are fewer rules today and room for creativity in designing data protection. And just like with wine, there are many consultants that will help you find a good balance of technology to match levels of protection with data. Spend the time looking at your protection schemes and see if there are any better ways of balancing this equation. Maybe, with the right planning, you will be able to enjoy a glass of wine instead of spending time recovering from a disaster.
The old adage of faster, smaller, cheaper has been revived in the N series product line. This week IBM (officially) released the information about the highly anticipated OEM rebrand of Netapp's FAS 2040: the N3400. This system has a small 2U form factor but delivers higher performance than its beefier brother, the N3600. If you want to see a full comparison of the three boxes, click here for more information.
IBM now has three systems that round out the entry-level or departmental storage platform: the N3600, the N3300 and now the N3400. All three are based on internal drives with some expansion to a few shelves as needed. The N3600 comes with 20 internal drives, while the smaller N3300 and N3400 come with only 12 internal disks and can expand to a maximum capacity of 136TB. There are two controllers that allow administrators to have a high-availability solution at low cost. This makes the system more attractive, as it also supports FCP, iSCSI, CIFS and NFS, all from one platform.
The N3400 does have a few things I want to point out:
- 8GB of RAM (2x the amount in the N3600 and 4x the amount in the N3300)
- 512MB of NVRAM
- 2 integrated SAS ports and 8 total 1Gbps Ethernet ports
- PCIe port for expansion
All of these help set this box up for an important role within your datacenter. If you compare this system with other storage systems in the market, you will find the new N3400 is well stacked and can compete even with larger mid-tier systems. This box is ideal for our SMB clients who really need an all-in-one system with the horsepower to keep up with a growing company. The system is a long way from the first entry-level system IBM decided to roll out, the N3700. If the two were compared, the N3700 would be a 'Happy Meal' and the N3400 would be a super-sized 2lb Angus burger with fries and a shake, maybe even an apple pie.
This new system is considered ideal for both Windows consolidation and virtual environments alike. With the additional ports, the system also has a longer life span, as the new EXN 3000 SAS shelves are becoming the standard for the N series product line. The system, on the other hand, does not support 10Gbps cards or FCoE as the N3600 does. But as all N series systems run the same Data Ontap code, this robust system uses the same commands and interface, and is built on the same technology, as the N6000 and N7000 lines.
Overall, this is an enhanced refresh of the existing N3300 with more ability to scale with current technologies. The performance will be better than the N3600, which begs the question of the need for the N3300/N3600 systems. I suspect that as Data Ontap 8 becomes generally available from Netapp, there will be more entry-level storage devices released.
For more information on the N3400 and all other N series topics, follow this link or contact your local IBM Storage rep.