Storwize V7000 and the IBM NAS software were married on Wednesday, October 12, 2011, at midnight at the IBM Storage chapel in San Jose, California. The Reverend Rod Adkins officiated. Following the ceremony, the bride's parents hosted a reception at the Almaden Research Center.
The bride comes from the NAS family, who were in attendance. She also has deep ties with the Tivoli and GPFS families within the storage community. There were also members of the X series family at the ceremony.
The groom comes from a long line of storage products. XIV, DS8800 and SVC were all part of the festivities and supported the groom throughout the entire day.
The couple will honeymoon in Redwood City, California, with a visit to the Storage Performance Council.
After long anticipation, IBM has now entered the unified storage market with the introduction of the Storwize V7000 Unified (SV7kU?). The system starts as small as 6U of rack space and can flex up to four clustered systems (via RPQ), supporting internal SAS or SATA disk, or virtualized external disk from other vendors.
The V7000 Unified is a midrange disk system that gives new and existing V7000 customers the ability to integrate their NAS workloads into the system. Using the standard V7000 shelf, IBM has added two x3650 M3 servers running the IBM NAS software stack to complete a unified architecture. A new GUI that integrates the NAS portion of the software is now available and combines management for both technologies with a few mouse clicks. Setup of the system stays the same, with the simplified USB key approach. Customers have reported that between the USB key installation and the wizard-driven alerts, the V7000 has been one of the easiest systems to install and configure. IBM decided to keep these features in the enhanced GUI.
Here are some screenshots that show the new integration of the NAS software stack:
The V7000 Unified will support the NFS, CIFS, FTP, HTTPS and SCP protocols in addition to the block protocols FCP and iSCSI. It will also support file replication and file-level snapshots for business continuity, in addition to the existing block functions.
Another function in the V7000 Unified that will help customers is the introduction of the IBM Active Cloud Engine. What is it? Think of it as a very smart, very fast robot – that never sleeps – keeping your cloud storage neat, tidy and running smoothly. Think Rosie the robot from The Jetsons.
The Active Cloud Engine is a policy-driven engine that improves storage efficiency by automatically placing, moving and deleting files on the appropriate storage. The efficiency gain comes from storing files where they should be, without an administrator manually moving them. As data gets older, the engine can move a file to another location where the price per TB is lower, and even delete the file if necessary.
The movement is done seamlessly, and the end user has no idea their data has moved. Another aspect of the engine is identifying files for backup or replication to a DR location. As the data ages, it continues its life cycle through the data center without storage administrator intervention.
Data can be moved from internal disk to external virtualized disk and even to tape. The diagram below shows the movement from file creation to 180 days old and off to deduped tape.
Policies can be created from a wizard in the V7000 Unified GUI by setting thresholds and start times. Customers can also exclude certain files by file attributes such as size or last access time. For the more advanced customer, the policy can be edited directly.
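As a rough illustration (not the Active Cloud Engine's actual policy syntax, and with made-up tier names and thresholds), an age-based placement rule like the one described above boils down to logic such as this:

```python
import time

# Hypothetical tier thresholds, in days since last access.
TIER_RULES = [
    (30, "flash"),      # hot data stays on fast disk
    (180, "nearline"),  # warm data moves to cheaper SAS/SATA
]
ARCHIVE_TIER = "tape"   # anything older goes off to deduped tape

def choose_tier(last_access_epoch, now=None):
    """Return the storage tier a file belongs on, given its last access time."""
    now = now or time.time()
    age_days = (now - last_access_epoch) / 86400
    for max_age, tier in TIER_RULES:
        if age_days <= max_age:
            return tier
    return ARCHIVE_TIER

print(choose_tier(time.time()))  # a just-touched file lands on "flash"
```

A file last touched 90 days ago would come back as "nearline", and one untouched for a year as "tape"; the real engine evaluates rules like these against file metadata on a schedule.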
Another question people are asking is about the relationship with NetApp and how this product will affect the N series product line. IBM is expanding its midrange storage portfolio by offering both the new V7000 Unified and our N series products to focus on different client needs.
N series continues to be IBM’s offering focused on clients who have a primary need for NAS optimized (file) workloads. Existing N series clients with growing data requirements will continue to require additional N series disk drives, expansion units, and new systems to meet their needs.
IBM Storwize V7000 Unified will particularly appeal to clients who have a primary need for storage to support block optimized workloads with additional needs to consolidate file workloads for greater efficiency (unified storage). Storwize V7000 Unified is also targeted to clients that can benefit from the unique capabilities of IBM Active Cloud Engine or to clients that already are using Storwize V7000 or SONAS.
Just like in real life, we have seen other marriages come and go, but this one seems to be different. The V7000 Unified uses the best of the storage portfolio and brings value to the customer. IBM is also leveraging the investments made over 10 years of innovation (virtualization, Easy Tier, the simplified GUI, Active Cloud Engine) and is producing a product that lowers total cost of ownership.
In keeping with the bride's good-luck tradition: "Something old, something new, something borrowed, something blue, and a silver sixpence in her shoe." (You can find this poem in Leslie Jones' book "Happy is the Bride the Sun Shines On.") We find the IBM version of this good-luck offering in the following:
Something Old: 4,500 V7000 systems sold last year
Something New: Active Cloud Engine
Something Borrowed: Storage Virtualization
Something Blue: Storwize V7000 Unified, a true IBM organic product
I am still looking for the sixpence but feel free to mail us one and we will attach it to the bezel of each controller.
Day 1: Today, IBM has massed the troops to learn more about the SONAS product and how it will look in the near and far future. They are dumping a ton of information, and I need time just to process it all. It's also nice to meet people from all around the world who have the same mission as I do. I hope to get some interviews tomorrow, as well as listen to Sven Oehme talk about the performance of the box.
This video is shot in front of the first hard drive, ever. What a testimonial to IBM innovation. The first words written are..... you gotta watch to find out.
Just a quick drop on the Data Reduction Tool that we use in the field to help estimate how much storage customers will save by running their data on our A9000 all-flash array. This system (not SSDs) is based on the XIV Grid architecture and has been available to customers since this past summer.
One of the things many of our customers tell us is that our competition is out there offering silly data-savings claims based only on their word. For the past five years, IBM has been giving customers a real estimate of their compression savings, accurate to within 5%, using our compression estimation tool, without any change to your code or storage system. Now we have the ability to run the tool against your data and estimate savings from our compression, deduplication and pattern analysis.
The tool is downloaded from the IBM site and run on the host against the storage LUN/volume. At the end, you will be able to see the savings broken down into those three categories, plus how much could also be saved using thin provisioning. The tool is CLI based and should be run by an admin with proper access.
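To give a feel for why the savings are reported per category, here is a toy sketch (not the IBM tool's actual math or output format) of how independent reduction stages combine: each stage shrinks what the previous stage left behind, so the factors multiply.

```python
# Toy illustration: independent reduction stages multiply, so the
# overall savings fraction is 1 minus the product of (1 - savings_i).

def combined_savings(stage_savings):
    """stage_savings: fraction saved by each stage, e.g. 0.30 means 30%."""
    remaining = 1.0
    for s in stage_savings:
        remaining *= (1.0 - s)
    return 1.0 - remaining

# Hypothetical per-category results: 20% pattern removal,
# 30% deduplication, 40% compression.
overall = combined_savings([0.20, 0.30, 0.40])
print(f"{overall:.2%}")  # 66.40%
```

The point of the sketch: three modest-looking percentages compound into a much larger overall figure, which is why the tool breaks the estimate down rather than quoting one number.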
All said, this tool is the best thing out there to give customers a real idea of true savings. For more information, please follow the links below.
I was just thinking the other day that I really need to write an article for my blog about the upcoming releases. When I opened the page, it said I had not written anything since May of this year. Time really flies when you are having fun.

IBM just released a new XIV system dubbed Gen 3. Generation 1, of course, was built by the XIV company before IBM purchased them, and Gen 2 came shortly thereafter. As you would expect, the system has to keep up with customer demands and technology refreshes, but something very unique caught my eye: the performance of this system will be head and shoulders above the competition.

The Nehalem microarchitecture now makes up the heart of the processing power within the grid, with tons more cache to boot. There is also a change in the interconnect from Ethernet to InfiniBand. I can't wait to see the new SPC-2 numbers when they are published.

I suspect the introduction of more cache (via SSD) and the switch to near-line SAS drives will only help increase performance from a Gen 2 to a Gen 3 system. The self-tuning, self-healing, tierless storage is still at the heart of the system and still redefines how storage is done today.
My friend and colleague Ian Wright has put together an awesome YouTube video of the V7000 with the FlashCopy Manager software. Ian has made several videos of the V7000, including a tour of the GUI and how-tos, and now this. Ian said in an email to me earlier: "The video starts out with a restore of an accidentally deleted email (but not a restore of the spam that was deleted) and goes on to show recovering an accidentally deleted database. Both are actions that I think should resonate with customers using these applications."
I thought this was an awesome example of the V7000 and the Rapid Application Storage solution that was released a few months back. Please take a few minutes to go through the video and give Ian some feedback.
One of my favorite TV programs is the BBC show Top Gear. They test cars not only for handling, looks and cup holders, but mainly for power. At the end, they run all of the cars through the same test track and get a time. That time gets recorded on their list of all the cars tested and is celebrated as an achievement or scorned for being poor. No matter what time the car turns in, they are all treated equally.
Today, IBM is announcing results from the SPECsfs benchmark. This has been the yardstick for all NAS vendors wanting to flex their muscles and show how they handle small-block I/O. Vendors can bring however many drives and tweaks they want, but the test itself is very rigid and has to be certified before the results are published. IBM put together a SONAS system consisting of 10 interface nodes and 8 storage pods with all SAS disk: a total of about 900 TB of usable disk, and about one third of the maximum SONAS configuration. There was no solid-state disk or extra tweaking, just a SONAS system that you could order today. That said, the IBM SONAS set a new world record for single-file-system performance at 403,000 IOPS.
Yes, you read that right: 403k IOPS in a single file system. If you look at the other vendors, they have used multiple file systems to aggregate performance in order to achieve a benchmark result, typically with a virtual namespace layered in software over all of the file systems. Here, SONAS is one file system over 900 TB with a true global namespace. One issue with multiple file systems is that data cannot be striped across them, and load balancing becomes a problem. If you look at the comparison of performance per file system, you can see that IBM is WAY beyond the competitors.
So you may be asking, "Yeah, that's pretty cool, but what was the response time?" According to the test, the average response time was 3.23 ms from 0 to 403k IOPS. This is extremely good, and when you consider that it came from one file system of 900 TB, you realize how strong that number is compared to other results. There will be tons of vendors trying to debunk how IBM outperformed them, claiming better software or better market share, but it really boils down to these key points:
An all-spinning SAS disk SONAS configuration, typical of SONAS configurations being installed today
Single file system featuring ease of use, minimum complexity, global load balancing, sharing of resources, proof of scale
903 TB usable capacity is indicative of current real life customer scale out NAS requirements
An environment in which all applications would benefit from the single file system and benefit from the high IOPs and excellent response time
One can clearly correlate the SONAS SPECsfs benchmark results with the response times a real-world application would receive from today's SONAS
I have included the slide deck for the announcement below. Feel free to check out the information on the SPECsfs website.
I have a closet in my house where I keep all kinds of computer gear. Most of it is from some fun project I was working on, or a technology that is past its prime. There is everything from Zip drives to coax terminators to an Ultra Wide SCSI interface for an external CD-ROM. Why do I keep these things in a box in a closet? Great question, and it usually comes up once a year from some family member who sticks their head in there looking for a toy or a coat, or looking to make a point. But on more than one occasion I have had to go to the closet of 'junk' to get something that helped me complete a project: a Cat5 cable for my son's computer, an extra wireless mouse when my other one died. Yes, I could go through it all, sort it out and come up with some nice labels, but that takes time. It's just easier to close the container lid and forget about it until I realize I need something, and it's easy enough to grab it.

Now, this is not a hoarding issue like those you see on TV, where people fill their house, garage, sheds and barns with all kinds of things. The people who show up on TV have taken the 'collecting' business to another level, and some call them 'hoarders'. But if you watch shows like "American Pickers" on the History Channel, you will notice that most of the 'hoarders' know what they have and where: a metadata knowledge of their antiques.

When you look at how businesses are storing their data today, most are looking to keep as much as possible in production. The data is no longer serving a real purpose, but storage admins are too gun-shy to hit the delete button on it for fear of some VMware admin calling up to see why their Windows NT 4 server is not responding. If you have tools that can move data around based on age or last access, then you have made a great leap toward real savings. But these older ILM systems cannot handle the growth of unstructured data in 2017.
Companies want to be able to create a container for the data and not have to worry whether the data is on-prem or off-prem, on disk or on tape. Set it and forget it is the basic rule of thumb. But this becomes difficult due to the nature of data, as it has many different values depending on who you ask. A two-year-old invoice is not as valuable to someone in Engineering as it is to the AR person who is using it to base their next billing cycle on.

One of the better ways to cut through the issue is to have a flexible platform that can move data from expensive flash down to tape and cloud without changing the way people access the data. If the user cannot tell where his data is coming from and does not have to change the way he gets to it, then why not look at putting the cold data on something low-cost like tape and cloud tape? This type of system can be accomplished using the IBM Spectrum Scale platform. The file system has a global namespace across all of the different types of media and can even use the cloud as a place to store data without changing the way the end user accesses it. The file movement is policy based and lets admins skip asking the user whether the data is needed; it can simply be moved to lower-cost storage as it gets older and colder.

The best part is that, because of a new licensing scheme, customers only pay the TB license for data that is on disk and flash. Any data that sits on tape does not contribute to the overall license cost. For example: say you have 500 TB of data, 100 TB of which is less than 30 days old and 400 TB of which is older than 30 days. If it is stored on a Spectrum Scale file system, you only have to pay for the 100 TB being stored on disk, not the 400 TB on tape. This greatly reduces the cost to store data while not taking features away from our customers. For more great information on IBM Spectrum Scale, go to this link and catch up.
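The arithmetic of that licensing example, with a made-up per-TB price purely for illustration, looks like this:

```python
# Sketch of the licensing example above. The per-TB price is a
# hypothetical placeholder; the rule being illustrated is that only
# disk/flash-resident capacity counts toward the license.

def licensed_tb(tiers):
    """tiers: dict of tier name -> TB stored. Tape is excluded from licensing."""
    return sum(tb for tier, tb in tiers.items() if tier != "tape")

data = {"disk_and_flash": 100, "tape": 400}   # 500 TB total
price_per_tb = 120                            # hypothetical list price

print(licensed_tb(data))                      # 100, not 500
print(licensed_tb(data) * price_per_tb)       # 12000
```

In other words, four fifths of the data in the example sits outside the license meter entirely, which is where the cost reduction comes from.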
Here is a quick and simple new way to look at storage: stop buying flash arrays that offer a bunch of bells and whistles. Two main reasons: 1. it increases your $/TB, and 2. it locks you into their platform. Let's dive deeper.
1. If you go out and buy an All Flash Array (AFA) from one of the 50 vendors selling them today, you will likely see there is a wide spectrum, not just in the media (eMLC, MLC, cMLC) but also in features and functionality. These vendors are all scrambling to pack in as many features as possible in order to reach a broader customer base. That said, you the customer will be looking to see which AFA has this or is missing that, and it can become an Excel pivot table from hell to manage. The vendor will start raising the price per TB on those solutions because now you have more features: more usable storage, or better data protection. But the reality is that you are paying the bills for the developers who are coding the new shiny feature in some basement. That added cost is passed down to the customer and does increase your purchase price.
2. The more features you use on a particular AFA, the harder it is to move to another platform when you want a different system. This is what we call 'stickiness'. Vendors want you to use their features more and more, so that when they raise prices or want you to upgrade, it is harder for you to look elsewhere. If you have an outage, or something happens and your boss comes in and says, "I want these <insert vendor name> out of here," are you going to tell him the whole company runs on that and it's going to take 12 to 18 months?
I bet you're thinking, "Well, I need those functions because I have to protect my data," or "I get more storage out of them because I use this function." But what you can do is take those functions away from the media and bring them up into a virtual storage layer above it. This way you can move dumb storage hardware in and out as needed, based more on price and performance than on features and functionality. By moving the higher functionality into the virtual layer, the AFA can be swapped out easily, allowing you to always look at the lowest-priced system based solely on performance.
Now you're thinking about the cost of licenses for this function and that feature in the virtualization layer, and how that is just moving the numbers around, right? Wrong! With IBM Spectrum Virtualize you buy a license for so many TBs, and that license is perpetual. You can move storage in and out of the virtualization layer without having to buy more licenses. For example: you purchase 100 TB of licenses and virtualize a 75 TB Pure system. Your boss comes in and says he needs another 15 TB for a new project coming online next week. You can go out to your vendors, choose a dumb AFA, insert it into the virtual storage layer, and you still get all of the features and functions you had before. Then a few years go by and you want to replace the Pure system with a nice IBM flash system. No problem: with ZERO downtime you can insert the FlashSystem 900 under the virtual layer and migrate the data to the new flash, and the hosts do not have to be touched.
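The perpetual-license bookkeeping in that example can be sketched as follows (a toy model, with the array names and sizes taken from the example above, not from any real configuration):

```python
# Toy model of perpetual capacity licensing under a virtualization layer.
# Arrays come and go underneath; only the total virtualized TB is checked
# against the one-time license.

LICENSED_TB = 100

arrays = {"Pure": 75}                 # day one: 75 of 100 TB in use

def used_tb():
    return sum(arrays.values())

arrays["generic_AFA"] = 15            # new project: 90 TB, no new license needed
assert used_tb() <= LICENSED_TB

arrays["FlashSystem900"] = 75         # replacement array inserted under the layer
# (once the data is migrated off, the old array is removed)
del arrays["Pure"]
assert used_tb() <= LICENSED_TB       # back to 90 TB; license unchanged
print(used_tb())                      # 90
```

The design point the sketch captures: the license tracks capacity, not boxes, so swapping hardware underneath never triggers a repurchase as long as you stay within the licensed TB.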
The cool thing I see with this kind of virtualization layer is the simplicity: no need to know how to program APIs, or to bring in a bunch of consultants for some long, drawn-out study that ends with them telling you to go to 'cloud'. In one way, this technology is creating a private cloud of storage for your data center. But the point here is that not having to buy licenses for features every time you buy a box lowers that $/TB, and it gives you the true freedom to shop the vendors.
Who doesn't like getting something free? IBM is offering customers the chance to grab more flash storage by including one flash drive at no charge with the purchase of two flash drives on new Storwize V7000s or V5000s. There is a maximum of four no-charge drives for the V7000 and two for the V5000. What does this really mean? Since a V7000 can hold up to 24 drives in a controller, you will automatically be getting a 25% free upgrade while paying only 75% of the total purchase cost.
The drive sizes for this deal are the 1.6 TB, 1.9 TB and 3.2 TB eMLC flash drives. The offer is only for new systems, does not apply to upgrades, and runs only through December 26, 2016.
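Under the stated buy-two-get-one terms, the no-charge drive count works out like this (a toy calculation based on the terms described above, not official pricing):

```python
# Toy calculation of the buy-2-get-1 flash drive promotion.
# Per-family caps are from the offer: 4 free drives on a V7000, 2 on a V5000.

FREE_DRIVE_CAP = {"V7000": 4, "V5000": 2}

def free_drives(purchased, system):
    """One free drive per two purchased, up to the per-system cap."""
    return min(purchased // 2, FREE_DRIVE_CAP[system])

print(free_drives(4, "V7000"))   # 2 free drives
print(free_drives(12, "V7000"))  # capped at 4
print(free_drives(6, "V5000"))   # capped at 2
```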
Natural disasters such as earthquakes, floods and hurricanes all have at least one thing in common: they make companies look at their DR plans. The scenario plays out something like this:
CIO texts IT person in charge of keeping the company running "Hey, just checking to make sure we are ok in case this hurricane hits us???? :)"
Reply "Yeah, but we can just move stuff around to the other datacenter and we have most of it in the cloud, we are headed to the bar for hurricanes, come join us!"
Having a DR plan is only as good as the last test. When I was starting my IT career, I had to help put together a DR plan and then go to the offsite location to test it. This was an eye-opening, watershed experience for me, as I learned that not everything you write on paper can be done in the time you actually have. I can still remember thinking we could restore all of the databases and backup libraries from tape in a few hours and be back up and running. My test plan was flawed because I didn't:
a. Understand the business needs (SLAs)
b. Have input from different IT sectors (network, directory services, databases, backups, etc.)
c. Have a plan B, C and D
Now we have the ability to claim our data is safe because it's "in the cloud," and that does take some burden off the IT department, but in reality the onsite infrastructure still has to be able to connect to the cloud. We also replicate everything, which lowers our downtime and keeps things closer to a crash-consistent state. VMware allows us to move servers from one data center to the next, and we are more accustomed to keeping things up all the time.
This always-up scenario still doesn't excuse us from having a DR plan and testing it. If you rely on the software to make sure your business is always up and running, you need to understand the processes it takes to get the software up and running. There is a phrase I have heard people use more and more when describing large downtime problems: "the perfect storm." This is where a process is not understood or taken into consideration when keeping the business running. For me, when I was younger, it was the fact that we needed directory services restored before we could start restoring servers. I didn't understand the primary need to have ALL of the users and passwords for the servers before they were restored.
I hope everyone in the FL/GA/SC area stays safe and have taken all of the necessary precautions for their homes and businesses to stay safe. Good luck and God bless you all.
Do you have a story that you want to tell? IBM is giving you a chance to tell the world how you are making your business, or the industry, better. This year we are focused on four areas: Social, Mobile, Analytics and Cloud. These subjects will be the cornerstone of the conference, and sessions will be selected based on how your business was changed by them.
We want attendees to better understand why it's important to move infrastructure from an afterthought to a strategic, mission-critical choice, and we want presenters to discuss how IBM infrastructure is a unique enabler for growth and innovation. Ideally, a speaker can incorporate how the company's strategic and forward-thinking decisions about infrastructure have directly impacted the enterprise's ability to respond effectively to new opportunities, challenges and the demands of growth and innovation. We look forward to inviting customers to speak and will pay their conference fee as a token of thanks.
If you are interested in speaking and attending Edge, please email me back with some details below and I will get your information into the database for the selection committee.
Please send me your name, company and contact information, along with the specifics you might include in your project/story, including:
What IBM solution components are involved? What is your implementation?
What decision process did you go through, and what was the impact to your business?
Will you be comfortable including some business impact context?
Is there a tie-in with a Cloud, Analytics, Mobile or Social type of workload?
How has IBM technology helped you run the business better?
If you are interested, please let me know by March 12, as the deadline for submissions is next Friday.
IBM announced the enhancement of compressing not only block data on the V7000 but now also file data on the V7000 Unified. The V7000 first gained compression back in the summer, with a big announcement surrounding "Smarter Storage." This optimization uses the same code and engine that was purchased from a company named Storwize a few years ago. IBM initially kept the compression appliance that Storwize was first known for in the market. Using LZ compression with RACE (Random Access Compression Engine), it provides optimized real-time compression without performance degradation, slowing data growth and reducing the amount of storage to be managed, powered and cooled.

RACE compression does not require the compression or decompression of entire files to access a data block. The engine compresses and decompresses only the relevant data blocks "on the fly." As data is written, the RACE engine compresses it into a smaller chunk, and the process is 100% transparent to systems, storage and applications.

The V7000 Unified can now deliver a larger compressed platform than any other midrange platform. With compression percentages around 75%, a system that was maxed out at 2.8 PB (960 drives x 3 TB each) can now handle up to 5 PB of data.

Each V7000 Unified with code base 6.4 has the option of turning on a 45-day trial of the compression software. After setting the license to "45," you can add new compressed volumes on the system. You can also compress data on virtualized storage arrays.

Compression has been part of NAS for a very long time. We have seen compression of files from JPEGs to office documents. But the best part is that the end user will never have to worry about which files need to be zipped or compressed. Everything that comes through the V7000 Unified can be compressed inline before the data is written to disk.

A couple of other improvements IBM announced were the addition of an integrated LDAP server to the V7000 Unified, which now allows customers to use both local authentication and external authentication servers to control access to data, and the ability to upgrade a V7000 to a V7000 Unified in the field. If you currently own a V7000 but need to add file access to the system, IBM will sell you the two file modules and corresponding software to upgrade your system. Mind you, there is a list of requirements that will need to be met, so check with your local storage engineer for more information. And finally, we now have support for a 4-way cluster on the V7000 Unified, which allows more disks to be provisioned and can compete with some of the other midrange storage platforms in the market.

This all together makes a nice round of improvements that will make life easier for IBM customers. As the V7000 platform matures, it looks like IBM is putting their money where their mouth is and making storage smarter and more efficient. More to come on this platform, as I suspect we will see bigger things down the road.
Do you expect more out of your storage? IBM thinks you should and is putting their money where their mouth is. In the past it has gone under different names like STG University and Storage Symposium, but now IBM has revamped its premier storage conference. The big announcement came today with much fanfare that included a new website, some videos and a bunch of hype on Twitter. A three-part conference for executives, gear heads and business partners, there is something for everyone. But what will be different than years past? I think IBM looked at how other vendors use conferences to help pump up their customer base (VMworld, EMCwhatever) and decided to put some hype in the conference. Think of this as a great place to go and network, learn and have a good time. The conference will be in Orlando, and there will be tons of time to sit in classrooms and learn about the latest technologies, but there will also be sessions where IBM will be pulling in our top execs and analysts to tell you where IBM is going in the storage world.

Executive Edge will feature different speakers, from Jeff Jonas to Aviad Offer and IT finance expert Calvin Braunstein. This track will take executives through new announcements, deep dives on technical platforms, one-on-one sessions with IBM execs and some great entertainment. This is a new feature of the conference, as in the past it was more geared towards the technical teams.

Of course, Executive Edge will be limited, so talk to your local storage salesperson to get a chance to be a part of this special event. There will be time to bring in your team and have special sessions and round tables with the IBM engineers who can help you find your way down this path of crazy storage growth. And there is a golf course on site, which I have heard is very nice. Bring your clubs or rent them; I am sure there will be plenty of us out there, so find a partner and have a good time.

More importantly, IBM is making the effort to step up the event and have it on par with the other IBM conferences like Pulse. The technical portion will have over 250 sessions on storage-related topics. You will also get roadmap information from the product teams, as well as a chance to become a certified technician. One area that has been expanding is our hands-on labs, and this year we will have the biggest one yet. You will be able to come into the labs, actually see our storage systems and have a chance to 'test drive' them.

Early bird registration is open now and you can sign up today. The conference will be in sunny Orlando, Florida at the Waldorf Astoria and Hilton Orlando at Bonnet Creek. The event starts on June 4th and runs to the 8th. You can follow the conference on Twitter @IBMEdge and use the hashtag #ibmedge. For the conference website, go here.
My father is a retired teacher but loves to work with his hands. I can remember, very early in my upbringing, him teaching me that it is good to measure twice and cut once. Whether it was building a deck or just a birdhouse, the point was that it took more time to cut something wrong and then have to re-cut the board shorter, or even waste the old board and cut a whole new one.
When I was preparing this article, I remembered having to learn that lesson the hard way, and how much effort really goes into that second cut. The problem in the storage industry is misaligned partitions resulting from the move from 512-byte sectors to the new 4096-byte sectors. This has to be one of the bigger performance issues with virtualized systems.

Disk drives in the past were limited to 512-byte sectors. This was OK when you had a 315 MB drive, because the number of 512-byte blocks was not nearly as large as in a 3 TB drive in today's systems. Newer versions of Windows and Linux will transfer 4096-byte data blocks that match the native hard disk drive sector size. But during migrations, even new systems can have an issue.
There is also something called 512 byte sector emulation, where a 4K sector on the hard disk is remapped to eight 512 byte sectors. Each read and write is done in groups of eight 512 byte sectors.
When the older OS is created or migrated, it may or may not align the first block in the eight-block group with the beginning of the 4K sector. This causes a misalignment of one block. As reads and writes are laid down on the disk, the misalignment of the logical sectors from the physical sectors means the eight 512 byte blocks now occupy two physical 4K sectors. This forces the disk to perform an additional read and/or write across two physical 4K sectors. It has been documented that sector misalignment can cause a reduction in write performance of at least 30% for a 7200 RPM hard drive.
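The two-sectors-for-one effect can be sketched in a few lines of Python (my own illustration, not any vendor's tool): count how many physical 4K sectors a 4 KiB write touches, depending on where it starts.

```python
LOGICAL = 512      # logical (emulated) sector size in bytes
PHYSICAL = 4096    # physical sector size in bytes

def physical_sectors_touched(start_lba, logical_sectors):
    """Count the 4K physical sectors spanned by a run of
    512-byte logical sectors starting at the given LBA."""
    start_byte = start_lba * LOGICAL
    end_byte = start_byte + logical_sectors * LOGICAL
    first_phys = start_byte // PHYSICAL
    last_phys = (end_byte - 1) // PHYSICAL
    return last_phys - first_phys + 1

# An aligned 4 KiB write (8 logical sectors at LBA 0) touches
# exactly one physical sector...
print(physical_sectors_touched(0, 8))   # 1

# ...but the same write starting at LBA 63 (the classic DOS-era
# partition offset) straddles two physical sectors, forcing the
# extra read and/or write.
print(physical_sectors_touched(63, 8))  # 2
```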
This issue is only magnified when adding other file systems on top of this misalignment. When using a hypervisor like VMware or Hyper-V, the virtual image can be misaligned and cause even further performance degradation.
There are hundreds of articles and blogs written on how to check your disk alignment. A simple Google search for the words "disk sector alignment" will show you this has been a very popular topic. Different applications will have different ways of checking and possibly realigning the sectors.
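At its core, the check those tools perform is simple arithmetic: a partition is 4K-aligned when its starting byte offset is evenly divisible by 4096. A minimal sketch (the sysfs path in the comment is just a common Linux convention, shown as one possible input source):

```python
def is_4k_aligned(start_sector, sector_size=512):
    """True if the partition's first byte lands on a 4K boundary."""
    return (start_sector * sector_size) % 4096 == 0

# Classic DOS-era partitions started at sector 63 -- misaligned.
print(is_4k_aligned(63))    # False

# Modern installers start partitions at sector 2048 (1 MiB) -- aligned.
print(is_4k_aligned(2048))  # True

# On Linux, the real starting sector could come from sysfs, e.g.:
#   start = int(open('/sys/block/sda/sda1/start').read())
#   print(is_4k_aligned(start))
```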
One application that can help you identify and fix these is the Paragon Alignment Tool. This tool is easy to use and will automatically determine if a drive's partitions are misaligned. If there is misalignment, the utility then properly realigns the existing partitions, including boot partitions, to the 4K sector boundaries.
I came across this tool when looking for something to help N series customers who have misalignment issues in virtual systems. One of the biggest advantages I saw was that this tool can align partitions while the OS is running and does not require snapshots to be removed. It can also align multiple VMDKs within a single virtual machine.
In the end, your alignment will affect how much disk space you have, how much you can dedupe, and the overall performance of your storage system. It pays to check this before you start having issues, and if you are already seeing problems, I hope this can help.
Now available is the IBM System Storage N series with VMware vSphere 4.1 Redbook. Redbooks are a great way of learning a new technology or a reference for configuration. I have used them for years, not just in storage but for X series servers and for software like TSM. The people who write the books spend a great deal of time putting them together, and I believe most of them are written by volunteers.
This is the third edition of this Redbook, and if you have read it before, here are some of the changes:
- Latest N series model and feature information
- Updated to reflect VMware vSphere 4.1
- Added information for Virtual Storage Console 2.x
This book on N series and VMware goes through an introduction of both the N series systems and VMware vSphere. There are sections on installing the systems, deploying the LUNs, and recovery. After going through this Redbook, you will have a better understanding of a complete and protected VMware system. If you need help with how to size your hardware, there is a section for you. If you are looking to test how to run VMs over NFS, it's in there too!
One of the biggest issues with virtual systems is making sure you have proper alignment between the system blocks and the storage array. Misalignment will negatively impact the system by a factor of two on most random reads/writes, as two blocks will be required for one request. To avoid this costly mistake, or to correct VMs you have already set up, a section in the book called Partition Alignment walks you through the entire process of setting the alignment correctly or fixing older systems.
Another area I will point out is the use of deduplication, compression, and cloning to drive the efficiency of the storage higher. These software features allow customers to store more systems on the storage array than if they used traditional hard drives alone. There is also coverage of how to use snapshots for cloning, mirrors for Site Recovery Manager, and long-term storage, aka SnapVault. At the end of the book are some examples of scripts one might use for snapshots in hot backup modes.
Whether you are a seasoned veteran or a newbie to the VMware scene, this is a great guide that will help you from start to finish in setting up your vSphere environment. The information is there; use the search feature or sit down on a Friday with a highlighter, whichever fits your style, and learn a little about using an N series system.
I just read the blogs from Chris Mellor of the Register and Tom Trainer of Network Computing, and thought about how insightful these two outsiders are about the inner workings of IBM.
First off, yes, IBM is no longer selling the DCS9900, a rebranded DDN OEM system in the very large IBM storage portfolio. There is no question that this product is no longer available after the October 15 date.
Second, the DCS3700 is already part of our portfolio and is an OEM box from NetApp/Engenio/LSI. The density of this system is the same as the DCS9900, and it makes sense to use the DCS3700 as a replacement for the DCS9900.
Third, Tom's blog calling SONAS a monolithic NAS storage system is very skewed. SONAS is very flexible in the way we can scale both storage and throughput without affecting the other variable. With most "scale out" systems, you have to scale both in order to keep up with demand. SONAS uses some of the best technology on the market, with a huge amount of throughput.
His statement about IBM dropping DDN from SONAS is untrue and goes to show how much research Tom put into writing this blog. I am sure Tom is looking to write an unbiased blog for Network Computing, but maybe those days at HDS still have a big influence on his ability to look at an announcement letter and make extrapolations about other products.
Finally, if HDS thought BlueArc was so great, why didn't they buy them back when they could have gotten the company for a better deal? Has the product changed THAT much since 2006? I wish HDS only the best in dealing with the transition and getting that product under the HDS umbrella.
If you do your homework and base your assumptions on facts instead of conjecture, you will find SONAS is a solid platform in the enterprise NAS market. SONAS has proven it can be the market leader with a low cost-to-performance ratio, and it will only get better as time goes on.
Labor Day has come and gone, and there are no more holidays between now and Thanksgiving. This is only softened by the hope that your favorite football team (both American football and what we call soccer) has a great weekend match and you get to celebrate with the beverage of your choice.
During your work week, which can and sometimes does include weekends, all you hear is that there is no more money to do the things you have to do to keep the business running. If you have kept up with squeezing more out of your systems with virtualization, that's great, but your network is now overtaxed. The staff that used to take care of certain aspects of the day-to-day running of your data center have been let go, and their jobs have been 'given' to you with no thought of compensating you for the extra tasks.
The Earth is warming, the weather is out of control, and the price of gas is so high that you decide to bike to work to help save the planet. You spend more time on the road commuting and look like you need a shower when you get to work after dodging traffic all morning. Your coffee is priced higher now because the coffee house wants to use Fair Trade coffee from farmers in a country you have never been to. And your dog is on anti-depressant meds because you are not home as much and he can't go out in the yard because of the killer bees migrating north from Mexico.
Our lives seem to be getting more complicated, and it's nice when we find things that not only help us but are easy to use. When you come across these items, they make such an impression that you like to tell others about your great fortune. I came by a solution that was very easy to use, and the value was so great that at first I didn't believe the whole story.
About a year ago, I was asked to help out on the Storwize/Real-time Compression (RTC) team as it transitioned into the IBM portfolio. I met with the engineers and sales people, and all had wonderful things to say about the technology. I listened but was hesitant to drink all of the Kool-Aid they were pouring.
A year later, I am very much a believer in the RTC technology and think it really could be a game changer in the market. If you keep up with IDC, Gartner, and the other analysts, they all point to compression of data as one of the larger items for handling future growth. There are a lot of vendors that claim they can compress data, but it's not all done the same way.
One of the things that stood out from day one is the idea of using LZ compression in real time to compress data instead of deduplication. Coming from an N series (NetApp) background, I understood how deduplication works and where it is useful. But this was compression, which is a different ball game. Now we are able to shrink the storage footprint of data that isn't exactly the same as other data, which deduplication cannot help with. Given that NetApp has issues with block size and offsets, this is exactly what is needed in the market.
The next question I always get, and one I had too, was: "That's great, you can compress data with the best of them, but what's the overhead?" I waited a long time to see what the performance numbers were going to be and found an astonishing outcome. The RTC appliance made a performance improvement on the overall solution. It helps by adding cache and processing power to the serving of data, but it also improves the performance of the system because there is less data to process.
For example, if a system has to save 100 GB of data with no compression, then all of the data has to be laid out on the disk; the spinning disks, cache, CPUs, and I/O ports all have to work harder to save that 100 GB. But if we get 2:1 or 3:1 compression ratios, then all of the components have to work less. No longer are they working to save 100 GB of data, but 50 GB or 33 GB of data. This allows the system to process more data and have cycles to respond more quickly to I/O requests (i.e., lower latency).
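A rough back-of-the-envelope sketch of this, using Python's zlib as a stand-in (RTC uses its own real-time LZ engine, so real ratios will differ; the repetitive sample data below also compresses far better than a typical workload would):

```python
import zlib

# Repetitive sample data standing in for a compressible workload.
payload = b"customer-record,2011-09-05,ACME Corp,status=OK;" * 10000

raw = len(payload)                    # bytes every component must move, uncompressed
stored = len(zlib.compress(payload))  # bytes actually written after compression
ratio = raw / stored

print(f"raw: {raw} bytes, stored: {stored} bytes, ratio {ratio:.0f}:1")
# At 2:1 the disks, cache, CPUs, and I/O ports move half the bytes;
# at 3:1, a third -- which is where the latency win comes from.
```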
So the final thing is always the question of how hard this is to install. Is there a long waiting period, or do you need five IBM technicians to install it? All I have to say is it's easy. So easy that there is a good YouTube video that goes through the entire process, from unpacking to racking to compressing data. I think the video speaks for itself.
So if you are back at work today and find your life swirling around you like a hurricane, stop and be reassured there are a few things out there that can still make your life a little easier. It doesn't make the killer bees go away, but maybe it will give you peace of mind that your storage doesn't run out in the meantime.
Last week at the IBM Technical Conference, I was able to spend some time with a couple of friends discussing technology. It is always interesting to hear their take on where the storage market is going and what lies ahead in the future. As my NetApp pal and I were chatting about the messaging around unified architecture, we both noted that unified from one perspective is disjointed from another.
IBM and NetApp have been using the term unified for their NAS/SAN devices for about 5 years now. The idea is to share a common code base on the same hardware to increase the functionality and usability of that storage. Other vendors have gone similar routes using multiple code bases and/or hardware, but I see that as a NAS gateway in front of a SAN storage system.
This has been very successful in data centers both large and small. But the idea of how we manage storage is changing. Virtualization is changing how and even where our data may be stored. The term cloud is something of a marketing term; I like the term Storage Utility better. Utility companies such as electric, water, sewer, and even cable provide a product to their consumers, and storage utility vendors could do the same.
Most people are not concerned about the process companies use to make water drinkable or how electricity is generated, as long as it is safe, reliable, and easy for them to consume. Storage as a Utility is no different; it is only when the storage is offline or hacked by outsiders that consumers are concerned. There are laws that govern utilities, and the FTC has put some privacy rules together to help consumers, but I believe we can take it a little further (a blog for another time).
As our data moves from traditional spinning drives in our data centers to a storage utility, we will need some type of bridge to ease the pain of the transition. The main reason people do not adopt new technology is that the transition is often too painful and the benefit of the new technology is less than the cost of the move. Whether it is a software package that helps move data or a hardware device, it will have to give access to both file-based data and object-based data. This will allow users to read files as needed, no matter what their connectivity or location. It could also be used to help drive efficiencies up by allowing data to move from file-based (high cost) to object-based (lower cost) environments.
Today there are some vendors who have early versions of this
type of unified solution. They are bridging
the gap between what we have today in private data centers and the future of
public utility storage. This is very
early in the transition but with this type of technology, we will be able to
adapt and provide a better way of storing data. Will it still be called a unified solution?
Only the marketing people can tell us that.
Every year IBM puts on a conference for all of our clients, business partners, and strategic partners. This conference has both Storage and X series sessions, along with keynote speakers from top management at IBM. People come from all over the world to this conference looking for the 'how to' answers and what's to come with the product lines. There is also a solution center that houses all of the products along with our sponsors. This year our top platinum sponsors are Cisco, Intel, and NetApp. Other sponsors include Brocade, Emulex, Fusion-io, VMware, Red Hat, and SUSE.
I plan to be working in the solution center at the SONAS booth, talking to clients about the benefits of SONAS and how it fits into their environments. If you want to stop by, here are the hours I will be there:
Monday, July 18th: Solution Center Open 5:30 PM – 7:30 PM (w/ Networking Drinks)
Tuesday, July 19th: Sponsor/Exhibitor Only Lunch 11:15 AM – 11:45 AM; Solution Center Open 11:45 AM – 1:00 PM (Coffee & Dessert served in the Solution Center); Solution Center Open 5:30 PM – 7:30 PM (w/ Networking Drinks)
Wednesday, July 20th: Sponsor/Exhibitor Only Lunch 11:15 AM – 11:45 AM; Solution Center Open 11:45 AM – 1:00 PM (Coffee & Dessert served in the Solution Center)
I also will be presenting a few sessions on NAS technology here at the conference. Most of my sessions will be a look at what IBM is doing with SONAS, N series, and Real-time Compression. I have a NAS 101 class that I really love doing because there are so many people who have a misconception of what NAS is today. In my N series update session, we will be talking about the latest release of the N6270 and the EXP 3500, as well as a peek at the R23 release coming in a few weeks. The other two sessions I am doing are a little off the topic of NAS, covering social media and using www.ibm.com for help.
Tony Pearson, John Sing, and Ian Wright will be joining me on a panel to discuss the roles we play in social media and what each of us thinks of the future of social media. The support session is something a client suggested to me out of their frustration with how to find documents on our support pages. Here is a list of sessions and times that I will be presenting:
7/18 - 1:00 sSN14 Storage Networking (NAS - SAN) NAS 101: An Introduction to Network Attached Storage
7/19 - 10:30 sSN15 Storage Networking (NAS - SAN) NAS @ IBM
7/19 - 1:00 sSN18 Storage Networking (NAS - SAN) IBM N series: What's New?
7/20 - 1:00 sGE10 General Tips and Tricks on Searching for Support Answers on ibm.com
7/20 - 5:30 sGE61 General Using Social Media in System Storage
7/21 - 10:30 sSN18 Storage Networking (NAS - SAN) IBM N series: What's New?
7/21 - 2:30 sSN15 Storage Networking (NAS - SAN) NAS @ IBM
If you are at the conference, feel free to come to any of my sessions; I would love to hear from you about the IBMNAS blog or any of my social media outlets. We are using a hashtag for the conference all week if you want to follow what is going on via Twitter.
May 9th has been a target on my calendar for some time now. Inside IBM, we have been waiting for this day to come so we could talk about the new things being released in the storage platform. It almost feels like Christmas morning with a bunch of new presents under the tree. Each gift has something inside that is either really cool or very useful. The only difference is that your Aunt Matilda and her little dog are not coming over for brunch.
Under the IBM tree today is a slew of presents for almost the
entire storage platform. I will
concentrate on just the IBM NAS ones but if you are interested in knowing what
is going on elsewhere, you can find more information at the main website.
SONAS must have been a good boy because there are plenty of
gifts for him under the tree this morning. Not only did he find presents under the tree
but there were a few little things in his stocking. Here is what Santa brought:
- A hardware update on the X3650 nodes. Just like before, the SONAS system uses this impressive workhorse, but now it uses the more powerful M3 class with a six-core 2.66 GHz Intel Xeon processor. It has 24 GB of DDR3 RAM, with the option to increase to a total of 144 GB of DDR3 RAM per interface node. Also new with the X3650 is the option to include a second processor, doubling the number of cores to 12 per node.
- New support for not only XIV: SONAS now supports the SVC and V7000 as disk subsystems. This is a huge gift, because SONAS can now support tons of other storage under the awesome virtualization of the SVC code. V7000 support is also interesting, as that platform has the virtualization code from SVC but also supports its own drive architecture, including solid state drives.
- In the same category as sweaters, SONAS gets a smaller rack extender. In the past, IBM has used a 16 inch extender in order to accommodate the large 60-drive disk enclosure. That has now been trimmed down to only 8 inches, and to zero for the gateway model and the RXC rack that houses only interface nodes.
- A new file system upgrade to GPFS 3.4 PTF4. This will provide a significant performance improvement over the R1.1.1x release. The updated file system handles small-file and random I/O a lot more efficiently. With this update we now use the role of manager nodes instead of interface nodes, gaining more flexibility in how we track data in cache.
- Other gifts SONAS received were new support for NDMP, anti-virus support, use of both 10GbE ports on the same CNA, and some power updates for the EU countries. Along with all of that, there is a new performance monitoring package called Perfcol that collects more information for analysis.
This SONAS release is labeled R1.2 and can be obtained by
contacting the technical advisor assigned to you.
Santa was also at the N series house and dropped off a few gifts: a new N6270 to replace the N6070. This new system is in line with the N6200 series, with larger amounts of RAM and more processors. Just like the smaller N6240, there is an expansion controller where customers can add more PCI control cards like HBAs, 10GbE, or even FCoE. A new disk shelf was also released, which uses the smaller 2.5 inch drives with improved back-end performance.
And over at the Real Time Compression house they got new
support for EMC Celerra.
Overall, a very busy time of year for IBM (and Santa), as these were just a fraction of the storage announcements today. Also today is the IBM Storage Executive Summit in New York City. My friend and fellow blogger Tony Pearson is covering this great event and will be updating his blog and Twitter feed. If you were not able to make it to NYC for the event, feel free to tweet him your questions @az990tony. You can also send questions to our IBM Storage feed at @ibmstorage.
I had the pleasure of presenting at the IBM Technical Conference (aka STG-U) this past week. I was asked to speak about NAS technology basics and how the world is moving to more and more NAS platforms. Typically I get to present on some type of product: SONAS, N series, and the like. This was very different, as I got the chance to go deeper into the technology without talking too much about products. The session name I used was NAS 101: An Introduction to NAS Technology. The idea was to help educate our technical teams about the history of NAS, how NAS works, some pitfalls, and then NAS at IBM.
There is so much surrounding NAS, and boiling all of it down to a 1 hour 15 minute presentation is pretty difficult. The other challenge is keeping the information relevant to the amount of knowledge everyone in the session has. I had everyone from very skilled storage engineers to people just getting into the business. I hope the information I presented was relevant at all levels.
I wanted to post my slide deck here, so if you have a need, or want me to come and help teach what NAS is all about, feel free to contact me.
This week, I am at SNW in San Jose, CA. If you have never heard of the conference, it's all about storage and networking and pulls in all of the big vendors to put on labs, lectures, and a vendor hall. People come from all over the world to this event to learn what is new and how to do it.
One thing that I love doing at these events is talking to customers and potential customers about IBM storage technology solutions. Often we find the conversations are not about products as much as the technology in them that fixes some sort of issue in the data center. I think this is best seen when you come into the IBM booth. There is no hardware with blinking lights to see or cables to yank. We have something better: people who know the solutions to your issues.
If you ask any of the IBMers who work these events, they always say it's a love-hate relationship. The hours are long and you stand on your feet for 4-8 hours. The best part is talking about IBM solutions and finding out what people are doing in the field. This is the best way to help drive innovation: listening to the customer. IBM has programs that send our developers into the field to listen to customers, and this is just one example of that.
Another event at SNW this year was a gathering of the storage social media moguls. This is a non-vendor-specific event and is open to everyone. It is associated with the hashtag #storagebeers, and these gatherings have been going on all over the world. Last night was the largest storagebeers to date, and it was a who's who of this community. But better than meeting the people you see on Twitter, or those who write blogs, was the idea of putting all of the vendor fighting behind us and just being a group of people who work in the storage industry, talking about whatever was on their minds. If you find yourself at an event like SNW or VMworld, check to see if there is a #storagebeers and go meet some really cool people.
If you are at SNW and want to come by for a chat, you will find me at the IBM Booth today between 11am and 3pm. I would love to spend some time learning about what you are doing in the data center.
When I first started working at IBM, we had a couple of NAS storage devices: the NAS 100, NAS 300(G), and the NAS 500. The NAS 100 was a 1U server appliance that used Windows 2000, and so did the NAS 200 device, all built on IBM hardware. The NAS 500 was built on an AIX system, also from IBM stock. They were traditional NAS-type systems, and IBM sold them as "let us build the system for you so you don't have to." Somewhat limited in functionality, but they did the job they were designed to do: serve NAS data.
That same year, IBM decided to partner with a company that was doing some things in the storage market that looked really interesting. Network Appliance had just started gaining steam with their Data ONTAP code (6.something, if I remember correctly) and had broken a barrier with something IBM systems lacked: unified protocols from a single architecture, and integration into other products like Exchange and SQL using their cool snapshot technology. It took some time to get up to speed on the new NetApp technology, with snap-this and snap-that, but soon we were all talking about waffles and aggrs.
Throughout the years, the product set grew and so did the hardware offering. We kept up with the releases, and for the most part a 20-60 day lag in the release of new software was OK for most IBM customers. We partnered with the sales teams and support teams to help grow the N series customer base and to keep them happy. As with any partnership, there are bumps along the way, and there seemed to be two parents telling each other they agree to disagree. All in all, the N series system has been very successful at IBM.
But as the years progressed, new technology like XIV, Real-time Compression, TSM FlashCopy Manager, etc., filled some of the voids previously filled by N series in the IBM portfolio. As with many companies, there are products that overlap, and N series overlaps over half of the product line at IBM Storage. Positioning became harder as sales teams questioned when to sell N series and when to sell something "blue". We quickly learned that customers really liked what N series brought to the table and how flexible the solution could be.
Now, with the news of NetApp purchasing Engenio, I wonder how the relationship between IBM and NetApp will survive. IBM also rebrands the Engenio products as the IBM DS 3k, 4k, and 5k. I guess the bigger question is: what will NetApp do with that product line? If history is any indicator, they will simply keep things as they are for some time and slowly move the customers over to a Data ONTAP product. The other question is how long IBM will keep sending money over to NetApp for products that we sell and support.
If you haven't heard (get out from under that rock), IBM is turning 100 this year, and the company is having an awesome time celebrating our longevity. From technical advances and the Apollo program to blazing trails through race and gender equality, IBM has done, and IS doing, the job for all of the world. The company has changed in so many ways and has had to adapt in ways only IBMers can, but we have survived and thrived.
Find more information about our centennial celebration here.
Here is a great 100 second video of all the cool and great things IBM has done over the last 100 years.
How does one judge a glass of wine? There are a few tests; how it looks, smells, and tastes are the basic three. But as the wine is poured, you may or may not know that your wine is made up of different varieties of grapes. A producer sits down and experiments with different percentages of grapes, and this allows some creativity in making a better glass of wine for the consumer. Of course there are many more factors that play into this process, but it's by and large the same no matter what wine you enjoy. You enjoy the wine as a whole: a combination of things put together for you, without you having to know or even understand all that went into making that glass of wine.
When we talk to clients about their data backup strategy, we find a process very similar to that of wine making. The end user rarely knows all that goes into creating a backup of their data and protecting it for them. They just enjoy the knowledge that their data is safe and will be there if they need to access it. But what we see in the making of the backup is a blend of technologies and a creative element that allows administrators to work around constraints like budget and manpower.
As data evolves, we are seeing multiple layers of protection, and the criticality of the data determines the recovery point and recovery time as well as the retention period. Backup technologies usually mean more than doing a bunch of incrementals and then a full off to disk pools and then tape; there are many different levels of protection we can use. Snapshots seem to be more common today than 5 years ago. They allow for a clean and consistent recovery point of a database or file system. But snapshots are used for more than just a quick backup: with writable copies we can quickly set up copies for test and dev environments and also rapidly deploy virtual images for desktops or servers. Snapshots usually sit on the same disk set the data is sitting on, and can be moved around via a vault technology or a mirror to another site. This can be used for long-term storage if needed, but typically snapshots are used for quick recoveries of less than 7 days. Snapshots are also vulnerable to data corruption: if a software bug comes in and corrupts data on the storage system, that can affect the snapshots and mirrors.
Backups are more traditional: the file system is scanned for changes, and then those changes are sent off to a device where the data is stored until needed. In the past it has taken more time to back up file systems, and as storage has gotten larger, those backup times have grown longer. The technology has tried to keep up by adding larger backup servers and more tape drives, allowing for more streams coming in. Now, with the idea of using spinning disk for tape pools, we can back up a little quicker, as disk can write data faster than tape. Many things have evolved out of this technology, for example the Long Term File System and Hierarchical Storage Management.
When clients are looking for strategies for protecting their data, they will use a combination of these techniques, and a mixture of both disk and tape, to fully protect their environment. Depending on the data type, you may want to just use snapshots, as the data changes rapidly and you do not need to restore from a week or a year ago. Snapshots are really useful in this case, and so is mirroring, or even metro mirroring if the RTO is small enough. There are other factors, such as Sarbanes-Oxley, that will require longer-term recovery methods like backups.
Just like a great wine, there are fewer rules today and room for creativity in designing data protection. And just like wine, there are many consultants who will help you find a good balance of technology to match levels of protection with data. Spend the time looking at your protection schemes and see if there are any better ways of balancing this equation. Maybe, with the right planning, you will be able to enjoy a glass of wine instead of spending time recovering from a disaster.
I was driving into the IBM Almaden Research Center and just enjoying the beautiful scenery of the San Jose area. The campus is on top of a hill and surrounded by farm lands. I would really like to have a corner office here, but I don't think I would get much done. So here is my Vlog for this morning and I am hoping to get some interviews on here from some of the presenters and attendees.
I am headed out west to learn more about SONAS and the future of the product. I think there will be lots of good information that I will try to share with you. If you have any question that you want to ask a SONAS developer, let me know as they will all be there!
Today IBM is releasing two new N6200 systems that will be a huge improvement over the existing N6000 systems. The two new systems show a bump in capacity and performance and more flexibility. For a very crowded midrange market, this new N series product set will bridge the gap between entry-level and enterprise-class systems.
One of the biggest issues with the previous 6000 systems was the limited number of PCI-e slots. The other issue was the lack of more common hardware onboard, like SAS and 10 Gbps Ethernet.
The first thing that stands out to me is the footprint of the new system. The older N6000 has a 6U requirement for an HA pair. The new N6200 is half the size, occupying only 3U for an HA pair, or for a single node with an I/O expansion module providing an additional four PCIe slots. Another configuration is two controllers with two expansion modules in a total of 6U (equal to the older N6000 systems) but with a total of 12 PCIe slots (vs. 8 on the older N6000).
We will recommend using the two slots built into the controller for high-performance 10GbE and/or 8 Gb FC adapters. The additional slots in the expansion module can be used for Flash Cache and other disk connectivity.
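A quick back-of-the-envelope on the footprint gain; this is just a sketch, with slot and rack-unit counts taken from the post and my assumption of two onboard PCIe slots per N6200 controller:

```python
# PCIe slot density: older N6000 HA pair vs. the N6200 configurations above.
# Two onboard slots per N6200 controller is an assumption, not a spec sheet fact.
configs = {
    "N6000 HA pair": {"rack_u": 6, "pcie_slots": 8},
    "N6200 HA pair": {"rack_u": 3, "pcie_slots": 4},
    "N6200 HA pair + 2 expansion modules": {"rack_u": 6, "pcie_slots": 12},
}

for name, c in configs.items():
    print(f"{name}: {c['pcie_slots']} slots in {c['rack_u']}U "
          f"({c['pcie_slots'] / c['rack_u']:.1f} slots/U)")
```

Same 6U as the old pair in the fully expanded configuration, but half again as many slots.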
The on-board hardware is getting a face lift as well. While the new system sports a 10GbE port, it is used mainly for the interconnect and nothing else. This was one of the disappointments I have with this system, but I understand this is how Netapp will accomplish scale-out clustering.
FC ports were kept at 4 Gbps, but there are two new SAS ports with matching ACP (alternate control path) ports used for offloading some of the traffic from the data path.
One of the unsung updates was to the NVRAM. Instead of the dedicated memory used in the past, we now see an improvement via something called Asynchronous DRAM Refresh (ADR). This is a new self-refresh mode in the Intel chipset that allows a portion of the main memory to be backed by an on-board battery. This gives the NVRAM the same high bandwidth as main memory and also simplifies the design of the motherboard.
This gives the new N6200 systems a bump in performance along with the introduction of the new Intel processors. The SPECsfs benchmark on the highest N6200 system showed 101,183 ops at 1.66ms ORT compared to the N6060 showing 60,507 ops and 1.58ms ORT, an improvement of about 70% in SFS throughput.
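For anyone checking the math on those SPECsfs numbers, a quick sketch:

```python
# Relative SFS throughput gain from the benchmark numbers quoted above.
n6060_ops = 60_507   # older N6060, 1.58 ms ORT
n6200_ops = 101_183  # top N6200 config, 1.66 ms ORT

gain = (n6200_ops - n6060_ops) / n6060_ops
print(f"Throughput gain: {gain:.0%}")
```

Strictly speaking it works out to about 67%, which is where the "about 70%" figure comes from.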
IBM is introducing the IBM System Storage N6210 and N6240 series. These new systems replace the IBM N3600 and N6040 series respectively. GA is scheduled for January 28, 2011 (N6240) and February 25, 2011 (N6210). Here is the slide deck published with the release.
Netapp, for some reason, has removed the SVC from the V series interoperability list of storage subsystems. The development team at Netapp has for months not kept up the development and testing for SVC support (and other storage platforms). This was never more evident than when the Storwize V7000, which runs the same code base as the SVC, was announced last year and Netapp refused to offer any support for the product. The lack of support probably comes from the V series team feeling threatened by the virtualization power of the SVC code. These two systems do have some similar capabilities, but we find them in different parts of the data center. The V series / Gateway acts more as a host to another storage system: it treats the LUNs presented to it as disks and then presents another protocol out to other hosts or clients. SVC is more a virtualization engine for all the storage, allowing customers to move data around in pools that can cross storage subsystems without the end user knowing.
With all this said, IBM has stepped up and is continuing support for N series and Netapp models in front of the SVC or the Storwize V7000. As my fellow IBM blogger "The Storage Buddhist" points out, the place for support is not Netapp, but IBM. I stole this chart from his blog to show the levels of code and models supported.
There is an ancient proverb that says, "When you have only two pennies left in the world, buy a loaf of bread with one, and a lily with the other." There is some wisdom in this old saying that we can still apply to today's IT budget and strategy. If you have been keeping up with the news, you know companies are starting to invest again in their IT hardware and software. This may be the turning point in some of the hardest times in the hardware business. But what are customers really buying and planning to buy with their dollars? What is my bread and what is my lily today?

The bread represents nourishment of the body. We have to eat in order to keep going. Without it, we starve and eventually die. This is the basic part of a business IT strategy: what do you have to do to keep the lights on? I have this conversation with IT planners all the time. People love to do the newest and greatest, but have a smaller understanding of, or take for granted, the things they have to do to keep the business going.

The lily is a beautiful and majestic flower. Dating as far back as 1580 B.C., when images of lilies were discovered in a villa in Crete, these majestic flowers have long held a role in ancient mythology. Derived from the Greek word "leiron" (generally assumed to refer to the white Madonna lily), the lily was so revered by the Greeks that they believed it sprouted from the milk of Hera, the queen of the gods.
The storage market is evolving with the help of cloud storage, unified platforms and consolidation. IT planners and CIOs are dealing with a new way of putting value to these terms and offering their business units a chargeback model based not only on data consumption but on throughput and retention. The smarter businesses are seeing that running multiple storage platforms with trapped efficiency does not work in today's data center. Storage has to be big, wide and easy to use. Long gone are the days when 10-25 TB was a big deal. We now see systems that start at those levels and scale to seemingly infinite proportions. Networks are becoming faster and even converged, with 10/20 Gbps driving protocols like FCoE and iSCSI. Backups are being replaced by better replication algorithms that have quality-of-service levels and automated failover.
NAS storage can take advantage of these technologies, which can also help you keep the lights on. Most businesses have some form of NAS storage to help employees share documents, spreadsheets, images, and whatnot. There is a movement from traditional block-based systems to unstructured data sets using NAS, and this is pushing the market and vendors to come up with better NAS products. Companies like Amazon, Facebook and Twitter all push vendors to think about how they do storage.
So how are you planning your IT spending? Are you going to spend more on things that you have to have, or will you spend more on the things that look nice? I suspect in most cases there will be an 80/20 split of bread-to-lily ratio. But how you classify what is needed and what is 'nice to have' in your IT department will change as your business changes this year. Businesses are putting more demand on IT with fewer resources. Even though there is evidence businesses are spending more on hardware recently, the resources (admins) are still not there. The only way companies will be able to achieve success with such a high demand on storage without the resources is to have simple, scalable storage that allows single admins to manage multiple petabytes of storage.
IBM is working to help customers achieve this type of new IT department. Cloud is one way, either public or even private, but so is the basic system level. Less complicated interfaces like those of the V7000 or XiV allow admins to move easily without much training. SONAS offers large scale-out NAS storage where capacity and throughput can be scaled independently.
This year, take time to figure out what is needed and what would be cool to have in your department. Technology will always change, even if it's a change back to what we had 20 years ago (mainframe/virtualization). Keep in mind that what looks like a lily today may be a loaf soon; where do you want to be when the business needs it?
I keep hearing how great our compression appliance really is and how quick and easy it is to setup. I did some asking around the office and was sent this video. It does look simple and I wish other products had this type of instructional video. If you want more information about RTC, check out the IBM RTC site here. Enjoy the video and if you like this and have a suggestion for another one let me know!
The hardware doesn't change, but it will include both IBM Tivoli Storage Productivity Center for Disk Midrange Edition and IBM Tivoli Storage FlashCopy Manager to help round out a complete set of software functions. This is a very cool way of putting together a suite of software that makes sense for this platform. Much like the N series SnapManager suite, FlashCopy Manager can take consistent backups/snapshots of databases and the like. TPC is a monitoring tool that allows admins to view both historical and real-time data.
Another part of the package is IBM services, which can come in and help customers with the setup of the hardware and software. Customers always want to bring in new gear and get it up and running as quickly as possible, and IBM has the engineers to do just that. This service provides planning, implementation, configuration, testing and basic skills instruction to help eliminate the need for in-house resources skilled in the technology and free up your IT staff to focus on higher-priority business initiatives.
This package is not just a way for customers to get their V7000 up and running; it's a way to monitor the system and make it more efficient. The V7000 already has a long list of features that we have taken from our enterprise storage, and now we have the tools and means to help make this solution even better.
There is a demo coming up on January 20th that will show the integration of N series and VMware. The long-awaited Virtual Storage Console and Rapid Cloning will be the highlights of the demo. So what is VSC? It is N series software that enables administrators to manage and monitor storage-side attributes of ESX/ESXi hosts. VSC functions as a plugin to vCenter and uses APIs to set and retrieve information from the array.
VSC adds a tab into vCenter and enables the following:
View Status of Storage Controllers
View Status of physical hosts, including versions and overall status
Check for the proper configuration of ESX settings as it applies to:
HBA driver timeouts
Provide the ability to set the appropriate timeouts on multiple ESX hosts simultaneously with a single mouse click
Launch FilerView from within VSC for storage provisioning
Provides access to mbrtools (mbrscan, mbralign, mbrcreate) to identify and correct partition alignment issues
Ability to set credentials to access storage controllers
Ability to collect diagnostics from the ESX hosts, FC switches and Storage controllers
First off, I want to say what an awesome year IBM had in storage! We announced several new products and improvements to older ones. SONAS was the NAS product of 2010 at IBM. The idea of bringing a parallel file system together with commodity parts is brilliant. People who have been building these systems for years, dealing with the issues of interoperability and supportability, can now focus more on making storage work for them. Real Time Compression was also released for the N series product. This was an acquisition that really helps IBM position compression technology in the NAS market. RTC today is an appliance that compresses the data into smaller packages with no performance degradation. I believe we will see more of this technology spread into other aspects of the storage line.
The biggest storage announcement was definitely the introduction of a new mid-tier storage device, Storwize V7000. This device is based on the tried-and-true SVC code base with some new enterprise-class features from our DS8000 line. This system has the cool XiV-like interface and a very cool form factor, and with things like Easy Tier and disk virtualization, the box is going to be hard to beat in 2011.
Second, I want to honor IBM as we celebrate our centennial year of business. The Computing-Tabulating-Recording Company started on June 15, 1911, and while the name, our products and our services have changed, our mission and dedication to our clients remain unchanged. So many of us do not even begin to understand the impact IBM has had on the world as it is today. IBM has been well known through most of its recent history as one of the world's largest computer companies and systems integrators. With over 388,000 employees worldwide, IBM is one of the largest and most profitable information technology employers in the world. IBM holds more patents than any other U.S.-based technology company and has eight research laboratories worldwide. The company has scientists, engineers, consultants, and sales professionals in over 170 countries. IBM employees have earned five Nobel Prizes, four Turing Awards, five National Medals of Technology and five National Medals of Science.
Lastly, I want to challenge everyone, IBMers and clients alike, to really look at what is going on in the storage space this year. With the explosive growth of data, we are seeing people buy unprecedented amounts of storage. Most vendors are going to be investing in R&D for storage and coming out with new, time-saving features. Clients should challenge their vendors to exceed their requirements, not just meet them. I also want vendors to look beyond products and start looking at the services that help clients make better decisions and support the products they have purchased.
On the last day of Tech Fest, I was able to sit down with another FTSS from the IBM Storage team. Neil Youshak is an FTSS who covers the south Florida territory (and more). Not only is Neil an awesome engineer, but he is a triathlete and swims with sharks. Thanks to Neil for the time, and look for more interviews soon.
I was fortunate enough today to talk with a great engineer from IBM about his experience at IBM Tech Fest: Keith Thuerk. Keith is based in the Southeast and is an FTSS (SE) for IBM who has been helping clients find IBM storage solutions in his area for over two years. He has a strong background in networking and works hard on finding solutions that are creative and fit customers' pain points. This week, Keith and other engineers from the East came together for technical training on IBM storage.
Keith and I talked about training and how important it is to keep up your skills. We also chatted about how social media is changing the marketplace.
Keith is also a blogger for IBM and tweets information about IBM storage. You can find his blog, Data Center 7.0, here and follow him on Twitter at @kthuerk.
I am always blown away with the expertise and insight our Advanced Technical Services team displays. They are our “Go To” guys for driving technology to our field teams and they are the last resort before getting into a development team. For me, they are a well of information and experience that I can use to help build solutions.
Today, I am sitting in the SONAS system training with Mark Taylor. Mark and I have been working together at IBM for a few years and I have the utmost respect for him. Mark is responsible for supporting the N series and SONAS at IBM along with a few other team members. He is known for being a stickler on our solution assurance calls and is always finding solutions for our clients.
Our mission this week is to learn about storage products on a deeper level. Many in the technical sales group have a specialty they focus on. It could be XiV, SONAS, mid-tier storage, whatever. When we leave on Friday, we should come away as more well-rounded technical experts.
I am still amazed at the SONAS product and how powerful it really is, especially compared to other products in the marketplace. I find it hard to compare to other brands due to the great feature set it brings and its integration with TSM. No other storage out there is able to give unstructured NAS data a platform to live on from cradle to grave like SONAS.
This mimics how IBM is doing more solution selling in the marketplace. Our Storage team is partnering with the POWER team and the Software teams to provide customers with a 'one stop' solution. If you look just at the SONAS product, it has multiple components, all from IBM: X series servers, TSM and GPFS software, XiV storage. We are finding that if we combine these products into a solution-based product, customers can solve more issues with the same amount of dollars. I believe this is the future of IBM storage and storage in general.
SONAS does have a couple of points that I would like to see cleaned up. One is the GUI, and the other is its policy writer. From what I can tell, the information in the SONAS GUI is very similar to that of the XiV system; it just has a different look and feel. With the Storwize V7000 getting the 'XiV' look and feel, I suspect future releases of SONAS might get the same treatment. As for the policy engine, it's all based on an SQL-like query language. If you know how to write it, it's not an issue, but there are some out there who might not be privy to such skills. There are some guidelines and examples that can be used to help set up the policies, like moving data from one pool to the next, but I suspect people will rely more on their Technical Advisor to help define those rules.
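As a conceptual illustration of what such a rule does (sketched in Python rather than the actual SONAS/GPFS policy syntax, with made-up file names), a "move cold data to a cheaper pool" policy boils down to scanning file metadata and selecting anything not accessed within a threshold:

```python
import time

DAY = 86_400  # seconds per day

def select_for_migration(files, days_cold=30, now=None):
    """files: list of dicts with 'name' and 'atime' (epoch seconds).
    Returns the names of files colder than the threshold."""
    now = time.time() if now is None else now
    return [f["name"] for f in files if now - f["atime"] > days_cold * DAY]

# Hypothetical metadata, as a scan engine might hand to the policy engine.
now = 1_300_000_000
files = [
    {"name": "/shares/q4_report.xls", "atime": now - 2 * DAY},
    {"name": "/shares/old_budget_2009.xls", "atime": now - 400 * DAY},
]
print(select_for_migration(files, days_cold=30, now=now))
# → ['/shares/old_budget_2009.xls']
```

The real policy engine expresses the same WHERE-clause logic declaratively and hands the selected list to the ILM or HSM agents to do the actual movement.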
Tomorrow is all about ProtecTIER. I am excited about the hands-on and finding out how this box can really save people space with their backups.
All this week I will be attending a training event at IBM called Tech Fest. Think Comic-Con meets IBM Storage University. Technical engineers from all over the country descend upon Washington DC (OK, it's really Gaithersburg) to learn about IBM Storage.
The goal is to bring everyone up to speed on the latest products coming out of IBM Storage: SONAS, XiV, DS8800, Storwize V7000, etc. A pure technical deep dive with the R&D teams to get a better understanding of the new storage and features.
Training is essential to keeping a sales force moving forward, not only to present new ideas to clients but to solve issues that have been around for years. Without training, people are forced to pick and choose which products they get up to speed on, and with a large portfolio of storage at IBM, that can be a huge undertaking. I for one try to keep up with the NAS systems, and that is a never-ending saga.
One idea I have had in the last few weeks leading up to this is how to simplify the entire IBM Storage portfolio. We have a ton of products that have great features, but they each seem to cover a certain area of the data center. You need storage virtualization? We have a system that can do that (actually two now, with Storwize V7000). You need a high-performance box built on all-SATA technology? We have a system that can do that too. I was really hoping the big wigs at IBM would start simplifying the product line and have the systems be more universal than they are today.
We have a good thing going with the Storwize V7000. If we could put NAS technology in that system, and integrate the XiV interface into our products, we could start simplifying our products. We should have a low end storage, mid and enterprise storage all based on the unified platform. I am sure we can do this as the products are mostly based on commodity parts, it’s just the software integration.
There are definite advantages in simplifying the product line, and I bet we can work towards that goal. Besides sales, support and development could be simplified and improved, as there would be fewer things to learn. I think there are lots of benefits and some risk.
So this week I am going to be talking to lots of people and getting their opinions on IBM Storage. If you want to follow me on twitter, subscribe to richswainWORK.
If there was ever a time for IBM to look at the storage market and come up with a product, today is that time. IBM released a new storage platform called Storwize. If you remember, IBM acquired another business named Storwize but the two do not have anything in common. It's a cool name and I am glad we got to use it! There is a ton of information coming out about the product and what it can do and how it will help you, but I wanted to take a little different approach to the announcement today. I am going to be doing some live blogging and tweeting about my journey to the announcement in New York City. I will be trying to help everyone who can not make it get a feel for what is going on and hopefully be able to interview some people along the way.
As for now, I will be putting up some video blogs (Vlogs?) and tweeting. If you don't follow me, my account is 'richswainWORK'. IBM will also be using the #ibmstorage hash all day to keep up with everyone's comments and questions so fire away, we have a staff of people just waiting to help.
IBM released a new Data Ontap version last Friday along with some other minor releases, but more about those later. Data Ontap 8 7-mode is the first release of a new 64-bit architecture that will allow N series customers to take advantage of larger aggregates.

A little history: about 8 years ago, Netapp purchased a company named Spinnaker for their 64-bit code, global namespace and some other odds and ends. For the most part, Netapp had been re-branding this code as their GX platform, letting customers who wanted the feature set purchase it apart from the Data Ontap base. GX was not a heavy seller, as it was complicated to install and much pricier than the other line, and Netapp decided to merge the two code streams into one. At first glance this sounds like a good idea. The Data Ontap code definitely had some limitations (small aggregate sizes, limited growth and no global namespace), but merging the two streams was harder than Netapp imagined. This was shown by Netapp promising a release of the new merged code for years before a release was finally available for testing. There were many bugs (as there can be in RC code), but Netapp worked through the majority of them to produce a stepping-stone release of the merged code called 7-mode.

The developers used bits and pieces of the GX code to get the 64-bit architecture, allowing customers to build larger aggregates, up to 100 TB in size. This was really important, as the 2 TB SATA drives were coming and the 16 TB aggregate limit would have killed any performance on the system. With only 8 2 TB drives in an aggregate, the maximum throughput would be limited to about 400 IOPS per 16 TB of drive space, not a good ratio at all. Therefore, a larger aggregate size allows up to 50 2 TB drives, achieving a more respectable 2500 IOPS per aggregate. Now that 7-mode is available, there are some upsides and some downsides.
First, as stated above, the aggregate sizes have increased tremendously. Allowing more data disks in the aggregate increases the number of IOPS the filer can pool. On the downside, you cannot simply flip a switch and convert an aggregate created in the old 32-bit code into a new 64-bit aggregate. Customers will have to create a new aggregate after upgrading to the 7-mode version of Data Ontap 8 and then migrate with some restore method (think DR restore from backup) onto the new space. You cannot mirror the two, as SnapMirror can only mirror between like-for-like aggregates (32-bit to 32-bit and 64-bit to 64-bit). No big deal if you are a new customer or if the filer is a new addition to the filer farm, but for existing customers I believe this will be a lot tougher. If you do not have the drive space to create a new aggregate (100 TB or less), you will have to either wait to buy more disks or do a manual backup (not a snapshot), destroy the existing aggregate, build a new aggregate on the 64-bit code, and then restore. This, and the fact that this is the first release of the new code family, is why customers will not adopt the new code very quickly. There are also some other gotchas, like no support for Performance Accelerator cards (PAM II) and no real interoperability between the two code bases. When I was an administrator, I hated having to read the release notes for the 'fine print gotchas', but in this case I encourage everyone to read the notes thoroughly and perhaps engage your local IBM Storage engineer to help you assess whether you are a good candidate to upgrade. The fact that this is a stepping stone to the full code line does help customers who need to move to the 64-bit architecture today without slowing down Netapp's development team. They are working on the next release of Data Ontap 8, called cluster-mode.
This will be the code that allows customers to cluster more than one pair of systems under one global namespace. I suspect this will be a great addition to the Data Ontap code line and will give Netapp more traction in larger enterprise business. There were also some firmware releases for the EXN3000 shelf on Friday. For more information on what was released, visit the www.ibm.com support page.
Well, the last two days have been crazy, with really good sessions, lots of networking with tons of people and great discussions throughout the entire conference. The sessions have been well attended and people are asking great questions. For the most part, I hear that everyone is learning from the sessions; I just hope they don't get overloaded with so much information. Today I presented on PAM II technology for the N series systems. We discussed the need for large read-cache systems and how it's not only the size of the disks driving this need, but also the business asking for lower response times on data. During this session, the question was brought up about the new acquisition of Storwize and how that would affect the NAS solutions at IBM.
Here is IBM VP of Storage, Doug Balog, talking about the product.
I think it's going to be a good product to put in front of our NAS systems, and it will drive the heavy read-cache systems like PAM II and the huge amounts of cache in the SONAS systems. Speaking of Storwize, I wanted to give everyone a little more information about this product and maybe why IBM purchased them. They provide real-time compression technology that reduces storage needs by compressing the data inline. They have an engine called the Random Access Compression Engine (RACE), a compression algorithm that does the conversion with no overhead. The Storwize appliance will work with popular NAS systems, including IBM N series and SONAS, as well as non-IBM NAS systems from EMC, HP, NetApp and others. Storwize real-time compression can provide added value to clients already using data deduplication, thin provisioning and other storage efficiency technologies.
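RACE itself is proprietary, so as a stand-in, here is a tiny Python/zlib sketch of the general idea behind inline lossless compression: repetitive data (logs, office documents, VM images) shrinks dramatically, and the original is always fully recoverable:

```python
import zlib

# Repetitive data, a stand-in for typical NAS content like logs and documents.
data = b"2010-07-29 INFO nightly backup completed successfully\n" * 1000
packed = zlib.compress(data)

print(f"{len(data)} bytes -> {len(packed)} bytes "
      f"(about {len(data) // len(packed)}:1)")
assert zlib.decompress(packed) == data  # lossless round trip
```

Real-world ratios depend entirely on the data mix, which is why compression pairs well with, rather than replaces, deduplication and thin provisioning.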
I am at the IBM Storage University this week with the hope to spread the good word about NAS technology at IBM. The opening session was awesome and SONAS was mentioned a couple of times as part of the IBM Storage strategy. Listen below to a few remarks (short clip) from IBM VP Storage Doug Balog.
My session on NAS technology was well attended and people asked thoughtful questions. We talked about the N series and a couple of new features we have been adding throughout the year. Then we talked about the SONAS platform, which I think is one of the hottest topics being discussed here this week. I also worked in the solution center where all of the vendors set up booths. Even Netapp, who is a platinum sponsor, came with a very large booth this year, right at the door. I didn't get a chance to talk to that team afterward, but I hope they were able to speak to a lot of people here about N series.
I had a ton of people coming by and asking about SONAS, and not just what is it, but how can it help them.
Today there are some great sessions that I am hoping to attend. One is an N series client from IBM talking about managing the largest AGFA PACS solution in the Americas. Then there is my session on ILM/HSM in the SONAS system. I am hoping we will have a great turnout for that! There are so many sessions that I want to attend, I need to clone myself to get to them all.
Just a quick note on two new Solution Briefs that were posted to the IBM SONAS front page. These briefs have important information on using Symantec's EndPoint and NetBackup to protect data on the SONAS system. As always, the devil is in the details but this gives SONAS clients a look into how we can utilize these two solutions with the very powerful SONAS system.
I am also headed up to the IBM Storage University to present on NAS technology at IBM. If you get a chance to stop by and see one of my sessions, please come up and give me feedback on the blog (and other things). I will also be in the Solution Center at the SONAS booth.
Move that file! You know that show where people are moved out of their old house, an army of contractors comes in and builds a new house, then the people come back and are astonished at their new home. I was watching an older episode the other night and realized how much this improves a family's mobility, productivity and state of mind. While their old house was OK and provided somewhat of a shelter, the new house was 100x better. I think of SONAS in the same way.

There are many ways to do NAS technologies. Some take time to develop and build, but others are just as effective with little to no planning. I was talking to a client the other day, and his response to NAS was to put NFS servers in all of their locations. It's cheap and something they can repeat like a cookie cutter many times over. What he was not taking into his planning was administering all of these islands of storage and how much he was spending on data sitting on expensive disk. If he were able to consolidate these servers and have a way of moving data around, eventually off to the greenest storage media out there, tape, how much more money and time would that save him? He didn't have an answer, but we are working on a plan for him today.

IBM announced yesterday that SONAS version 1.1.1 will now support ILM tiering with GPFS and moving data off to tape using Tivoli Storage Manager HSM. These two work in concert with the policy manager on the SONAS system to move data in and out of pools based on metadata properties. As discussed in previous posts, SONAS separates the metadata, which allows the scan engine to pass the needed data on to the ILM or TSM agents. These agents then move data between the pools and allow the client to free up space on valuable spinning disks. If you are one of the people who says tape and tiering are not needed, then think about the idea of putting data that hasn't been touched on a medium that costs $0.03 per GB.
It's not that your storage isn't cool, and you may not need tiers for your high performance, but what if the only data on the system was data that was actively being used, and not my old spreadsheet from 2009? Along with the ILM announcement, IBM released the following with version 1.1.1:
SONAS with IBM XIV storage
Higher capacity SAS drives
HTTPS protocol support
Network Information Service (NIS) support
I will post more information this week and next week on the replication and the XiV integration.
There is always a part of the business that gets overlooked, and usually it's the people in the trenches making things work and keeping those machines going. I recently had the pleasure of spending some time with three great IBM CEs here in the Raleigh, NC area. I was impressed with their professionalism and thoroughness while working on the SONAS upgrade. They made sure everything was installed, cabled and labeled according to the documentation. It is one thing to have a great product and lots of features, but it is even more important to have people who can service the system and do it with the highest level of craftsmanship. Thanks, guys and gals, for everything you do to help make our job easier!
Today, I helped our local Client Engineers install a couple of new nodes and some more storage into a local SONAS system. This was exciting for me, as I love working with the hardware and software and it keeps up my keyboard skills. This client is bringing online more demand and needs both horsepower (interface nodes) and storage to accommodate a new business line. I was amazed at how easy the system is to upgrade, and now his little starter rack is almost full. We added two interface nodes, IBM xSeries 3650 M2s, and two 60-disk shelves to the unit. Once the disks are online and presented up to the interface nodes, they can start creating shares for the new operation. As they need more storage or more interface nodes, another rack will be put in and the same process of pooling these resources together will repeat.

The idea of having multiple interface nodes and storage pools is to avoid single points of failure. In traditional storage, if a controller goes down, its partner has to pick up the entire workload for the down hardware. Not so in SONAS: if a node goes down, the work is spread evenly across all of the other nodes in the system. This is why we do not have a problem losing CIFS connections when systems go down.

The addition of new storage is also interesting, as we are tripling the amount of storage the base system had originally with two 4U shelves. These shelves are highly dense, top-loading containers using either SAS or SATA disks. In this instance, we were installing 120 2 TB SATA drives, a total of 240 TB in 8U of space. Not too shabby.

At the end of the day, I was pleased to see that IBM is moving forward with smarter storage systems. If you look at the entire portfolio, you can see that systems like the XiV grid, auto-tiering on the DS8700, and SVC virtualization are all helping our goal of a Smarter Planet. Look for some more pictures and maybe a video on Monday.
IBM has been working to enhance the way we do business from day one. From clocks to typewriters, mainframes, PCs, software and storage, the idea behind our innovation has been to make it easier for our clients to do their business. Now we are taking it one step further to help our clients make the world better.
If you have been watching standard TV, YouTube or Hulu, you have probably seen a commercial for the IBM Smarter Planet initiative. These great adverts keep up the tradition of IBM marketing our message to the masses. They describe how IBM is making our world better by using technology across many disciplines: healthcare, traffic, food and more. If you dig a little deeper than the catchy ads, you see a real movement not only to 'save the planet' but to make our lives better.
One of the ways IBM is making our planet better is by increasing the utilization of our systems. Today's average commodity server rarely uses more than 6% of its available capacity. This holds true for our storage systems as well. We find storage systems bound by traditional technologies that keep us from keeping up with demand.
Looking at how this relates to my own work, I see how both SONAS and N series fit this mantra. These technologies allow clients to conserve energy by decreasing the amount of storage needed for typical installations.
N series software allows a client to oversubscribe a system by cloning volumes without adding additional space. This software, called FlexClone, allows clients to use products like VMware, Xen or Hyper-V to create zero-space copies of the original image. The clone keeps the original blocks locked to the original image, and any new changes are written to free space as a delta. In a traditional storage system, a 10 GB image would consume 1 TB for just 100 VMs. With FlexClone, the only space needed for all 101 VMs would be just 10 GB, lowering both the OPEX and CAPEX for the system.
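The space arithmetic in that example can be sketched in a few lines (an illustration of the numbers above, not a sizing tool; the delta parameter is an assumption about how much each clone eventually changes):

```python
# Rough space comparison: full per-VM copies vs. zero-space clones.
# Assumption: each clone shares all base blocks and only changed
# blocks (deltas) consume new space.

def full_copy_space(image_gb, num_vms):
    """Traditional approach: every VM gets its own complete image copy."""
    return image_gb * num_vms

def clone_space(image_gb, num_clones, avg_delta_gb=0.0):
    """Clone approach: one shared base image plus per-clone deltas."""
    return image_gb + num_clones * avg_delta_gb

# The numbers from the example above: a 10 GB image and 100 VMs.
print(full_copy_space(10, 100))   # 1000 GB, roughly the 1 TB in the post
print(clone_space(10, 100))       # 10.0 GB before any clone diverges
```

As clones diverge, deltas grow; `clone_space(10, 100, avg_delta_gb=0.5)` would still be only 60 GB, a fraction of the full-copy footprint.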
The IBM Scale Out NAS system (SONAS) is gaining steam as private cloud adoption increases in the business market. Not only are research universities and high-performance computing labs seeing the benefits, so are mid-market and enterprise business leaders. Typical storage systems are not utilized to their full potential because of the purpose of the system or how it was integrated into the data center.
With a SONAS system, we no longer have to think about how the system will be provisioned, as all of the equipment can respond to requests from multiple parts of the business. If you have five systems that provide storage to your business and one of them is struggling to keep up with demand, the only way to keep up with the requests is to move data off by hand to the other systems. This is time consuming and could introduce mistakes and possible data loss. SONAS allows clients to be flexible in a dynamic, on-demand business environment. No longer will you have one system slowing down productivity, as all of the storage in a SONAS system can be distributed across the entire set of interface nodes. This will increase your efficiency rates and lower the required number of systems in your data center, lowering environmental costs and CAPEX/OPEX.
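The benefit of spreading work across many nodes instead of pairing controllers can be shown with a toy calculation (illustrative only, not SONAS code; node counts and load units are invented):

```python
# Toy failover model: in a traditional controller pair, the survivor
# absorbs the failed partner's entire load; in a scale-out cluster the
# same work respreads evenly over all surviving nodes.

def pairwise_failover(load_per_node):
    """Two-controller pair: the survivor carries both workloads."""
    return load_per_node * 2

def scale_out_failover(load_per_node, nodes):
    """N-node cluster: total work divided among the N-1 survivors."""
    total = load_per_node * nodes
    return total / (nodes - 1)

print(pairwise_failover(100))      # 200: the partner's load doubles
print(scale_out_failover(100, 6))  # 120.0: each of 5 survivors adds 20%
```

The larger the cluster, the smaller the bump each survivor takes, which is why client connections tend to survive a node failure gracefully.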
There are other storage systems that can increase utilization. Information Archive moves older data off to low-cost, slower disk, allowing you to store more on primary, faster disks. XiV keeps data spread throughout the entire system in case of a failure, with no traditional RAID overhead. We at IBM are constantly looking for ways to increase the utilization of our systems.
IBM is working hard to build a smarter planet that not only helps our clients, but helps the human race. Whether through smarter storage systems, servers, software or consulting, IBM is working hard to bring this vision to reality. Take a look at your systems and take stock of their utilization. Can they be doing more for you? Find out more about the IBM Smarter Planet initiative here.
I was working with a company today that had five storage vendors supplying gear for their data center. They were interested in SONAS not only for the scalability but for the idea that we could consolidate their storage footprint from five vendors to one without having to sacrifice performance. Now, most vendors would say, sure, we can rip and replace all of the storage there with our own, but can they do it at the same scale (both horizontal and vertical), with a high-performance engine like GPFS, one global namespace and one single management tool? The customer was really impressed by our solution and is very interested in how we can help him go from 40+ racks of storage down to 13 and increase his efficiency.

The idea is to use multiple tiers: metadata on the fastest disk, data that is currently being accessed on medium-speed disk, and 'near archive' data on slow, fat disk. Currently they have no way of moving data between their five different systems. This causes them to keep running into issues where they are buying expensive disk to run applications and databases while stagnant or less important data sits on the existing fast disks. SONAS will allow them to create pools of storage (tiers) and policies to move the data between the pools. Then, with the HSM integration, the data can be archived off to tape until it is needed down the road.

From a cost perspective, the size and speed of each disk tier match its purpose without anyone having to move the data manually. For IT departments that have huge growth in storage and are not hiring more admins, this makes sense. They can set the policies up front and not have to worry about some old file sitting on an expensive SAS disk. The other savings is in support contracts: the reduction from five support contracts to one will save customers money and gives them one place to get all of their support.
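A pool-movement policy like the one described can be sketched as a small rule table (purely illustrative; the pool names and age thresholds are invented and this is not SONAS policy syntax):

```python
import time

DAY = 86400  # seconds

# Illustrative age-based placement rules, evaluated top to bottom.
# Pool names and thresholds are invented for this sketch.
POLICY = [
    ("fast",   lambda age_days: age_days < 30),   # hot data on fast disk
    ("medium", lambda age_days: age_days < 180),  # warm data
    ("slow",   lambda age_days: True),            # everything else
]

def choose_pool(last_access_epoch, now=None):
    """Return the pool a file belongs in, based on days since last access."""
    current = now if now is not None else time.time()
    age_days = (current - last_access_epoch) / DAY
    for pool, rule in POLICY:
        if rule(age_days):
            return pool

now = time.time()
print(choose_pool(now - 5 * DAY, now))    # fast
print(choose_pool(now - 90 * DAY, now))   # medium
print(choose_pool(now - 400 * DAY, now))  # slow
```

A scheduler that sweeps the file system and calls a rule like this is the essence of policy-driven tiering; the oldest tier's files become candidates for HSM migration to tape.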
I cannot wait to start work on this account, as it looks like we will be putting in a great system and helping a client save money.
I was on my way down to Miami today, talking to the gentleman sitting next to me about storage technology, and the conversation turned to how everyone is scrambling to be in the cloud business. He had heard multiple vendors come in and start talking about cloud technology and how it was going to save him money, time and effort. This gentleman worked for a retail chain with multiple district offices throughout the eastern US and headquarters in Atlanta. He has multiple technologies all helping him keep the business running but nothing planned, and as the company grew, they simply cookie-cut the previous installation and planted it into the new office. Each office would also replicate back to HQ, which served as the main repository for backups and restores. I would guess there are thousands of companies out there with similar setups.

So instead of going into how he could leverage cloud storage technology, I asked him what his problems were and listened. They basically came down to this: 1. Multiple independent islands of storage that are aging, causing his support contracts to go up. 2. Backups take way too long, and systems slow down as they get closer to 'capacity'. 3. Future growth was expensive, as every time they added new capacity, they had to add entire systems.

Now, they were not cutting-edge technology leaders, nor did they want to be, but he needed a way to solve some of these traditional storage problems. He didn't want to go out and buy a new large system that would take forever to get in and, while it might solve his problems, would bring in even more issues. What he needed was less overhead and more throughput. We sat there for a while thinking; we didn't say much until I offered this tidbit: "So what does cloud mean to you?" After a nice laugh, he stated that he really didn't know, and the more he read, the 'cloudier' the answer became. There are many interpretations of what cloud really is, and it differs between storage vendors.
If there is a true definition of what cloud storage could be, I think it could be defined using NAS technology. NAS tends to be a kinder and gentler protocol set, and the need for it is growing by leaps and bounds. Our traditional way of adding more systems and creating more independent silos works for smaller environments, but it does not scale when clients want large pools of storage under one umbrella. There are ways of making volumes span into large pools, but the underlying storage is still made up of smaller components that are typically active/active or active/passive nodes; even the best load balancing will not help if you are overloading the system.

There are ways to find a balance between the same old approach and dropping tons of cash on huge storage gear. Find a system that will grow and scale as your storage needs do. Think of ways to keep everything under one umbrella (a single namespace, for example), and try to solve the issues you are having today with real technology, not workarounds.

With NAS technology, we will always be at the mercy of the backup target, whether it's disk or tape. Whether we are taking snapshots or NDMP backups, we have to write out to some target to have a restore point. That is the basic backup/restore strategy, so why not consider using different types of disks to create tiers and offload data to slower pools as it gets 'older'? A few vendors have said there is no need for tiering, mainly because their systems can't take advantage of it, and therefore they shun those who do. ILM tiering can help you achieve higher utilization rates, and it puts the data that is accessed most frequently on faster disk while moving the rest away to make more room. Why pay for fast disk if the data on it is not being accessed frequently? Future expansion has always been tough for administrators; they tend to overbuy on controller size and skimp on the disk.
Systems like SONAS from IBM allow you to grow in storage capacity and server throughput independently. If a customer needs more storage but doesn't need the additional throughput, why force him to add more controllers? SONAS systems can scale up to 30 storage servers and 14.4 PB of spinning disk, all under one namespace. No more having multiple nodes with their own names ("this storage is called Accounting1, this one Accounting2..."); the storage is simply storage, and everyone gets the benefit of all of the nodes, not just one system. By the time we had gone through all of this, our flight was landing. It was a great talk, and both of us gained a different perspective on how cloud is perceived. If any of you want to find out more information about the IBM Cloud strategy or SONAS, go to the following links: IBM CLOUD SONAS by IBM
This weekend I was moving my winter and spring/summer clothes in and out of my closet and into containers. Last fall I purchased a few plastic containers that sealed, so I could put away my short-sleeve golf shirts and some of my shorts. Here in North Carolina we can get a mild day, and it is nice to have a short-sleeve shirt to wear. On those days I would go back to the containers and dig through the nicely folded items until I found the shirt I wanted. Sometimes I had to go through multiple containers because I had forgotten which one I had put it in a few months ago. This weekend when I pulled out the containers, they were a mess; nothing was folded, and it took me more time trying to figure out what was what, as everything was mixed up.

I then wondered: what if I bought a bigger container and, instead of using multiple ones, used one large container to store all of my winter clothes? What would the issues be? Would I have enough space to store the container? Would there be some way of indexing the clothing inside so I could quickly find what I was looking for? Was there a way to put the clothes I might need on a cool day in a separate container, just in case? There seemed to be more issues with using one larger container than I thought. It would be easy to dump all the clothes into the larger bin and claim victory, but that would not help me down the road. I needed a system, something to help me consolidate efficiently while still giving me access to the things I used on occasion. I also had to keep in mind the space in my storage area; I didn't want to buy one large container and not be able to fit it in the space I had already allocated. I needed a flexible system, maybe a few labeled boxes that I could get to quickly if I needed something inside. Now take a look at the noise some of the storage vendors are making about data storage consolidation.
Most of them are telling you they can come in, take your smaller boxes and dump the data into one big box. While that saves you space and might keep you from administering multiple storage devices, you need to look at the downside of having just one big pool of disks. A large storage system that replaces multiple smaller systems will need more cache and processor power to handle the same load as before. If you want to move data around to different tiers of disk or tape, can you achieve that with the new system? I started down the road of buying the biggest container I could find but decided against it, as it would be too much trouble to find things. Your data storage systems need to be flexible enough to have multiple storage pools, so that data can move off faster disk and make room for data that is accessed more frequently. This not only gives your clients better response times on the files they frequently use, but it tells you how much 'real' data people are using in your data center.

The other issue I had was that I needed some type of labeling system, an index to tell me where the shirts were and where my ski jacket was. Your data is much the same: you need to keep up with where data lives in the storage system. As our storage systems get larger, we need to save the file metadata easily and keep it in a table so we can run queries against it.

There is also the last part of moving my clothes around that I hated the most: the purge. I went through and found the shirts that had been worn too many times or no longer fit the same as when I bought them. I packed these in a cheap cardboard box and took them to a donation place. This is the same as getting rid of old data in your system. Old data that is not being accessed is costing you money. You not only pay the environmental cost of keeping those bits spinning, but it takes up room where new data could reside, costing you money to expand.
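The 'labeling system' analogy maps directly to a queryable metadata catalog. Here is a minimal sketch using SQLite (the schema, pool names and sample rows are invented for illustration):

```python
import sqlite3

# A tiny file-metadata catalog: like labeled containers, a queryable
# index tells you where everything lives without opening every box.
db = sqlite3.connect(":memory:")
db.execute("""CREATE TABLE files
              (path TEXT, pool TEXT, size_mb INTEGER, last_access TEXT)""")
db.executemany("INSERT INTO files VALUES (?, ?, ?, ?)", [
    ("/proj/report.doc", "fast", 2,   "2010-03-01"),
    ("/media/video.mov", "slow", 900, "2009-01-15"),
    ("/proj/model.dat",  "fast", 450, "2009-02-20"),
])

# Which cold data is still sitting on expensive fast disk?
rows = db.execute("""SELECT path FROM files
                     WHERE pool = 'fast' AND last_access < '2010-01-01'""").fetchall()
print(rows)  # [('/proj/model.dat',)]
```

The same query shape answers the purge question too: filter on an even older cutoff and you have your candidate list for archive or deletion.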
True archiving and purging of data will be needed for any system, large or small. Make sure you find a system that is easy to work with and automates this process based on policy. In the end, if you are looking at consolidating your data storage, there are multiple things you will need to find out about a system. Just because a bigger container can replace multiple smaller containers does not mean it gives you the flexibility to meet your changing needs. For more information on a better way to consolidate your storage platform and move your data, check out the information on SONAS and TSM.
Sorry Bill, there is a new question burning in our minds today: to tier or not to tier. There seems to be a lot of buzz lately about tiering your data storage (who can and who cannot, why and how), but not a lot of people are talking about when to tier your storage. Netapp has indicated they are not as concerned with a tiering approach, and this is true for the IBM XiV product as well. Others, like 3PAR and IBM's SONAS, have it built in for clients to move data from one pool to the next. But how does one gauge this old standard of giving the best to the most demanding and the least to the dregs of our storage footprint?
Tiering can be based on the performance needed by the client or application, or on the length of time and frequency of use. Some vendors will say they treat all data the same and can shift resources to the busiest areas of the subsystem, while others allow you to create pools of storage to allocate cycles to a single application. The main difference is what happens when a system is oversubscribed: without the pools, do you have a guarantee that your application will always get the performance it needs? Archival tiering allows you to move data that is accessed less frequently to lower-cost (large and slow) disk and then to tape. The movement from pool to pool is based on rules or policies set by the administrator, keyed on date or time. This is a bigger issue with NAS data than SAN, due to the nature of NAS files.
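A rule that weighs both recency and frequency of access, as described above, might look like this (a sketch with invented tier names and thresholds, not any vendor's policy engine):

```python
# A two-factor tiering rule: both recency and access frequency decide
# placement. Tier names and thresholds are invented for this sketch.

def tier_for(days_since_access, accesses_per_month):
    if accesses_per_month >= 10 and days_since_access <= 7:
        return "performance"  # hot: keep on fast disk
    if days_since_access <= 90:
        return "capacity"     # warm: large, slower disk
    return "archive"          # cold: candidate for tape

print(tier_for(2, 40))    # performance
print(tier_for(30, 1))    # capacity
print(tier_for(365, 0))   # archive
```

The oversubscription question above is exactly why the rule matters: when resources are tight, the hot tier is reserved for data that has earned it on both counts.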
One indication of when to tier is the size of your storage system. Is it worth creating three tiers of storage for a 5 to 10 TB system? Probably not, and there are simple, tried-and-true ways of isolating storage for higher performance. If your storage doesn't have built-in tiering, you can isolate drives to increase the performance available to an application. You can also use larger amounts of cache, like the N series PAM cards; the additional read cache can decrease latency and improve your application's performance.
A larger system, 100 TB and up, would be ideal for tiering based on performance. As your storage grows, there is data that needs to be on fast disk and data that can live elsewhere. Think of your storage as a tool chest of wrenches, screwdrivers and the like. As you get more tools, you will want to keep those used most frequently in the top drawer where you can get to them quickly, like the trusty screwdriver that does both Phillips and flat-head screws. The tools used less often can go in the shelves below, sorted by either size or frequency of use.
Tiering data may be important to you as you build out your system, and maybe you need to implement it on day one. With the growth in digital media, whether you are taking pictures for a marketing campaign or producing a new digital movie, we will see data storage grow tenfold in the coming years. I suspect tiering will be needed more for these projects as their data platforms scale out quickly, with the smaller storage units used as secondary units in field offices or retail stores. Either way, you will need to evaluate whether to tier or not to tier based on your storage needs today and in the future. Would Shakespeare believe in tiering? Only if it sold more tickets for his play, maybe...
In answer to your requests for IBM N series demos, Andrew Grimes will be delivering a demo on Thursday, March 11th. This Introduction to IBM N series will be followed by a brief and informative demonstration of how N series delivers storage efficiency with disaster recovery solutions. This is your opportunity to demonstrate N series features and ease-of-use to your customers and prospects, plus get some assistance in closing business this quarter. All attendees who fill out the post-event survey will be entered into a drawing for a free Apple iPod.
The topics that will be discussed during this N series presentation are:
1. Simplifying Data Management
2. Storage Efficiency
3. Protecting mission-critical business applications (Oracle, Exchange, SQL, VMware & SAP) better than our competitors
4. Most importantly, see how we recover these applications in a matter of minutes!
The old adage of faster, smaller, cheaper has been revived in the N series product line. This week IBM officially released the information around the highly anticipated OEM rebrand of Netapp's FAS 2040: the N3400. This system has a small 2U form factor but delivers higher performance than its beefier brother, the N3600. If you want to see a full comparison of the three boxes, click here for more information.
IBM has three systems that round out the entry-level, or departmental, storage platform: the N3600, the N3300 and now the N3400. All three are based on internal drives with expansion to a few shelves as needed. The N3600 comes with 20 internal drives, while the smaller N3300 and N3400 come with only 12 internal disks and can expand to a maximum capacity of 136 TB. Two controllers allow administrators to have a high-availability solution at low cost. This makes the system even more attractive, as it also supports FCP, iSCSI, CIFS and NFS all from one platform.
The N3400 does have a few things I want to point out:
8 GB of RAM (2x the amount in the N3600 and 4x the amount in the N3300)
512 MB of NVRAM
2 integrated SAS ports and 8 total 1 Gbps Ethernet ports
PCI-e port for expansion
All of these help set this box up for an important role within your data center. If you compare this system with other storage systems on the market, you will find the new N3400 is well stacked and can compete even with larger mid-tier systems. This box is ideal for our SMB clients who need an all-in-one system with the horsepower to keep up with a growing company. The system is a long way from the first entry-level system IBM rolled out, the N3700. If the two were compared, the N3700 would be a 'Happy Meal' and the N3400 would be a supersized 2 lb Angus burger with fries and a shake, maybe even an apple pie.
This new system is considered ideal for both Windows consolidation and virtual environments. With the additional ports, the system should enjoy a longer life span, as the new EXN 3000 SAS shelves are becoming the standard for the N series product line. On the other hand, the system does not support 10 Gbps cards or FCoE as the N3600 does. But as all N series systems run the same Data Ontap code, this robust system uses the same commands and interface and is built on the same technology as the N60x0 and N7x00 lines.
Overall, this is an enhanced refresh of the existing N3300 with more ability to scale with current technologies. The performance will be higher than the N3600's, which begs the question of the need for the N3300/N3600 systems. I suspect that as Data Ontap 8 becomes generally available from Netapp, there will be more entry-level storage devices released.
For more information on the N3400 and all other N series related information, follow this link or contact your local IBM Storage Rep.
There are few times that I look at what a company markets as the 'next big thing' in the storage world and get the same reaction I got when I started learning about the SONAS product. There are already some technical details in the announcement and in Tony's blog from a few days ago, so I won't go into those today, but I will go over how this product really makes a paradigm shift in the NAS storage world.
Traditionally, NAS storage is looked at as the little brother to the bigger SAN systems. SAN systems tend to be the athletes of the storage high school, with their matching letter jackets and oversized girth. All the while, NAS was the band geeks: some frail and thin and some oversized, but always in large numbers and not very organized. NAS technology was born from the need to share data across the company, and as the amount of information grew, so did the servers, network bandwidth and backups. SAN storage is still the big guy on campus, but the people who track trends for our industry say NAS has become just as important as the large databases, ERP systems and the like.
If you look at how we have stored NAS data, it has been on single file systems with local disk drives, shared out over a single 10/100 Mb network. As storage systems became more advanced, we saw people using clustering, snapshots, thin provisioning, deduplication and replication to help keep our companies communicating. When we needed more throughput or more storage, we added a server or added disks, which created islands of unshared power.
If you look at 2009, one of the hottest buzzwords in the storage market was cloud computing: having a large pool of resources in one place to draw from without having to provision new equipment. We also saw more and more clients looking at NAS protocols, as Ethernet could support faster speeds than traditional Fibre Channel. A huge number of you have been looking at, and moving, your virtual environments to NFS to help cut down on administration overhead and to take advantage of CNA technology.
With higher demand for NAS technology comes the burden of being able to scale at the same rate that storage, network and throughput needs increase. Older NAS systems allowed clients to increase the amount of storage, but once you reached the critical mass the system allowed, you had to purchase another clustered system. This creates multiple islands of storage pools that have to be managed, provisioned and backed up. Not a great solution for companies that are growing and have fewer administrators to do the work.
Now IBM has a product that allows our NAS clients to grow and scale as their companies grow. SONAS is a highly scalable NAS that works like a cloud. The underlying technology, GPFS, is the same file system found in some of the fastest computers in the world. SONAS scales in both storage and throughput by adding storage pods (60 SATA or SAS disks) or interface modules (x3650 servers) like Lego blocks. All of this is managed by a central command module that gives a client full control over the entire system, no matter how much storage or how many servers are in it.
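The Lego-block scaling can be put into numbers with a quick sketch (the 60-drive pod figure comes from the post; the 2 TB SATA drive size is the example from the upgrade described earlier):

```python
# Capacity grows by adding 60-drive storage pods; throughput grows by
# adding interface nodes. The two scale independently, so you only buy
# the dimension you actually need.

DRIVES_PER_POD = 60

def raw_capacity_tb(pods, drive_tb=2):
    """Raw (pre-RAID) capacity for a number of pods of identical drives."""
    return pods * DRIVES_PER_POD * drive_tb

# Two pods of 2 TB SATA drives, as in the upgrade described earlier:
print(raw_capacity_tb(2))   # 240 TB across 120 drives
print(raw_capacity_tb(10))  # 1200 TB, added without touching the interface nodes
```

The same independence works in the other direction: a client who is CPU- or network-bound adds interface nodes and leaves the pods alone.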
So the "Next big thing" in my opinion is here today and IBM is using the best of the best of IBM research for it's clients. The SONAS solution is designed from the ground up as a true blue NAS storage solution. Look for future SONAS blogs on GPFS, creating an ILM strategy and more.