There is an ancient proverb that says, "When you have only two pennies left in the world, buy a loaf of bread with one, and a lily with the other." There is some wisdom in this old saying that we can still apply to today's IT budget and strategy. If you have been keeping up with the news, you know companies are starting to invest again in their IT hardware and software. This may be the turning point in some of the hardest times the hardware business has seen. But what are customers really buying and planning to buy with their dollars? What is my bread and what is my lily today? The bread represents nourishment of the body. We have to eat in order to keep going. Without it, we starve and eventually die. This is the basic part of a business IT strategy: what do you have to do to keep the lights on? I have this conversation with IT planners all the time. People love to do the newest and greatest things, but have less understanding of, or take for granted, the things they have to do to keep the business going. The lily is a beautiful and majestic flower. Dating as far back as 1580 B.C., when images of lilies were discovered in a villa in Crete, these majestic flowers have long held a role in ancient mythology. The name derives from the Greek word "leiron" (generally assumed to refer to the white Madonna lily), and the lily was so revered by the Greeks that they believed it sprouted from the milk of Hera, the queen of the gods.
The storage market is evolving with the help of cloud storage, unified platforms and consolidation. IT planners and CIOs are finding new ways to put a value on these capabilities and offering their business units a chargeback model based not only on data consumption but on throughput and retention. The smarter businesses are seeing that running multiple storage platforms with trapped efficiency does not work in today's data center. Storage has to be big, wide and easy to use. Long gone are the days when 10-25 TB was a big deal. We now see systems that start at those levels and scale to seemingly infinite proportions. Networks are becoming faster and even consolidated, with 10/20 Gbps links driving protocols like FCoE and iSCSI. Backups are being replaced by better replication algorithms that have quality-of-service levels and automated failover.
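To make the chargeback idea above concrete, here is a minimal sketch of what billing on consumption, throughput and retention might look like. The rates, rate names and retention classes are entirely made up for illustration; any real chargeback model would use the business's own figures.

```python
# Toy chargeback model: bills a business unit on capacity consumed,
# sustained throughput, and retention class. All rates are hypothetical.

RATES = {
    "capacity_per_tb": 50.0,     # $ per TB per month (made-up figure)
    "throughput_per_mbps": 2.0,  # $ per MB/s of sustained throughput
    "retention": {"30d": 1.0, "1y": 1.5, "7y": 2.5},  # multipliers
}

def monthly_charge(capacity_tb, throughput_mbps, retention_class):
    """Return the monthly charge for one business unit."""
    base = (capacity_tb * RATES["capacity_per_tb"]
            + throughput_mbps * RATES["throughput_per_mbps"])
    return base * RATES["retention"][retention_class]

# A unit consuming 10 TB at 100 MB/s with 1-year retention:
print(monthly_charge(10, 100, "1y"))
```

The point of the retention multiplier is that data kept longer costs the IT department more (replicas, backups, tape), so the model passes that cost back to the consumer rather than hiding it in a flat per-TB rate.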
NAS storage can take advantage of these technologies, which can also help you keep the lights on. Most businesses have some form of NAS storage to help employees share documents, spreadsheets, images, and whatnot. There is a movement from traditional block-based systems to unstructured data sets using NAS, and this is pushing the market and vendors to come up with better NAS products. Companies like Amazon, Facebook, and Twitter all push vendors to think about how they do storage.
So how are you planning your IT spending? Are you going to spend more on the things you have to have, or will you spend more on the things that look nice? I suspect in most cases there will be an 80/20 bread-to-lily split. But how you classify what is needed and what is 'nice to have' in your IT department will change as your business changes this year. Businesses are putting more demand on IT with fewer resources. Even though there is evidence businesses are spending more on hardware recently, the resources (admins) are still not there. The only way companies will be able to meet such high demand on storage without the resources is to have simple, scalable storage that allows single admins to manage multiple petabytes of storage.
IBM is working to help customers achieve this type of new IT department. Cloud is one way, either public or private, but so is the basic system level. Less complicated interfaces like those on the V7000 or XiV allow admins to move easily without much training. SONAS offers large scale-out NAS storage where capacity and throughput can be scaled independently.
This year, take time to figure out what is needed and what would be cool to have in your department. Technology will always change, even if it's a change back to what we had 20 years ago (mainframe/virtualization). Keep in mind that what looks like a lily today may be a loaf soon. Where do you want to be when the business needs it?
I keep hearing how great our compression appliance really is and how quick and easy it is to set up. I did some asking around the office and was sent this video. It does look simple, and I wish other products had this type of instructional video. If you want more information about RTC, check out the IBM RTC site here. Enjoy the video, and if you like this and have a suggestion for another one, let me know!
The hardware doesn't change, but it will include both IBM Tivoli Storage Productivity Center for Disk Midrange Edition and IBM Tivoli Storage FlashCopy Manager to help round out a complete set of software functions. This is a very cool way of putting together a suite of software that makes sense for this platform. Much like the N series SnapManager suite, FlashCopy Manager can take consistent backups/snapshots of databases and the like. TPC is a monitoring tool that allows admins to view both historical and real-time data.
Another part of the package is IBM services, which can come in and help customers with the setup of the hardware and software. Customers always want to bring in new gear and get it up and running as quickly as possible, and IBM has the engineers to do just that. This service provides planning, implementation, configuration, testing and basic skills instruction to help you eliminate the need for in-house resources skilled in the technology and free up your IT staff to focus on higher-priority business initiatives.
This package is not just a way for customers to get their V7000 up and running; it's also a way to monitor the system and make it more efficient. The V7000 already has a long list of features that we have taken from our enterprise storage, and now we have the tools and means to make this solution even better.
There is a demo coming up on January 20th that will show the integration of N series and VMware. The long-awaited Virtual Storage Console and Rapid Cloning will be the highlights of the demo. So what is VSC? It is N series software that enables administrators to manage and monitor storage-side attributes of ESX/ESXi hosts. VSC functions as a plugin to vCenter and uses APIs to set and retrieve information from the array.
VSC adds a tab into vCenter and enables the following:
View the status of storage controllers
View the status of physical hosts, including versions and overall status
Check for the proper configuration of ESX settings as they apply to:
HBA driver timeouts
Set the appropriate timeouts on multiple ESX hosts simultaneously with a single mouse click
Launch FilerView from within VSC for storage provisioning
Access mbrtools (mbrscan, mbralign, mbrcreate) to identify and correct partition alignment issues
Set credentials to access storage controllers
Collect diagnostics from the ESX hosts, FC switches and storage controllers
First off, I want to say what an awesome year IBM had in storage! We announced several new products and improvements to older ones. SONAS was the NAS product of 2010 at IBM. The idea of taking a parallel file system and merging it with commodity parts is brilliant. People who have been building these systems for years, dealing with the issues of interoperability and supportability, can now focus more on making storage work for them. Real Time Compression was also released for the N series product. This was an acquisition that really helps IBM position compression technology in the NAS market. RTC today is an appliance that compresses data into smaller packages with no performance degradation. I believe we will see more of this technology spread into other parts of the storage line.
The biggest storage announcement was definitely the introduction of a new mid-tier storage device, the Storwize V7000. This device is based on the tried-and-true SVC code base with some new enterprise-class features from our DS8000 line. The system has the cool XiV-like interface and a very nice form factor, and with things like Easy Tier and disk virtualization, the box is going to be hard to beat in 2011.
Second, I want to honor IBM as we celebrate our centennial year of business. The Computing-Tabulating-Recording Company started on June 15, 1911, and while the name has changed and our products and services have changed, our mission and dedication to our clients remain unchanged. So many of us do not even begin to understand the impact IBM has had on the world as it is today. IBM has been well known through most of its recent history as one of the world's largest computer companies and systems integrators. With over 388,000 employees worldwide, IBM is one of the largest and most profitable information technology employers in the world. IBM holds more patents than any other U.S.-based technology company and has eight research laboratories worldwide. The company has scientists, engineers, consultants, and sales professionals in over 170 countries. IBM employees have earned five Nobel Prizes, four Turing Awards, five National Medals of Technology and five National Medals of Science.
Lastly, I want to challenge everyone, IBMers, clients, everyone, to really look at what is going on in the storage space this year. With the explosive growth of data, we are seeing people buy unprecedented amounts of storage. Most of the vendors are going to be investing in R&D for storage and coming out with new, time-saving features. Clients should challenge their vendors to exceed their requirements, not just meet them. I also want vendors to look beyond products and start looking at the services that help clients make better decisions and support the products they have purchased.
The last day of Tech Fest, I was able to sit down with another FTSS from the IBM Storage team. Neil Youshak is an FTSS who covers the south Florida territory (and more). Not only is Neil an awesome engineer, but he is also a triathlete and swims with sharks. Thanks to Neil for his time, and look for more interviews soon.
I was fortunate enough today to talk with a great engineer from IBM about his experience at IBM Tech Fest, Keith Thuerk. Keith is based in the Southeast, is an FTSS (SE) for IBM, and has been helping clients find IBM storage solutions in his area for over two years. He has a strong background in networking and works hard at finding creative solutions that fit customers' pain points. This week, Keith and other engineers from the East came together for technical training on IBM storage.
Keith and I talked about training and how important it is to keep up your skills. We also chatted about how social media is changing the marketplace.
Keith is also a blogger for IBM and tweets information about IBM storage. You can find his blog, Data Center 7.0, here and follow him on Twitter at @kthuerk.
I am always blown away by the expertise and insight our Advanced Technical Services team displays. They are our "go-to" guys for driving technology to our field teams, and they are the last resort before getting into a development team. For me, they are a well of information and experience that I can use to help build solutions.
Today, I am sitting in the SONAS system training with Mark Taylor. Mark and I have been working together at IBM for a few years, and I have the utmost respect for him. Mark is responsible for supporting the N series and SONAS at IBM along with a few other team members. He is known for being a stickler on our solution assurance calls and is always finding solutions for our clients.
Our mission this week is to learn about storage products on a deeper level. Many in the technical sales group have a specialty that they focus on. It could be XiV, SONAS, mid-tier storage, whatever. This week, when we leave on Friday, we should come away as more well-rounded technical experts.
I am still amazed at the SONAS product and how powerful it really is, especially compared to other products in the marketplace. I find it hard to compare it to other brands due to the great feature set it brings and its integration with TSM. No other storage out there is able to give unstructured NAS data a platform to live on from cradle to grave like SONAS.
This mimics how IBM is doing more solution selling in the marketplace. Our Storage team is partnering with the POWER team and the Software teams to provide customers with a 'one stop' solution. If you look just at the SONAS product, it has multiple components, all from IBM: xSeries servers, TSM and GPFS software, and XiV storage. We are finding that if we combine these products into a solution-based product, customers can solve more issues with the same amount of dollars. I believe this is the future of IBM storage and storage in general.
SONAS does have a couple of points that I would like to see cleaned up. One is the GUI, and the other is its policy writer. From what I can tell, the information in the SONAS GUI is very similar to that of the XiV system; it just has a different look and feel. With the Storwize V7000 getting the 'XiV' look and feel, I suspect future releases of SONAS might get the same treatment. As for the policy engine, it's all based on an SQL-like query language. If you know how to write it, then it's not an issue, but there are some out there who might not be privy to such skills. There are some guidelines and examples that can be used to help set up the policies, like moving data from one pool to the next, but I suspect people will rely more on their Technical Advisor to help define those rules.
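To give a feel for that SQL-like rule language, here is a sketch in the style of the GPFS policy language that SONAS builds on. The rule names and pool names are hypothetical, and real policies should follow the syntax in the product documentation:

```
/* Hypothetical placement rule: new files land on the 'fast' pool */
RULE 'place-new' SET POOL 'fast'

/* Hypothetical tiering rule: files untouched for 30 days
   migrate down to a 'nearline' pool */
RULE 'age-out' MIGRATE FROM POOL 'fast' TO POOL 'nearline'
  WHERE DAYS(CURRENT_TIMESTAMP) - DAYS(ACCESS_TIME) > 30
```

If you can read a SQL WHERE clause, the rules are approachable; the trick is knowing which file attributes (access time, size, fileset, and so on) are available to match on.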
Tomorrow is all about ProtecTIER. I am excited about the hands-on and finding out how this box can really save people space on their backups.
All this week I will be attending a training event at IBM called Tech Fest. Kind of Comic-Con meets IBM Storage University. Technical engineers from all over the country descend upon Washington DC (OK, it's really Gaithersburg) to learn about IBM Storage.
The goal is to bring everyone up to speed on the latest products coming out of IBM Storage: SONAS, XiV, DS8800, Storwize V7000, etc. A pure technical deep dive with the R&D teams to get a better understanding of the new storage and features.
Training is essential to keeping a sales force moving forward, not only to present new ideas to clients but to solve issues that have been around for years. Without training, people are forced to pick and choose which products they get up to speed on, and with a large portfolio of storage at IBM, that can be a huge undertaking. I for one try to keep up with the NAS systems, and that is a never-ending saga.
One idea I have had in the last few weeks leading up to this is how to simplify the entire IBM Storage portfolio. We have a ton of products that have great features, but each seems to cover only a certain area in the data center. You need storage virtualization? We have a system that can do that (actually two now, with the Storwize V7000). You need a high-performance box built on all-SATA technology? We have a system that can do that too. I was really hoping the bigwigs at IBM would start simplifying the product line and have the systems be more universal than they are today.
We have a good thing going with the Storwize V7000. If we could put NAS technology in that system and integrate the XiV interface into our products, we could start simplifying our line. We should have low-end, mid-range and enterprise storage all based on a unified platform. I am sure we can do this, as the products are mostly based on commodity parts; it's just the software integration.
There are definite advantages to simplifying the product line, and I bet we can work toward that goal. Beyond sales, support and development can be simplified and improved, as there would be fewer things to learn. I think there are lots of benefits and some risk.
So this week I am going to be talking to lots of people and getting their opinions on IBM Storage. If you want to follow me on Twitter, subscribe to richswainWORK.
If there was ever a time for IBM to look at the storage market and come up with a product, today is that time. IBM released a new storage platform called Storwize. If you remember, IBM acquired another business named Storwize, but the two do not have anything in common. It's a cool name, and I am glad we got to use it! There is a ton of information coming out about the product, what it can do and how it will help you, but I wanted to take a little different approach to the announcement today. I am going to be doing some live blogging and tweeting about my journey to the announcement in New York City. I will be trying to help everyone who cannot make it get a feel for what is going on, and hopefully I will be able to interview some people along the way.
As for now, I will be putting up some video blogs (vlogs?) and tweeting. If you don't follow me, my account is 'richswainWORK'. IBM will also be using the #ibmstorage hashtag all day to keep up with everyone's comments and questions, so fire away; we have a staff of people just waiting to help.
IBM released a new Data Ontap version last Friday, along with some other minor releases, but more about those later. Data Ontap 8 7-mode is the first release of a new 64-bit architecture that will allow N series customers to take advantage of larger aggregates. A little history: about 8 years ago, Netapp purchased a company named Spinnaker for its 64-bit code, global namespace and some other odds and ends. For the most part, Netapp re-branded this code as its GX platform, allowing customers who wanted the feature set to purchase it apart from the Data Ontap base. GX was not a heavy seller, as it was complicated to install and much more pricey than the other line, so Netapp decided to co-mingle the two code streams into one. At first glance this sounds like a good idea. The Data Ontap code definitely had some limitations (small aggregate sizes, limited growth and no global namespace), but merging the two streams was harder than Netapp imagined. This was shown by Netapp promising a release of the new merged code for years before one was finally available for testing. There were many bugs (as there can be in RC code), but Netapp worked through the majority of them to produce a stepping-stone release of the merged code called 7-mode. The developers used bits and pieces of the GX code to get the 64-bit architecture, allowing customers to build larger aggregates, up to 100TB in size. This was really important, as the 2TB SATA drives were coming and the 16TB aggregate limit would have killed any performance on the system. With only eight 2TB drives in an aggregate, the maximum throughput would be limited to about 400 IOPS per 16TB of drive space, not a good ratio at all. Therefore, a larger aggregate size allows up to fifty 2TB drives, achieving a more respectable 2500 IOPS per aggregate. Now that 7-mode is available, there are some upsides and some downsides.
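The back-of-the-envelope math above is easy to verify. It assumes roughly 50 IOPS per SATA spindle, which is the rule of thumb implied by the 400-IOPS figure; your actual per-drive numbers will vary.

```python
# Aggregate IOPS estimate: spindle count times per-spindle IOPS.
# 50 IOPS per SATA drive is an assumed rule of thumb, not a spec.

IOPS_PER_SATA_DRIVE = 50
DRIVE_TB = 2

def aggregate_iops(num_drives):
    """Rough ceiling on random IOPS an aggregate of spindles can deliver."""
    return num_drives * IOPS_PER_SATA_DRIVE

# Old 32-bit limit: a 16TB aggregate holds only 8 x 2TB drives
print(aggregate_iops(16 // DRIVE_TB))   # 400 IOPS

# 64-bit aggregate: up to 100TB, i.e. 50 x 2TB drives
print(aggregate_iops(100 // DRIVE_TB))  # 2500 IOPS
```

The takeaway is that with big SATA drives, the aggregate size cap, not the drives themselves, becomes the performance ceiling, which is exactly why the 64-bit aggregates matter.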
First, as stated above, the aggregate sizes have increased tremendously. Allowing more data disks in the aggregate increases the amount of IOPS the filer can pool. On the downside, you cannot simply flip a switch and convert an aggregate created in the old 32-bit code to a new 64-bit aggregate. Customers will have to create a new aggregate after upgrading to the 7-mode version of Data Ontap 8 and then migrate with some restore method (think DR restore from backup) onto the new space. You cannot mirror the two, as SnapMirror can only mirror between like-for-like aggregates (32-bit to 32-bit and 64-bit to 64-bit). No big deal if you are a new customer or if the filer is a new addition to the filer farm, but for existing customers I believe this will be a lot tougher. If you do not have the drive space to create a new aggregate of 100TB or less, you will have to either wait to buy more disks or do a manual backup (not a snapshot), destroy the existing aggregate, build a new aggregate on the 64-bit code, then restore. This, and the fact that this is the first release of the new code family, is why customers will not adopt the new code very quickly. There are also some other gotchas, like no support for Performance Acceleration Modules (PAM II), no real interoperability between the two code bases, and more. When I was an administrator, I hated having to read the release notes for the 'fine print gotchas', but in this case I encourage everyone to read the notes thoroughly and perhaps engage your local IBM Storage engineer to help you assess whether you are a good candidate to upgrade. The fact that this is a stepping stone to the full code line does help customers who need to move to the 64-bit architecture today without slowing down Netapp's development team. They are working on the next release of Data Ontap 8, called cluster mode.
This will be the code that allows customers to cluster more than one pair of systems under one global namespace. I suspect this will be a great addition to the Data Ontap code line and will give Netapp more traction in the larger enterprise business. There were also some firmware releases for the EXN3000 shelf on Friday. For more information on what was released, visit the IBM support page at www.ibm.com.
Well, the last two days have been crazy, with really good sessions, lots of networking with tons of people and great discussions throughout the entire conference. The sessions have been well attended, and people are asking great questions. For the most part, I hear that everyone is learning from the sessions; I just hope they don't get overloaded with so much information. Today I presented on PAM II technology for the N series system. We discussed the need for large read-cache systems and how it's not only the size of the disks that is driving this need, but also the business asking for lower response times on data. During this session, a question was brought up about the new acquisition of Storwize and how that would affect the NAS solutions at IBM.
Here is IBM VP of Storage, Doug Balog, talking about the product.
I think it's going to be a good product to put in front of our NAS systems, and it will drive the heavy read-cache systems like PAM II and the huge amounts of cache in the SONAS systems. Speaking of Storwize, I wanted to give everyone a little more information about this product and why IBM purchased them. They provide real-time compression technology that reduces storage needs by compressing the data as it is written. They have an engine called the Random Access Compression Engine (RACE), a compression algorithm that does the conversion with no overhead. The Storwize appliance will work with popular NAS systems, including IBM N series and SONAS, as well as non-IBM NAS systems from EMC, HP, NetApp and others. Storwize real-time compression can provide added value to clients already using data deduplication, thin provisioning and other storage efficiency technologies.
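RACE itself is proprietary, so I cannot show its internals, but the basic payoff of lossless compression is easy to demonstrate with a toy example using Python's standard zlib module. The sample data below is made up and deliberately repetitive; real-world ratios depend entirely on how compressible your files are.

```python
# Toy illustration of lossless compression savings (not RACE itself).
import zlib

# Repetitive, log-like data compresses very well; media files would not.
data = b"2010-06-01,host01,status=OK\n" * 10_000

compressed = zlib.compress(data, level=6)

ratio = len(data) / len(compressed)
print(f"{len(data)} -> {len(compressed)} bytes ({ratio:.1f}x smaller)")

# Lossless: decompressing returns the original bytes exactly.
assert zlib.decompress(compressed) == data
```

The "real-time" part of RACE is the interesting engineering: doing this inline on the data path, with random access into the compressed data, rather than as a batch job afterward.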
I am at the IBM Storage University this week, hoping to spread the good word about NAS technology at IBM. The opening session was awesome, and SONAS was mentioned a couple of times as part of the IBM Storage strategy. Listen below to a few remarks (short clip) from IBM VP of Storage Doug Balog.
My session on NAS technology was well attended, and people asked thoughtful questions. We talked about the N series and a couple of new features we have been adding throughout the year. Then we talked about the SONAS platform, which I think is one of the hottest topics being discussed here this week. I also worked in the solution center, where all of the vendors set up booths. Even Netapp, a platinum sponsor, came with a very large booth this year, right at the door. I didn't get a chance to talk to that team afterward, but I hope they were able to speak to a lot of people here about N series.
I had a ton of people coming by and asking about SONAS, and not just what is it, but how can it help them.
Today there are some great sessions that I am hoping to attend. One is an N series client from IBM talking about managing the largest AGFA PACS solution in the Americas. Then there is my session on ILM/HSM in the SONAS system. I am hoping we will have a great turnout for that! There are so many sessions that I want to attend, I need to clone myself to get to them all.
Just a quick note on two new Solution Briefs that were posted to the IBM SONAS front page. These briefs have important information on using Symantec's EndPoint and NetBackup to protect data on the SONAS system. As always, the devil is in the details but this gives SONAS clients a look into how we can utilize these two solutions with the very powerful SONAS system.
I am also headed up to the IBM Storage University to present on NAS technology at IBM. If you get a chance to stop by and see one of my sessions, please come up and give me feedback on the blog (and other things). I will also be in the Solution Center at the SONAS booth.
Move that file! You know that show where people are moved out of their old house, an army of contractors comes in and builds a new house, and then the people come back and are astonished at their new home. I was watching an older episode the other night and realized how much this improves a family's mobility, productivity and state of mind. While their old house was OK and provided somewhat of a shelter, the new house was 100x better. I think of SONAS in the same way. There are many ways to do NAS technologies. Some take time to develop and build, but others are just as effective with little to no planning. I was talking to a client the other day, and his response to NAS was to put NFS servers in all of their locations. It's cheap and something they can repeat, cookie-cutter style, many times over. What he was not taking into his planning was administering all of these islands of storage and how much he was spending on data sitting on expensive disk. If he were able to consolidate these servers and had a way of moving data around, and eventually off to the greenest storage media out there, tape, how much more money and time would that save him? He didn't have an answer, but we are working on a plan for him today. IBM announced yesterday that SONAS version 1.1.1 will now support ILM tiering with GPFS and moving data off to tape using Tivoli Storage Manager HSM. These two work in concert with the policy manager on the SONAS system to move data in and out of pools based on metadata properties. As discussed in previous posts, SONAS separates the metadata, which allows the scan engine to pass the needed data on to the ILM or TSM agents. These agents then move data between the pools and allow the client to free up space on valuable spinning disks. If you are one of the people who says tape and tiering are not needed, then think about the idea of putting data that hasn't been touched on a medium that costs $0.03 per GB.
It's not that your storage isn't cool, and you may not need tiers for your high performance, but what if the only data on the system was data that was actively being used, and not my old spreadsheet from 2009? Along with the ILM announcement, IBM released the following with version 1.1.1:
SONAS with IBM XIV storage
Higher capacity SAS drives
HTTPS protocol support
Network Information Service (NIS) support
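To put rough numbers on the tape-versus-disk argument above: the $0.03 per GB tape figure comes from the post, while the disk figure below is a hypothetical placeholder for illustration; plug in your own costs.

```python
# Rough savings from tiering cold data off disk onto tape.
# TAPE_PER_GB is from the post; DISK_PER_GB is an assumed figure.

TAPE_PER_GB = 0.03   # $/GB for tape (from the post)
DISK_PER_GB = 1.00   # $/GB for spinning disk (hypothetical)

def savings(cold_data_gb):
    """Dollars saved by moving this much untouched data to tape."""
    return cold_data_gb * (DISK_PER_GB - TAPE_PER_GB)

# 100 TB of data nobody has touched since 2009:
print(f"${savings(100_000):,.2f}")
```

Even with generous assumptions about disk pricing, the gap is wide enough that untouched data is hard to justify keeping on primary spindles.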
I will post more information this week and next week on the replication and the XiV integration.
There is always a part of the business that gets overlooked, and usually it's the people in the trenches making things work and keeping those machines going. I recently had the pleasure of spending some time with three great IBM CEs here in the Raleigh, NC area. I was impressed with their professionalism and thoroughness while working on the SONAS upgrade. They made sure everything was installed, cabled and labeled according to the documentation. It is one thing to have a great product and lots of features, but it is even more important to have people who can service the system and do it with the highest level of craftsmanship. Thanks, guys and gals, for everything you do to help make our job easier!
Today, I helped our local Client Engineers install a couple of new nodes and some more storage into a local SONAS system. This was exciting for me, as I love working with the hardware and software, and it keeps up my keyboard skills. This client is bringing online more demand and needs both horsepower (interface nodes) and storage to accommodate a new business line. I was amazed at how easy the system is to upgrade, and now this client's little starter rack is almost full. We added two interface nodes, IBM xSeries 3650 M2s, and two 60-disk shelves to the unit. Once the disks are online and presented up to the interface nodes, they can start creating shares for the new operation. As they need more storage or more interface nodes, another rack will be put in, and the same process of pooling these resources together will happen again. The idea of having multiple interface nodes and storage pools is to avoid single points of failure. In traditional storage, if a controller goes down, its partner has to pick up the entire workload for the down hardware. Not so in SONAS: if a node goes down, the work is spread evenly across all of the other nodes in the system. This is why we do not have a problem of losing CIFS connections when systems go down. The addition of new storage is also interesting, as we are tripling the amount of storage the base system had originally with two 4U shelves. These shelves are highly dense, top-loading containers using either SAS or SATA disks. In this instance, we were installing 120 2TB SATA drives, a total of 240TB in 8U of space. Not too shabby. At the end of the day, I was pleased to see that IBM is moving forward with smarter storage systems. If you look at the entire portfolio, you can see that systems like the XiV grid, the auto-tiering on the DS8700 and SVC virtualization are all helping our goal of a Smarter Planet. Look for some more pictures and maybe a video on Monday.