When I first started working at IBM, we had a few NAS storage devices: the NAS 100, NAS 200, NAS 300(G) and the NAS 500. The NAS 100 was a 1U server appliance that ran Windows 2000, as did the NAS 200, all built on IBM hardware. The NAS 500 ran AIX, also on IBM stock hardware. They were traditional NAS-type systems, and IBM sold them on the premise of "let us build the system for you so you don't have to." They were somewhat limited in functionality but did the job they were designed to do: serve NAS data.
That same year, IBM decided to partner with a company that was doing some things in the storage market that looked really interesting. Network Appliance had just started gaining steam with their Data ONTAP code (6.something if I remember correctly) and offered something the IBM systems lacked: unified protocols from a single architecture and integration with other products like Exchange and SQL using their cool snapshot technology. It took some time to get up to speed on the new NetApp technology with snap this and snap that, but soon we were all talking about waffles and aggrs.
Throughout the years, the product set grew and so did the hardware offering. We kept up with the releases, and for the most part a 20-60 day lag in releasing new software was OK for most IBM customers. We partnered with the sales teams and support teams to help grow the N series customer base and to keep those customers happy. As with any partnership there were bumps along the way, and at times it seemed like two parents telling each other they agree to disagree. All in all, the N series system has been very successful at IBM.
But as the years progressed, new technology like XIV, Real Time Compression and TSM FlashCopy Manager filled some of the voids previously filled by N series in the IBM portfolio. As with many companies, there are products that overlap, and N series overlaps with over half of the IBM Storage product line. Positioning became harder as sales teams questioned when to sell N series and when to sell something "blue". We quickly learned that customers really liked what N series brought to the table and how flexible the solution could be.
Now, with the news of NetApp purchasing Engenio, I wonder whether the relationship between IBM and NetApp will survive. IBM also rebrands the Engenio products as the IBM DS3000, DS4000 and DS5000. I guess the bigger question now is what NetApp will do with that product line. If history is any indicator, they will simply keep things as they are for some time and slowly move the customers over to a Data ONTAP product. The other question is how long IBM will keep sending money over to NetApp for products that we sell and support.
If you haven't heard (come out from under that rock), IBM is turning 100 this year and the company is having an awesome time celebrating our longevity. From technical advances and the Apollo program to blazing trails in race and gender equality, IBM has done, and IS doing, the job for all of the world. The company has changed in so many ways and has had to adapt in ways only IBMers can, but we have survived and thrived.
Find more information about our centennial celebration here.
Here is a great 100-second video of all the cool things IBM has done over the last 100 years.
How does one judge a glass of wine? There are a few tests: how it looks, smells and tastes are the basic three. But as the wine is poured, you may or may not know that your wine is made up of different varieties of grapes. A producer sits down and experiments with different percentages of grapes, and this allows some creativity in making a better glass of wine for the consumer. Of course there are many more factors that play into this process, but it's by and large the same no matter what wine you enjoy. You enjoy the wine as a whole, a combination of things put together for you without your having to know or even understand all that went into making that glass of wine.
When we talk to clients about their data backup strategy, we find a process very similar to winemaking. The end user rarely knows all that goes into creating a backup of their data and protecting it for them. They just enjoy the knowledge that their data is safe and will be there if they need to access it. But what we see in the making of the backup is a blend of technologies and a creative element that lets administrators work around constraints like budget and manpower.
As data evolves, we are seeing multiple layers of protection, and the criticality of the data determines the recovery point, the recovery time and the retention period. Backup usually means more than running a bunch of incrementals and then a full off to disk pools and then tape. There are many different levels of protection that we can use.
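To make that more concrete, here is a tiny sketch of what such a tiered scheme might look like. The tier names, RPO/RTO targets and retention periods are made-up examples for illustration, not IBM recommendations:

```python
# Hypothetical protection tiers: names, targets and retention are illustrative only.
PROTECTION_TIERS = {
    "tier1-critical": {            # e.g. production databases
        "methods":   ["snapshots every 15 min", "mirror to second site"],
        "rpo_hours": 0.25,         # how much data you can afford to lose
        "rto_hours": 1,            # how long you can afford to be down
        "retention": "7 days of snapshots + 30 days of backups",
    },
    "tier2-standard": {            # e.g. departmental file shares
        "methods":   ["nightly incremental backup", "weekly full to disk pool"],
        "rpo_hours": 24,
        "rto_hours": 8,
        "retention": "90 days",
    },
    "tier3-archive": {             # e.g. compliance data (Sarbanes-Oxley and similar)
        "methods":   ["monthly full backup to tape"],
        "rpo_hours": 720,
        "rto_hours": 72,
        "retention": "7 years",
    },
}

if __name__ == "__main__":
    for name, tier in PROTECTION_TIERS.items():
        print(f"{name}: RPO {tier['rpo_hours']}h, RTO {tier['rto_hours']}h, "
              f"retention {tier['retention']}, via {', '.join(tier['methods'])}")
```

The point is not the specific numbers but that each class of data gets a deliberate blend of techniques rather than one scheme for everything.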
Snapshots seem to be more common today than 5 years ago. They allow for a clean and consistent recovery point of a database or file system. But snapshots are used for more than just a quick backup: with writable copies we can quickly set up copies for test and dev environments and also rapidly deploy virtual images for desktops or servers. Snapshots usually live on the same disk set the data is sitting on, and can be moved around via a vault technology or a mirror to another site. This can be used for long-term storage if needed, but typically snapshots are used for quick recoveries of less than 7 days. Snapshots are also vulnerable to data corruption: if a software bug corrupts data on the storage system, that can affect the snapshots and mirrors.
Backups are more traditional: the file system is scanned for changes, and those changes are sent off to a device where the data is stored until needed. In the past it has taken more time to back up file systems, and as storage has gotten larger, those backup times have grown longer. The technology has tried to keep up by adding larger backup servers and more tape drives, allowing more streams to come in. Now, with the idea of using spinning disk for tape pools, we can back up a little quicker since disk can write data faster than tape. Many things have evolved out of this technology, for example the Linear Tape File System (LTFS) and Hierarchical Storage Management.
When clients are looking for strategies to protect their data, they will use a combination of these techniques, and a mixture of both disk and tape, to fully protect their environment. Depending on the data type, you may want to use just snapshots if the data changes rapidly and you do not need to restore from a week or a year ago. Snapshots are really useful in that case, and so is mirroring, or even metro mirroring if the RTO is small enough. There are other factors, such as Sarbanes-Oxley, that will require longer-term recovery methods like backups.
Just like a great wine, there are fewer rules today and more room for creativity in designing data protection. And just like wine, there are many consultants who will help you find a good balance of technology, matching levels of protection with data. Spend the time looking at your protection schemes and see if there are better ways of balancing this equation. Maybe, with the right planning, you will be able to enjoy a glass of wine instead of spending time recovering from a disaster.
Labor Day has come and gone, and there isn't another holiday between now and Thanksgiving. The only consolation is the hope that your favorite football team (American football or what we call soccer) has a great weekend match and you get to celebrate with the beverage of your choice.
During your work week, which can and sometimes does include weekends, all you hear is that there is no more money to do the things you have to do to keep the business running. If you have kept up with squeezing more out of your systems with virtualization, that's great, but your network is now overtaxed. The staff that used to take care of certain aspects of the day-to-day running of your data center has been let go, and their job has been 'given' to you with no thought of compensating you for the extra tasks.
The Earth is warming, the weather is out of control, and the price of gas is so high that you decide to bike to work to help save the planet. You spend more time on the road commuting and look like you need a shower when you get to work after dodging traffic all morning. Your coffee costs more now because the coffee house wants to use Fair Trade coffee from farmers in a country you have never been to. And your dog is on anti-depressants because you are not home as much and he can't go out in the yard because of the killer bees migrating north from Mexico.
Our lives seem to be getting more complicated, and it's nice when we find things that not only help us but are easy to use. When you come across these items, they make such an impression that you want to tell others about your good fortune. I came across a solution that was very easy to use, and the value was so great that at first I didn't believe the whole story.
About a year ago, I was asked to help out on the Storwize/Real Time Compression (RTC) team as it transitioned into the IBM portfolio. I met with the engineers and sales people, and all had wonderful things to say about the technology. I listened but was hesitant to drink all of the Kool-Aid they were pouring.
A year later I am very much a believer in the RTC technology and think it really could be a game changer in the market. If you keep up with IDC, Gartner and the other analysts, they all point to compression of data as one of the bigger levers for handling future growth. A lot of vendors claim they can compress data, but it's not all done the same way.
One of the things that stood out from day one is the idea of using LZ compression in real time to shrink data instead of relying on deduplication. Coming from an N series (NetApp) background, I understood how deduplication works and where it is useful. But this is compression, which is a different ball game: now we are able to shrink the footprint of data that isn't made up of identical copies. Given that NetApp's deduplication has issues with block size and offsets, this is exactly what is needed in the market.
The next question I always get, and one I had myself, was "That's great, you can compress data with the best of them, but what's the overhead?". I waited a long time to see what the performance numbers were going to be and found an astonishing outcome: the RTC appliance actually improved the performance of the overall solution. It helps by adding cache and processing power to the serving of data, but it also improves the performance of the system because there is less data to process.
For example, if a system has to save 100GB of data with no compression, then all of that data has to be laid out on the disk; the spindles, cache, CPUs and I/O ports all have to work to save 100GB of data. But if we get 2:1 or 3:1 compression ratios, then all of those components have to work less. No longer are they working to save 100GB of data but 50GB or roughly 33GB of data. This allows the system to process more data and have cycles to respond quicker to I/O requests (i.e. lower latency).
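Here is a quick back-of-the-envelope sketch of that arithmetic, using the same 100GB example. It ignores the CPU cost of compressing and assumes a fixed 8KB back-end I/O size purely for illustration:

```python
def backend_work(logical_gb: float, compression_ratio: float, io_size_kb: int = 8):
    """Rough illustration: how much data (and how many back-end I/Os) actually
    reach the disk for a given compression ratio. CPU cost of compression ignored."""
    physical_gb = logical_gb / compression_ratio
    ios = physical_gb * 1024 * 1024 / io_size_kb      # GB -> KB -> number of I/Os
    return physical_gb, int(ios)

for ratio in (1.0, 2.0, 3.0):
    gb, ios = backend_work(100, ratio)
    print(f"{ratio}:1 compression -> {gb:.0f}GB written, ~{ios:,} back-end 8KB I/Os")
```

The fewer physical I/Os the back end has to perform, the more headroom it has to respond to new requests, which is where the latency improvement comes from.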
So the final thing is always the question of how hard this is to install. Is there a long waiting period, or do you need 5 IBM technicians to install it? All I have to say is: it's easy. So easy that there is a good YouTube video that goes through the entire process, from unpacking to racking to compressing data. I think the video speaks for itself:
So if you are back at work today and find your life swirling around you like a hurricane, stop and be reassured that there are a few things out there that can still make your life a little easier. It doesn't make the killer bees go away, but maybe it will give you peace of mind that your storage won't run out in the near future.
Every year IBM puts on a conference for all of our clients, business partners and strategic partners.
One of my favorite TV programs is the BBC show Top Gear. They test cars not only for handling, looks and cup holders, but mainly for power. At the end they run all of the cars through the same test track and get a time. That time then gets recorded on their list of all the cars tested and is celebrated as an achievement or scorned for doing poorly. No matter what time the car turns in, they are all treated equally.
Today, IBM is announcing results on a benchmark called SPECsfs. This has been the yardstick for all NAS vendors wanting to flex their muscles and show how they handle small-block I/O. Vendors can bring however many drives and tweaks they want, but the test itself is very rigid and has to be certified before the results are published.
IBM put together a SONAS system consisting of 10 interface nodes and 8 storage pods with all SAS disk: a total of about 900TB of usable disk, and about 1/3 of the maximum SONAS configuration. There was no solid-state disk or extra tweaking, just a SONAS system that you could order today. That said, the IBM SONAS set a new world record for single-file-system performance at 403,000 IOPS.
Yes, you read that right: 403k IOPS in a single file system. If you look at the other vendors, they have used multiple file systems to aggregate performance in order to achieve a benchmark result, then layered a virtual namespace in software over all of those file systems. Here, SONAS is one file system spanning 900TB with a true global namespace. One issue with multiple file systems is that data cannot be striped across them, and load balancing becomes a problem. If you look at the comparison of performance per file system, you can see that IBM is WAY beyond the competitors.
So you may be asking, "Yeah, that's pretty cool, but what was the response time?". According to the test, the average response time was 3.23 ms from 0 to 403k IOPS. This is extremely good, and when you consider that it came from one file system of 900TB, you realize how strong that number is compared to other results.
There will be tons of vendors trying to debunk how IBM outperformed them, claiming they have better software or better market share, but it really boils down to these key points:
I have included the slide deck for the announcement below. Feel free to check out the information on the SPECsfs website.
I was driving into the IBM Almaden Research Center and just enjoying the beautiful scenery of the San Jose area. The campus is on top of a hill and surrounded by farmland. I would really like to have a corner office here, but I don't think I would get much done. So here is my vlog for this morning, and I am hoping to get some interviews on here with some of the presenters and attendees.
Here is a great demo from my friend Ian Wright, who is an excellent engineer. Keep on the lookout for more videos from him!
I have a closet in my house where I keep all kinds of computer gear. Most of it is from some fun project I was working on or a technology that is past its prime. There is everything from Zip drives to coax terminators to an Ultra Wide SCSI interface for an external CD-ROM. Why do I keep these things in a box in a closet? Great question, and one that usually comes up once a year when some family member sticks their head in there looking for a toy or a coat, or looking to make a point.
Here is a quick and simple new way to look at storage: stop buying flash arrays that offer a bunch of bells and whistles. Two main reasons: 1. it increases your $/TB, and 2. it locks you into their platform. Let's dive deeper.
1. If you go out and buy an All Flash Array (AFA) from one of the 50 vendors selling them today, you will likely see there is a wide spectrum, not just in the media (eMLC, MLC, cMLC) but also in the features and functionality. These vendors are all scrambling to put in as many features as possible in order to reach a broader customer base. That said, you the customer will be looking to see which AFA has this or is missing that, and it can become an Excel pivot table from hell to manage. The vendor will start raising the price per TB on those solutions because now you have more features, so you get more effective storage or better data protection. But the reality is that you are paying the bills for the developers who are coding the new shiny feature in some basement. That added cost is passed down to the customer and does increase your purchase price.
2. The more features you use on a particular AFA, the harder it is to move to another platform if you want a different system. This is what we call 'stickiness'. Vendors want you to use their features more and more so that when they raise prices or want you to upgrade, it is harder for you to look elsewhere. If you have an outage or something happens where your boss comes in and says "I want these <insert vendor name> out of here", are you going to say, well, the whole company runs on that and it's going to take about 12-18 months to do that?
I bet you're thinking, well, I need those functions because I have to protect my data, or I get more storage out of them because I use this feature. But what you can do is take those functions away from the media and bring them up into a layer above it: a virtual storage layer. This way you can move dumb storage hardware in and out as needed, based more on price and performance than on features and functionality. By moving the higher functionality into the virtual layer, the AFA can be swapped out easily, and you can always look for the lowest-priced system based solely on performance.
Now you're thinking the cost of licenses for this function and that feature in the virtualization layer just moves the numbers around, right? Wrong! For IBM Spectrum Virtualize you buy a license for so many TB, and that license is perpetual. You can move storage in and out of the virtualization layer and you do not have to increase the number of licenses. For example, you purchase 100TB of licenses and you virtualize a 75TB Pure system. Your boss comes in and says, "I need another 15TB for this new project that is coming online next week." You can go out to your vendors, choose a dumb AFA, and insert it into the virtual storage layer, and you still get all of the features and functions you had before. Then a few years go by and you want to replace the Pure system with a nice IBM FlashSystem. No problem: with ZERO downtime you can insert the FlashSystem 900 under the virtual layer, migrate the data to the new flash, and the hosts do not have to be touched.
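The capacity accounting behind that example is simple enough to sketch. This is a hypothetical illustration of the perpetual per-TB licensing idea described above, not an IBM pricing or configuration tool:

```python
# Hypothetical sketch: the license covers a fixed amount of virtualized capacity,
# and backend arrays can come and go underneath it without new licenses.
LICENSED_TB = 100

managed_arrays = {"Pure AFA": 75}          # start by virtualizing the 75TB Pure system

def used_tb():
    return sum(managed_arrays.values())

def add_array(name, tb):
    assert used_tb() + tb <= LICENSED_TB, "would exceed licensed capacity"
    managed_arrays[name] = tb
    print(f"added {name} ({tb}TB); {LICENSED_TB - used_tb()}TB of license headroom left")

def swap_array(old, new, tb):
    # Data is migrated inside the virtualization layer, then the old array is removed;
    # the hosts keep talking to the same virtual volumes the whole time.
    del managed_arrays[old]
    add_array(new, tb)

add_array("dumb AFA for new project", 15)          # the extra 15TB the boss asked for
swap_array("Pure AFA", "IBM FlashSystem 900", 75)  # later, replace the Pure system
```

The backend names change, but the licensed capacity and the host-facing volumes do not, which is the whole point of pulling the features up into the virtual layer.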
The cool thing I see with this kind of virtualization layer is the simplicity of not having to know how to program APIs, or having a bunch of consultants come in for some long, drawn-out study and then tell you to go to 'cloud'. In one way this technology is creating a private cloud of storage for your data center. But the point here is that not having to buy feature licenses every time you buy a box lowers that $/TB and gives you true freedom to shop the vendors.
Who doesn't like getting something free? IBM is offering customers the chance to grab some more flash storage by including a flash drive at no charge with the purchase of two flash drives on new Storwize V7000s or V5000s. There is a maximum of 4 drives at no charge for the V7000 and 2 drives at no charge for the V5000. What does this really mean? Since a V7000 controller can hold up to 24 drives, you are automatically getting a 25% free upgrade while paying only 75% of the total (24-drive) purchase cost.
The drive sizes for this deal are the 1.6TB, 1.9TB and 3.2TB eMLC flash drives. This offer is only for new systems, does not apply to upgrades, and runs only through December 26, 2016.
Just a quick note on the Data Reduction Tool that we use in the field to help estimate how much storage customers will save by running their data on our A9000 All Flash Array. The system's design (not its flash media) is based on the XIV grid architecture, and it has been available to customers since this past summer.
One of the things many of our customers tell us is that our competition is out there offering outlandish data-savings claims based only on their word. For the past 5 years, IBM has been giving customers a real estimate of their compression savings, accurate to within 5%, using our compression estimation tool, without any change to your code or storage system. Now we can run the tool on your data to estimate savings from our compression, deduplication and pattern analysis.
The tool is downloaded from the IBM site and run on the host against the storage LUN/volume. At the end you will be able to see the savings broken down into those three categories, plus how much could also be saved using thin provisioning. The tool is CLI based and should be run by an admin with proper access.
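To give a feel for what an estimator like this does under the covers, here is a rough, hypothetical sketch. It is NOT the IBM Data Reduction Tool, just an illustration of sampling blocks from a LUN and measuring their compressibility and duplication; the block size, sampling interval and device path are all made up:

```python
import hashlib
import zlib

BLOCK_SIZE = 8 * 1024      # sampling granularity (illustrative; real tools may differ)
SAMPLE_EVERY = 128         # read 1 block out of every 128 to keep the scan quick

def estimate_savings(device_path, max_samples=10000):
    """Rough, illustrative estimate of compression and dedupe potential on a LUN."""
    seen_hashes = set()
    sampled = compressed = duplicates = 0

    with open(device_path, "rb") as dev:               # usually needs root to open /dev/sdX
        while sampled < max_samples:
            block = dev.read(BLOCK_SIZE)
            if len(block) < BLOCK_SIZE:
                break
            sampled += 1
            compressed += len(zlib.compress(block, 6)) # how small does this block get?
            digest = hashlib.sha1(block).digest()      # fingerprint to spot duplicate blocks
            if digest in seen_hashes:
                duplicates += 1
            else:
                seen_hashes.add(digest)
            dev.seek(BLOCK_SIZE * (SAMPLE_EVERY - 1), 1)   # skip ahead to the next sample

    if sampled == 0:
        print("no data sampled")
        return
    raw = sampled * BLOCK_SIZE
    print(f"sampled {sampled} blocks ({raw / 2**20:.1f} MiB)")
    print(f"compression ratio ~ {raw / compressed:.2f}:1")
    print(f"duplicate blocks   ~ {100.0 * duplicates / sampled:.1f}%")

if __name__ == "__main__":
    estimate_savings("/dev/sdb")   # hypothetical device path
```

The real tool obviously does much more (pattern analysis, thin-provisioning savings, a proper report), but the basic idea of measuring your actual data rather than quoting a marketing number is the same.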
All said, this tool is the best thing out there to give customers a real idea of their true savings. For more information, please follow the links below.
For more information about the tool or help running it feel free to contact me or your local IBM Storage Engineer.
Natural disasters such as earthquakes, floods and hurricanes all have at least one thing in common: they make companies look at their DR plan. The scenario plays out something like this:
The CIO texts the IT person in charge of keeping the company running: "Hey, just checking to make sure we are ok in case this hurricane hits us???? :)"
The reply: "Yeah, we can just move stuff around to the other datacenter and we have most of it in the cloud. We are headed to the bar for hurricanes, come join us!"
A DR plan is only as good as its last test. When I was starting my IT career, I had to help put together a DR plan and then go to the offsite location to test it. This was an eye-opening, watershed experience for me, as I learned that not everything you write on paper can be done in the time you actually have. I can still remember thinking we could restore all of the databases and backup libraries from tape in a few hours and be back up and running. My test plan was flawed because I didn't:
a. Understand the business needs (SLAs)
b. Have input from different IT sectors (network, directory services, databases, backups, etc.)
c. Have a plan B, C and D
Now we can claim our data is safe because it's "in the cloud", and that does take some burden off the IT department, but in reality the onsite infrastructure still has to be able to connect to the cloud. We also have replication of everything, which lowers our downtime and keeps things in a more crash-consistent state. VMware allows us to move servers from one data center to the next, and we are more accustomed to keeping things up all the time.
This always-up scenario still doesn't excuse us from having a DR plan and testing it. If you rely on the software to make sure your business is always up and running, you need to understand the processes it takes to get that software up and running. There is a phrase I have heard people use more and more when describing large downtime problems: "the perfect storm". This is where a process is not understood or not taken into consideration when keeping the business running. For me, when I was younger, it was the fact that we needed to have a directory service restored before we could start restoring servers. I didn't understand the fundamental need to have ALL of the users and passwords for the servers in place before the servers were restored.
I hope everyone in the FL/GA/SC area stays safe and has taken all of the necessary precautions for their homes and businesses. Good luck and God bless you all.
In response to: IBM Announcements May 2014 - Storwize Family. Tony, great job. Also, FYI, the V7kU 1.5 code now offers multi-tenancy.