When I first started working at IBM, we had a few NAS storage devices: the NAS 100, NAS 200, NAS 300(G) and the NAS 500. The NAS 100 was a 1U server appliance that ran Windows 2000, as did the NAS 200, both built on IBM hardware. The NAS 500 ran AIX, also on IBM hardware. They were traditional NAS systems, and IBM's pitch was essentially "let us build the system for you so you don't have to." They were somewhat limited in functionality, but they did the job they were designed to do: serve NAS data.
That same year, IBM decided to partner with a company doing some things in the storage market that looked really interesting. Network Appliance had just started gaining steam with its Data ONTAP code (6.something, if I remember correctly) and had broken a barrier that IBM's systems couldn't: unified protocols from a single architecture, plus integration into products like Exchange and SQL Server using its cool snapshot technology. It took some time to get up to speed on the new NetApp technology, with snap this and snap that, but soon we were all talking about waffles and aggrs.
Throughout the years, the product set grew and so did the hardware offerings. We kept up with the releases, and for the most part a 20-60 day lag behind new software releases was fine for most IBM customers. We partnered with the sales and support teams to help grow the N series customer base and keep those customers happy. As with any partnership there were bumps along the way, and at times it felt like two parents agreeing to disagree. All in all, the N series has been very successful at IBM.
But as the years progressed, new technologies like XIV, Real-time Compression and TSM FlashCopy Manager filled some of the voids previously filled by N series in the IBM portfolio. As with many companies, there are products that overlap, and N series overlaps with over half of the IBM Storage product line. Positioning became harder as sales teams questioned when to sell N series and when to sell something "blue". We quickly learned that customers really liked what N series brought to the table and how flexible the solution could be.
Now with the news of NetApp purchasing Engenio, I wonder how the relationship between IBM and NetApp will fare. IBM also rebrands the Engenio products as the IBM DS3000, DS4000 and DS5000. I guess the bigger question is what NetApp will do with that product line. If history is any indicator, they will simply keep things as they are for a while and slowly move the customers over to a Data ONTAP product. The other question is how long IBM will keep sending money to NetApp for products that we sell and support.
If you haven't heard (come out from under that rock), IBM is turning 100 this year, and the company is having an awesome time celebrating our longevity. From technical advances and the Apollo program to blazing trails in race and gender equality, IBM has done, and is doing, the job for the whole world. The company has changed in so many ways and has had to adapt in ways only IBMers can, but we have survived and thrived.
Find more information about our centennial celebration here.
Here is a great 100-second video of the cool things IBM has done over the last 100 years.
How does one judge a glass of wine? There are a few tests; how it looks, smells and tastes are the basic three. But as the wine is poured, you may or may not know that it is made up of different varieties of grapes. A producer sits down and experiments with different percentages of grapes, and this allows some creativity in making a better glass of wine for the consumer. Of course there are many more factors that play into the process, but it's by and large the same no matter what wine you enjoy. You enjoy the wine as a whole, a combination of things put together for you, without you having to know or even understand all that went into making that glass.
When we talk to clients about their data backup strategy, we find a process very similar to winemaking. The end user rarely knows all that goes into creating a backup of their data and protecting it for them. They just enjoy the knowledge that their data is safe and will be there if they need to access it. But what we see in the making of the backup is a blend of technologies and a creative element that lets administrators work around constraints like budget and manpower.
As data evolves, we are seeing multiple layers of protection, and the criticality of the data determines the recovery point objective (RPO), the recovery time objective (RTO) and the retention period. Backup technology usually means more than running a bunch of incrementals and a full off to disk pools and then tape; there are many different levels of protection that we can use.
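To make the idea concrete, a protection catalog often ends up as a simple mapping from data criticality to RPO, RTO and retention targets. The tier names and numbers below are hypothetical, purely to illustrate the shape of such a policy:

```python
# Hypothetical protection tiers: the names and targets below are
# illustrative examples, not taken from any product or standard.
PROTECTION_TIERS = {
    "critical": {"rpo_hours": 1,   "rto_hours": 1,  "retention_days": 30},
    "standard": {"rpo_hours": 24,  "rto_hours": 8,  "retention_days": 365},
    "archive":  {"rpo_hours": 168, "rto_hours": 72, "retention_days": 2555},
}

def protection_for(tier):
    """Look up the RPO, RTO and retention targets for a data tier."""
    return PROTECTION_TIERS[tier]
```

Writing the policy down like this, even informally, is what lets you choose snapshots for one tier and long-retention backup for another instead of treating all data the same.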
Snapshots seem to be more common today than five years ago. They allow for a clean and consistent recovery point of a database or file system. But snapshots are used for more than just a quick backup: with writable copies we can quickly set up copies for test and dev environments and rapidly deploy virtual images for desktops or servers. Snapshots usually sit on the same disk set as the data itself, and can be moved elsewhere via a vault technology or a mirror to another site. That can be used for long-term storage if needed, but typically snapshots are used for quick recoveries of less than 7 days. Snapshots are also vulnerable to data corruption: if a software bug corrupts data on the storage system, it can affect the snapshots and mirrors too.
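That "quick recoveries of less than 7 days" pattern boils down to a retention pass over the snapshot list. A minimal sketch, with names and structure that are mine rather than any vendor's API:

```python
from datetime import datetime, timedelta

def prune_snapshots(snapshots, now, keep_days=7):
    """Return the snapshots still inside the retention window.

    `snapshots` is a list of (name, created_at) tuples; anything older
    than `keep_days` is dropped, mirroring a short quick-recovery window.
    """
    cutoff = now - timedelta(days=keep_days)
    return [(name, ts) for name, ts in snapshots if ts >= cutoff]
```

A real scheduler layers hourly, daily and weekly windows on top of this, but the core decision is always the same comparison against a cutoff.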
Backups are more traditional: the file system is scanned for changes, and those changes are sent off to a device where the data is stored until needed. Backing up file systems has always taken time, and as storage has gotten larger, those backup windows grow longer. The technology has tried to keep up by adding larger backup servers and more tape drives, allowing for more incoming streams. Now, with the idea of using spinning disk for tape pools, we can back up a little quicker, as disk can write data faster than tape. Many things have evolved out of this technology, for example the Linear Tape File System (LTFS) and Hierarchical Storage Management (HSM).
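The scan-for-changes step at the heart of an incremental backup can be sketched in a few lines. Real products track change logs or block maps rather than walking the tree, but a modification-time walk shows the idea (illustrative only):

```python
import os

def changed_since(root, last_backup_time):
    """Walk `root` and collect files modified after the last backup.

    This is the essence of an incremental pass: scan the file system,
    compare modification times against the last backup, and send only
    the changed files to the backup target.
    """
    changed = []
    for dirpath, _dirs, files in os.walk(root):
        for fname in files:
            path = os.path.join(dirpath, fname)
            if os.path.getmtime(path) > last_backup_time:
                changed.append(path)
    return changed
```

The walk itself is why backup windows grow with the file count: even if nothing changed, every file still has to be examined.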
When clients are looking for strategies to protect their data, they will use a combination of these techniques, and a mixture of both disk and tape, to fully protect their environment. Depending on the data type, you may want to use just snapshots because the data changes rapidly and you will never need to restore from a week or a year ago. Snapshots are really useful in that case, and so is mirroring, or even Metro Mirror if the RTO is small enough. There are other factors, such as Sarbanes-Oxley, that will require longer-term recovery methods like backups.
Just like a great wine, there are fewer rules today and more room for creativity in designing data protection. And just like with wine, there are many consultants who will help you find a good balance of technology to match levels of protection with data. Spend the time looking at your protection schemes and see if there are better ways of balancing this equation. Maybe, with the right planning, you will be able to enjoy a glass of wine instead of spending time recovering from a disaster.
Labor Day has come and gone, and there isn't another holiday between now and Thanksgiving. The only consolation is the hope that your favorite football team (both American football and what we call soccer) has a great weekend match and you get to celebrate with the beverage of your choice.
During your work week, which can and sometimes does include weekends, all you hear is that there is no more money to do the things you have to do to keep the business running. If you have kept squeezing more out of your systems with virtualization, that's great, but your network is now overtaxed. The staff who used to take care of certain aspects of the day-to-day running of your data center have been let go, and their job has been 'given' to you with no thought of compensating you for the extra tasks.
The Earth is warming, the weather is out of control, and the price of gas is so high that you decide to bike to work to help save the planet. You spend more time commuting and look like you need a shower when you get to work after dodging traffic all morning. Your coffee costs more now because the coffee house wants to use Fair Trade beans from farmers in a country you have never been to. And your dog is on antidepressants because you are not home as much and he can't go out in the yard because of the killer bees migrating north from Mexico.
Our lives seem to be getting more complicated, and it's nice when we find things that not only help us but are easy to use. When you come across such things, they make such an impression that you want to tell others about your good fortune. I came across a solution that was very easy to use, and the value was so great that at first I didn't believe the whole story.
About a year ago, I was asked to help out on the Storwize/Real-time Compression (RTC) team as it transitioned into the IBM portfolio. I met with the engineers and sales people, and all had wonderful things to say about the technology. I listened, but was hesitant to drink all of the Kool-Aid they were pouring.
A year later, I am very much a believer in the RTC technology and think it really could be a game changer in the market. If you keep up with IDC, Gartner and the other analysts, they all point to compression as one of the bigger levers for handling future data growth. A lot of vendors claim they can compress data, but it's not all done the same way.
One of the things that stood out from day one is the idea of using LZ compression in real time to shrink data instead of deduplication. Coming from an N series (NetApp) background, I understood how deduplication works and where it is useful. But this is compression, which is a different ball game: now we are able to shrink data that isn't exactly the same as something stored before. Given that deduplication is sensitive to block sizes and offsets, this is exactly what is needed in the market.
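A toy illustration of the difference, assuming nothing about the RTC internals: shift a block of data by a single byte and exact-match block deduplication sees two unrelated blocks, yet a generic LZ compressor (zlib here) still squeezes out the redundancy:

```python
import hashlib
import zlib

# Two nearly identical 4 KiB "blocks": the second is the same data
# shifted by one byte, so their contents no longer line up.
block_a = b"0123456789abcde\n" * 256      # 4096 bytes of repetitive data
block_b = b"#" + block_a[:-1]             # same bytes, offset by one

# Exact-match deduplication compares block fingerprints: the shift
# changes every block hash, so no duplicate is found.
assert hashlib.sha1(block_a).digest() != hashlib.sha1(block_b).digest()

# LZ compression works on the byte stream, so the repetition across
# both blocks is removed regardless of alignment.
ratio = len(block_a + block_b) / len(zlib.compress(block_a + block_b))
```

This is only a sketch of the principle, not how any particular appliance is built, but it shows why offset-shifted data defeats block hashing while still compressing well.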
The next question I always get, and one I had too, is: "That's great, you can compress data with the best of them, but what's the overhead?" I waited a long time to see the performance numbers and found an astonishing outcome: the RTC appliance actually improved the performance of the overall solution. Part of that comes from adding cache and processing power in front of the data, but part comes from the system simply having less data to process.
For example, if a system has to save 100GB of data with no compression, then all of that data has to be laid out on disk; the disks that spin for 100GB of data, the cache, the CPUs and the I/O ports all have to work harder to save 100GB. But if we get 2:1 or 3:1 compression ratios, then all of those components work less. No longer are they saving 100GB of data but 50GB or 33GB. This lets the system process more data and leaves cycles to respond quicker to I/O requests (i.e., lower latency).
So the final thing is always the question of how hard this is to install. Is there a long wait, or do you need five IBM technicians to install it? All I have to say is: it's easy. So easy that there is a good YouTube video that goes through the entire process, from unpacking to racking to compressing data. I think the video speaks for itself:
So if you are back at work today and find your life swirling around you like a hurricane, stop and be reassured that there are a few things out there that can still make your life a little easier. It won't make the killer bees go away, but maybe it will give you peace of mind that your storage won't run out in the near future.
Every year IBM puts on a conference for all of our clients, business partners and strategic partners.
Do you expect more out of your storage? IBM thinks you should, and is putting its money where its mouth is. In the past the event has gone under different names, like STG University and Storage Symposium, but now IBM has revamped its premier storage conference. The big announcement came today with much fanfare, including a new website, some videos and a bunch of hype on Twitter. With a three-part conference for executives, gear heads and business partners, there is something for everyone. But what will be different from years past? I think IBM looked at how other vendors use conferences to pump up their customer base (VMworld, EMCwhatever) and decided to put some hype into its own.
Think of this as a great place to go and network, learn and have a good time. The conference will be in Orlando, and there will be tons of time to sit in classrooms and learn about the latest technologies, but there will also be sessions where IBM pulls in our top execs and analysts to tell you where IBM is going in the storage world.
The Executive Edge will feature speakers including Jeff Jonas, Aviad Offer and IT finance expert Calvin Braunstein. This track will take executives through new announcements, deep dives on technical platforms, one-on-one sessions with IBM execs and some great entertainment. This is a new feature of the conference, which in the past was geared more towards the technical teams.
Of course, space in the Executive Edge will be limited, so talk to your local storage sales person for a chance to be part of this special event. There will be time to bring in your team and have special sessions and round tables with the IBM engineers, who can help you find your way down this path of crazy storage growth. And there is a golf course on site, which I hear is very nice. Bring your clubs or rent them; I am sure there will be plenty of us out there, so find a partner and have a good time.
More importantly, IBM is making the effort to step up the event and put it on par with other IBM conferences like Pulse. The technical portion will have over 250 sessions on storage-related topics. You will also get roadmap information from the product teams, as well as a chance to become a certified technician. One area that has been expanding is our hands-on labs, and this year we will have the biggest one yet. You will be able to come into the labs, actually see our storage systems and take them for a 'test drive'.
Early bird registration is open now and you can sign up today. The conference will be in sunny Orlando, Florida at the Waldorf Astoria and Hilton Orlando at Bonnet Creek. The event starts on June 4th and runs to the 8th. You can follow the conference on Twitter @IBMEdge and use the hashtag #ibmedge. For the conference website, go here.
I look forward to seeing you in June.
Now available is the IBM System Storage N series with VMware vSphere Redbook.
Redbooks are a great way to learn a new technology or serve as a reference for configuration. I have used them for years, not just for storage but for xSeries servers and for software like TSM. The people who write these books spend a great deal of time putting them together, and I believe most of them are written by volunteers.
This is the third edition of this Redbook, and if you have read it before, here are some of the changes:
- Latest N series model and feature information
- Updated to reflect VMware vSphere 4.1 environments
- Added information for Virtual Storage Console 2.x
This book on N series and VMware begins with an introduction to both the N series systems and VMware vSphere. There are sections on installing the systems, deploying the LUNs and recovery. After going through this Redbook, you will have a better understanding of a complete and protected VMware environment. If you need help sizing your hardware, there is a section for you. If you are looking to test running VMs over NFS, it's in there too!
One of the biggest issues with virtual systems is making sure you have proper alignment between the guest file system blocks and the storage array. Misalignment can degrade most random reads and writes by a factor of two, because two backend blocks are required to satisfy one request. To avoid this costly mistake, or to correct VMs you have already set up, a section of the book called "Partition alignment" walks you through the entire process of setting the alignment correctly or fixing older systems.
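The arithmetic behind that factor of two is easy to sketch. Assuming a hypothetical 4 KiB backend block size (the actual size depends on the array), a partition that starts off a block boundary makes every block-sized guest I/O straddle two backend blocks:

```python
def is_aligned(partition_offset_bytes, block_size=4096):
    """True when the partition start falls on a storage-block boundary.

    A misaligned partition makes most block-sized guest I/Os straddle
    two backend blocks, roughly doubling the work per request.
    """
    return partition_offset_bytes % block_size == 0

def blocks_touched(io_offset, io_size, block_size=4096):
    """How many backend blocks a single I/O at `io_offset` touches."""
    first = io_offset // block_size
    last = (io_offset + io_size - 1) // block_size
    return last - first + 1
```

For instance, the classic legacy partition start at sector 63 (offset 32256 bytes) is not 4 KiB aligned, so a 4 KiB guest write there touches two backend blocks instead of one; a 1 MiB-aligned start avoids that.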
Another area I will point out is the use of deduplication, compression and cloning to drive storage efficiency higher. These software features allow customers to store more systems on the array than traditional provisioning would allow. The book also covers using snapshots for cloning, mirrors for Site Recovery Manager, and long-term retention via SnapVault. At the end of the book are examples of scripts one might use for snapshots in hot backup mode.
Whether you are a seasoned veteran or a newbie to the VMware scene, this is a great guide that will help you from start to finish in setting up your vSphere environment. The information is there; use the search feature or sit down on a Friday with a highlighter, whichever fits your style, and learn a little about using an N series system with VMware.
Here is the link to this Redbook:
IBM N series and VMware vSphere
For more information on Redbooks go here!