RichardSwain | Tags: ian_wright, storwize, video, v7000, ibm storage, youtube | 1 Comment | 3,571 Views
My friend and colleague Ian Wright has put together an awesome YouTube video of the V7000 with the FlashCopy Manager software. Ian has made several videos of the V7000, including a tour of the GUI and various how-tos, and now this. Ian said in a recent email to me: "The video starts out with a restore of an accidentally deleted email (but not a restore of the spam that was deleted) and goes on to show recovering an accidentally deleted database. Both are actions that I think should resonate with customers using these applications."
I thought this was an awesome example of the V7000 and the Rapid Application Storage solution that was released a few months back. Please take a few minutes to go through the video and give Ian some feedback.
This week, I am at SNW in San Jose, CA. If you have never heard of the conference, it's all about storage and networking, and it pulls in all of the big vendors to put on labs, lectures, and a vendor hall. People come from all over the world to learn what is new and how to do things better.
One thing that I love doing at these events is talking to customers and potential customers about IBM storage technology solutions. Often the conversations are less about products and more about the technology inside them that fixes some issue in the data center. This is best seen when you come into the IBM booth: there is no hardware with blinking lights to watch or cables to yank. We have something better, people who know the solutions to your issues.
If you ask any of the IBMers who work these events, they will tell you it's a love-hate relationship. The hours are long, and you stand on your feet for four to eight hours. The best part is talking about IBM solutions and finding out what people are doing in the field. Listening to the customer is the best way to drive innovation, and IBM has programs that send our developers into the field to do just that.
Another event at SNW this year was a gathering of the storage social media moguls. This is a vendor-neutral event and is open to everyone. It is associated with the hashtag #storagebeers, and gatherings have been going on all over the world. Last night was the largest storagebeers to date, and it was a who's who of this community. But better than meeting the people you see on Twitter or who write the blogs you read was the sense of putting all of the vendor fighting behind us: just a group of people who work in the storage industry, talking about whatever was on their minds. If you find yourself at an event like SNW or VMworld, check to see if there is a #storagebeers and go meet some really cool people.
If you are at SNW and want to come by for a chat, you will find me at the IBM Booth today between 11am and 3pm. I would love to spend some time learning about what you are doing in the data center.
RichardSwain | Tags: pearson, nseries, x3650, sonas, tony, ibm storage, r1.2, nas | 4,668 Views
May 9th has been a target on my calendar for some time now. Inside IBM, we have been waiting for this day to come so we could talk about the new things being released across the storage platform. It almost feels like Christmas morning with a bunch of new presents under the tree. Each gift holds something that is either really cool or very useful. The only difference is that your Aunt Matilda and her little dog are not coming over for brunch.
Under the IBM tree today is a slew of presents for almost the entire storage platform. I will concentrate on just the IBM NAS ones but if you are interested in knowing what is going on elsewhere, you can find more information at the main website.
SONAS must have been a good boy because there are plenty of gifts for him under the tree this morning. Not only did he find presents under the tree but there were a few little things in his stocking. Here is what Santa brought:
This SONAS release is labeled R1.2 and can be obtained by contacting the technical advisor assigned to you.
Santa was also at the N series house and dropped off a few gifts: a new N6270 to replace the N6070. This new system is in line with the N6200 series, with larger amounts of RAM and more processors. Just like the smaller N6240, there is an expansion controller where customers can add PCI control cards such as HBAs, 10GbE, or even FCoE. A new disk shelf was also released, which uses the smaller 2.5-inch drives and offers improved back-end performance.
And over at the Real Time Compression house they got new support for EMC Celerra.
Overall, a very busy time of year for IBM (and Santa), as these were just a fraction of the storage announcements today. Also today is the IBM Storage Executive Summit in New York City. My friend and fellow blogger Tony Pearson is covering this great event and will be updating his blog and Twitter feed. If you were not able to make it to NYC for the event, feel free to tweet him your questions at @az990tony. You can also send questions to our IBM Storage feed at @ibmstorage.
I was just thinking the other day that I really need to write an article for my blog about the upcoming releases. When I opened the page it said I had not written anything since May of this year. Time really flies when you are having fun, so they say.
IBM just released a new XIV system dubbed the Gen 3. Generation 1, of course, was built by the XIV company before IBM purchased them; Gen2 came shortly thereafter. As you would expect, the system has to keep up with customer demands and technology refreshes, but one thing in particular caught my eye: the performance of this system should be head and shoulders above the competition.
The Nehalem micro-architecture now makes up the heart of the processing power within the grid, with far more cache to boot. The interconnect has also changed from Ethernet to InfiniBand. I can't wait to see the new SPC-2 numbers when they are published.
I suspect the introduction of more cache (via SSD) and the switch to near-line SAS drives will only help increase performance from a Gen2 to a Gen3 system. The self-tuning, self-healing, tierless storage is still at the heart of the system and still redefines how storage is done today.
There are plenty of blogs and articles on the specifics of the release but here is the IBM announcement page http://www-03.ibm.com/systems/storage/news/announcement/big-data-20110712.html
RichardSwain | Tags: storage, ibm, nseries, rtc, #ibmtechu, sonas, ibmtechconfs | 4,101 Views
Every year IBM puts on a conference for all of our clients, business partners and strategic partners.
RichardSwain | Tags: netapp, nseries, ibmstorage, ftc, nas, unified storage, privacy, ibm, cloud | 5,185 Views
Last week at the IBM Technical Conference, I was able to spend some time with a couple of friends discussing technology. It is always interesting to hear their take on where the storage market is going and what lies ahead. As my NetApp pal and I were chatting about the messaging around unified architecture, we both noted that what is unified from one perspective is disjointed from another.
IBM and NetApp have been using the term unified for their NAS/SAN devices for about five years now. The idea is to share a common code base on the same hardware to increase the functionality and usability of that storage. Other vendors have gone similar routes using multiple code bases and/or hardware platforms, but I see that as a NAS gateway in front of a SAN storage system.
This approach has been very successful in data centers both large and small. But the way we manage storage is changing. Virtualization is changing how, and even where, our data may be stored. "Cloud" is something of a marketing term; I like the term storage utility better. Utility companies such as electric, water, sewer, and even cable provide a product to their consumers, and storage utility vendors could do the same.
Most people are not concerned about the process companies use to make water drinkable, or how electricity is generated, as long as it is safe, reliable, and easy to consume. Storage as a utility is no different; it is only when the storage is offline or breached by outsiders that consumers become concerned. There are laws that govern utilities, and the FTC has put some privacy rules together to help consumers, but I believe we can take it a little further (a blog for another time).
As our data moves from traditional spinning drives in our own data centers to a storage utility, we will need some type of bridge to ease the pain of transition. The main reason people do not adopt new technology is that the transition is often too painful and the benefit of the new technology is less than the cost of the move. Whether it is a software package that helps move data or a hardware device, it will have to give access to both file-based and object-based data. This will allow users to read files as needed, no matter their connectivity or location. It could also help drive efficiencies up by allowing data to move from file-based (high-cost) to object-based (lower-cost) environments.
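To make the bridge idea concrete, here is a minimal sketch of a single read interface spanning file paths and object keys. The class and method names are hypothetical, and the object store is just a dictionary standing in for an S3-style service; this is an illustration of the concept, not any vendor's API:

```python
from pathlib import Path

class StorageBridge:
    """Toy bridge giving uniform read access to file-based data
    (local paths) and object-based data (keys in an object store)."""

    def __init__(self, object_store: dict[str, bytes]):
        # A real bridge would wrap an object-storage client here.
        self.object_store = object_store

    def read(self, location: str) -> bytes:
        # Route by scheme: object:// keys go to the object store,
        # everything else is treated as a local file path.
        if location.startswith("object://"):
            return self.object_store[location[len("object://"):]]
        return Path(location).read_bytes()

# Callers read the same way regardless of where the data lives.
bridge = StorageBridge({"reports/q3.csv": b"region,revenue\nwest,42\n"})
print(bridge.read("object://reports/q3.csv"))
```

The point of the sketch is the single `read` call: applications stay unchanged while data migrates between the high-cost file tier and the lower-cost object tier behind the scenes.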
Today there are some vendors who have early versions of this type of unified solution. They are bridging the gap between what we have today in private data centers and the future of public utility storage. This is very early in the transition but with this type of technology, we will be able to adapt and provide a better way of storing data. Will it still be called a unified solution? Only the marketing people can tell us that.
Labor Day has come and gone, and with it the last holiday between now and Thanksgiving. The only consolation is the hope that your favorite football team (both American football and what we call soccer) has a great weekend match and you get to celebrate with the beverage of your choice.
During your work week, which can and sometimes does include weekends, all you hear is that there is no more money to do the things you have to do to keep the business running. If you have kept up with squeezing more out of your systems with virtualization, that's great, but now your network is overtaxed. The staff that used to take care of certain aspects of the day-to-day running of your data center has been let go, and their job has been 'given' to you with no thought of compensating you for the extra tasks.
The Earth is warming, the weather is out of control, and the price of gas is so high that you decide to bike to work to help save the planet. You spend more time on the road commuting and look like you need a shower when you get to work after dodging traffic all morning. Your coffee costs more now because the coffee house wants to use Fair Trade beans from farmers in a country you have never been to. And your dog is on anti-depressants because you are not home as much, and he can't go out in the yard because of the killer bees migrating north from Mexico.
Our lives seem to be getting more complicated, and it's nice when we find things that not only help us but are easy to use. When you come across such things, they make such an impression that you want to tell others about your good fortune. I came across a solution that was so easy to use, and whose value was so great, that at first I didn't believe the whole story.
About a year ago, I was asked to help out on the Storwize/Real-time Compression (RTC) team as it transitioned into the IBM portfolio. I met with the engineers and sales people, and all had wonderful things to say about the technology. I listened but was hesitant to drink all of the Kool-Aid they were pouring.
A year later, I am very much a believer in the RTC technology and think it really could be a game changer in the market. If you keep up with IDC, Gartner, and the other analysts, they all point to data compression as one of the key tools for handling future growth. A lot of vendors claim they can compress data, but it's not all done the same way.
One of the things that stood out from day one is the idea of using LZ compression in real time instead of deduplication. Coming from an N series (NetApp) background, I understood how deduplication works and where it is useful. But this is compression, which is a different ball game: now we can shrink the footprint of data that is not identical block for block. Given that deduplication is sensitive to block size and offsets, this is exactly what is needed in the market.
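The distinction is easy to see in a few lines. The sketch below uses Python's standard `zlib` module as a stand-in LZ-family compressor; it is purely illustrative and is not the actual RTC implementation. Unlike deduplication, which needs identical blocks at matching offsets, LZ compression finds repeated sequences anywhere in the stream:

```python
import zlib

# Sample data: repetitive records compress well, because LZ-style
# algorithms replace repeated sequences with short back-references,
# regardless of block boundaries or alignment.
data = b"account_id,balance,status\n" + b"1001,2500.00,active\n" * 1000

compressed = zlib.compress(data, level=6)
ratio = len(data) / len(compressed)

print(f"original:   {len(data)} bytes")
print(f"compressed: {len(compressed)} bytes")
print(f"ratio:      {ratio:.1f}:1")

# Compression is lossless: decompressing restores the exact bytes.
assert zlib.decompress(compressed) == data
```

Real-world ratios depend entirely on the data; highly repetitive text like this toy example compresses far better than typical production workloads.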
The next question I always get, and one I had myself, was: "That's great, you can compress data with the best of them, but what's the overhead?" I waited a long time to see the performance numbers and found an astonishing outcome: the RTC appliance actually improved the performance of the overall solution. It helps by adding cache and processing power to the data path, but it also improves performance because the rest of the system has less data to process.
For example, if a system has to save 100GB of data with no compression, then all of that data has to be laid out on disk: the spindles, cache, CPUs, and I/O ports all have to work to move 100GB. But if we get 2:1 or 3:1 compression, all of those components work less; instead of 100GB, they are saving 50GB or roughly 33GB. This frees the system to process more data and respond more quickly to I/O requests (i.e., lower latency).
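The arithmetic above can be sketched in a few lines (illustrative numbers, not measured results):

```python
def effective_write_size(logical_gb: float, ratio: float) -> float:
    """Physical data written for a given compression ratio
    (ratio=2.0 means 2:1, i.e. half the bytes hit the back end)."""
    return logical_gb / ratio

# Back-end work saved for 100GB of logical data at various ratios.
for ratio in (1.0, 2.0, 3.0):
    physical = effective_write_size(100, ratio)
    print(f"{ratio:.0f}:1 -> {physical:.1f} GB written, "
          f"{100 - physical:.1f} GB of back-end work avoided")
```

The same reduction applies to every component in the data path, which is where the latency benefit comes from: less data through the cache, the CPUs, and the I/O ports per logical gigabyte stored.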
So the final thing is always the question of how hard this is to install. Is there a long wait, or do you need five IBM technicians to rack it? All I can say is that it's easy. So easy that there is a good YouTube video that goes through the entire process, from unpacking to racking to compressing data. I think the video speaks for itself:
So if you are back at work today and find your life swirling around you like a hurricane, stop and be reassured: there are a few things out there that can still make your life a little easier. It won't make the killer bees go away, but maybe it will give you peace of mind that your storage won't run out in the near future.