Tags: ibm video cloud #ibm100 think 100_years_of_ibm_in_100_s... watson history bigdata centennial punchcard fractals
If you haven't heard (come out from under that rock), IBM is turning 100 this year and the company is having an awesome time celebrating our longevity. From technical advances like the Apollo program to blazing trails in race and gender equality, IBM has been, and still is, doing the job for the whole world. The company has changed in so many ways and has had to adapt in ways only IBMers can, but we have survived and thrived.
Find more information about our centennial celebration here.
Here is a great 100-second video of all the cool things IBM has done over the last 100 years.
If there was ever a time for IBM to look at the storage market and come up with a product, today is that time. IBM released a new storage platform called Storwize. If you remember, IBM acquired a company named Storwize, but the new platform and that acquisition have nothing in common beyond the name. It's a cool name and I am glad we got to use it!
There is a ton of information coming out about the product, what it can do and how it will help you, but I wanted to take a little different approach to the announcement today. I am going to be doing some live blogging and tweeting about my journey to the announcement in New York City. I will be trying to help everyone who cannot make it get a feel for what is going on, and hopefully I will be able to interview some people along the way.
As for now, I will be putting up some video blogs (vlogs?) and tweeting. If you don't follow me, my account is 'richswainWORK'. IBM will also be using the #ibmstorage hashtag all day to keep up with everyone's comments and questions, so fire away; we have a staff of people just waiting to help.
Tags: tsm tivoli snapshots rto data wine sonas nas hsm rpo backup ltfs recovery ibm protection
How does one judge a glass of wine? There are a few tests; how it looks, smells and tastes are the basic three. But as the wine is poured, you may or may not know that it is made up of different varieties of grapes. A producer sits down and experiments with different percentages of grapes, and this allows some creativity in making a better glass of wine for the consumer. Of course there are many more factors that play into this process, but it is by and large the same no matter what wine you enjoy. You enjoy the wine as a whole, a combination of things put together for you without your having to know or even understand all that went into making that glass.
When we talk to clients about their data backup strategy, we find a process very similar to wine making. The end user rarely knows all that goes into creating a backup of their data and protecting it for them. They just enjoy the knowledge that their data is safe and will be there if they need to access it. But what we see in the making of the backup is a blend of technologies and a creative element that lets administrators work around constraints like budget and manpower.
As data evolves, we are seeing multiple layers of protection, and the criticality of the data determines the recovery point, the recovery time and the retention period. Backup usually means more than running a bunch of incrementals and then a full off to disk pools and then tape. There are many different levels of protection that we can use.
Snapshots seem to be more common today than five years ago. They allow for a clean and consistent recovery point of a database or file system. But snapshots are used for more than just a quick backup; with writable copies we can quickly set up copies for test and dev environments and also rapidly deploy virtual images for desktops or servers. Snapshots usually sit on the same disks as the data itself, and they can be moved off via a vault technology or a mirror to another site. That can be used for long-term storage if needed, but typically snapshots are used for quick recoveries of less than 7 days. Snapshots are also vulnerable to data corruption: if a software bug comes in and corrupts data on the storage system, that can affect the snapshots and mirrors too.
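To make that 7-day idea concrete, here is a minimal sketch of the kind of age-based snapshot pruning an admin might script. The snapshot names and the print call standing in for an array API are my own hypothetical placeholders; real arrays handle this through their own schedulers.

```python
from datetime import datetime, timedelta

# Hypothetical snapshot records: (name, creation time)
snapshots = [
    ("db_snap_0501", datetime(2011, 5, 1, 2, 0)),
    ("db_snap_0508", datetime(2011, 5, 8, 2, 0)),
    ("db_snap_0509", datetime(2011, 5, 9, 2, 0)),
]

RETENTION = timedelta(days=7)  # snapshots mainly cover quick, recent restores
now = datetime(2011, 5, 10)

# Keep the last 7 days; anything older should already be covered
# by a traditional backup or a mirror at another site.
for name, created in snapshots:
    if now - created > RETENTION:
        print(f"pruning {name}")  # a real script would call the array's API here
```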
Backups are more traditional: the file system is scanned for changes, and those changes are sent off to a device where the data is stored until needed. It has always taken time to back up file systems, and as storage has gotten larger, those backup windows grow longer. The technology has tried to keep up by adding larger backup servers and more tape drives, allowing for more streams coming in. Now, with the idea of using spinning disk for tape pools, we can back up a little quicker, since disk can accept data faster than tape. Many things have evolved out of this technology, for example the Linear Tape File System (LTFS) and Hierarchical Storage Management (HSM).
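The scan itself is the expensive part. A rough sketch of that traditional incremental scan, assuming a plain POSIX file system:

```python
import os

def find_changed_files(root, last_backup_time):
    """Walk a file system and pick up files modified since the last backup.

    This is the heart of a traditional incremental: scan everything,
    compare modification times, and ship only the changes to the target.
    """
    changed = []
    for dirpath, _dirnames, filenames in os.walk(root):
        for name in filenames:
            path = os.path.join(dirpath, name)
            try:
                if os.path.getmtime(path) > last_backup_time:
                    changed.append(path)
            except OSError:
                continue  # file vanished mid-scan; the next run will catch it
    return changed
```

Note that even when nothing changed, the walk still visits every file, which is exactly why bigger file systems mean longer backup windows no matter how fast the target is.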
When clients are looking for strategies to protect their data, they will use a combination of these techniques, and a mixture of both disk and tape, to fully protect their environment. Depending on the data type, you may want to use just snapshots because the data changes rapidly and you never need to restore from a week or a year ago. Snapshots are really useful in that case, and so is mirroring, or even Metro Mirror if the RTO is small enough. There are other factors, such as Sarbanes-Oxley, that will require longer-term retention methods like backups.
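Here is that decision process as a toy Python sketch. The thresholds are illustrative assumptions on my part, not hard rules; every shop weighs these differently.

```python
def protection_mix(rpo_hours, rto_hours, retention_days):
    """Map recovery objectives to candidate techniques (illustrative only)."""
    techniques = []
    if rpo_hours < 1 and rto_hours < 1:
        techniques.append("synchronous mirror (e.g. Metro Mirror)")
    if retention_days <= 7:
        techniques.append("snapshots for quick, recent restores")
    else:
        techniques.append("scheduled backups to disk pools, then tape")
    if retention_days >= 7 * 365:
        techniques.append("long-term retention for compliance (e.g. SOX)")
    return techniques

# A regulated database: tight recovery objectives plus a 7-year retention
print(protection_mix(rpo_hours=0.25, rto_hours=0.5, retention_days=7 * 365))
```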
Just like a great wine, there are fewer rules today and more room for creativity in designing data protection. And just like wine, there are many consultants who will help you find a good balance of technology to match levels of protection with data. Spend the time looking at your protection schemes and see if there are better ways of balancing this equation. Maybe, with the right planning, you will be able to enjoy a glass of wine instead of spending time recovering from a disaster.
Tags: pearson x3650 nseries sonas tony ibm storage r1.2 nas
May 9th has been a target on my calendar for some time now. Inside of IBM, we have been waiting for this day to come so we could talk about the new things being released in the storage platform. It almost feels like Christmas morning with a bunch of new presents under the tree. Each gift holds something that is either really cool or very useful. The only difference is that your Aunt Matilda and her little dog are not coming over for brunch.
Under the IBM tree today is a slew of presents for almost the entire storage platform. I will concentrate on just the IBM NAS ones but if you are interested in knowing what is going on elsewhere, you can find more information at the main website.
SONAS must have been a good boy because there are plenty of gifts for him under the tree this morning. Not only did he find presents under the tree but there were a few little things in his stocking. Here is what Santa brought:
This SONAS release is labeled R1.2 and can be obtained by contacting the technical advisor assigned to you.
Santa was also at the N series house and dropped off a few gifts: a new N6270 to replace the N6070. This new system is in line with the rest of the N6200 series, with larger amounts of RAM and more processors. Just like the smaller N6240, there is an expansion controller where customers can add PCI cards like HBAs, 10GbE or even FCoE. A new disk shelf was also released, which uses the smaller 2.5-inch drives for improved back-end performance.
And over at the Real Time Compression house they got new support for EMC Celerra.
Overall, a very busy time of year for IBM (and Santa), as these were just a fraction of the storage announcements today. Also today is the IBM Storage Executive Summit in New York City. My friend and fellow blogger Tony Pearson is covering this great event and will be updating his blog and Twitter feed. If you were not able to make it to NYC for the event, feel free to tweet him your questions at @az990tony. You can also send questions to our IBM Storage feed at @ibmstorage.
I have a closet in my house where I keep all kinds of computer gear. Most of it is from some fun project I was working on, or a technology that is past its prime. There is everything from Zip drives to coax terminators to an Ultra-Wide SCSI interface for an external CD-ROM. Why do I keep these things in a box in a closet? Great question, one that usually comes up once a year when some family member sticks their head in there looking for a toy, a coat, or looking to make a point.
Tags: dedupe works it tools flash compression better
Just a quick drop on the Data Reduction Tool that we use in the field to help estimate how much storage customers will save by running their data on our A9000 all-flash array. This system (built on flash modules, not commodity SSDs) is based on the XIV grid architecture and has been available to customers since this past summer.
One of the things many of our customers tell us is that our competition is out there offering silly data-savings numbers backed only by their word. For the past five years, IBM has been giving customers a real estimate of their compression savings, accurate to within 5%, using our compression estimation tool, without any change to your code or storage system. Now we have the ability to run the tool on your data to estimate savings from our compression, deduplication and pattern analysis.
The tool is downloaded from the IBM site and run on the host against the storage LUN/volume. At the end you will see the savings broken down into those three categories, plus how much could also be saved using thin provisioning. The tool is CLI based and should be run by an admin with proper access.
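I can't walk through the tool's internals here, but the idea behind any such estimator is straightforward: sample blocks from the volume, try the reduction techniques on the sample, and extrapolate. A rough sketch of that idea; the block size, sampling rate and use of zlib and SHA-256 are my own illustrative choices, not how the IBM tool actually works:

```python
import hashlib
import zlib

BLOCK = 8192  # illustrative granularity, not the tool's actual block size

def estimate_savings(device_path, sample_every=100):
    """Sample 1 block in N from a device and estimate reduction ratios."""
    seen = set()               # block fingerprints, for a crude dedupe count
    raw = packed = dups = sampled = 0
    with open(device_path, "rb") as dev:
        i = 0
        while True:
            block = dev.read(BLOCK)
            if not block:
                break
            if i % sample_every == 0:
                sampled += 1
                raw += len(block)
                packed += len(zlib.compress(block))
                digest = hashlib.sha256(block).digest()
                if digest in seen:
                    dups += 1
                seen.add(digest)
            i += 1
    if raw:
        print(f"compression estimate: {1 - packed / raw:.1%} saved")
        print(f"duplicate blocks in sample: {dups} of {sampled}")

# estimate_savings("/dev/sdb")  # needs read access to the LUN/volume
```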
All said, this tool is the best thing out there to give customers an idea of true savings. For more information, please follow the links below.
For more information about the tool or help running it feel free to contact me or your local IBM Storage Engineer.
I am always blown away by the expertise and insight our Advanced Technical Services team displays. They are our “Go To” guys for driving technology to our field teams, and they are the last resort before engaging a development team. For me, they are a well of information and experience that I can use to help build solutions.
Today, I am sitting in the SONAS system training with Mark Taylor. Mark and I have been working together at IBM for a few years and I have the utmost respect for him. Mark is responsible, along with a few other team members, for supporting the N series and SONAS at IBM. He is known for being a stickler on our solution assurance calls and is always finding solutions for our clients.
Our mission this week is to learn about storage products on a deeper level. Many in the technical sales group have a specialty they focus on. It could be XIV, SONAS, mid-tier storage, whatever. When we leave on Friday, we should come away as more well-rounded technical experts.
I am still amazed at the SONAS product and how powerful it really is, especially compared to other products in the marketplace. I find it hard to compare it to other brands due to the great feature set it brings and its integration with TSM. No other storage out there gives unstructured NAS data a platform to live on from cradle to grave like SONAS.
This mimics how IBM is doing more solution selling in the marketplace. Our storage team is partnering with the POWER team and the software teams to provide customers with a ‘one stop’ solution. If you look just at the SONAS product, it has multiple components, all from IBM: xSeries servers, TSM and GPFS software, XIV storage. We are finding that if we combine these products into a solution-based product, customers can solve more issues with the same amount of dollars. I believe this is the future of IBM storage, and of storage in general.
SONAS does have a couple of points that I would like to see cleaned up. One is the GUI, and the other is its policy writer. From what I can tell, the information in the SONAS GUI is very similar to that of the XIV system; it just has a different look and feel. With the Storwize V7000 getting the ‘XIV’ look and feel, I suspect future releases of SONAS might get the same treatment. As for the policy engine, it’s all based on an SQL-like query language. If you know how to write it, it’s not an issue, but there are some out there who may not have those skills. There are guidelines and examples that can be used to help set up the policies, like moving data from one pool to the next, but I suspect people will rely on their Technical Advisor to help define those rules.
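For those curious what such a rule amounts to, here is the effect of a typical pool-migration policy ("move anything not accessed in 30 days from the fast pool to the slow pool") sketched in Python rather than in the engine's own SQL-like syntax. The threshold and the share path are just examples of mine, not a SONAS policy verbatim:

```python
import os
import time

def select_for_migration(root, max_age_days=30):
    """Mimic a pool-migration rule: yield files whose last access is
    older than the threshold, i.e. candidates to move to a slower pool."""
    cutoff = time.time() - max_age_days * 24 * 3600
    for dirpath, _dirs, files in os.walk(root):
        for name in files:
            path = os.path.join(dirpath, name)
            if os.stat(path).st_atime < cutoff:
                yield path  # the real engine would restripe this into the slow pool

for path in select_for_migration("/shares/accounting"):
    print("migrate:", path)
```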
Tomorrow is all about ProtecTIER. I am excited about the hands-on time and finding out how this box can really save people space with their backups.
Well, the last two days have been crazy, with really good sessions, lots of networking with tons of people and great discussions throughout the entire conference. The sessions have been well attended and people are asking great questions. For the most part, I hear that everyone is learning from the sessions; I just hope they don't get overloaded with so much information.
Today I presented on PAM II technology for the N series systems. We discussed the need for large read-cache systems, and how it's not only the size of the disks that is driving this need but also the business asking for faster access to data. During this session, a question was brought up about the new acquisition of Storwize and how that would affect the NAS solutions at IBM.
Here is IBM VP of Storage, Doug Balog, talking about the product.
I think it's going to be a good product to put in front of our NAS systems, and it will complement heavy read-cache systems like PAM II and the huge amounts of cache in the SONAS systems. Speaking of Storwize, I wanted to give everyone a little more information about this product and why IBM purchased them. They provide real-time compression technology that reduces storage needs by compressing the data in real time. They have an engine called the Random Access Compression Engine (RACE), a compression algorithm that does the conversion with negligible overhead. The Storwize appliance will work with popular NAS systems, including IBM N series and SONAS, as well as non-IBM NAS systems from EMC, HP, NetApp and others. Storwize real-time compression can provide added value to clients already using data deduplication, thin provisioning and other storage efficiency technologies.
For more information on the Storwize acquisition, go here for the press release.
There is always a part of the business that gets overlooked, and usually it's the people in the trenches making things work and keeping those machines going. I recently had the pleasure of spending some time with three great IBM CEs here in the Raleigh, NC area. I was impressed with their professionalism and thoroughness while working on the SONAS upgrade. They made sure everything was installed, cabled and labeled according to the documentation. It is one thing to have a great product with lots of features, but it is even more important to have people who can service the system and do it with the highest level of craftsmanship. Thanks, guys and gals, for everything you do to help make our jobs easier!
Today, I helped our local Client Engineers install a couple of new nodes and some more storage into a local SONAS system. This was exciting for me, as I love working with the hardware and software and it keeps up my keyboard skills. This client is bringing online more demand and needs both horsepower (interface nodes) and storage to accommodate a new business line. I was amazed at how easy the system is to upgrade, and now their little starter rack is almost full.
We added two interface nodes (IBM System x3650 M2 servers) and two 60-disk shelves to the unit. Once the disks are online and presented up to the interface nodes, they can start creating shares for the new operation. As they need more storage or more interface nodes, another rack will be put in and the same process of pooling these resources will repeat.
The idea of having multiple interface nodes and storage pools is to avoid single points of failure. In traditional storage, if a controller goes down, its partner has to pick up the entire workload of the failed hardware. Not so in SONAS: if a node goes down, its work is spread evenly across all of the other nodes in the system. This is why we do not have a problem with losing CIFS connections when systems go down.
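The arithmetic is what makes this attractive. A quick sketch comparing the two failure models; the node count and utilization numbers are hypothetical:

```python
def partner_failover(load):
    # Traditional dual-controller: the survivor absorbs its partner's
    # entire workload, doubling its own.
    return load * 2

def spread_failover(load, nodes):
    # Scale-out: the failed node's work is shared by all survivors.
    return load * nodes / (nodes - 1)

# Six interface nodes, each running at 50% utilization:
print(partner_failover(0.50))     # 1.0 -> the surviving controller is saturated
print(spread_failover(0.50, 6))   # 0.6 -> each survivor barely notices
```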
The storage addition is also interesting, as we are tripling the amount of storage the base system had originally with just two 4U shelves. These shelves are highly dense, top-loading containers using either SAS or SATA disks. In this instance, we were installing 120 2 TB SATA drives: a total of 240 TB in 8U of rack space. Not too shabby.
At the end of the day, I was pleased to see that IBM is moving forward with smarter storage systems. If you look at the entire portfolio, you can see that systems like the XIV grid, the auto-tiering on the DS8700 and SVC virtualization are all helping our goal of a Smarter Planet. Look for some more pictures and maybe a video on Monday.
I was on my way down to Miami today, talking to the gentleman sitting next to me about storage technology, and the conversation turned to how everyone is scrambling to be in the cloud business. He had heard multiple vendors come in and start talking about cloud technology and how it was going to save him money, time and effort. This gentleman worked for a retail chain with multiple district offices throughout the eastern US and headquarters in Atlanta. He has multiple technologies all helping him keep the business running, but nothing was planned; as the company grew, they simply cookie-cut the previous installation and planted it into the new office. Each office would also replicate back to HQ, which was the main repository for backups and restores. I would guess there are thousands of companies out there with similar setups.
So instead of going into how he could leverage cloud storage technology, I asked him what his problems were, and I listened. They basically came down to this:
1. Multiple independent islands of storage that are aging, causing his support contracts to go up.
2. Backups take way too long, and systems slow down as they get closer to 'capacity'.
3. Future growth was expensive, because every time they added capacity, they had to add entire systems.
Now, they were not cutting-edge technology leaders, nor did they want to be, but he needed a way to solve some of these traditional storage problems. He didn't want to go out and buy a new large system that would take forever to bring in and that, while it might solve his problems, would bring in even more issues. What he needed was less overhead and more throughput.
We sat there for a while thinking, not saying much, until I offered this tidbit: "So what does cloud mean to you?" After a nice laugh, he stated that he really didn't know, and the more he read, the more 'cloudy' the answer became.
There are many interpretations of what cloud really is, and it differs between storage vendors. If there is a true definition of what cloud storage could be, I think it could be built on NAS technology. NAS tends to be a kinder and gentler protocol set, and the need for it is growing by leaps and bounds. Our traditional way of adding more systems and creating more independent silos works for smaller environments, but it does not scale when clients want large pools of storage under one umbrella. There are ways of making volumes span into large pools, but the underlying storage is still made up of smaller components that are typically active/active or active/passive pairs, and even the best load balancing will not help if you are overloading that system.
There are ways to find a balance between the same old way of doing things and going out and dropping tons of cash on huge storage gear. Find a system that will grow and scale as your storage needs do. Think of ways to keep everything under one umbrella (a single name space, for example), and try to solve the issues you are having today with real technology, not workarounds.
With NAS technology, we will always be at the mercy of the backup target, whether it's disk or tape. Whether we are taking snapshots or NDMP backups, we have to write out to some target to have a restore point. That is your basic backup/restore strategy, so why not use different types of disk to create tiers and offload data to slower pools as it gets 'older'? A few vendors have said there is no need for tiering, mainly because their systems can't take advantage of it, and therefore they shun those who do. ILM tiering can not only help you achieve higher utilization rates, it puts the data that is accessed most frequently on faster disk and moves the rest away to make more room. Why pay for fast disk if the data on it is not being accessed frequently?
Future expansion has always been tough for administrators; they tend to overbuy on controller size and skimp on the disk. Systems like SONAS from IBM let you grow storage capacity and server throughput independently. If a customer needs more storage but doesn't need the additional throughput, why force him to add more controllers? SONAS can scale up to 30 storage servers and 14.4 PB of spinning disk, all under one name space. No more having multiple silos each with its own name (Accounting1, Accounting2 and so on); it is all simply storage, and everyone gets the benefit of all of the nodes, not just one system.
By the time we had gone through all of this, our flight was landing. It was a great talk, and both of us gained a different perspective on how cloud is perceived. If any of you want more information about the IBM cloud strategy or SONAS, go to the following links:
SONAS by IBM
Who doesn't like getting something free? IBM is offering customers the chance to grab some more flash storage: a flash drive at no charge with the purchase of two flash drives on new Storwize V7000s or V5000s. There is a maximum of 4 drives at no charge for the V7000 and 2 drives at no charge for the V5000. What does this really mean? Since a V7000 controller can hold up to 24 drives, fully populating it means paying for 20 drives and getting 4 at no charge, roughly a 17% discount on a full enclosure.
The drive sizes for this deal are the 1.6 TB, 1.9 TB and 3.2 TB eMLC flash drives. This offer is only for new systems, does not apply to upgrades, and runs only through December 26, 2016.
Tags: xiv 100years storwize svc sonas recap 2010 v7000
First off, I want to say what an awesome year IBM had in storage! We announced several new products and improvements to older ones. SONAS was the NAS product of 2010 at IBM. The idea of taking a parallel file system and merging it with commodity parts is brilliant. People who have been building these systems for years, dealing with the issues of interoperability and supportability, can now focus on making storage work for them. Real Time Compression was also released for the N series product. This was an acquisition that really helps IBM position compression technology in the NAS market. RTC today is an appliance that compresses data into smaller packages with no performance degradation. I believe we will see this technology spread into other parts of the storage line.
The biggest storage announcement was definitely the introduction of a new mid-tier storage device, the Storwize V7000. This device is based on the tried-and-true SVC code base with some new enterprise-class features from our DS8000 line. With the cool XIV-like interface, a very nice form factor, and things like Easy Tier and disk virtualization, the box is going to be hard to beat in 2011.
Second, I want to honor IBM as we celebrate our centennial year of business. The Computing-Tabulating-Recording Company started on June 15, 1911, and while the name, our products and our services have all changed, our mission and dedication to our clients remain unchanged. So many of us do not even begin to understand the mark IBM has made on the world as it is today. IBM has been well known through most of its recent history as one of the world's largest computer companies and systems integrators. With over 388,000 employees worldwide, IBM is one of the largest and most profitable information technology employers in the world. IBM holds more patents than any other U.S.-based technology company and has eight research laboratories worldwide. The company has scientists, engineers, consultants and sales professionals in over 170 countries. IBM employees have earned five Nobel Prizes, four Turing Awards, five National Medals of Technology and five National Medals of Science.
Lastly, I want to challenge everyone, IBMers and clients alike, to really look at what is going on in the storage space this year. With the explosive growth of data, we are seeing people buy unprecedented amounts of storage. Most of the vendors are going to be investing in storage R&D and coming out with new, time-saving features. Clients should challenge their vendors to exceed their requirements, not just meet them. I also want vendors to look beyond products and start looking at the services that help clients make better decisions and support the products they have purchased.
Good luck to everyone in the New Year!
A quick and simple new way to look at storage: stop buying flash arrays that offer a bunch of bells and whistles. Two main reasons: (1) it increases your $/TB, and (2) it locks you into their platform. Let's dive deeper.
1. If you go out and buy an All Flash Array (AFA) from one of the 50 vendors selling them today, you will see a wide spectrum, not just in the media (eMLC, MLC, cMLC) but also in features and functionality. These vendors are all scrambling to put in as many features as possible in order to reach a broader customer base. That said, you the customer will be looking to see which AFA has this or is missing that, and it can become an Excel pivot table from hell to manage. Vendors then raise the price per TB on those solutions, justifying it with the features: now you have more effective storage available, or better data protection. But the reality is that you are paying the bills for the developers coding the new shiny feature in some basement. That added cost is passed down to the customer and does increase your purchase price.
2. The more features you use on a particular AFA, the harder it is to move to another platform when you want a different system. This is what we call 'stickiness'. Vendors want you to use their features more and more, so that when they raise prices or push you to upgrade, it is harder for you to look elsewhere. If you have an outage or something happens and your boss comes in and says "I want these <insert vendor name> boxes out of here", are you going to tell him the whole company runs on that platform and it's going to take 12-18 months to get off it?
I bet you're thinking, "Well, I need those functions because I have to protect my data," or "I get more storage out of them because I use this feature." But what you can do is take those functions away from the media and bring them up into a virtual storage layer above it. This way you can move 'dumb' storage hardware in and out as needed, based more on price and performance than on features and functionality. By moving the higher functionality into the virtual layer, the AFA can be swapped out easily, and you can always shop for the lowest-price system based solely on performance.
Now you're thinking about the cost of licenses for this function and that feature in the virtualization layer, and how that is just moving the numbers around, right? Wrong! For IBM Spectrum Virtualize, you buy a license for so many TBs, and that license is perpetual. You can move storage in and out of the virtualization layer without increasing the number of licenses. For example: you purchase 100 TB of licenses and you virtualize a 75 TB Pure system. Your boss comes in and says he needs another 15 TB for a new project coming online next week. You can go out to your vendors, choose a dumb AFA, and insert it into the virtual storage layer, and you still get all of the features and functions you had before. Then a few years go by and you want to replace the Pure system with a nice IBM flash system. No problem: with zero downtime you can insert a FlashSystem 900 under the virtual layer, migrate the data to the new flash, and the hosts do not have to be touched.
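To make the licensing point concrete, here is a toy model of that perpetual, capacity-based license. The class name and the capacities are mine, purely for illustration; this is not the Spectrum Virtualize API:

```python
class PerpetualLicense:
    """Toy model: licensed TBs are fixed, arrays come and go underneath."""

    def __init__(self, licensed_tb):
        self.licensed_tb = licensed_tb
        self.arrays = {}  # array name -> TB virtualized behind the layer

    def used_tb(self):
        return sum(self.arrays.values())

    def add_array(self, name, tb):
        if self.used_tb() + tb > self.licensed_tb:
            raise ValueError("not enough licensed capacity; buy more TBs")
        self.arrays[name] = tb

    def remove_array(self, name):
        self.arrays.pop(name)  # frees licensed capacity for the next array

lic = PerpetualLicense(100)
lic.add_array("pure_afa", 75)         # day 1: virtualize the existing array
lic.add_array("dumb_afa", 15)         # new project: still within the license
lic.remove_array("pure_afa")          # years later: retire the old array
lic.add_array("flashsystem_900", 75)  # migrate to new flash, no new licenses
```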
The cool thing I see with this kind of virtualization layer is the simplicity: no having to program against APIs, no bringing in a bunch of consultants for some long, drawn-out study that ends with "go to cloud". In a way, this technology creates a private cloud of storage for your data center. But the point here is that by not having to buy feature licenses every time you buy a box, you lower that $/TB, and it gives you the true freedom to shop the vendors.
Natural disasters such as earthquakes, floods and hurricanes all have at least one thing in common: they make companies look at their DR plan. The scenario plays out something like this:
CIO texts IT person in charge of keeping the company running "Hey, just checking to make sure we are ok in case this hurricane hits us???? :)"
Reply "Yeah, but we can just move stuff around to the other datacenter and we have most of it in the cloud, we are headed to the bar for hurricanes, come join us!"
A DR plan is only as good as its last test. When I was starting my IT career, I had to help put together a DR plan and then go to the offsite location to test it. This was an eye-opening, watershed experience for me, as I learned that not everything you write on paper can be done in the time you actually have. I can still remember thinking we could restore all of the databases and backup libraries from tape in a few hours and be back up and running. My test plan was flawed because I didn't:
a. Understand the business needs (SLAs)
b. Have input from different IT sectors (network, directory services, databases, backups, etc. )
c. Have a plan B, C and D
Now we have the ability to claim our data is safe because it's "in the cloud", and that does take some burden off the IT department, but in reality the onsite infrastructure still has to be able to connect to the cloud. We also replicate everything, which lowers our downtime and keeps things in a more crash-consistent state. VMware lets us move servers from one data center to the next, and we have grown accustomed to keeping things up all the time.
This always-up scenario still doesn't excuse us from having a DR plan and testing it. If you rely on software to make sure your business is always up and running, you need to understand the processes it takes to get that software up and running. There is a phrase I hear people use more and more when describing large downtime problems: "The Perfect Storm". It is what happens when a process is not understood or taken into consideration in keeping the business running. For me, when I was younger, it was the fact that we needed the directory service restored before we could start restoring servers; I didn't understand the fundamental need to have ALL of the users and passwords in place before the servers were restored.
I hope everyone in the FL/GA/SC area stays safe and has taken all the necessary precautions for their homes and businesses. Good luck and God bless you all.