Data Hoarding: How much is it really costing you?

I have a closet in my house where I keep all kinds of computer gear. Most of it is left over from some fun project I was working on, or a technology that is past its prime. There is everything from Zip drives to coax terminators to an Ultra-Wide SCSI interface for an external CD-ROM. Why do I keep these things in a box in a closet? Great question, and it usually comes up once a year when some family member sticks their head in there looking for a toy or a coat, or just looking to make a point.
But on more than one occasion I have had to go to that closet of 'junk' to get something that helped me complete a project: a Cat5 cable for my son's computer, an extra wireless mouse when my other one died. Yes, I could go through it all, sort it out and come up with some nice labels, but that takes time. It's just easier to close the container lid and forget about it until I realize I need something, and it's easy enough to grab it.
Now this is not a hoarding issue like the ones you see on TV, where people fill their houses, garages, sheds and barns with all kinds of things. The people who show up on TV have taken the 'collecting' business to another level, and some call them 'hoarders'. But if you watch shows like "American Pickers" on the History Channel, you will notice that most of the 'hoarders' know what they have and where it is, a metadata knowledge of their antiques.
When you look at how businesses are storing their data today, most are keeping as much as possible in production. Data that is no longer serving a real purpose stays put because storage admins are too gun-shy to hit the delete button, for fear of some VMware admin calling up to ask why their Windows NT 4 server is not responding. If you have tools that can move data around based on its age or last access time, then you have made a great leap toward real savings. But these older ILM systems cannot handle the unstructured data growth of 2017.
Companies want to be able to create a container for the data and not have to worry whether the data is on-prem or off-prem, on disk or on tape. Set it and forget it is the basic rule of thumb. But this becomes difficult because data has many different values depending on who you ask. A two-year-old invoice is not as valuable to someone in Engineering as it is to the AR person who is using it as the basis for the next billing cycle.
One of the better ways to cut through the issue is to have a flexible platform that can move data from expensive flash down to tape and cloud without changing the way people access it. If users cannot tell where their data is coming from and do not have to change the way they get to it, then why not put the cold data on something low cost like tape or cloud tape?
This type of system can be built on the IBM Spectrum Scale platform. The file system has a global namespace across all of the different types of media and can even use the cloud as a place to store data without changing the way the end user accesses it. File movement is policy based, which lets admins skip asking users whether the data is still needed; the system simply moves it to lower-cost media as it gets older and colder. The best part is that, because of a new licensing scheme, customers only pay the per-TB license for data that sits on disk and flash. Any data that sits on tape does not contribute to the overall license cost.
For example: 500 TB of data, 100 TB of it less than 30 days old and 400 TB older than 30 days. If stored on a Spectrum Scale file system with the cold data migrated to tape, you only have to pay for the 100 TB sitting on disk, not the 400 TB on tape. This greatly reduces the cost of storing data while not taking features away from our customers.
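To make that arithmetic concrete, here is a minimal sketch of the licensing math described above. The capacities come from the example; the per-TB price is a made-up figure for illustration, not IBM pricing.

```python
# Hypothetical numbers for illustration only -- not IBM pricing.
total_tb = 500               # all data in the file system
hot_tb = 100                 # < 30 days old, kept on disk/flash
cold_tb = total_tb - hot_tb  # migrated to tape by policy

license_price_per_tb = 150   # assumed $/TB, purely illustrative

# Under the scheme described above, only disk/flash capacity is licensed.
licensed_tb = hot_tb
cost_with_tiering = licensed_tb * license_price_per_tb
cost_all_on_disk = total_tb * license_price_per_tb

print(f"Licensed capacity: {licensed_tb} TB")
print(f"License cost with tape tiering: ${cost_with_tiering:,}")
print(f"License cost if everything stayed on disk: ${cost_all_on_disk:,}")
print(f"Savings: {100 * (1 - cost_with_tiering / cost_all_on_disk):.0f}%")
```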
For more great information on IBM Spectrum Scale, go to this link and catch up.
How to Save Money by Buying Dumber Flash

A quick and simple new way to look at storage: stop buying flash arrays that offer a bunch of bells and whistles. There are two main reasons: 1) it increases your $/TB, and 2) it locks you into that vendor's platform. Let's dive deeper.

1. If you go out and buy an All Flash Array (AFA) from one of the 50 vendors selling them today, you will see a wide spectrum not just in the media (eMLC, MLC, cMLC) but also in features and functionality. These vendors are all scrambling to pack in as many features as possible in order to reach a broader customer base. You, the customer, end up checking which AFA has this or is missing that, and it can become an Excel pivot table from hell to manage. The vendor will start raising the price per TB on those solutions because the new features promise more usable storage or better data protection, but the reality is you are paying the bills for the developers coding the next shiny feature in some basement. That added cost is passed down to the customer and does increase your purchase price.

2. The more features you use on a particular AFA, the harder it is to move to another platform when you want a different system. This is what we call 'stickiness'. Vendors want you to use their features more and more, so that when they raise prices or push an upgrade it is harder for you to look elsewhere. If you have an outage, or your boss comes in and says "I want these <insert vendor name> boxes out of here," are you going to reply that the whole company runs on that platform and it will take 12-18 months to unwind it?

I bet you're thinking you need those functions to protect your data, or that you get more usable storage because of some feature. But you can take those functions away from the media and bring them up into a virtual storage layer above the arrays. This way you can move dumb storage hardware in and out as needed, based more on price and performance than on features and functionality. By moving the higher functionality into the virtual layer, the AFA can be swapped out easily, and you can always look for the lowest-priced system based solely on performance.

Now you're thinking that the cost of licenses for this function and that feature in the virtualization layer just moves the numbers around, right? Wrong. With IBM Spectrum Virtualize you buy a license for a set number of TBs, and that license is perpetual. You can move storage in and out of the virtualization layer without having to increase the number of licenses. For example: you purchase 100 TB of licenses and you virtualize a 75 TB Pure system. Your boss comes in and says, "I need another 15 TB for this new project coming online next week." You can go out to your vendors, choose a dumb AFA, insert it into the virtual storage layer, and still get all of the features and functions you had before. Then a few years go by and you want to replace the Pure system with a nice IBM flash system. No problem: with ZERO downtime you can insert the Flash 900 under the virtual layer, migrate the data to the new flash, and the hosts do not have to be touched.

The cool thing I see with this kind of virtualization layer is the simplicity of not having to program APIs or bring in a bunch of consultants for some long, drawn-out study that ends with being told to go to 'cloud'.
In one way, this technology is creating a private cloud of storage for your data center. But the point here is that not having to buy feature licenses every time you buy a box lowers that $/TB and gives you true freedom to shop the vendors.
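Here is a rough sketch of that perpetual-license idea, using the capacities from the example above. The array names and the simple capacity check are assumptions for illustration only, not how Spectrum Virtualize actually enforces licensing.

```python
# Minimal sketch of the perpetual-license idea described above.
# Capacities come from the example in the post; nothing here is IBM pricing or an official tool.

licensed_tb = 100                # perpetual Spectrum Virtualize entitlement (example figure)
virtualized = {"Pure AFA": 75}   # arrays currently behind the virtualization layer

def can_add(arrays: dict, new_tb: int, entitlement: int) -> bool:
    """Return True if a new back-end array still fits under the existing entitlement."""
    return sum(arrays.values()) + new_tb <= entitlement

# Boss asks for 15 TB more: add a 'dumb' AFA bought purely on price/performance.
if can_add(virtualized, 15, licensed_tb):
    virtualized["commodity AFA"] = 15

# Years later: swap the Pure system for an IBM flash array of the same size -- no new licenses.
virtualized.pop("Pure AFA")
virtualized["IBM Flash 900"] = 75

print(virtualized, "->", sum(virtualized.values()), "of", licensed_tb, "TB licensed")
```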
FREE Flash Drives being offered on Storwize Systems

Who doesn't like getting something for free? IBM is offering customers the chance to grab some more flash storage with a flash drive at no charge for every two flash drives purchased on new Storwize V7000s or V5000s. There is a maximum of four no-charge drives for the V7000 and two for the V5000. What does this really mean? Since a V7000 can hold up to 24 drives in a controller, you are automatically getting a 25% free upgrade while paying only 75% of the total (24-drive) purchase cost. The drive sizes for this deal are the 1.6 TB, 1.9 TB and 3.2 TB eMLC flash drives. This offer applies only to new systems, not upgrades, and runs only through December 26, 2016.
Data Reduction Tool
Just a quick note on the Data Reduction Tool we use in the field to help estimate how much storage customers will save by running their data on our A9000 All Flash Array. This system (not just SSDs) is based on the XIV grid architecture and has been available to customers since this past summer. One thing many of our customers tell us is that the competition offers silly data-savings claims backed only by their word. For the past five years IBM has been giving customers a real estimate of their compression savings, within 5%, using our compression estimation tool, without any change to your code or storage system. Now we have the ability to run the tool against your data to estimate savings from compression, deduplication and pattern analysis. The tool is downloaded from the IBM site and run on the host against the storage LUN/volume. At the end you will see the savings broken down into those three categories, plus how much could also be saved using thin provisioning. The tool is CLI based and should be run by an admin with proper access. All said, this tool is the best thing out there to give customers an idea of true savings. For more information about the tool, or for help running it, feel free to contact me or your local IBM Storage Engineer.
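As a rough illustration of how savings in separate categories combine into an overall reduction, here is a minimal sketch. The percentages are invented examples, not output from the IBM tool, and the multiplicative model is an assumption for illustration.

```python
# Illustrative only: invented example savings, not output from the IBM Data Reduction Tool.
# Assumes each stage applies to whatever data remains after the previous stage.

written_tb = 50.0            # logical data written by the hosts (example)
pattern_savings = 0.05       # e.g. zero/pattern elimination removes 5%
dedupe_savings = 0.30        # dedup removes 30% of what remains
compression_savings = 0.50   # compression halves what remains after dedup

stored_tb = written_tb
for savings in (pattern_savings, dedupe_savings, compression_savings):
    stored_tb *= (1 - savings)

ratio = written_tb / stored_tb
print(f"Physical capacity needed: {stored_tb:.1f} TB ({ratio:.1f}:1 overall reduction)")
```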
-Rich
Hurricane Matthew Brings Up Fears About DR Strategies

Natural disasters such as earthquakes, floods and hurricanes all have at least one thing in common: they make companies look at their DR plan. The scenario plays out something like this. The CIO texts the IT person in charge of keeping the company running: "Hey, just checking to make sure we are ok in case this hurricane hits us???? :)" Reply: "Yeah, we can just move stuff around to the other datacenter and we have most of it in the cloud. We are headed to the bar for hurricanes, come join us!"

A DR plan is only as good as its last test. When I was starting my IT career I had to help put together a DR plan and then go to the offsite location to test it. This was an eye-opening, watershed experience for me, as I learned that not everything you write on paper can be done in the time you actually have. I can still remember thinking we could restore all of the databases and backup libraries from tape in a few hours and be back up and running. My test plan was flawed because I didn't:

a. understand the business needs (SLAs)
b. have input from the different IT sectors (network, directory services, databases, backups, etc.)
c. have a plan B, C and D

Now we can claim our data is safe because it's "in the cloud," and that does take some burden off the IT department, but in reality the onsite infrastructure still has to be able to connect to the cloud. We also replicate everything, which lowers our downtime and keeps things in a more crash-consistent state. VMware lets us move servers from one data center to the next, and we have grown accustomed to keeping things up all the time. But "up all the time" still doesn't excuse us from having a DR plan and testing it. If you rely on software to keep your business always up and running, you need to understand the processes it takes to get that software itself up and running.

There is a phrase I hear more and more when people describe large outages: "the perfect storm." It usually means a process was not understood or taken into consideration when planning to keep the business running. For me, back then, it was the fact that we needed directory services restored before we could start restoring servers; I didn't understand the need to have ALL of the users and passwords in place before the servers were restored.

I hope everyone in the FL/GA/SC area stays safe and has taken all of the necessary precautions for their homes and businesses. Good luck and God bless you all.

-Rich
Re: IBM Announcements May 2014 - Storwize Family

In response to: IBM Announcements May 2014 - Storwize Family

Tony, great job. Also, FYI, the V7kU 1.5 code now offers multi-tenancy.
IBM Edge 2014 Call for Customer Speakers now OPEN
IBM Edge 2014 Call for Speakers is Open!
Do you have a story that you want to tell? IBM is giving you a chance to tell the world how you are making your business or the industry better. This year we are focused on four areas: Social, Mobile, Analytics and Cloud. These subjects will be the cornerstone of the conference, and sessions will be selected based on how your business was changed by them.
We want attendees to better understand why it's important to move infrastructure from an afterthought to a strategic, mission-critical choice, and we want presenters to discuss how IBM infrastructure is a unique enabler of growth and innovation. Ideally, a speaker can describe how the company's strategic and forward-thinking decisions about infrastructure have directly impacted the enterprise's ability to respond effectively to new opportunities, challenges and the demands of growth and innovation. We look forward to inviting customers to speak and will cover their conference fee as a token of thanks.
If you are interested in speaking and attending Edge, please email me back with some details below and I will get your information into the database for the selection committee.
Please send me your name, company and contact information, along with the specifics of your project/story, including:
What IBM solution components are involved? What is your implementation?
What decision process did you go through, and what was the impact on your business?
Will you be comfortable including some business impact context?
Is there a tie-in with a Cloud, Analytics, Mobile or Social type of workload?
How has IBM technology helped you run the business better?
If you are interested, please let me know by March 12, as the submission deadline is next Friday.
IBM Storwize® V7000 Unified stores up to five times more unstructured data in the same space with integrated Real-time Compression
http://www.ibm.com/systems/storage/news/ibm-smarter-storage-20121004.html

Today IBM announced that compression now covers not only block data on the V7000 but also file data on the V7000 Unified. The V7000 first gained compression back in the summer, with a big announcement surrounding "Smarter Storage". This optimization uses the same code and engine that IBM purchased from a company named Storwize a few years ago; IBM initially kept the compression appliance that Storwize was first known for in the market. It uses LZ compression with RACE (Random Access Compression Engine) to provide optimized real-time compression without performance degradation, slowing data growth and reducing the amount of storage to be managed, powered and cooled. The compression does not require compressing or decompressing entire files to access a data block; the engine compresses and decompresses the relevant data blocks "on the fly". As data is written, the RACE engine compresses it into a smaller chunk, 100% transparently to systems, storage and applications.

The V7000 Unified can now deliver a larger compressed platform than any other mid-range platform. With compression percentages around 75%, a system that maxed out at 2.8 PB (960 drives x 3 TB each) can now handle up to 5 PB of data. Each V7000 Unified on code base 6.4 has the option of turning on a 45-day trial of the compression software; after setting the license to "45" you can add new compressed volumes on the system.
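As a back-of-the-envelope way to relate a compression savings percentage to effective capacity, here is a simple model used purely for illustration (not an IBM sizing tool); actual results depend entirely on how compressible the data is.

```python
# Simple illustrative model: effective capacity = physical capacity / (1 - savings).
# Real-world results vary with the data; this is not an IBM sizing calculation.

physical_pb = 2.8   # example: 960 drives x 3 TB, as in the post

for savings in (0.30, 0.45, 0.60, 0.75):
    effective_pb = physical_pb / (1 - savings)
    print(f"{savings:.0%} savings -> roughly {effective_pb:.1f} PB of data on {physical_pb} PB of disk")
```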
Compression has been part of NAS for a very long time; we have seen compression of everything from JPEGs to office documents. The best part here is that the end user never has to worry about which files need to be zipped or compressed: everything that comes through the V7000 Unified can be compressed inline before the data is written to disk.

A couple of other improvements IBM announced: the addition of an integrated LDAP server to the V7000 Unified, which now allows customers to use both local authentication and external authentication servers to control access to data; and the ability to upgrade a V7000 to a V7000 Unified in the field. If you currently own a V7000 but need to add file access, IBM will sell you the two file modules and corresponding software to upgrade your system. Mind you, there is a list of requirements that must be met, so check with your local storage engineer for more information. Finally, we now have support for a four-way cluster on the V7000 Unified, which allows more disks to be provisioned and can compete with some of the other mid-range storage platforms in the market.

All together this makes a nice round of improvements that will make life easier for IBM customers. As the V7000 platform matures, it looks like IBM is putting its money where its mouth is and making storage smarter and more efficient. More to come on this platform, as I suspect we will see bigger things down the road.