Something is changing in Sweden at the moment. Several vendors are seeing a huge increase in demand for large-scale storage systems for file and object access. The requirements behind these solutions also differ a lot from the standard SAN or NAS storage systems normally proposed alongside an equally standard virtual environment. So what is driving this change, beyond the normal growth rate of existing applications? One reason I know to be true is the changing requirements of the applications and business functions running in our Swedish IT environments. These applications are moving away from only handling the traditional business-focused functions (CRM, ERP etc.) to also handling data used and created for the classic buzzwords of the moment: Analytics, IoT and Big Data. Sweden in particular, I feel, is doing fun stuff in this area – with the facts on the table, we are pretty tech-savvy in Sweden (Spotify, Skype, Minecraft etc.) and we like to be on the edge of technology innovation.
This change has been talked about for a long time. I can remember many meetings over the years where I have, on multiple occasions, talked about this growth – usually on slides with a graph showing how unstructured data will explode. You know the one I mean (see below).
But in real life this explosion didn't really happen during that time. In the end customers still needed the same type of system, without any real architectural difference. So we proposed a standard SAN or NAS system with some cool functions – every vendor of course has one of those and competed with us for the deal.
So why am I writing about this now? Well, I believe now is the time to really talk about this unstructured data growth and how the different vendors can solve the problem. We have seen so many requests for a storage solution where the sheer amount of data cannot be handled by a standard SAN or NAS system: it does not scale enough (in performance or capacity), cannot be managed efficiently enough, or lacks the availability and reliability needed in these large configurations.
Why are we different?
From now on I will be pretty straightforward. What follows is just the start of a list of reasons why and where we stand out from the normal NAS products delivered by the major vendors in the market.
“Spectrum Scale – a true software defined solution by IBM”
- We provide a storage solution that can export one or multiple file systems and/or object stores over standard and native network interfaces, with added advanced functions such as replication, snapshots etc. (see below).
The “why” list:
- You create your own storage controller configuration. We can be installed on any server platform from any vendor in the market. x86, Power and even Mainframe?? – yes, you guessed it. #SDS
- We support any form of backend storage media that can be presented as a block device. This means we have an even larger support matrix than Spectrum Virtualize (Virtualize/SVC = a support matrix of 400+ storage systems). #open
- We support the use of tape as an active tier and storage pool. With this implemented, files on tape media remain visible in the OS, just greyed out as stubs; when a file is accessed we recall it to faster media to match the performance needed. #coldstorage #weneedtosaveourdataforever
- The “storage media” used can also be the cloud. We support Amazon, Swift and SoftLayer as storage targets, which can be used together with local storage as one single file system. NO GATEWAY needed. #hybridcloud
- Summary of the first two points: put our software on whatever server you want, no matter the vendor or CPU, and at the same time store the data on any storage device, internal or external, no matter the vendor or architecture. This opens endless possibilities for building an up-to-date storage solution with the best storage media available today and tomorrow – not the best storage media of three years ago, when you bought that SAN array which got old and which the vendor never developed further. Happened to you??? #prettyopeniwouldsay
- Just the fact that you can build your storage solution on the IBM Power processor and server architecture gives you the option of creating a storage system that almost only Mainframe users get to experience!! The Power CPU has without a doubt the best architecture for processing data, and this can be yours without buying the biggest, baddest storage system on the market (a.k.a. DS8000). #ibmpower
- Our solution scales further than any competitor on the market. Systems at 3-digit PB scale are running around the world today, performing at 3-digit GB/s speeds. These numbers are the definition of the new age of data. This will of course not be for everyone, but when you need even 1- or 2-digit numbers in those scaling units, you will need to really think about the type of solution you implement. #infinitescaling
- How do we achieve this performance and scale?? Well, Spectrum Scale is a parallel file system where the intelligence sits in the client, and clients spread the load across all storage nodes in a cluster, even for individual files. In traditional scale-out NAS, one file can really only be accessed through one node at a time by an individual client (BOTTLENECK!!!). The architecture also lets us scale performance independently of capacity, and vice versa. #highestperformance
- The Spectrum Scale file system is a global file system that enables collaboration between different geographical locations, all with access to the same data, using role-based functions to control how users can cache, write, read and push data between the locations for the best efficiency and performance. Add Aspera (a high-speed data transport protocol) to that and you can really enable a #global-high-speed-filesystem
- We use analytics to gain insight into the data, files and objects stored in your solution, based on patterns. Those patterns can be usage, users, groups, names, extensions, metadata, capacity, performance, time etc. From this insight we can move, find, identify or even remove data to make your system work at the highest efficiency and lower your total cost of ownership. #cognitivedatamanagement
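To make the parallel-access point above concrete, here is a toy sketch (plain Python, not Spectrum Scale code – block size and node count are illustrative) of how a parallel file system client stripes the blocks of a single file round-robin across all storage nodes, so that even one large file is served by every node at once instead of through a single NAS head:

```python
BLOCK_SIZE = 4  # bytes per block in this toy; real systems use MiB-scale blocks

def stripe(data: bytes, nodes: int) -> dict[int, list[bytes]]:
    """Split the file into blocks and place block i on node i % nodes."""
    blocks = [data[i:i + BLOCK_SIZE] for i in range(0, len(data), BLOCK_SIZE)]
    layout: dict[int, list[bytes]] = {n: [] for n in range(nodes)}
    for i, block in enumerate(blocks):
        layout[i % nodes].append(block)
    return layout

def read_parallel(layout: dict[int, list[bytes]]) -> bytes:
    """Reassemble the file, pulling successive blocks from all nodes in turn."""
    nodes = len(layout)
    total = sum(len(v) for v in layout.values())
    return b"".join(layout[i % nodes][i // nodes] for i in range(total))

data = b"0123456789abcdef"
layout = stripe(data, nodes=4)        # each of the 4 nodes holds one block
assert read_parallel(layout) == data  # a single client reads from all 4 at once
```

Because every node holds a slice of every large file, adding storage nodes raises the bandwidth available to a single file – the opposite of the one-file-one-node bottleneck in traditional scale-out NAS.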
On top of all this: end-to-end checksums, unified File and Object native interfaces, a new enhanced graphical user interface, snapshots, async and sync replication, backup/restore integration, policy-driven compression, encryption, Hadoop integration and much more.
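Functions like the tape tiering and pattern-based data management described above are driven by Spectrum Scale's SQL-like policy language. As a hedged sketch only (the pool names are hypothetical, and a real tape pool would be defined as an external pool behind an HSM agent), a migration rule can look roughly like this:

```
/* Illustrative sketch, not a production policy:
   move files untouched for 90 days from the fast pool to tape. */
RULE 'cold_to_tape'
  MIGRATE FROM POOL 'system'
  TO POOL 'tape'
  WHERE CURRENT_TIMESTAMP - ACCESS_TIME > INTERVAL '90' DAYS
```

The same rule style (with LIST or DELETE instead of MIGRATE) is what lets the system find, report on or remove data by pattern, as described in the last bullet above.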
Does any of this seem interesting??? Please contact me (Sweden), your IBM business partner or your IBM rep to talk more 🙂 …a very good day to you all!!
This blog post was originally posted on LinkedIn at: https://www.linkedin.com/pulse/new-hype-sweden-large-scale-file-object-storage-henrik-warfvinge