Can IBM FlashSystem bring scale out systems back into the analytics game?
seb_
There are some good videos on the STG Europe YouTube channel about infrastructures able to cope with analytics workloads. Distinguished Engineer John Easton discusses the requirements for these kinds of workloads in the video "IBM Big Data with John Easton" below:
He points out that it is more efficient to use large-memory systems with high computing power, like Power Systems or System z, instead of multiple System x nodes working in parallel. The reason is the high I/O demand of these workloads, combined with the high wait times that result from using disk-based storage systems to share data between the nodes during processing. Especially for real-time analytics, he recommends keeping all the computation within the same box.
The same preference for a scale-up approach with high-powered systems over scale-out infrastructures is explained by Paul Prieto, Technical Strategist for Business Analytics, in the video "Choosing the right platform for Cognos Analytics":
Can flash make a difference?
With I/O performance being the main reason for avoiding a scale-out strategy, there is of course the question: What if the I/O performance could be drastically enhanced? Before IBM acquired Texas Memory Systems in 2012, their RamSan systems were rarely used to accelerate scale-out infrastructures, as far as I know. The main use case was to boost the few big boxes running highly productive applications that were waiting for their I/O due to the inadequate latencies of traditional disk storage systems. With I/O latencies in the range of two-digit to lower three-digit microseconds and the capability to sustain several hundred thousand IOPS, they served as Tier 0 storage for only the most demanding and business-critical workloads.
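To get a feeling for why those latency numbers matter so much, Little's Law gives a rough upper bound on the IOPS a host can achieve at a given queue depth. The sketch below uses illustrative numbers, not measurements of any specific system:

```python
# Little's Law: achievable IOPS <= outstanding I/Os / average latency.
# A back-of-the-envelope sketch; the latencies are illustrative
# assumptions, not benchmarks of any particular array.

def max_iops(outstanding_ios: int, latency_s: float) -> float:
    """Upper bound on IOPS for a given queue depth and average latency."""
    return outstanding_ios / latency_s

queue_depth = 32

disk_latency = 5e-3     # ~5 ms, plausible for a busy disk-based array
flash_latency = 200e-6  # ~200 us, in the range cited for RamSan/FlashSystem

print(f"disk:  {max_iops(queue_depth, disk_latency):,.0f} IOPS")
print(f"flash: {max_iops(queue_depth, flash_latency):,.0f} IOPS")
```

At the same queue depth, cutting latency by a factor of 25 raises the achievable IOPS by the same factor, which is exactly what makes shared flash interesting for nodes that would otherwise spend their time waiting.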
With the integration of what is now called IBM FlashSystem into the IBM storage portfolio, another use case emerged and has since played a growing role in these deployments: IBM FlashSystem behind IBM SAN Volume Controller.
The pair "FlashSystem plus SVC" represents in fact two approaches:
Especially the second way, combined with the wide range of supported host systems, HBAs, and operating systems, now makes a former no-go interesting: running applications with really high I/O demand, like analytics, on scale-out commodity systems while relying on impressive I/O performance provided outside in the SAN. But of course, as always, it's not that simple. Yes, there will still be scenarios where such a scale-out approach is just not applicable; precisely there it can make much sense to speed up the storage even for scale-up, purpose-built business analytics systems. For many companies, however (SMBs, for example), it would make perfect sense to run their analytics on flash-accelerated clusters of x86-based commodity hardware...
...if they do it right.
So how to do it right?
Well, this blog is not intended to explain reference architectures or architectural best practices for analytics. But I want to add the SAN point of view. (I guess you already wondered when this would start, given the usual topics of "seb's sanblog".) From my perspective as a SAN troubleshooter, I can at least tell you what should be taken into consideration so the approach doesn't fail from the beginning. There are two major points: the general architecture and the hardening of the SAN. A proper architecture (for example, keeping the FlashSystem and the SVC attached to the core) is the base, but a handful of issues could still have an unacceptable impact on performance. Many of them I have already covered in earlier blog posts, and some will be the topics of future ones.
The main goal is to prevent the SVC ports from ever being blocked. Be it back pressure from slow-drain devices, sub-optimal cabling patterns, "unlucky" long-distance settings, QoS enabled but unused, too few buffer credits configured for the F-ports, sheer overload of links, and many others.
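The buffer-credit point deserves a number. A common rule of thumb says a full-size FC frame occupies roughly 2 km of fibre at 1 Gbps, and proportionally less at higher speeds, so a long-distance link needs enough buffer-to-buffer credits to keep the fibre full. A minimal sketch of that estimate (assuming full-size ~2 KB frames; smaller average frames need proportionally more credits):

```python
import math

# Rule-of-thumb BB-credit estimate for a long-distance FC ISL.
# Assumption: full-size frames (~2 KB), which span about 2 km of
# fibre at 1 Gbps; real sizing should follow the switch vendor's
# long-distance configuration guidance.

def bb_credits_needed(distance_km: float, speed_gbps: float) -> int:
    """Credits required to keep the link streaming one way."""
    frame_length_km = 2.0 / speed_gbps  # one frame's extent on the fibre
    return math.ceil(distance_km / frame_length_km)

# Example: a 50 km ISL at 8 Gbps
print(bb_credits_needed(50, 8))
```

A 50 km link at 8 Gbps already needs on the order of 200 credits; configure too few and the port stalls waiting for R_RDYs, which is exactly the kind of blocking described above.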
With disk-based storage, we used to consider average latencies of around 3 ms good. Now that the combination of FlashSystem plus SVC works with a tenth of that and lower, the storage network's performance really starts to make a difference. In a well-designed SAN we usually talk about single-digit microseconds one-way from device to device. But the issues described above can push this into the range of hundreds of milliseconds, and then it will hardly be possible to provide real-time business analytics. It is therefore important to harden the SAN with the possibilities you have today, such as (speaking of Brocade fabrics) Fabric Watch, bottleneckmon, Advanced Performance Monitoring, port fencing, traffic isolation zones, and so on. Brocade's "Fabric Resiliency Best Practices" are a good first step in this direction.
I think it is still possible to create a scale-out infrastructure for business analytics, even (and especially) with SAN-based storage, as long as it is optimally prepared and uses IBM FlashSystem solutions to overcome the mechanically caused latencies of disk storage. But it is crucial to ensure that these benefits are not rendered void by avoidable performance problems.
IBM experts are more than willing to support you in this challenge. ;-)