by Steve Kenniston
History truly does repeat itself. We are talking about the history of
data storage. Every once in a while a new technology comes along that
requires a new way to think about infrastructure. Notice I said
“infrastructure.” I’d like to paint two analogies:
1: RAID – Prior to RAID, users stored their data on disk and, if they
could afford it, backed that data up to have a protected copy of their
data. When RAID came out, users were able to store their data on
multiple disks that appeared as one device. The benefits were increased
data reliability and better performance. This new technology, however,
fundamentally changed how disk was sold, even though the questions
remained the same:
How much capacity do you need?
What type of performance does your application require?
But the sales rep’s point of view changed. There were a number of new
considerations that needed to be taken into account. First, the age-old
question: “Will I sell less storage ‘stuff’?” Remember, the person
selling the disk at the time was probably also selling the backup tape
and software to protect that information. If the disks are more
reliable, maybe the customer won’t need as much tape. Second, when the
capacity question came up, the seller also needed to know what type of
RAID the customer wanted in order to sell them enough drives. It was no
longer as simple as taking the capacity requirement and dividing it by
the drive capacity of the day; depending on the RAID level, there was a
new set of math to be done. Third was the notion of performance: more
spindles meant more performance, so once the capacity equation was
solved, you also needed to know the I/O requirements to make sure the
right number of drives were sold to satisfy both the capacity and the
performance.
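To make that “new set of math” concrete, here is a minimal sketch in Python. The drive size, drive IOPS, RAID write penalties and example workload below are purely illustrative assumptions, not figures from this article; the point is simply that the seller now had to size for capacity and for I/O and take whichever answer needs more drives.

import math

# Illustrative RAID models: usable fraction of raw capacity and write penalty.
RAID_LEVELS = {
    "RAID10": {"usable_fraction": 0.5, "write_penalty": 2},
    "RAID5": {"usable_fraction": 7 / 8, "write_penalty": 4},   # assumes 7+1 parity groups
    "RAID6": {"usable_fraction": 6 / 8, "write_penalty": 6},   # assumes 6+2 parity groups
}

def drives_needed(raid, usable_tb, host_iops, read_ratio,
                  drive_tb=0.3, drive_iops=150):
    """Drive count satisfying both the capacity and the performance target."""
    cfg = RAID_LEVELS[raid]
    # Capacity side: account for mirroring/parity overhead.
    for_capacity = math.ceil(usable_tb / (drive_tb * cfg["usable_fraction"]))
    # Performance side: writes are amplified by the RAID write penalty.
    backend_iops = (host_iops * read_ratio
                    + host_iops * (1 - read_ratio) * cfg["write_penalty"])
    for_performance = math.ceil(backend_iops / drive_iops)
    return max(for_capacity, for_performance)

# Example: 10 TB usable, 5,000 host IOPS, 70% reads.
for level in ("RAID10", "RAID5", "RAID6"):
    print(level, drives_needed(level, usable_tb=10, host_iops=5000, read_ratio=0.7))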
Guess what: we figured it out, and the industry never looked back. RAID
is a de facto standard in all storage subsystems today; I even run RAID
in my home. The business benefits of having RAID far outweighed the
costs. In fact, it is probably one of the first times in storage history
that the question “How can you afford not to have it?” came up.
2: Virtual Machines – When VMware came out, the value proposition was:
do more work with less physical infrastructure. And again, the business
benefits far outweighed the technology hurdle of implementing the new
solution. Keeping in mind that it is much harder to change process in IT
than it is to change technology, IT decided that this new way of serving
up processing power to applications was well worth all of the process
changes it would require. One example: backup would need to change when
implementing virtual server technology. The data would grow 4x and
processing that information for backup would take longer, in a world
where time was all too valuable. However, the business benefit justified
the change.
Again, the seller’s questions were consistent:
How many virtual servers do you need? (Capacity)
What type of performance do you need for each virtual server?
The answers to these questions allowed a sales rep to configure the
right number of physical systems to support the right number of virtual
systems and make the line of business successful. Additionally, some of
the same considerations came up: “Will I sell less server hardware and
make less money?” Now that there was new server technology (more
processors, the ability to handle more memory), systems could be bigger
and more expensive. Sellers also needed to know a bit more about
“capacity”: how many virtual systems could a physical system run
successfully? They also needed an understanding of performance. Now
sellers were configuring systems to run the equivalent of 20 to 100
servers on one physical machine.
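As a toy illustration of that sizing exercise (the consolidation ratio and failover headroom below are made-up numbers, not anything from this article), the seller’s question boils down to something like:

import math

def physical_hosts_needed(num_vms, vms_per_host, headroom=0.2):
    # Hosts required at a given consolidation ratio, reserving some
    # capacity as headroom for failover and peak load.
    usable_per_host = vms_per_host * (1 - headroom)
    return math.ceil(num_vms / usable_per_host)

# Example: 100 virtual servers at a 20:1 ratio with 20% headroom -> 7 hosts
print(physical_hosts_needed(100, vms_per_host=20))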
Today I would suggest that we are at a crossroads in history. New
technology has come along that will have a significant impact on the
storage world. First, research from IBM reflects the fact that disk
drives can no longer keep getting twice as dense for half the cost, as
they had throughout the late ’90s and early 2000s. The technology
doesn’t exist today to make the drives spin faster, stay cool and not
lose data. Until now. Real-time compression is a game-changing
technology that will add significant value to the storage industry
without having to change the way IT thinks about the deployment of its
storage.
Data is growing at such a significant pace today, and with the latest
IBM research about disk capacities, something needs to change. Data
centers are simply running out of space, and more customers want to keep
more data online for reasons such as competitive edge or compliance;
whatever the reason, they want access to their information. Enter
real-time compression. There is a fundamental difference between
real-time compression and other compression technologies and
implementations, which I am not going to get into here, but it is safe
to say that post-process and in-line compression are very different from
real-time compression. Users can’t get the benefit of improved primary
storage capacity, transparently and with no performance impact, with
anything but real-time compression technology.
Real-time compression, like other game-changing technologies, doesn’t
require any new questions; there is simply a new set of math:
How much capacity is required?
What is the performance requirement?
In time, real-time compression will be as ubiquitous as RAID, and just
as users don’t think much about RAID, users won’t need to think about
compression. Compression will become an expected feature of the array.
It doesn’t matter that it now takes fewer drives to satisfy the original
questions around capacity and performance. With data growing as fast as
it is, and with disks unable to keep up that growth pace, something
needs to change, and that something is real-time compression. Soon, it
won’t matter what the physical capacity of a disk drive is; what will
matter is its virtual capacity, what it is capable of storing. It is
time we all started thinking this way.
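As a back-of-the-envelope illustration of that new math (the 2:1 and 1.5:1 compression ratios below are hypothetical; real ratios depend entirely on the data), the old capacity question simply gets divided by the expected compression ratio:

def raw_capacity_needed(logical_tb, compression_ratio=2.0):
    # Physical terabytes of disk required to hold logical_tb of data
    # at the assumed compression ratio.
    return logical_tb / compression_ratio

print(raw_capacity_needed(100))        # 100 TB of data at 2:1 -> 50.0 TB of disk
print(raw_capacity_needed(100, 1.5))   # a more conservative 1.5:1 -> ~66.7 TB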
In the cover story this month,
Lee Cleveland, Distinguished Engineer, Power Systems direct attach
storage, and Andy Walls, Distinguished Engineer, chief hardware
architect for DS8000 and solid-state drives (SSDs), sat down to talk
about all of the new storage technologies IBM has been releasing lately.
What I didn’t have room for in the article was a nice summary of the
technologies that can help you improve access, manage growth, protect
data, reduce costs or reduce complexity. Whatever your goals, IBM has an
integrated storage option for every organization.
Here are the quick highlights of the latest storage announcements:
IBM Storwize V7000
New advanced software functions
New easy-to-use, Web-based GUI
RAID and enclosure RAS services and diagnostics
Additional host, controller and ISV interoperability
Integration with IBM Systems Director
Enhancements to Tivoli Storage Productivity Center (TPC), FlashCopy Manager (FCM) and Tivoli Storage Manager (TSM) support
Proven IBM software functionalities
Easy Tier (dynamic HDD/SSD management)
RAID 0, 1, 5, 6, 10
Storage virtualization (local and external disks)
Non-disruptive data migration
Global and Metro Mirror
FlashCopy up to 256 copies of each volume
IBM Storwize Rapid Application Storage Solution
Runs on: AIX 7.1-5.3, IBM i 7.1-6.1 (with VIOS), Red Hat and SUSE Linux, z/VSE, Microsoft Windows, Mac OS X
Brocade is leading the way by helping
organizations around the world build cloud-optimized networks that increase
business agility and profitability. Offering robust, yet flexible network
solutions, Brocade enables organizations to choose the best type of cloud model
for their unique business requirements and objectives. Brocade is
introducing two industry-leading product line advancements tailored to
your customer's existing IT infrastructure.
Developments to the Data Center SAN Environment
Based on years of proven success, Brocade SAN
fabrics provide the most reliable, scalable, high-performance foundation for
private cloud architectures. Brocade continues that leadership with the
industry's first 16 Gbps Fibre Channel SAN solutions:
The Brocade® DCX® 8510 Backbone, the industry's most
powerful SAN backbone for private cloud storage
The Brocade 6510 Switch, the new price/performance leader in enterprise SAN
The Brocade 1860 Fabric Adapter, a new class of adapter that meets all your
customer's Fibre Channel/FCoE/IP connectivity needs in a single device
Brocade Network Advisor, an easy-to-use, unified network management platform
Developments to the NetIron® MLX
Brocade network solutions for service providers
combine high scalability and performance to transform your customer's business
with new revenue-generating cloud services—increasing their overall level of
profitability. Key offerings include:
New 10 GbE, 100 GbE, and advanced management modules for the Brocade MLX
Series of high-performance core routers
Compact Brocade NetIron CER 2000 Series routers, delivering high scalability
and performance at the network edge
Leading-edge enhancements in the Brocade NetIron 5.2 software release for
IPv6 scaling, broader MPLS connectivity, and more
Brocade Network Advisor, an easy-to-use, unified network management platform
World-class professional services and technical support for carrier-class networks
by Steve Kenniston
Alright, landed safe in Prague, was picked up by one of my colleagues
and whisked away to the IBM office. There we did an interview with Czech
writer Martin Noska from Computerworld for IDG in the Czech Republic.
The first thing Noska told me was that IBM is number one in storage
sales in the Czech Republic (just like Poland!). He also had some very
good questions and opened with, “What are IBM’s biggest challenges in
the storage business?” I had thought about this for a while, and I would
have to say it is really about marketing our storage “solutions” to the
customer base. IBM’s size is a double-edged sword. IBM is so big and
has so many products it becomes difficult to market or message all of
our products without inundating all of our customers and confusing
them. If you think about it, IBM has hundreds of thousands of customers
and business partners, if not more. This is one of our strengths.
When customers have needs or requirements we have very good input into
our product portfolio, perhaps the best in the business. Combine this
with the fact that IBM has not only storage solutions but technology
across the entire stack from servers to networking. So when it comes to
developing the right technology, that solves real customer problems, I
would argue that IBM’s portfolio is the best in the business. IBM takes
an extreme amount of care when developing a solution to ensure that it
matches the customer requirements based on the changing needs of IT.
Having an integrated portfolio that works well with our ISV partners,
VMware for example, allows us to help customers speed their time to ROI
and be very competitive in the marketplace. The challenge is, how do
we properly message our new solutions to our customers, in a timely
manner so that they are well aware of new products without giving them
too much information such that it just becomes noise? It is difficult
to say the least.
The interview went very well. There were questions about tape, where we
discussed the advantages of IBM’s LTFS technology for more advanced tape
usage, and we discussed the direction data deduplication will go as
well. Noska’s view was that there hadn’t been any advancement in data
deduplication in the last five years. I told him that for secondary
storage (backup) he is right, but I also told him that the real
advancement in deduplication will come when it is ready for primary
storage. Today deduplication isn’t ready for primary, but it will be.
On Monday the 13th we traveled to visit Avnet. They are a great IBM
partner. Like most partners they have a very large SMB install base,
and in line with a lot of the SMB feedback I have been getting, they are
looking for a building-block solution that has all of the software
features implemented as part of the stack. SMB and enterprise alike are
starting to realize that the value in any array is increasingly the
software stack that makes the hardware efficient, optimized, flexible
and dynamic. IT’s job continues to get more and more challenging, with
developing strategic initiatives to make the business more competitive,
and it is the vendor’s job to make sure these solutions are as optimized
and cost-effective as possible.
We also visited DHL. These guys have one of the greatest datacenters
I have ever visited. They are very advanced and push a lot of data.
They do some very strategic logistics for a number of companies in
Europe and Asia. They, like many others, have a number of challenges.
Following my blog post about “The 5 Most Interesting Things at VMworld”
(see #4), I heard something very interesting today. I asked, “What is
your most challenging storage issue?” He told me that storage was not
his most difficult challenge. Storage efficiency was important to him in
order to keep driving down costs for his organization as they deliver a
service to the different groups that make up DHL, but his most difficult
challenge was server I/O in his VMware environment. If you read #4 in my
post, regarding Proximal Data, this is exactly the issue they address.
As VM instances grow on the physical servers, I/O starts
to become the big problem. DHL runs over 4000 instances of VMware and
as the business demands more applications and application resources,
they are bound by the I/O of the server, which also causes them to WAY
over provision their storage for performance reasons. This is very time
consuming, management intensive and expensive. The combination of a
solution like Proximal Data as well as compression can help them
optimize their infrastructure to save money and deliver better, more
cost effective services to their lines of business.
On the lighter side, I spent the weekend in Prague. What an amazing
city. The weather was fantastic and I was able to take a lot of great
photos. I walked around Prague Castle, ate some authentic Czech food,
visited the memorial for the Czech hockey players that passed in the
Russian plane crash and met some pretty interesting people. You can
check out some of my photos of Prague at www.facebook.com/skenniston.
Coincidentally, the photo above shows the “Golden Lane,” where the
alchemists worked to turn anything they could find into gold in the city.
Systems combining block and file storage maximize benefits of server virtualization
The data center of the future
looks an awful lot like data centers of the past in one important respect:
storage demands. While the trend toward server virtualization and consolidation
is transforming the way data centers are being designed, built and managed,
rampant data growth continues to be a limiting factor.
In its annual “Digital
Universe” study, EMC projects nearly 45-fold data growth by 2020. Data
growth was cited as the No. 1 data center hardware infrastructure
challenge in a recent Gartner survey of representatives from 1,004 large
enterprises in eight countries.
“While all the top data center
hardware infrastructure challenges impact cost to some degree, data growth is
particularly associated with increased costs relative to hardware, software,
associated maintenance, administration and services,” said April Adams, research
director at Gartner. “Given that cost containment remains a key focus for most
organizations, positioning technologies to show that they are tightly linked to
cost containment, in addition to their other benefits, is a promising
approach.”
In order to drive down costs
and reduce operational complexity, organizations virtualizing their data centers
and beginning the journey to the cloud require a storage infrastructure that is
both simple and efficient. Unified storage delivers on both counts.
Unified storage is the
combination of block- and file-based storage in the same system with common
management. These multiprotocol systems can be attached to servers via
IP and/or Fibre Channel networks.
The Road to Unified Storage
Unified storage is an evolving
technology, but not a new technology. A variety of vendors have taken stabs at
providing block- and file-oriented storage in a single box since the late 1990s.
Some of the earliest attempts involved simply putting two machines together in a
single enclosure and then creating a GUI to handle management of both.
Next came NAS gateways, which
used a NAS box as an entry to SAN storage. In this setup, a NAS box provides
file-based access to applications via a LAN port, and then stores the data on a
block-oriented storage array that can be accessed across the SAN. While this
approach accommodates both block and file protocols, it has some disadvantages.
One of the major problems is that data must be transferred twice — once across
the NAS Ethernet connection and again across the Fibre Channel or IP SAN — which
adds to I/O latency. Another issue is that the management of NAS gateways
continues to be separate from the management of SAN arrays.
More recent unified storage
platforms leverage virtualization technology to offer a much deeper integration
of file- and block-based storage. A file system performs I/O to disk blocks
using a common virtualized disk-volume engine. Virtualization allows
administrators to create a seamless pool of unified storage and enables
transparent data movement for tiered storage.
While NetApp introduced unified
storage to the market several years ago, it is now available from most storage
vendors. Many of these solutions include features such as data replication,
incremental snapshots and remote mirroring that contribute to robust business continuity.
Aligning Storage with Virtualization
IT organizations face growing
pressure to transform the data center to meet increasing demands for wider
access to information, transactions and services. To a great degree, this means
creating a technology infrastructure composed of virtualized computing and
networking. By breaking the relationship between applications and the IT systems
on which they run, virtualization frees system administrators from providing
specific hardware with static configurations.
However, many organizations
have found that the benefits of virtualization are offset by increased storage
complexity and expense. For example, the creation of hundreds or even thousands
of virtual server image files often leads to massive storage waste. Because each
of these images is typically many gigabytes in size, the total storage required
in virtual environments can be 30 percent more than in an equivalent physical
environment. As a result, virtual machine sprawl increases operational overhead
and compromises storage utilization efficiency and overall business agility.
Unified storage improves
utilization by allowing organizations to consolidate and virtualize storage
across storage protocols, environments and mixed storage platforms. Combinations
of block storage (Fibre Channel or iSCSI) and file storage (NAS systems with
CIFS or NFS) can be managed via a common set of features such as snapshots, thin
provisioning, tiered provisioning, replication, synchronous mirroring and data
migration — all from a single user interface. This shift toward a shared
infrastructure enables organizations to achieve storage utilization rates of 85
percent or more, compared to the sub-50-percent rates in standalone storage environments.
“IT managers are looking for
storage solutions that not only deliver immediate value, but also enable
flexibility and growth over time, so that storage can adapt to changes in an
organization's applications, user needs or business demands,” said Mark Peters,
senior analyst at Enterprise Strategy Group. “Storage solutions that are both
virtualized and unified are ideal to address the needs for both storage
flexibility and data growth.”
Load Balancers Are Dead: Time to Focus on Application Delivery
2 February 2009, Mark Fabbi, Gartner RAS Core Research Note G00164098
When looking at feature requirements in front of and between server tiers, too many organizations think only about load balancing. However, the era of load balancing is long past, and organizations will be better served to focus their attention on improving the delivery of applications.
Overview
This research shifts the attention from basic load-balancing features to application delivery features to aid in the deployment and delivery of applications. Networking organizations are missing significant opportunities to increase application performance and user experience by ignoring this fundamental market shift.
Key Findings
Enterprises are still focused on load balancing.
There is little cooperation between networking and application teams on a holistic approach for application deployment.
Properly deployed application delivery controllers can improve application performance and security, increase the efficiency of data center infrastructure, and assist the deployment of the virtualized data center.
Network architects must shift attention and resources away from Layer 3 packet delivery networks and basic load balancing to application delivery networks.
Enterprises must start building specialized expertise around application delivery
What You Need to Know
IT organizations that shift to application delivery will improve internal application performance, which will noticeably improve business processes and productivity for key applications. For external-facing applications, end-user experience and satisfaction will improve, positively affecting the ease of doing business with supply chain partners and customers. Despite application delivery technologies being well proven, they have not yet reached a level of deployment that reflects their value to the enterprise, and too many clients do not have the right business and technology requirements on their radar.
Analysis
What's the Issue?
Many organizations are missing out on big opportunities to improve the performance of internal processes and external service interactions by not understanding application delivery technologies. This is very obvious when considering the types of client inquiries we receive on a regular basis. In the majority of cases, clients phrase their questions to ask specifically about load balancing. In some cases, they are replacing aged server load balancers (SLBs), purchased before the advent of the advanced features now available in leading application delivery controllers (ADCs). In other cases, we get calls about application performance challenges, and, after exploring the current infrastructure, we find that these clients have modern, advanced ADCs already installed, but they haven't turned on any of the advanced features and are using new equipment as if it were circa-1998 SLBs. In both cases, there is a striking lack of understanding of what ADCs can and should bring to the enterprise infrastructure. Organizations that still think of this critically important position in the data center as one that only requires load balancing are missing out on years of valuable innovation and are not taking advantage of the growing list of services that are available to increase application performance and security and to play an active role in the increasing virtualization and automation of server resources. Modern ADCs are the only devices in the data center capable of providing a real-time, pan-application view of application data flows and resource requirements. This insight will continue to drive innovation of new capabilities for distributed and virtualized applications.
Why Did This Happen?
The "blame" for this misunderstanding can be distributed in many ways, though it is largely history that is at fault. SLBs were created to better solve the networking problem of how to distribute requests across a group of servers responsible for delivering a specific Web application. Initially, this was done with simple round-robin DNS, but because of the limitations of this approach, function-specific load-balancing appliances appeared on the market to examine inbound application requests and to map these requests dynamically to available servers. Because this was a networking function, the responsibility landed solely in network operations and, while there were always smaller innovative players, the bulk of the early market ended up in the hands of networking vendors (largely Cisco, Nortel and Foundry [now part of Brocade]). So, a decade ago, the situation basically consisted of networking vendors selling network solutions to network staff. However, innovation continued, and the ADC market became one of the most innovative areas of enterprise networking over the past decade.
Initially, this innovation focused on the inbound problem — such as the dynamic recognition of server load or failure and session persistence to ensure that online "shopping carts" weren't lost. Soon, the market started to evolve to look at other problems, such as application and server efficiency. The best example would be the adoption of SSL termination and offload. Finally, the attention turned to outbound traffic, and a series of techniques and features started appearing in the market to improve the performance of the applications being delivered across the network. Innovations migrated from a pure networking focus to infrastructure efficiencies to application performance optimization and security — from a networking product to one that touched networking, server, applications and security staff. The networking vendors that were big players when SLB was the focus quickly became laggards in this newly emerging ADC market.
Current Obstacles
As the market shifts toward modern ADCs, some of the blame must rest on the shoulders of the new leaders (vendors such as F5 and Citrix NetScaler). While their products have many advanced capabilities, these vendors often undersell their products and don't do enough to clearly demonstrate their leadership and vision to sway more of the market to adopting the new features. The other challenge for vendors (and users) is that modern ADCs impact many parts of the IT organization. Finally, some blame rests with the IT organization. By maintaining siloed operational functions, it has been difficult to recognize and define requirements that fall between functional areas.
Why Do We Need More, and Why Should Enterprises Care?
Not all new technologies deserve consideration for mainstream deployment. However, in this case, advanced ADCs provide capabilities to help mitigate the challenges of deploying and delivering the complex application environments of today. The past decade saw a mass migration to browser-based enterprise applications targeting business processes and user productivity as well as increasing adoption of service-oriented architectures (SOAs), Web 2.0 and now cloud computing models. These approaches tend to place increased demand on the infrastructure because of "chatty" and complex protocols. Without providing features to mitigate latency, to reduce round trips and bandwidth, and to strengthen security, these approaches almost always lead to disappointing performance for enterprise and external users. The modern ADC provides a range of features (see Note 1) to deal with these complex environments. Beyond application performance and security, application delivery controllers can reduce the number of required servers, provide real-time control mechanisms to assist in data center virtualization, and reduce data center power and cooling requirements. ADCs also provide simplified deployment and extensibility and are now being deployed between the Web server tier and the application or services tier (for SOA) servers. Most ADCs incorporate rule-based extensibility that enables customization of the behavior of the ADC. For example, a rule might enable the ADC to examine the response portion of an e-commerce transaction to strip off all but the last four digits of credit card numbers. Organizations can use these capabilities as a simple, quick alternative to modifying Web applications.
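As a rough illustration of the rule-based extensibility described above, here is a generic Python sketch of a response-rewrite rule that masks all but the last four digits of card numbers. This is illustrative pseudologic only, not the configuration syntax of any particular ADC product (F5 iRules, NetScaler policies and the like each have their own languages).

import re

# Match 13- to 16-digit card-like numbers (digits optionally separated by
# spaces or hyphens) and keep only the last four digits visible.
CARD_PATTERN = re.compile(r"\b(?:\d[ -]?){9,12}(\d{4})\b")

def mask_card_numbers(response_body):
    # The ADC would apply a rule like this to the response body on its
    # way back to the client.
    return CARD_PATTERN.sub(lambda m: "****-****-****-" + m.group(1), response_body)

print(mask_card_numbers("Order confirmed for card 4111 1111 1111 1111."))
# -> "Order confirmed for card ****-****-****-1111."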
Most ADCs incorporate a programmatic interface (open APIs) that allows them to be controlled by external systems, including application servers, data center management, and provisioning applications and network/system management applications. This capability may be used for regular periodic reconfigurations (end-of-month closing) or may even be driven by external events (taking an instance of an application offline for maintenance). In some cases, the application programming interfaces link the ADC to server virtualization systems and data center provisioning frameworks in order to deliver the promise of real-time infrastructure.
What Vendors Provide ADC Solutions Today?
During the past five years, the innovations have largely segmented the market into vendors that understand complex application environments and the subtleties in implementations (examples would be F5, Citrix NetScaler and Radware) and those with more of a focus on static feature sets and networking. "Magic Quadrant for Application Delivery Controllers" provides a more complete analysis and view of the vendors in the market. Vendors that have more-attractive offerings will have most or all of these attributes:
A strong set of advanced platform capabilities
Customizable, extensible platforms and solutions
A vision focused on application delivery networking
Affinity to applications:
Needs to be application-fluent (that is, they need to "speak the language")
Support organizations need to "talk applications"
What Should Enterprises Do About This?
Enterprises must start to move beyond refreshing their load-balancing footprint. The features of advanced ADCs are so compelling for those that make an effort to shift their thinking and organizational boundaries that continuing efforts on SLBs is wasting time and resources. In most cases, the incremental investment in advanced ADC platforms is easily compensated by reduced requirements for servers and bandwidth and the clear improvements in end-user experience and productivity. In addition, enterprises should:
Use the approach documented in "Five Dimensions of Network Design to Improve Performance and Save Money" to understand user demographics and productivity tools and applications. Also, part of this requirements phase should entail gaining an understanding of any shifts in application architectures and strategies. This approach provides the networking team with much greater insight into broader IT requirements.
Understand what they already have in their installed base. We find, in at least 25% of our interactions, enterprises have already purchased and installed an advanced ADC platform, but are not using it to its potential. In some cases, they already have the software installed, so two to three days of training and some internal discussions can lead to massive improvements.
Start building application delivery expertise. This skill set will be one that bridges the gaps between networking, applications, security and possibly the server. Organizations can use this function to help extend the career path and interest for high-performance individuals from groups like application performance monitoring or networking operations. Networking staff aspiring to this role must have strong application and personal communication skills to achieve the correct balance. Some organizations will find they have the genesis of these skills scattered across multiple groups. Building a cohesive home will provide immediate benefits, because the organization's barriers will be quickly eliminated.
Start thinking about ADCs as strategic platforms, and move beyond tactical deployments of SLBs. Once organizations think about application delivery as a basic infrastructure asset, the use of these tools and services (and associated benefits) will be more readily achieved.
Note: We have defined a category of advanced ADCs to distinguish their capabilities from basic, more-static function load balancers. These advanced ADCs operate on a per-transaction basis and achieve application fluency. These devices become actively involved in the delivery of the application and provide sophisticated capabilities, including:
Application layer proxy, which is often bidirectional
Leading Fuel Card Provider Values Brocade Market Leadership, Reliability and Network Security
SAN JOSE, CA -- (MARKET WIRE) -- 07/19/11 --
Brocade (NASDAQ: BRCD) today announced that FleetCor,
a leading independent global provider of specialized payment products
and services to businesses, commercial fleets, major oil companies,
petroleum marketers and government fleets, has selected Brocade as the
vendor to build its cloud-optimized
network. This new network enhances FleetCor's ability to securely
process millions of transactions monthly and ultimately better serve its
commercial accounts in 18 countries in North America, Europe, Africa and Asia.
Millions of commercial payment cards are in the hands of FleetCor
cardholders worldwide, and they are used to purchase billions of gallons
of fuel per year. Given this volume of network-based transactions, network reliability, scalability and security were critical factors for FleetCor to consider in its selection process to maintain superior customer satisfaction.
In addition, FleetCor selected Brocade as its networking expert to help
evolve its data center and IT operations into a more agile private cloud
infrastructure. Brocade® cloud-optimized networks
are designed to reduce network complexity while increasing performance
and reliability. Brocade solutions for private cloud networking are
purpose-built to support highly virtualized data centers.
"When we evaluated networking vendors to build our private cloud, we
looked at market leadership and non-stop access to critical data," said
Waddaah Keirbeck, senior vice president global IT, FleetCor. "Brocade
cloud-optimized networking solutions are perfect for our data centers
because they allow us to optimize applications faster, virtually
eliminate downtime and help us meet service level agreements for our
customers. Moving to a cloud-based model also provides us the
flexibility to make adjustments on the fly and access secure information
virtually anywhere and anytime."
FleetCor installed a Brocade MLXe router for each of its three data
centers, citing scalability as a major driver for the purchase. This
approach enables FleetCor to virtualize its geographically distributed
data centers and leverage the equipment it already has, at the highest
level, to achieve maximum return on investment. The Brocade MLXe
provides additional benefits for FleetCor by using less power and having
a smaller footprint than competitive routers, which is critical in
power- and space-constrained locations and allows room for growth. The
Brocade MLXe also enables continuous business operation for FleetCor
through Multi-Chassis Trunking, massive scalability that supports the
industry's highest 100 GbE density with no performance degradation for
advanced features like IPv6, and flexible chassis options to meet
network and growth requirements.
The Brocade ServerIron ADX
Series of high-performance application delivery switches provides
FleetCor with a broad range of application optimization functions to
help ensure the reliable delivery of critical applications.
Purpose-built for large-scale, low-latency environments, these switches
accelerate application performance, load-balance high volumes of data
and improve application availability while making the most efficient use
of the company's existing infrastructure. It also delivers dynamic
application provisioning and de-provisioning for FleetCor's highly
virtualized data center and enables seamless migration and translation
to IPv6 with unmatched performance.
As an added benefit for its bottom line, through the use of Brocade ADX Series switches and Brocade MLX™ Series routers
FleetCor has eliminated thousands of costly networking cables, saving
it hundreds of thousands of dollars and allowing the company to segment,
streamline and secure its network. FleetCor has also been able to
easily integrate Brocade network technology with third-party offerings
already installed in the network, for complete investment protection.
FleetCor anticipates moving to 10 Gigabit Ethernet (GbE) solutions for
its backbone switch in the near future.
"We wanted a dependable, secure, redundant, 24 by 7 backbone switch in
each of our data centers to help us leverage the benefits of cloud
computing and the Brocade MLXe delivered on all fronts," said Keirbeck.
"By virtualizing our data center, Brocade allows for non-stop access to
the mission-critical data that FleetCor and its customers rely on every
day. We chose the Brocade MLXe because of the tremendous results we
already saw from our existing Brocade solutions and the exceptional
support and service."
According to a report from analyst firm Gartner, "Although 'economic
affordability' is an immediate, attractive benefit, the biggest
advantages (of cloud services) result from characteristics such as
built-in elasticity and scalability, reduced barriers to entry,
flexibility in service provisioning and agility in contracting."(1)
"The contest between man and machine on Jeopardy! was decided when IBM’s
Watson computer landed on the second Daily Double on day three. The
clue was: “This two-word phrase means the power to take private property
for public use as long as there is just compensation.” Watson’s
response: “What is eminent domain?”" http://asmarterplanet.com/blog/2011/02/watson-on-jeopardy-day-three-what-we-learned-about-how-watson-thinks.html
IBM offers strong capabilities in information management, reporting and analysis. A merger with SPSS in 2009 further enables customers to drive competitive action from both structured and unstructured data. SPSS was an early driver of predictive analytics and influenced its emergence on the market; now it’s an established leader in the field. This IBM company’s predictive-analytics offerings provide organizations a distinct advantage as analytics becomes a mainstay in today’s gridlocked marketplace.
The IBM SPSS predictive-analytics software portfolio combines various capabilities that integrate multiple data sources for statistical, mathematical and other algorithmic analyses and predictive modeling—along with an infrastructure that helps organizations effectively deploy predictions. The results are higher-quality decisions, measurably better outcomes and a higher ROI.
Business analytics combines the forward-looking capacities of predictive analytics with the data-exploration and reporting capabilities of business-intelligence applications. Because it gives organizations the power to use their rich stores of data in many different ways, business analytics is at the heart of providing business insight; it’s the engine that drives better outcomes.
The Real ROI
Organizations that invest in predictive analytics improve their capability to gain detailed insight into present conditions and to evaluate likely future events and outcomes. They quickly identify ways to improve business performance by cutting costs, minimizing risk and developing successful strategies for increasing revenue. They often outperform their peers. Not surprisingly, the demand for predictive analytics continues to grow. In a 2009 IBM study, 83 percent of CIOs said analytics is a priority.
Companies that deploy predictive solutions clearly demonstrate the power of predictive analytics. Ninety-four percent of SPSS customers achieved a positive ROI with an average payback period of 10.7 months, according to a Nucleus Research study. Returns were achieved through reduced costs, increased productivity, increased employee and customer satisfaction, and greater visibility. Flexibility, performance and price were all key factors in SPSS software purchase decisions.
IBM has announced a new model in its 8000 Tier 1 series of storage
arrays, to be generally available November 19, 2010. The key differences
between the previous 8700 model and the new 8800 model are the use of
2.5-inch 6 Gb/sec SAS-2 drives for the back end and up to 8 Gb/sec FC
host connectivity.
It uses the same packaging as the new Storwize V7000, with 24 SAS drives in a 2U space.
The total number of drives is 1,056, occupying three frames.
The 8800 storage array is a welcome addition to the IBM 8000 series,
providing additional power and reducing the footprint and power
consumption significantly compared with earlier models. The 8800 comes
with all the tier 1 functionality that is expected, and is an excellent
tier 1 performance array. The Easy Tier software is best of breed for
tier 1 storage arrays, and Wikibon believes that it will be extensively adopted.
The IBM 8800 does not have the drive and capacity options of EMC
or Hitachi. IBM outperforms the EMC VMAX on environmentals for
performance-focused arrays but needs significant work to compete with
Hitachi’s VSP environmentals.
Action Item: Organizations that have IBM 8000 series installed
will be very pleased to have the performance and environmentals of the
8800 storage array, and will usually be best served by continuing to use
the well established 8000 software, processes, and procedures for true
tier 1 applications rather than converting to other vendors. IBM 8000
users should put Easy Tier on a fast track for adoption. However, IBM
will need to do more to attract new users to IBM's tier 1 offering.
Brocade Unlocks the Power of the Cloud Through Open, Multi-Vendor Virtual Compute Blocks
Brocade and Its Partners Help Customers Build the Next Generation of Distributed and Virtualized Data Centers in a Simple, Evolutionary Way
LAS VEGAS, NV -- (MARKET WIRE) -- 08/30/11 -- (VMworld 2011) -- Today at VMworld, Brocade (NASDAQ: BRCD), the leader in fabric-based data center architectures, announced significant advancements to the Brocade® CloudPlex™ architecture with new Brocade Virtual Compute Blocks. These bundled solutions consist of integrated, tested and validated multi-vendor server, virtualization, networking and storage resources. Demonstrating substantial partner traction, the new solutions are available today, delivered and supported in collaboration with a wide range of alliance partners, including Dell, EMC, Fujitsu, Hitachi Data Systems and VMware.
This open approach is an underlying tenet of the Brocade CloudPlex architecture, which was announced in May 2011. The open, extensible framework is designed to help customers build the next generation of distributed and virtualized data centers in a simple, evolutionary way that preserves their ability to dictate all aspects of the migration. It is the foundation for integrated compute blocks and it supports existing multi-vendor infrastructure to unify customers' assets into a single compute and storage domain.
"Organizations are seeking to maximize the benefits of cloud computing through more efficient infrastructure procurement, pre-integrated components, faster support response, and greater choice in best-in-class products to meet specific business needs," said John McHugh, CMO of Brocade. "Brocade Virtual Compute Blocks leverage our Ethernet fabrics and industry-leading Fibre Channel SAN fabrics to allow our partners to create integrated stacks that optimize cost effectiveness, flexibility and performance. Because these solutions are open, they allow our customers to scale components independently and better utilize legacy infrastructures."
According to IDC research, "As organizations move to create a dynamic data center enabled by virtualization, they are moving to architectures where server, storage, and network assets are in tighter alignment into converged infrastructures. IDC defines a converged infrastructure as one in which the server, storage, and network infrastructure resources are treated as pools to be assigned as needed to business services... The top benefits organizations achieve by implementing a converged infrastructure are cost savings, simplified management, better availability, increased flexibility, and higher utilization."(1)
Brocade Virtual Compute Block Partner Solutions
Brocade Virtual Compute Block solutions include hypervisor software integrated with servers, storage and Brocade fabric networking products in bundled, pre-racked and pre-tested configurations enriched by technology from Dell, EMC, Fujitsu, Hitachi Data Systems and VMware.
Dell
Brocade and Dell have partnered to develop a reference architecture that includes Dell Compellent Fibre Channel storage, Dell PowerEdge servers, Brocade data center and SAN switches and the VMware hypervisor, which is being shown at the Brocade VMworld booth.
"Our reference architecture developed with Brocade demonstrates Dell Compellent's commitment to provide open, cloud-optimized solutions for our customers' increasingly dynamic requirements in Fibre Channel environments," said Phil Soran, president of Dell Compellent. "Enterprises that deploy this reference architecture benefit from the ability to scale virtualization with their business requirements while deploying industry-leading storage from Dell Compellent and Fibre Channel networking solutions from Brocade."
EMC
EMC and Brocade have joined forces with several partners to deliver Virtual Compute Blocks, which combine VMware virtualization software and management tools, EMC® VNXe™ unified storage, servers and integrated Brocade Fibre Channel and Ethernet fabric networking technologies. EMC and Brocade are now working with Arrow, Tech Data, First Distribution and Acao to deliver Virtual Compute Blocks in the U.S., and in parts of Europe, Africa, and South America. These integrated, easy-to-install solutions enable EMC customers to quickly deploy private and hybrid cloud infrastructures, which provide data center consolidation, availability, scalability and automation.
"Our integration work with Brocade is a key enabler for our resellers in providing simplified deployment of Virtual Compute Blocks and further demonstrates our commitment to delivering cloud infrastructure solutions for our mutual customers that help transform data centers into highly efficient and agile environments," said Josh Kahn, vice president of Solutions Marketing at EMC.
Fujitsu
Fujitsu and Brocade have partnered to create solutions supporting Fujitsu's Dynamic Infrastructures architecture, which will help enterprises boost business agility, efficiency and IT economics. These are designed for data centers of the future, delivering powerful automated pools of computing resources made up of server, storage, network and virtualization technology.
"Fabric-based networks are an important requirement to successful deployments of solutions that will enable our customers to accelerate their cloud-based IT initiatives," said Jens-Peter Seick, senior vice president of the Product Development Group at Fujitsu Technology Solutions. "We are pleased to add Brocade Ethernet fabric technologies to our portfolio, which enhances the long-term partnership we have had in deploying SANs for our customers' virtualized environments."
Hitachi Data Systems
Hitachi converged data center solutions combine storage, compute and networking, with software management, automation and optimization to automate, accelerate and simplify cloud adoption. As a key networking partner, Brocade provides networking solutions for Hitachi converged data center solutions, including Ethernet switch, Fibre Channel fabric data center switches, and Fibre Channel switch modules for the Hitachi Compute Blade family. Solutions include:
Hitachi solutions built on Microsoft Hyper-V Cloud Fast Track: A combination of Hitachi storage and compute, with Brocade networking and Microsoft Windows Server 2008 R2 with Hyper-V and System Center for high-performance private cloud infrastructures and an avenue for further automation and orchestration.
Hitachi Unified Compute Platform: An open and converged platform that provides orchestration and management within the portfolio of Hitachi converged solutions for automated dynamic management of servers, storage and networking to create business resource pools from a simple, yet comprehensive interface.
Hitachi Converged Platform for Microsoft Exchange 2010: The first in a portfolio of pre-tested application-specific converged solutions, engineered for rapid deployment and tightly integrated with Exchange 2010's powerful new features for resilience, predictable performance and seamless scalability.
"HDS and Brocade have partnered to deliver tested and proven solutions with tightly integrated storage, compute and networking products that allow our mutual customers to benefit from Ethernet switch and Fibre Channel fabric technologies to create flexible cloud-based infrastructures," said Asim Zaheer, vice president of Corporate and Product Marketing at Hitachi Data Systems. "Through quicker deployment, automation and scalability, Hitachi converged data center solutions help organizations adopt cloud at their own pace and see predictable results and faster time to value."
VMware
VMware and Brocade have developed a reference architecture solution that enables organizations to create a scalable virtual desktop infrastructure (VDI) environment.
The VMware/Brocade VDI reference architecture, VMware View™, combines Brocade VDX data center switches and converged network adapters, Intel x86-based rack servers, iSCSI-based storage and Trend Micro security software.
Benefits of the VMware/Brocade VDI solution include best-in-class performance and scalability, enhanced security, ease-of-migration and lower total cost of ownership.
"VMware and Brocade have collaborated on a joint VDI solution that addresses our customers' needs to improve business productivity through increased performance, secured client access and elimination of business disruptions," said Vittorio Viarengo, vice president of End-User Computing at VMware. "IT organizations can utilize our reference architecture to deploy a quick-start configuration within their data center or at remote locations. In addition, it can be used as a test or development platform for businesses eager to gain the benefits and advantages of virtualizing user desktops."
Avnet Virtual Compute Block Solutions
Separately today at VMworld, Brocade and Avnet announced the joint development of marketing and enablement support for a new set of multi-vendor, pre-tested and configured virtualization solutions. The first of these is a reference architecture and validated solution designed to cost effectively scale virtual desktop infrastructure (VDI) environments to support thousands of clients (or desktops) per solution bundle. The VDI bundle will help Avnet reseller partners design and deploy open, efficient and scalable virtualization solutions for their end customers by incorporating Brocade and VMware networking and hypervisor technologies in conjunction with a variety of compute and storage platforms.
About Brocade
Brocade (NASDAQ: BRCD) networking solutions help the world's leading organizations transition smoothly to a world where applications and information reside anywhere. (www.brocade.com)
Brocade, the B-wing symbol, DCX, Fabric OS, and SAN Health are registered trademarks, and Brocade Assurance, Brocade NET Health, Brocade One, CloudPlex, MLX, VCS, VDX, and When the Mission Is Critical, the Network Is Brocade are trademarks of Brocade Communications Systems, Inc., in the United States and/or in other countries. Other brands, products, or service names mentioned are or may be trademarks or service marks of their respective owners.
VMware, VMware View and VMworld are registered trademarks and/or trademarks of VMware, Inc. in the United States and/or other jurisdictions. The use of the word "partner" or "partnership" does not imply a legal partnership relationship between VMware and any other company.
Video is really gaining a foothold in how content is delivered, not only in the consumer space but also in the high tech space. Recently I was at SNW where SiliconAngle.TV was broadcasting live from the event. If this is where we are going, I thought it would be good to have my good friends at MediaBoss TV help me with a commercial. All comments welcome.
IBM Scale Out NAS sets world performance record
My series last week on IBM Watson (which you can read [here], [here], [here], and [here]) brought attention to IBM's Scale-Out Network Attached Storage (SONAS). IBM Watson used a customized version of SONAS technology for its internal storage, and, like most of the components of IBM Watson, IBM SONAS is commercially available as a stand-alone product.
Like many IBM products, SONAS has gone through various name changes. First introduced by Linda Sanford at an IBM SHARE conference in 2000 under the IBM Research codename Storage Tank, it was then delivered as a software-only offering, SAN File System, then as a services offering, Scale-out File Services (SoFS), and now as an integrated system appliance, SONAS, in IBM's Cloud Services and Systems portfolio.
If you are not familiar with SONAS, here are a few of my previous posts that go into more detail: