Storage networking is part of the IBM System Storage team. On the third day of the [IBM System Storage Technical University 2011], there were several break-out sessions related to storage networking.
- SAN Best Practices
I always try to catch a session from Jim Blue, who works in our "SAN Central" center of competency team. This session was a long list of useful hints and tips, based on his many years of experience helping clients.
- SAN Zoning works by inclusion, limiting the impact of failing devices. The best approach is to zone by individual initiator port. The default policy for your SAN zoning should be "deny".
- Ports should be named to identify who, what, where and how.
- While many people know not to mix disk and tape devices on the same HBA, Jim also recommends not mixing dissimilar disks, test and production environments, or FCP and FICON traffic.
- The sweet spot is FOUR paths. Too many paths can impact performance.
- When making changes to redundant fabrics, make changes to the first fabric, then allow sufficient time before making the same changes to the other fabric.
- Use software tools like Tivoli Storage Productivity Center (Standard Edition) to validate all changes to your SAN fabric.
- Do not mix 62.5 and 50.0 micron fiber optic cabling.
- Use port caps to disable inactive ports. In one amusing anecdote, he mentioned that an uncovered port was hit by sunlight every day, sending error messages that took a while to figure out.
- Save your SAN configuration to non-SAN storage for backup.
- Consider firmware about two months old to be stable.
- Rule of thumb for estimating IOPS: 75-100 IOPS per 7200 RPM drive, 120-150 IOPS per 10K RPM drive, and 150-200 IOPS per 15K RPM drive (see the sketch after this list).
- Decide whether your shop does just-in-time or just-in-case provisioning. Just-in-time gets additional capacity on demand as needed; just-in-case over-provisions to avoid scrambling at the last minute.
- Avoid oversubscribing your inter-switch links (ISL). Aim for around a 7:1 to 10:1 ratio.
- Don't go cheap on bandwidth between sites for long-distance replication.
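To make the last few rules of thumb concrete, here is a minimal back-of-the-envelope sketch in Python. The IOPS ranges and the 7:1 to 10:1 ISL target are taken from Jim's session; the function names and the sample drive mix are my own, purely for illustration:

```python
# Rough capacity-planning helpers based on the rules of thumb above.
# Low/high IOPS estimates per spindle, keyed by drive speed (RPM).
IOPS_PER_DRIVE = {7200: (75, 100), 10000: (120, 150), 15000: (150, 200)}

def estimate_iops(drive_counts):
    """Return (low, high) aggregate IOPS for a dict of {rpm: count}."""
    low = sum(IOPS_PER_DRIVE[rpm][0] * n for rpm, n in drive_counts.items())
    high = sum(IOPS_PER_DRIVE[rpm][1] * n for rpm, n in drive_counts.items())
    return low, high

def isl_oversubscription(host_ports, isl_ports):
    """Ratio of host-facing ports to ISLs; aim for roughly 7:1 to 10:1."""
    return host_ports / isl_ports

low, high = estimate_iops({15000: 64, 7200: 96})  # hypothetical drive mix
print(f"Estimated back-end capability: {low:,} to {high:,} IOPS")
print(f"ISL oversubscription: {isl_oversubscription(56, 8):.0f}:1")  # 7:1
```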
- Next Generation Network Fabrics - Strategy and Innovations
Mike Easterly, IBM Director of Global Field Marketing, presented the IBM System Networking strategy in light of IBM's recent acquisition of Blade Network Technologies (BNT). BNT gear is used in 350 of the Fortune 500 companies, and BNT is ranked #2 behind Cisco in sales of non-core Ethernet switches (based on number of units sold).
Based on a recent survey, companies are upgrading their Ethernet networks for a variety of reasons:
- 56 percent for Live Partition Mobility and VMware vMotion
- 45 percent for integrated compute stacks, like IBM CloudBurst
- 43 percent for private, public and hybrid cloud computing deployments
- 40 percent for network convergence
Many companies adopt a three-level approach, with core directors, distribution switches, and then access switches at the edge that connect servers and storage devices. IBM's BNT allows you to flatten the network to lower latency by collapsing the access and distribution levels into one.
IBM's strategy is to focus on BNT for the access/distribution level, and to continue its strategic partnerships for the core level.
IBM BNT provides better price/performance and lower energy consumption. To help with hot-aisle/cold-aisle rack deployments, IBM BNT provides both F and R models. F models have ports on the front, and R models have ports in the rear.
IBM BNT supports virtual fabric and HW-offload iSCSI traffic, and is future-enabled for FCoE. Support for TRILL (Transparent Interconnect of Lots of Links) and OpenFlow will be delivered through software updates to the switches.
While Cisco Nexus 1000v is focused on VMware Enterprise Plus, IBM BNT's VMready works with VMware, Hyper-V, Linux KVM, Xen, OracleVM, and PowerVM. This allows a single pane of glass for managing VMready and ESX vSwitches.
In preparation for Converged Enhanced Ethernet (CEE), IBM BNT will provide full 40GbE support sometime next year, and offer switches that support 100GbE uplinks. IBM offers extended-length cables, including passive SFP+ DAC at 8.5 meters, and 10GBASE-T Cat7 cables up to 100 meters.
- Inter-datacenter Workload Mobility with VMware vSphere and SAN Volume Controller (SVC)
This session was co-presented by Bill Wiegand, IBM Advanced Technical Services, and Rawley Burbridge, IBM VMware and midrange storage consultant. IBM is the leader in storage virtualization with its SAN Volume Controller (SVC), and is the leading reseller of VMware.
Like MetroCluster on IBM N series, or EMC's VPLEX Metro, the IBM SAN Volume Controller can support a stretched cluster across distance that allows virtual machines to move seamlessly from one datacenter to another. This is a feature IBM introduced with SVC 5.1 back in 2009. This can be used for PowerVM Live Partition Mobility, VMware vMotion, and Hyper-V Quick Migration.
SVC stretched cluster can help with both Disaster Avoidance and Disaster Recovery. For Disaster Avoidance, in anticipation of an outage, VMs can be moved to the secondary datacenter. For Disaster Recovery, additional automation, such as VMware High Availability (HA), is needed to restart the VMs at the secondary datacenter.
IBM stretched cluster is further improved with a feature called Volume Mirroring (formerly vDisk Mirroring) which creates two physical copies of one logical volume. To the VMware ESX hosts, there is only one volume, regardless of which datacenter it is in. The two physical copies can be on any kind of managed disk, as there is no requirement or dependency of copy services on the back-end storage arrays.
Another recent improvement is the idea of spreading the three quorum disks across three different locations or "failure domains": one in each datacenter, and a third in a separate building, perhaps somewhere in between the other two.
Of course, there are regional disasters that could affect both datacenters. For this reason, SVC stretched cluster volumes can be replicated to a third location up to 8000 km away. This can be done with any back-end disk arrays, as again there is no requirement for copy services from the managed devices. SVC takes care of it all.
Networking is going to be very important for a variety of transformational projects going forward in the next five years.
technorati tags: IBM, SAN, FCP, FICON, BNT, VMready, TRILL, OpenFlow, SVC, iSCSI, FCoE, CEE, VMware, ESX, vMotion, VPLEX, MetroCluster, PowerVM
|
Continuing my coverage of the annual [2010 System Storage Technical University], I participated in the storage free-for-all, a long-time tradition started at the SHARE User Group conference and carried forward to other IT conferences. The free-for-all is a Q&A panel of experts, where anyone can ask any question. These are sometimes called "Birds of a Feather" (BOF). Last year, they were called "Meet the Experts", one for mainframe storage, and the other for storage attached to distributed systems. This year, we had two: one focused on Tivoli Storage software, and the second covering storage hardware. This post provides a recap of the Storage Hardware free-for-all.
The emcee for the event was Scott Drummond. The other experts on the panel included Dan Thompson, Carlos Pratt, Jack Arnold, Jim Blue, Scott Schroder, Ed Baker, Mike Wood, Steve Branch, Randy Arseneau, Tony Abete, Jim Fisher, Scott Wein, Rob Wilson, Jason Auvenshine, Dave Canan, Al Watson, and myself, yours truly, Tony Pearson.
Q: What can I do to improve performance on my DS8100 disk system? It is running a mix of sequential batch processing and my medical application (EPIC). I have 16GB of cache and everything is formatted as RAID-5.
A: We are familiar with EPIC. It does not "play well with others", so IBM recommends dedicating resources for just the EPIC data. Also consider RAID-10 instead of RAID-5 for the EPIC data.
Q: How do I evaluate IBM storage solutions in regard to [PCI-DSS] requirements?
A: Well, we are not lawyers, and some aspects of the PCI-DSS requirements are outside the storage realm. In March 2010, IBM was named ["Best Security Company"] by SC Magazine, and we have secure storage solutions for both disk and tape systems. IBM DS8000 and DS5000 series offer Full Disk Encryption (FDE) disk drives. IBM LTO-4/LTO-5 and TS1120/TS1130 tape drives meet FIPS requirements for encryption. We will provide you contact information for an encryption expert to address the other parts of your PCI-DSS specific concerns.
Q: My telco will only offer FCIP routing for long-distance disk replication, but my CIO wants to use Fibre Channel routing over CWDM. What do I do?
A: IBM XIV, DS8000 and DS5000 all support FC-based long-distance replication across CWDM. However, if you don't have dark fiber, and your telco won't provide this option, you may need to re-negotiate your options.
Q: My DS4800 sometimes reboots repeatedly. What should I do?
A: This was a known problem with microcode level 760.28 when detecting a failed drive. You need to replace the drive, and upgrade to the latest microcode.
Q: Should I use VMware snapshots or DS5000 FlashCopy?
A: VMware snapshots are not free: you need to upgrade to the appropriate level of VMware to get this function, and it would be limited to your VMware data only. The advantage of DS5000 FlashCopy is that it applies to all of your operating systems and hypervisors in use, eliminates the consumption of VMware overhead, and provides crash-consistent copies of your data. If your DS5000 disk system is dedicated to VMware, then you may want to compare costs versus trade-offs.
Q: Any truth to the rumor that Fibre Channel protocol will be replaced by SAS?
A: SAS has some definite cost advantages, but is limited to 8 meters in length. Therefore, you will see more and more usage of SAS within storage devices, but outside the box, there will continue to be Fibre Channel, including FCP, FICON and FCoE. The Fibre Channel Industry Alliance [FCIA] has a healthy roadmap for 16 Gbps support and 20 Gbps interswitch link (ISL) connections.
Q: What about Fibre Channel drives, are these going away?
A: We need to differentiate the connector from the drive itself. Manufacturers are able to produce 10K and 15K RPM drives with SAS instead of FC connectors. While many have suggested that a "Flash-and-Stash" approach of SSD+SATA would eliminate the need for high-speed drives, IBM predicts that there just won't be enough SSD produced to meet the performance needs of our clients over the next five years, so 15K RPM drives, more likely with SAS instead of FC connectors, will continue to be deployed.
Q: We'd like more advanced hands-on labs, and to have the certification exams be more product-specific, rather than exams for midrange disk or enterprise disk that are too wide-ranging.
A: OK, we will take that feedback to the conference organizers.
Q: IBM Tivoli Storage Manager is focused on disaster recovery from tape. How do I incorporate remote disk replication?
A: This is addressed by IBM's Unified Recovery Management, based on the seven tiers of disaster recovery established in 1983 at the GUIDE conference. You can combine local recovery with FastBack, data center server recovery with TSM and FlashCopy Manager, and combine that with IBM Tivoli Storage Productivity Center for Replication (TPC-R), GDOC and GDPS to manage disk replication across business continuity/disaster recovery (BC/DR) locations.
Q: IBM Tivoli Storage Productivity Center for Replication only manages the LUNs. What about server failover and mapping the new servers to the replicated LUNs?
A: There are seven tiers of disaster recovery. The sixth tier is to manage the storage replication only, as TPC-R does. The seventh tier adds full server and network failover. For that you need something like IBM GDPS or GDOC that adds this capability.
Q: All of my other vendor kit has bold advertising, prominent lettering, neon lights, bright colors, but our IBM kit is just black, often not even identifying the specific make or model, just "IBM" or "IBM System Storage".
A: IBM has opted for simplified packaging and our sleek, signature "raven black" color, and passes these savings on to you.
Q: Bring back the SHARK fins!
A: We will bring that feedback to our development team. ("Shark" was the codename for IBM's ESS 800 disk model. Fiberglass "fins" were made as promotional items and placed on top of ESS 800 disk systems to help "identify them" on the data center floor. Unfortunately, professional golfer [Greg Norman] complained, so IBM discontinued the use of the codename back in 2005.)
Q: Where is InfiniBand?
A: Like SAS, InfiniBand had limited distance, about 10 to 15 meters, which proved unusable for server-to-storage network connections across data center floorspace. However, there are now 150 meter optical cables available, and you will find InfiniBand used in server-to-server communications and inside storage systems. IBM SONAS uses InfiniBand internally today. IBM DCS9900 offers InfiniBand host-attachment for HPC customers.
Q: We need midrange storage for our mainframe, please!
A: In addition to the IBM System Storage DS8000 series, the IBM SAN Volume Controller and IBM XIV are able to connect to Linux on System z mainframes.
 We need "Do's and Don'ts" on which software to run with which hardware.
IBM [Redbooks] are a good source for that, and we prioritize our efforts based on all those cards and letters you send the IBM Redbooks team.
Q: The new TPC v4 reporting tool requires a bit of a learning curve.
A: The new reporting tool, based on Eclipse's Business Intelligence Reporting Tool [BIRT], is now standardized across most of the Tivoli portfolio. Check out the [Tivoli Common Reporting] community page for assistance.
Q: An unfortunate side-effect of using server virtualization like VMware is that it worsens management and backup issues. We now have many guests on each blade server.
A: IBM is the leading reseller of VMware, and understands that VMware adds a layer of complexity. Thankfully, IBM Tivoli Storage Manager uses a lightweight backup agent. IBM [Systems Director VMcontrol] can help you manage a variety of hypervisor environments.
This was a great interactive session. I am glad everyone stayed late Thursday evening to participate in this discussion.
technorati tags: IBM, Technical University, DS8100, EPIC, PCI-DSS, FDE, Encryption, XIV, CWDM, DS5000, SAS, InfiniBand, FCIA, FCoE, FICON, GUIDE, Tivoli, Productivity Center, TPC-R, GDPS, SONAS, SVC, BIRT, Systems Director, VMcontrol
|
Wrapping up my coverage of the annual [2010 System Storage Technical University], I attended what was perhaps the best session of the conference. Jim Nolting, IBM Semiconductor Manufacturing Engineer, presented the new IBM zEnterprise mainframe, "A New Dimension in Computing", under the Federal track.
The zEnterprise debunks the "one processor fits all" myth. For some I/O-intensive workloads, the mainframe continues to be the most cost-effective platform. However, there are other workloads where a memory-rich Intel or AMD x86 instance might be the best fit, and yet other workloads where the highly parallel threads of a reduced instruction set computing [RISC] chip such as IBM's POWER7 processor are more cost-effective. The IBM zEnterprise combines all three processor types into a single system, so that you can now run each workload on the processor that is optimized for that workload.
- IBM zEnterprise z196 Central Processing Complex (CPC)
Let's start with the new mainframe z196 central processing complex (CPC). Many thought this would be called the z11, but that didn't happen. Basically, the z196 machine has a maximum of 96 cores versus the z10's 64-core maximum, and each core runs at 5.2GHz instead of the z10's 4.7GHz. It is available in air-cooled and water-cooled models. The primary operating system that runs on this is called "z/OS", which, when used with its integrated UNIX System Services subsystem, is fully UNIX-certified. The z196 server can also run z/VM, z/VSE, z/TPF and Linux on z, which is just Linux recompiled for the z/Architecture chip set. In my June 2008 post [Yes, Jon, there is a mainframe that can help replace 1500 servers], I mentioned the z10 mainframe had a top speed of nearly 30,000 MIPS (Million Instructions per Second). The new z196 machine can do 50,000 MIPS, a 60 percent increase!
(Update: Back in 2007, IBM and Sun mutually supported [OpenSolaris on an IBM System z mainframe]. Unfortunately, after Oracle acquired Sun, the OpenSolaris Governing Board has [grown uneasy over Oracle's silence] about the future of OpenSolaris on any platform. The OpenSolaris [download site] identifies 2009.06 as the latest release, but only for x86 and SPARC chip sets. Apparently, the 2010.03 release expected five months ago in March has slipped. Now it looks official that [OpenSolaris is Dead].)
The z196 runs a hypervisor called PR/SM that allows the box to be divided into dozens of logical partitions (LPAR), and the z/VM operating system can also act as a hypervisor running hundreds or thousands of guest OS images. Each core can be assigned a specialty engine "personality": GP for general processor, IFL for z/VM and Linux, zAAP for Java and XML processing, and zIIP for database, communications and remote disk mirroring. Like the z9 and z10, the z196 can attach to external disk and tape storage via ESCON, FICON or FCP protocols, and through NFS via 1GbE and 10GbE Ethernet.
- IBM zEnterprise BladeCenter Extension (zBX)
There is a new frame called the zBX that basically holds two IBM BladeCenter chassis, each capable of holding 14 blades, for a total of 28 blades per zBX frame. For now, only select blade servers are supported inside, but IBM plans to expand this to include more as testing continues. The POWER-based blades can run native AIX, IBM's other UNIX operating system, and the x86-based blades can run Linux-x86 workloads, for example. Each of these blade servers can run a single OS natively, or run a hypervisor to host multiple guest OS images. IBM plans to look into running other POWER and x86-based operating systems in the future.
If you are already familiar with IBM's BladeCenter, then you can skip this paragraph. Basically, you have a chassis that holds 14 blades connected to a "mid-plane". On the back of the chassis, you have hot-swappable modules that snap into the other side of the mid-plane. There are modules for FCP, FCoE and Ethernet connectivity, which allows blades to talk to each other, as well as external storage. BladeCenter Management modules serve as both the service processor as well as the keyboard, video and mouse Local Console Manager (LCM). All of the IBM storage options available to IBM BladeCenter apply to zBX as well.
Besides general purpose blades, IBM will offer "accelerator" blades that will offload work from the z196. For example, let's say an OLAP-style query is issued via SQL to DB2 on z/OS. In the process of parsing the complicated query, it creates a Materialized Query Table (MQT) to temporarily hold some data. This MQT contains just the columnar data required, which can then be transferred to a set of blade servers known as the Smart Analytics Optimizer (SAO), which processes the request and sends the results back. The Smart Analytics Optimizer comes in various sizes, from small (7 blades) to extra large (56 blades, 28 in each of two zBX frames). A 14-blade configuration can hold about 1TB of compressed DB2 data in memory for processing.
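As a toy illustration of the offload idea, here is a sketch of projecting just the needed columns into an MQT-like structure and aggregating it on the accelerator side. The table, column names and aggregation are invented for illustration; the actual DB2/SAO processing is far more sophisticated:

```python
# Toy illustration of the Smart Analytics Optimizer flow described above:
# keep only the columns the OLAP query needs, then aggregate on the
# accelerator blades. All data and names here are invented.
rows = [
    {"acct": 1, "region": "EU", "balance": 100.0, "notes": "..."},
    {"acct": 2, "region": "US", "balance": 250.0, "notes": "..."},
    {"acct": 3, "region": "EU", "balance": 50.0,  "notes": "..."},
]

# The "MQT": just the columnar data the query requires.
mqt = [(r["region"], r["balance"]) for r in rows]

# Accelerator-side aggregation: total balance per region.
totals = {}
for region, balance in mqt:
    totals[region] = totals.get(region, 0.0) + balance
print(totals)  # {'EU': 150.0, 'US': 250.0}
```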
- IBM zEnterprise Unified Resource Manager
You can have up to eight z196 machines and up to four zBX frames connected together into a monstrously large system. There are two internal networks. The Inter-ensemble data network (IEDN) is a 10GbE network that connects all the OS images together, and can be further subdivided into separate virtual LANs (VLAN). The Inter-node management network (INMN) is a 1000BASE-T Ethernet network that connects all the host servers together so they can be managed under a single pane of glass known as the Unified Resource Manager, which is based on IBM Systems Director.
By integrating service management, the Unified Resource Manager can handle Operations, Energy Management, Hypervisor Management, Virtual Server Lifecycle Management, Platform Performance Management, and Network Management, all from one place.
- IBM Rational Developer for System z Unit Test (RDz)
But what about developers and testers, such as those Independent Software Vendors (ISV) that produce mainframe software? How can IBM make their lives easier?
Phil Smith on z/Journal provides a history of [IBM Mainframe Emulation]. Back in 2007, three emulation options were in use in various shops:
- Open Mainframe, from Platform Solutions, Inc. (PSI)
- FLEX-ES, from Fundamental Software, Inc.
- Hercules, which is an open source package
None of these are viable options today; nobody wanted to pay IBM for its intellectual property on the z/Architecture or license the use of the z/OS operating system. To fill the void, IBM put out an officially-supported emulation environment called IBM System z Professional Development Tool (zPDT), available to IBM employees, IBM Business Partners and ISVs that register through IBM PartnerWorld. To help out developers and testers who work at clients that run mainframes, IBM now offers IBM Rational Developer for System z Unit Test, a modified version of zPDT that can run on an x86-based laptop or shared IBM System x server. Based on the open source [Eclipse IDE], RDz emulates GP, IFL, zAAP and zIIP engines on a Linux-x86 base. A four-core x86 server can emulate a 3-engine mainframe.
With RDz, a developer can write code, compile and unit test all without consuming any mainframe MIPS. The interface is similar to Rational Application Developer (RAD), and so similar skills, tools and interfaces used to write Java, C/C++ and Fortran code can also be used for JCL, CICS, IMS, COBOL and PL/I on the mainframe. An IBM study ["Benchmarking IDE Efficiency"] found that developers using RDz were 30 percent more productive than using native z/OS ISPF. (I mention the use of RAD in my post [Three Things to do on the IBM Cloud]).
What does this all mean for the IT industry? First, the zEnterprise is perfectly positioned for [three-tier architecture] applications. A typical example could be a client-facing web-server on x86, talking to business logic running on POWER7, which in turn talks to database on z/OS in the z196 mainframe. Second, the zEnterprise is well-positioned for government agencies looking to modernize their operations and significantly reduce costs, corporations looking to consolidate data centers, and service providers looking to deploy public cloud offerings. Third, IBM storage is a great fit for the zEnterprise, with the IBM DS8000 series, XIV, SONAS and Information Archive accessible from both z196 and zBX servers.
To learn more, see the [12-page brochure] or review the collection of [IBM Redbooks]. Check out the [IBM Conferences schedule] for an event near you. Next year, the IBM Storage University will be held July 18-22, 2011 in Orlando, Florida.
technorati tags: IBM, Technical University, zEnterprise, x86, POWER7, RISC, z/OS, Linux, AIX, OpenSolaris, Oracle, FICON, NFS, z196, zBX, DB2, SAO, IEDN, INMN, RDz, ISV, Eclipse, Cloud Computing
|
Mastering the art of stretching out a week-long event into two weeks' worth of blog posts, I continue my coverage of the [Data Center 2010 conference]. Tuesday afternoon, I attended several sessions that focused on technologies for Cloud Computing.
(Note: It appears I need to repeat this. The analyst company that runs this event has kindly asked me not to mention their name on this blog, display any of their logos, mention the names of any of their employees, include photos of any of their analysts, include slides from their presentations, or quote verbatim any of their speech at this conference. This is all done to protect and respect their intellectual property that their members pay for. The pie charts included on this series of posts were rendered by Google Charting tool.)
- Converging Storage and Network Fabrics
The analysts presented a set of alternative approaches to consolidating your SAN and LAN fabrics. Here were the choices discussed:
- Fibre Channel over Ethernet (FCoE) - This requires 10GbE with Data Center Bridging (DCB) standards, which IBM refers to as Converged Enhanced Ethernet (CEE). Converged Network Adapters (CNAs) support FC, iSCSI, NFS and CIFS protocols on a single wire.
- Internet SCSI (iSCSI) - This works on any flavor of Ethernet, is fully routable, and was developed in the 1990s by IBM and Cisco. Most 1GbE and all 10GbE Network Interface Cards (NIC) support TCP Offload Engine (TOE) and "boot from SAN" capability. Native support for iSCSI is widely available in most hypervisors and operating systems, including VMware and Windows. DCB Ethernet is not required for iSCSI, but can be helpful. Many customers keep their iSCSI traffic in a separate network (often referred to as an IP SAN) from the rest of their traditional LAN traffic.
- Network Attached Storage (NAS) - NFS and CIFS have been around for a long time and work with any flavor of Ethernet. Like iSCSI, DCB is not required but can be helpful. NAS went from being used for files only, to email and databases, and is now viewed as the easiest deployment for VMware. vMotion is able to move VM guests from one host to another within the same LAN subnet.
- InfiniBand or PCI extenders - This approach allows many servers to share a smaller number of NICs and HBAs. While InfiniBand was limited in distance for its copper cables, recent advances now allow fiber optic cables for 150 meter distances.
Interactive polls of the audience offered some insight on plans to switch from FC/FICON to Ethernet-based storage, what portion of storage is FCP/FICON-attached, what portion of storage is Ethernet-attached, and what portion of servers are already using some Ethernet-attached storage. (The accompanying pie charts are not reproduced here.)
Each vendor has its own style. HP provides homogeneous solutions, having acquired 3COM and broken off relations with Cisco. Cisco offers tight alliances over closed proprietary solutions, publicly partnering with both EMC and NetApp for storage. IBM offers loose alliances, with IBM-branded solutions from Brocade and BNT, as well as reselling arrangements with Cisco and Juniper. Oracle has focused on Infiniband instead for its appliances.
The analysts predict that IBM will be the first to deliver 40 GbE, from their BNT acquisition. They predict by 2014 that Ethernet approaches (NAS, iSCSI, FCoE) will be the core technology for all but the largest SANs, and that iSCSI and NAS will be more widespread than FCoE. As for cabling, the analysts recommend copper within the rack, but fiber optic between racks. Consider SAN management software, such as IBM Tivoli Storage Productivity Center.
The analysts felt that the biggest inhibitor to merging SANs and LANs will be organizational issues. SAN administrators consider LAN administrators undisciplined "cowboys", unwilling to focus on 24x7 operational availability, redundancy or business continuity. LAN administrators consider SAN administrators "Luddites", afraid or unwilling to accept FCoE, iSCSI or NAS approaches.
- Driving Innovation through Innovation
Mr. Shannon Poulin from Intel presented Intel's advancements in Cloud Computing. Let's start with some facts and predictions:
Today:
- About 25 percent of the [world's population] is connected to the Internet
- There are over 2.5 billion photos on Facebook, which runs on 30,000 servers
- 30 billion videos are viewed every month
- Nearly all Internet-connected devices are either computers or phones
By 2015:
- An additional billion people on the Internet
- Cars, televisions, and households will also be connected to the Internet
- The world will need 8x more network bandwidth, 12x more storage, and 20x more compute power
To avoid confusion between on-premise and off-premise deployments, Intel defines "private cloud" as "single tenant" and "public cloud" as "multi-tenant". Clouds should be automated, efficient, simple, secure, and interoperable enough to allow federation of resources across providers. He also felt that Clouds should be "client-aware", so that the cloud knows what devices it is talking to and optimizes the results accordingly. For example, if you are watching video on a small 320x240 smartphone screen, it makes no sense for the Cloud server to push out 1080p. All devices are going through a connected/disconnected dichotomy: they can do some things while disconnected, but other things only while connected to the Internet or Cloud provider.
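Here is a minimal sketch of what "client-aware" could mean in practice: serve the smallest rendition that satisfies the client's screen. The rendition list and selection rule are my own invented example, not Intel's design:

```python
# Pick the smallest video rendition that still fills the client's
# screen, rather than pushing 1080p to a 320x240 smartphone.
RENDITIONS = [(320, 240), (640, 480), (1280, 720), (1920, 1080)]  # ascending

def pick_rendition(screen_w, screen_h):
    for w, h in RENDITIONS:
        if w >= screen_w and h >= screen_h:
            return (w, h)
    return RENDITIONS[-1]  # largest available

print(pick_rendition(320, 240))   # -> (320, 240), not 1080p
print(pick_rendition(1366, 768))  # -> (1920, 1080)
```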
An internal Intel task force investigated what it would take to beat MIPS and IBM POWER processors, and found that their own Intel chips lacked key functionality. Intel plans to address some of these shortcomings with the new "Sandy Bridge" chips sometime next year. They also plan a series of specialized chips that support graphics processing (GPU), network processing (NPU) and so on. He also mentioned that Intel released "Tukwila" earlier this year, the latest version of the Itanium chip. HP is the last major company to still use Itanium for its servers.
Shannon wrapped up the talk with a discussion of two Cloud Computing initiatives. The first is [Intel® Cloud Builders], a cross-industry effort to build Cloud infrastructures based on the Intel Xeon chipset. The second is the [Open Data Center Alliance], comprised of leading global IT managers who are working together to define and promote data center requirements for the cloud and beyond.
- Fabric-Based Infrastructure
The analysts feel that we need to switch from thinking about "boxes" (servers, storage, networks) to thinking about "resources". To this end, they envision a future datacenter built on an any-to-any fabric that connects compute, memory, storage, and networking resources as commodities. They feel the current trend towards integrated system stacks is just a marketing ploy by vendors to fatten their wallets. (Ouch!)
A new concept to "disaggregate" caught my attention. When you make cookies, you disaggregate a cup of sugar from the sugar bag, a teaspoon of baking soda from the box, and so on. When you carve a LUN from a disk array, you are disaggregating the storage resources you need for a project. The analysts feel we should be able to do this with servers and network resources as well, so that when you want to deploy a new workload you just disaggregate the bits and pieces in the amounts you actually plan to use and combine them accordingly. IBM calls these combinations "ensembles" of Cloud computing.
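Here is a toy sketch of that disaggregation idea, with invented pool sizes and workload requirements, just to show the cookie-baking analogy in code:

```python
# "Disaggregate" only the amounts a workload actually plans to use
# from commodity resource pools, like measuring out cups of sugar.
pools = {"cores": 512, "memory_gb": 4096, "storage_tb": 200}

def disaggregate(pools, request):
    """Carve the requested amounts out of the pools; return the ensemble."""
    for resource, amount in request.items():
        if pools.get(resource, 0) < amount:
            raise ValueError(f"not enough {resource}")
    for resource, amount in request.items():
        pools[resource] -= amount
    return dict(request)  # the combined "ensemble" handed to the workload

ensemble = disaggregate(pools, {"cores": 16, "memory_gb": 64, "storage_tb": 5})
print(ensemble)  # resources allocated to the new workload
print(pools)     # what remains in the shared pools
```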
Very few workloads require "best-of-breed" technologies, and this new fabric-based infrastructure recognizes that reality. One thing that IT data center operations can learn from Cloud Service Providers is their focus on "good enough" deployment.
This means, however, that IT professionals will need new skill sets. IT administrators will need to learn a bit of application development, systems integration, and runbook automation. Network admins need to enter 12-step programs to stop using Command Line Interfaces (CLI). Server admins need to put down their screwdrivers and focus instead on policy templates.
Whether you deploy private, public or hybrid cloud computing, the benefits are real and worth the changes needed in skill sets and organizational structure.
technorati tags: IBM, FCoE, iSCSI, NAS, NFS, CIFS, DCB, CEE, CNA, TOE, SAN, LAN, Convergence, FICON, Ethernet, BNT, Cisco, HP, 3COM, Intel, ODCA, CLI
|
Did IBM XIV force EMC's hand to announce VMAXe? Let's take a stroll down memory lane.
In 2008, IBM XIV showed the world that it could ship a Tier-1, high-end, enterprise-class system using commodity parts. Technically, prior to its acquisition by IBM, the XIV team had boxes out in production since 2005. EMC incorrectly argued this announcement meant the death of the IBM DS8000. Just because EMC was unable to figure out how to have more than one high-end disk product, doesn't mean IBM or other storage vendors were equally challenged. Both IBM XIV and DS8000 are Tier-1, high-end, enterprise-class storage systems, as are the IBM N series N7900 and the IBM Scale-Out Network Attached Storage (SONAS).
In April 2009, EMC followed IBM's lead with their own V-Max system, based on Symmetrix Engenuity code, but on commodity x86 processors. Nobody at EMC suggested that the V-Max meant the death of their other Symmetrix box, the DMX-4, which means that EMC proved to themselves that a storage vendor could offer multiple high-end disk systems. Hitachi Data Systems (HDS) would later offer the VSP, which also includes some commodity hardware as well.
In July 2009, analysts at International Technology Group published their TCO findings that IBM XIV was 63 percent less expensive than EMC V-Max, in a whitepaper titled [Cost/Benefit Case for IBM XIV Storage System: Comparing Costs for IBM XIV and EMC V-Max Systems]. Not surprisingly, EMC cried foul, feeling that since the V-Max had not yet had time to prove itself in the field, it was too soon to compare newly minted EMC gear with a mature product like XIV that had been in production accounts for several years. Big companies like to wait for "Generation 1" of any new product to mature a bit before they purchase.
To compete against IBM XIV's very low TCO, EMC was forced to either deeply discount their Symmetrix, or counter-offer with lower-cost CLARiiON, their midrange disk offering. An ex-EMCer that now works for IBM on the XIV sales team put it in EMC terms -- "the IBM XIV provides a Symmetrix-like product at CLARiiON-like prices."
(Note: Somewhere in 2010, EMC dropped the hyphen, changing the name from V-Max to VMAX. I didn't see this formally announced anywhere, but it seems that the new spelling is the officially correct usage. A common marketing rule is that you should only rename failed products, so perhaps dropping the hyphen was EMC's way of preventing people from searching older reviews of the V-Max product.)
This month, IBM introduced the IBM XIV Gen3 model 114. The analysts at ITG updated their analysis, as there are now more customers that have either or both products, to provide a more thorough comparison. Their latest whitepaper, titled [Cost/Benefit Case for IBM XIV Systems: Comparing Cost Structures for IBM XIV and EMC VMAX Systems], shows that IBM maintains its substantial cost savings advantage, representing 69 percent less Total Cost of Ownership (TCO) than EMC, on average, over the course of three years.
In response, EMC announced its new VMAXe, following the naming convention EMC established for VNX and VNXe. Customers cannot upgrade VNXe to VNX, nor VMAXe to VMAX, so at least EMC was consistent in that regard. Like the IBM XIV and XIV Gen3, the new EMC VMAXe eliminated "unnecessary distractions" like CKD volumes and FICON attachment needed for the IBM z/OS operating system on IBM System z mainframes. Fellow blogger Barry Burke from EMC explains everything about the VMAXe in his blog post [a big thing in a small package].
So, you have to wonder, did IBM XIV force EMC's hand into offering this new VMAXe storage unit? Surely, EMC sales reps will continue to lead with the more profitable DMX-4 or VMAX, and then only offer the VMAXe when the prospective customer mentions that the IBM XIV Gen3 is 69 percent less expensive. I haven't seen any list or street prices for the VMAXe yet, but I suspect it is less expensive than VMAX, on a dollar-per-GB basis, so that EMC will not have to discount it as much to compete against IBM.
technorati tags: IBM, XIV, Gen3, EMC, DMX-4, VMAX, V-Max, HDS, N7900, SONAS, DS8000, CKD, FICON, TCO
|
Continuing my coverage of the [IBM System x and System Storage Technical Symposium], I thought I would start with some photos. I took these with my cell phone and, without realizing how much it would cost, uploaded them to Flickr at international data roaming rates. Oops!
Here are some of the banners used at the conference. Each break-out session room was outfitted with a "Presentation Briefcase" that had everything a speaker might need, including power plug adapters and dry-erase markers for the whiteboard. What a clever idea!
Here is a recap of the third and final day:
- Understanding IBM's Storage Encryption Options
Special thanks to Jack Arnold for providing me his deck for this presentation. I presented IBM's leadership in encryption standards, including the [OASIS Key Management Interoperability Protocol] that allows many software and hardware vendors to interoperate. IBM offers the IBM Tivoli Key Lifecycle Manager (TKLM v2) for Windows, Linux, AIX and Solaris operating systems, and the IBM Security Key Lifecycle Manager (v1.1) for z/OS.
Encrypting data at rest can be done several ways, by the application at the host server, in a SAN-based switch, or at the storage system itself. I presented how IBM Tivoli Storage Manager, the IBM SAN32B-E4 SAN switch, and various disk and tape devices accomplish this level of protection.
- NAS @ IBM
Rich Swain, IBM Field Technical Sales Specialist for NAS solutions, provided an overview of IBM's NAS strategy and the three products: Scale-Out Network Attached Storage (SONAS), Storwize V7000 Unified, and N series.
- IBM System Networking Convergence CEE/DCB/FCoE
Mike Easterly, IBM Global Field Marketing Manager for IBM System Networking, presented on network convergence. He emphasized that "Convergence is not just FCoE!" Rather, it is about bringing together FCoE with iSCSI, CIFS, NFS and other Ethernet-based protocols. In his view, "All roads lead to Ethernet!"
There are a lot of new standards that didn't exist a few years ago, such as PCI-SIG's Single Root I/O Virtualization [SR-IOV], Virtual Ethernet Port Aggregator [VEPA], [VN-Tag], Data Center Bridging [DCB], Layer-2 Multipath [L2MP], and my favorite: Transparent Interconnect of Lots of Links [TRILL].
Last year, IBM acquired Blade Network Technologies (BNT), the company that made IBM BladeCenter's Advanced Management Module (AMM) and BladeCenter Open Fabric Manager (BOFM). BNT also makes Ethernet switches, so it has been merged with IBM's System Storage team, forming the IBM System Storage and Networking team. Most of today's 10GbE is either fiber optic, Direct Attach Copper (DAC) that supports cable lengths up to 8.5 meters, or 10GBASE-T, which provides longer distances over twisted pair. IBM's DS3500 uses 10GBASE-T for its 10GbE iSCSI support.
Last month, IBM announced 40GbE! I missed that one. The IT industry also expects to deliver 100GbE by 2013. For now, these will be used as uplinks between switches, as most servers don't have the capacity to pump this much data through their buses. With 40GbE and 100GbE, it would be hard to ignore Ethernet as the common network standard to drive convergence.
Fibre Channel protocols, such as FCP and FICON, are still the dominant storage networking technology, but this is expected to peak around 2013 and start declining thereafter in favor of iSCSI, NAS and FCoE technologies. Enhancements like "Priority-based Flow Control", made to Ethernet to support FCoE, have already helped iSCSI and NAS deployments as well.
The iSCSI protocol is being used with Microsoft Exchange, PXE Boot, Server virtualization hypervisors like VMware and Hyper-V, as well as large Database and OLTP. IBM's SVC, Storwize V7000, XIV, DS5000, DS3500 and N series all support iSCSI.
IBM's [RackSwitch] family of products can help offload traffic at $500 per port, compared to the traditional $2000 per port for IBM SAN32B or Cisco Nexus5000 converged top-of-rack switches.
IBM's System Networking strategy has two parts. For Ethernet, offer its own IBM System Networking product line as well as continue its partnership with Juniper Networks. For Fibre Channel and FCoE, continue strategic partnerships with Brocade and Cisco. IBM will lead the industry, help drive open standards to adopt Converged Enhanced Ethernet (CEE), provide flexibility and validate data center networking solutions that work end-to-end.
Well, that marks the end of this week in Auckland, New Zealand. I am off now to Melbourne, Australia for the [IBM System Storage Technical Symposium] next week.
technorati tags: IBM, EKM, TKLM, SKLM, SONAS, SAN32B-E4, Storwize+V7000, CEE, DCB, FCoE, iSCSI, NAS, CIFS, NFS, Ethernet, PCI-SIG, SR-IOV, VEPA, VN-Tag, DCB, L2MP, TRILL, BNT, BOFM, AMM, DAC, 10GBASE-T, DS3500, 40GbE, FCP, FICON, PXE, SVC, Cisco, Nexus5000, RackSwitch
|
This week, I am in Melbourne, Australia for the [IBM System x and System Storage Technical Symposium]. Here is a recap of Day 1:
- IBM Tivoli Storage Productivity Center v4.2.2 Overview and Update
This was an updated version of the presentation I gave last July in Orlando, Florida (see my post [IBM Storage University - Day 1]). Since it might have been a while since the Australian audience had heard about the latest and greatest for Tivoli Storage Productivity Center, I decided to cover the enhancements of 4.2.0, 4.2.1 and 4.2.2 combined.
IBM Tivoli Storage Productivity Center is an important part of IBM's "Storage Hypervisor" solution, combining a single pane of glass for management with the non-disruptive storage virtualization of SVC and Storwize V7000.
- IBM Storwize V7000 and SVC integration with VMware
Alexi Giral from IBM Sydney presented this session on how Storwize V7000 and SVC serve as the "Storage Hypervisor" for VMware server virtualization environments. The focus was on the FCP and iSCSI block-only access modes of these devices, although one could use IBM Storwize V7000 Unified to provide NFS file-level access to VMware. Alexi covered both VMware vSphere v4 and v5, as there are a few differences.
IBM Storwize V7000 and SVC support thin provisioning, VMware's VAAI interface and VMware's Site Recovery Manager, and provide a storage management plug-in for VMware's vCenter. The SVC has extended the distance for split-cluster configurations that support VMware's vMotion live partition mobility and High Availability (HA) up to 300km using active DWDM.
Want best practices for configuring SVC or Storwize V7000 with VMware? IBM has published the [VMware Proof of Practice and Performance Guidelines on the SAN Volume Controller] redbook, and the [VMware Multipathing with the SAN Volume Controller] redpaper.
- Tape Storage Reinvented: What's New and Exciting in the Tape World?
Special thanks to Jim Fisher and Jim Karp for providing the presentation, videos and supporting materials for this session. I gave this as the first break-out session on Tuesday, and then repeated it as the last break-out session on Thursday. Several of the attendees mocked my title, with taunts like "What could be NEW or EXCITING about tape?" I covered four key areas:
- The new TS1140 tape drive, including the corresponding model-JC tape that holds 4TB native (12 TB compressed!).
- The enhanced TS3500 with the Tape Library Connector Shuttle. I had a video that shows how tapes can be sent from one TS3500 tape library string to another.
- The new Linear Tape File System (LTFS), both the single drive edition and the library edition
- The new 3592-C07 FICON controller for our mainframe clients
By the end of the session, the folks who had taunted me were honestly impressed: they learned a few things, and had not realized how much has been developed recently in the world of tape.
This week, Melbourne is also hosting [The Presidents Cup] golf tournament at the [Royal Melbourne Golf Club], featuring 24 of the top golfers, including Tiger Woods, as well as [celebrities like Michael Jordan].
technorati tags: IBM, Storage, Symposium, Melbourne, Australia, TPC, Storage Hypervisor, Storwize V7000, SVC, VAAI, vMotion, DWDM, TS1140, 3592-E07, TS3500, LTFS, Emmy, 3592-C07, FICON, The Presidents Cup, Royal Melbourne Golf Club, Tiger Woods, Michael Jordan
|
This week, IBM made over a dozen announcements related to IBM storage products. Here is part 2 of my overview:
- IBM System Storage® DS8000 series microcode
One of the advantages of acquiring XIV as IBM's other high-end disk system is that it allows the DS8000 team to focus on the IBM i and z/OS operating systems. As a result, IBM DS8000 has over half the mainframe-attach market share.
For both the DS8700 and DS8800 models, IBM Easy Tier now supports sub-LUN automated tiering across three storage tiers: Solid-State Drives, high-performance spinning disk drives (15K and 10K RPM), and high-capacity disk drives (7200 RPM).
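As a rough illustration of what sub-LUN tiering does, here is a minimal sketch that ranks extents by recent I/O "heat" and places the hottest ones on the fastest tier. The extent granularity, tier capacities and ranking rule are my own simplification, not the actual Easy Tier microcode algorithm:

```python
# Simplified heat-based placement across the three tiers named above.
def place_extents(extent_heat, ssd_slots, ent_slots):
    """extent_heat: {extent_id: recent I/O count} -> {extent_id: tier}."""
    ranked = sorted(extent_heat, key=extent_heat.get, reverse=True)
    placement = {}
    for i, ext in enumerate(ranked):
        if i < ssd_slots:
            placement[ext] = "SSD"
        elif i < ssd_slots + ent_slots:
            placement[ext] = "Enterprise 15K/10K"
        else:
            placement[ext] = "Nearline 7200"
    return placement

heat = {"e1": 9000, "e2": 120, "e3": 4500, "e4": 10}  # invented workload
print(place_extents(heat, ssd_slots=1, ent_slots=2))
# the hottest extent lands on SSD, the coldest on nearline
```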
For System z customers, the latest DS8000 microcode has synergy with z/OS and GDPS, now supporting 4x larger EAV volumes, faster high-performance FICON (zHPF), and Workload Manager (WLM) integration with the I/O Priority Manager. IBM has a world record SAP performance of 59 million account postings per hour. DB2 v10 for z/OS queries were measured at 11x faster using the new zHPF feature.
- IBM System Storage® DS8800 systems
On the hardware side, the DS8800 now supports a fourth frame, to hold a total of over 1,500 disk drives. Yes, we have customers for whom three frames wasn't enough, and they wanted more.
IBM is now also offering new drive options. Small form factor (2.5 inch) drives now include 300GB 15K RPM and 900GB 10K RPM models. But wait! There's more! The DS8800 is no longer an SFF-only box: it now allows for mixing in large form factor (3.5 inch) drives, starting with the 3TB NL-SAS 7200 RPM drive.
- IBM XIV® Storage System Gen3
We announced the XIV Gen3 already, but we have two enhancements.
First, we now offer a model based entirely on 3TB NL-SAS drives. If you are thinking, "What, IBM is going to put 3TB drives into everything?" Yup. Once we go through all the pain and suffering of qualifying a drive, we make sure we get our money's worth!
Secondly, we now have an iPad application to manage the XIV. This has nothing to do with Apple CEO Steve Jobs passing away last week; it was merely coincidence.
- IBM Real-time Compression Appliances™ STN6500 and STN6800 V3.8
The latest software for the RtCA now supports Microsoft SMB v2, and offers enhanced reporting so that storage admins know exactly what compression ratios they are achieving for different file extensions.
- IBM System Storage EXP2500 Express®
The EXP2500 is for direct-attach situations, like the IBM BladeCenter. IBM adds LFF 3.5-inch 3TB NL-SAS drives, SFF 2.5-inch 300GB 15K RPM SAS drives, and 900GB 7200 RPM NL-SAS drives.
My colleague Curtis Neal refers to these as "B.F.D" announcements, which of course stands for Bigger, Faster, Denser!
technorati tags: IBM, DS8000, DS8700, DS8800, Easy Tier, FICON, zHPF, WLM, DB2, SAS, NL-SAS, XIV, STN6500, STN6800, EXP2500, Curtis Neal
|
Continuing my drawn-out coverage of IBM's big storage launch of February 9, today I'll cover the IBM System Storage TS7680 ProtecTIER data deduplication gateway for System z.
On the host side, the TS7680 connects to mainframe systems running z/OS or z/VM over FICON attachment, emulating an automated tape library with 3592-J1A devices. The TS7680 includes two controllers that emulate the 3592 model C06, with 4 FICON ports each. Each controller emulates up to 128 virtual 3592 tape drives, for a total of 256 virtual drives per TS7680 system. The mainframe sees up to 1 million virtual tape cartridges, each up to 100GB raw capacity, before compression. For z/OS, the automated library has the full SMS Tape and Integrated Library Management capability that you would expect.
Inside, the two control units are both connected to a redundant clustered pair of ProtecTIER engines running the HyperFactor deduplication algorithm, which processes the deduplication inline, as data is ingested, rather than using the post-process approach of other deduplication solutions. These engines are similar to the TS7650 gateway machines for distributed systems.
On the back end, these ProtecTIER deduplication engines are then connected to external disk, up to 1PB. If you get a 25x data deduplication ratio on your data, that would be 25PB of mainframe data stored on only 1PB of physical disk. The disk can be any disk supported by ProtecTIER over the FCP protocol: not just the IBM System Storage DS8000, but also the IBM DS4000, DS5000 or IBM XIV storage system, various models of EMC and HDS, and of course the IBM SAN Volume Controller (SVC) with all of its supported disk systems.
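A quick sanity check of that arithmetic, using the 25x ratio from the example above (actual ratios vary with the data):

```python
# Logical capacity presented to the mainframe for a given amount of
# physical back-end disk and an assumed deduplication ratio.
def effective_capacity(physical, dedup_ratio):
    return physical * dedup_ratio

print(effective_capacity(1, 25))  # 1PB physical -> 25PB logical
```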
Here is the [Announcement Letter].
technorati tags: ts7680, z/OS, z/VM, FICON, 3592-J1A, SMS Tape, TS7650, ProtecTIER, mainframe, FCP, DS8000, DS5000, DS4000, XIV, SVC