Continuing my coverage of the [IBM System x and System Storage Technical Symposium], I thought I would start with some photos. I took these with my cell phone and, without realizing how much it would cost, uploaded them to Flickr at international data roaming rates. Oops!
Here are some of the banners used at the conference. Each break-out session room was outfitted with a "Presentation Briefcase" that had everything a speaker might need, including power plug adapters and dry-erase markers for the whiteboard. What a clever idea!
Here is a recap of the third and final day:
- Understanding IBM's Storage Encryption Options
Special thanks to Jack Arnold for providing his deck for this presentation. I presented IBM's leadership in encryption standards, including the [OASIS Key Management Interoperability Protocol] (KMIP), which allows many software and hardware vendors to interoperate. IBM offers the IBM Tivoli Key Lifecycle Manager (TKLM v2) for the Windows, Linux, AIX and Solaris operating systems, and the IBM Security Key Lifecycle Manager (v1.1) for z/OS.
Encrypting data at rest can be done several ways: by the application on the host server, in a SAN-based switch, or at the storage system itself. I presented how IBM Tivoli Storage Manager, the IBM SAN32B-E4 SAN switch, and various disk and tape devices accomplish this level of protection.
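To make the first of those options concrete, here is a minimal sketch of what application-layer encryption at rest can look like when the key comes from a KMIP-speaking key server. It uses the open-source PyKMIP client library as a stand-in; the hostname, port, and file names are my own illustrative assumptions, not a TKLM configuration.

```python
# Illustrative sketch: application-layer encryption at rest with a key
# fetched over KMIP. PyKMIP stands in for a KMIP-speaking key server
# such as TKLM; hostname, port, and file names are assumptions.
import os
from kmip.pie.client import ProxyKmipClient
from kmip import enums
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

client = ProxyKmipClient(hostname='kmip.example.com', port=5696)
with client:
    # Create a 256-bit AES key on the key server, then fetch its material.
    key_id = client.create(enums.CryptographicAlgorithm.AES, 256)
    key = client.get(key_id)

aesgcm = AESGCM(key.value)      # raw key bytes from the KMIP object
nonce = os.urandom(12)          # 96-bit nonce, standard for AES-GCM
with open('payroll.db', 'rb') as f:
    ciphertext = aesgcm.encrypt(nonce, f.read(), None)
with open('payroll.db.enc', 'wb') as f:
    f.write(nonce + ciphertext) # keep the nonce alongside the ciphertext
```

Because the key material lives on the key server rather than beside the data, a lost or discarded disk exposes only ciphertext.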
- NAS @ IBM
Rich Swain, IBM Field Technical Sales Specialist for NAS solutions, provided an overview of IBM's NAS strategy and the three products: Scale-Out Network Attached Storage (SONAS), Storwize V7000 Unified, and N series.
- IBM System Networking Convergence CEE/DCB/FCoE
Mike Easterly, IBM Global Field Marketing Manager for IBM System Networking, presented on network convergence. He emphasized that "Convergence is not just FCoE!" Rather, it brings together FCoE with iSCSI, CIFS, NFS and other Ethernet-based protocols. In his view, "All roads lead to Ethernet!"
There are a lot of new standards that didn't exist a few years ago, such as PCI-SIG's Single Root I/O Virtualization [SR-IOV], Virtual Ethernet Port Aggregator [VEPA], [VN-Tag], Data Center Bridging [DCB], Layer-2 Multipath [L2MP], and my favorite: Transparent Interconnect of Lots of Links [TRILL].
Last year, IBM acquired Blade Network Technologies (BNT), the company that made IBM BladeCenter's Advanced Management Module (AMM) and BladeCenter Open Fabric Manager (BOFM). BNT also makes Ethernet switches, so it has been merged with IBM's System Storage team, forming the IBM System Storage and Networking team. Most of today's 10GbE is either fiber optic, Direct Attach Copper (DAC) that supports cable lengths up to 8.5 meters, or 10GBASE-T, which provides longer distances over twisted pair. IBM's DS3500 uses 10GBASE-T for its 10GbE iSCSI support.
Last month, IBM announced 40GbE! I missed that one. The IT industry also expects to deliver 100GbE by 2013. For now, these will be used as up-links between switches, as most servers don't have the capacity to pump this much data through their buses. With 40GbE and 100GbE, it would be hard to ignore Ethernet as the common network standard to drive convergence.
Fibre Channel protocols, such as FCP and FICON, are still the dominant storage networking technology, but this is expected to peak around 2013 and decline thereafter in favor of iSCSI, NAS and FCoE technologies. Enhancements made to Ethernet to support FCoE, like Priority-based Flow Control, have already helped iSCSI and NAS deployments as well.
The iSCSI protocol is being used with Microsoft Exchange, PXE boot, server virtualization hypervisors like VMware and Hyper-V, and large database and OLTP workloads. IBM's SVC, Storwize V7000, XIV, DS5000, DS3500 and N series all support iSCSI.
IBM's [RackSwitch] family of products can help offload traffic at about $500 per port, compared to roughly $2000 per port for traditional converged top-of-rack switches like the IBM SAN32B or Cisco Nexus 5000.
IBM's System Networking strategy has two parts. For Ethernet, IBM offers its own System Networking product line and continues its partnership with Juniper Networks. For Fibre Channel and FCoE, IBM continues strategic partnerships with Brocade and Cisco. IBM will lead the industry, help drive open standards to adopt Converged Enhanced Ethernet (CEE), provide flexibility, and validate data center networking solutions that work end-to-end.
Well, that marks the end of this week in Auckland, New Zealand. I am off now to Melbourne, Australia for the [IBM System Storage Technical Symposium] next week.
technorati tags: IBM, EKM, TKLM, SKLM, SONAS, SAN32B-E4, Storwize+V7000, CEE, DCB, FCoE, iSCSI, NAS, CIFS, NFS, Ethernet, PCI-SIG, SR-IOV, VEPA, VN-Tag, L2MP, TRILL, BNT, BOFM, AMM, DAC, 10GBASE-T, DS3500, 40GbE, FCP, FICON, PXE, SVC, Cisco, Nexus5000, RackSwitch
Continuing my coverage of the 30th annual [Data Center Conference], here is a recap of Wednesday's breakout sessions.
- Private Cloud Computing at Bank of America – One Year Later
Prentice Dees, Senior VP for Systems Automation Engineering at Bank of America, did the happy dance celebrating their success implementing a private cloud. Bank of America, which merged with Merrill Lynch, has 29 million customers residing in over 100 countries, and 5,900 retail offices in 40 countries. They manage $1 trillion US dollars in deposits, and $2.2 trillion in assets.
Rather than IaaS or PaaS, his team focused on Application-as-a-Service (AaaS). Their goal is to transform and move IT out of the way of the business. In his view, if a human has to touch a keyboard, then his team has failed.
He divides the work up into three layers:
- Bones: the physical components, such as servers, storage and switches, that provide capacity and interconnect.
- Muscle: the translation layer, providing actions and reporting.
- Brains: the layer for intelligent automation.
Provisioning a new server with storage involves three sets of steps. The first set involves requesting approval. The second set deploys the server. The third involves installing the application, loading the data, and using it until end-of-life. The second set used to take 14 to 60 days; it has been automated down to one to three hours.
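As a purely hypothetical sketch of that automated middle stage (the step names and functions below are invented, not Bank of America's actual tooling), the win comes from expressing each manual task as a small step in a linear, logged pipeline:

```python
# Hypothetical sketch of an automated "deploy the server" stage.
# All step names and data are invented for illustration.
import logging

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("provision")

def allocate_vm(req):
    log.info("allocating VM")
    return {**req, "vm": "vm-0042"}

def attach_storage(req):
    log.info("attaching thin-provisioned storage")
    return {**req, "lun": "lun-0042"}

def configure_network(req):
    log.info("configuring network")
    return {**req, "vlan": 310}

def register_cmdb(req):
    log.info("registering in CMDB")
    return req

PIPELINE = [allocate_vm, attach_storage, configure_network, register_cmdb]

def provision(request):
    """Run every step in order; an exception halts the pipeline."""
    for step in PIPELINE:
        request = step(request)
    return request

provision({"app": "payments", "cpu": 4, "ram_gb": 16})
```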
The result is that server utilization has improved 10x, storage is over-provisioned 4x, and they are now hosting over 11,000 server images, saving $20 million US dollars. Not only is this a lower cost per application deployed, but the process allows for lower-skilled personnel. He has over 500TB of virtual storage deployed, using thin provisioning, on only 128TB of physical disk (500/128 is roughly that 4x over-provisioning ratio). But they have only scratched the surface: only 15 to 20 percent of servers are virtualized in this manner, and they want to get to 80 percent within the next three years.
What makes an application not "Cloud-ready"? Some applications consume an entire server; in other cases, code changes are required. If possible, try to split large applications into smaller Cloud-ready chunks. Prentice is a big fan of Linux and Open Source solutions.
How many people are on his team? There are currently 16 to 20 people, but at its peak there were 30.
Rather than wasting time on capacity planning, his team focuses on a cost recovery model. Seed capital in combination with rock-solid cost recovery is the way to go. "All models are wrong," the saying goes, "but some are useful!"
A nice side benefit of this new approach is that maintenance is greatly improved. Rather than rushing to fix problems, you roll the application over to another host machine, and then take your time fixing the failed hardware.
How does the team deal with requests for dedicated resources? Give them the keys to their own miniature private cloud. Let them provision from their dedicated resources using the same methods you use to provision everyone else. This allows them to get comfortable with the process, and eventually join the rest of the shared pool. Analytics can be used to find "rogue VMs" that don't play well with others.
Their automation is a mix of commercial and open source software, with home-grown scripts. They have one "Orchestration Management Database" (OMDB) to manage multiple disparate Configuration Management Databases (CMDBs). Chargeback is not quite individual pay-per-use, but is done more at the departmental level.
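The OMDB-over-CMDBs arrangement is essentially a federation layer: one front door that hides which repository actually owns a record. A toy sketch of the pattern (the class and method names are mine, not Bank of America's):

```python
# Toy sketch of an OMDB federating lookups across disparate CMDBs.
# Interfaces and data are invented for illustration.
class CMDB:
    def __init__(self, name, records):
        self.name = name
        self.records = records      # e.g. {"vm-0042": {"owner": "payments"}}

    def lookup(self, ci_id):
        return self.records.get(ci_id)

class OMDB:
    """Single front door over several configuration databases."""
    def __init__(self, cmdbs):
        self.cmdbs = cmdbs

    def lookup(self, ci_id):
        for cmdb in self.cmdbs:
            hit = cmdb.lookup(ci_id)
            if hit is not None:
                return {"source": cmdb.name, **hit}
        raise KeyError(ci_id)

omdb = OMDB([
    CMDB("legacy-cmdb", {"vm-0042": {"owner": "payments"}}),
    CMDB("merrill-cmdb", {"vm-0077": {"owner": "trading"}}),
])
print(omdb.lookup("vm-0077"))   # {'source': 'merrill-cmdb', 'owner': 'trading'}
```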
- Aging Data: The Challenges of Long-Term Data Retention
The analyst defined "aging data" as any data that is older than 90 days. A quick poll of the audience showed what type of data was the biggest challenge.
In addition to aging data, the analyst used the term "vintage" to refer to aging data that you might actually need in the future, and "digital waste" for data you have no use for. She also defined "orphaned" data as data that has been archived but is not actively owned or managed by anyone.
You need policies for retention, deletion, legal hold, and access; most people forget to include access policies. A second audience poll asked how people are dealing with data and retention policies.
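One way to keep all four policy types from drifting apart is to express them in a single machine-readable table that the archive tooling consults. A hypothetical sketch (the record classes and durations are invented examples, not the analyst's figures):

```python
# Hypothetical retention policy table; each record class answers all four
# questions: retention, deletion, legal hold, and access.
from datetime import timedelta

POLICIES = {
    "email": {
        "retain_for": timedelta(days=3 * 365),
        "delete_after_retention": True,
        "legal_hold_blocks_delete": True,
        "access": ["records-management", "legal"],
    },
    "financial-records": {
        "retain_for": timedelta(days=7 * 365),
        "delete_after_retention": True,
        "legal_hold_blocks_delete": True,
        "access": ["finance", "audit"],
    },
}

def may_delete(record_class, age, on_legal_hold):
    """Apply the deletion and legal-hold rules for one record."""
    p = POLICIES[record_class]
    if on_legal_hold and p["legal_hold_blocks_delete"]:
        return False
    return p["delete_after_retention"] and age >= p["retain_for"]

print(may_delete("email", timedelta(days=4 * 365), on_legal_hold=False))  # True
print(may_delete("email", timedelta(days=4 * 365), on_legal_hold=True))   # False
```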
The analyst predicts that half of all applications running today will be retired by 2020. Tools like "IBM InfoSphere Optim" can help with application retirement by preserving both the data and the metadata needed to make sense of the information after the application is no longer available. Application retirement has a strong ROI.
Another problem is that unstructured data keeps growing, but nobody is given the responsibility of "archivist" for this data, so it goes unmanaged and becomes a "dumping ground". Long-term retention involves hardware, software and process working together. The reason purpose-built archive hardware (such as IBM's Information Archive or EMC's Centera) fell short was that companies failed to put the appropriate software and process in place to complete the solution.
Cloud computing will help. The analyst estimates that 40 percent of new email deployments will be done in the cloud, such as IBM LotusLive, Google Apps, and Microsoft Office 365. This offloads the archive requirement to the public cloud provider.
One case study was the University of Minnesota Supercomputing Institute, which has three tiers of storage: 136TB of fast disk for scratch space, 600TB of slower disk for project space, and 640TB of tape for long-term retention.
A final audience poll asked what people are using today to hold their long-term retention data.
The bottom line is that retention of aging data is a business problem, a technology problem, an economic problem, and a 100-year problem.
- A Case Study for Deploying a Unified 10G Ethernet Network
Brian Johnson from Intel presented the latest developments in 10Gb Ethernet. Case studies from Yahoo and NASA, both members of the [Open Data Center Alliance], found that upgrading from 1Gb to 10Gb Ethernet was more than just an improvement in speed. Other benefits include:
- 45 percent reduction in energy costs for Ethernet switching gear
- 80 percent fewer cables
- 15 percent lower costs
- doubled bandwidth per server
Ruiping Sun, from Yahoo, found that 10Gb FCoE achieved 920 MB/sec, which was 15 percent faster than the 8Gb FCP they were using before (8Gb Fibre Channel tops out near 800 MB/sec of effective data rate, so 920 MB/sec works out to roughly a 15 percent gain).
IBM, Dell and other Intel-based servers support Single Root I/O Virtualization (SR-IOV). NASA found that cloud-based HPC is feasible with SR-IOV: using IBM General Parallel File System (GPFS) and 10Gb Ethernet, they were able to replace a previous environment based on 20Gbps DDR InfiniBand.
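On a Linux host with an SR-IOV capable adapter, the virtual functions that make this possible can be inspected and enabled through sysfs. A minimal sketch, assuming the NIC shows up as eth0 (the interface name and VF count will vary; requires root):

```python
# Minimal sketch: inspect and enable SR-IOV virtual functions via sysfs
# on Linux. Assumes an SR-IOV capable NIC named eth0; run as root.
from pathlib import Path

dev = Path("/sys/class/net/eth0/device")

# How many virtual functions the adapter's PCIe function supports.
total = int((dev / "sriov_totalvfs").read_text())
print(f"eth0 supports up to {total} virtual functions")

# Carve out 8 VFs that can be passed straight into guest VMs,
# bypassing the hypervisor's software switch for near-native I/O.
(dev / "sriov_numvfs").write_text("8")
print("VFs now enabled:", (dev / "sriov_numvfs").read_text().strip())
```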
While some companies are still arguing over whether to implement a private cloud, an archive retention policy, or 10Gb Ethernet, other companies have shown great success moving forward.
technorati tags: IBM, BofA, Prentice+Dees, AaaS, Linux, Open+Source, OMDB, CMDB, Aging+data, Archive, Retention, InfoSphere, Optim, LotusLive, University+Minnesota, 10GbE, SR-IOV, GPFS, private+cloud