Continuing my coverage of the [IBM System x and System Storage Technical Symposium], I thought I would start with some photos. I took these with my cell phone and, without realizing how much it would cost, uploaded them to Flickr at international data roaming rates. Oops!
Here are some of the banners used at the conference. Each break-out session room was outfitted with a "Presentation Briefcase" that had everything a speaker might need, including power plug adapters and dry-erase markers for the whiteboard. What a clever idea!
Here is a recap of the third and final day:
- Understanding IBM's Storage Encryption Options
Special thanks to Jack Arnold for providing his deck for this presentation. I presented IBM's leadership in encryption standards, including the [OASIS Key Management Interoperability Protocol] (KMIP), which allows many software and hardware vendors to interoperate. IBM offers the IBM Tivoli Key Lifecycle Manager (TKLM v2) for Windows, Linux, AIX and Solaris operating systems, and the IBM Security Key Lifecycle Manager (v1.1) for z/OS.
Encrypting data at rest can be done several ways: by the application at the host server, in a SAN-based switch, or at the storage system itself. I presented how IBM Tivoli Storage Manager, the IBM SAN32B-E4 SAN switch, and various disk and tape devices accomplish this level of protection.
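To make the data-key handling more concrete, here is a minimal Python sketch of the envelope-encryption pattern that encrypting devices follow when paired with an external key manager: the data is encrypted with a data key, and only a wrapped (encrypted) copy of that key is stored alongside the media. This is purely an illustration under my own assumptions, with a local Fernet key standing in for the key manager's key-encryption key; it is not how TKLM, SKLM or any IBM device is actually implemented.

```python
# Minimal sketch of envelope encryption for data at rest (illustrative only).
# A local Fernet key stands in for the key manager's key-encryption key (KEK);
# real deployments would fetch and unwrap keys from TKLM/SKLM over KMIP.
from cryptography.fernet import Fernet

kek = Fernet(Fernet.generate_key())      # key-encryption key, held by the key manager
data_key = Fernet.generate_key()         # per-volume or per-cartridge data key
wrapped_key = kek.encrypt(data_key)      # only this wrapped form is stored with the media

ciphertext = Fernet(data_key).encrypt(b"records at rest")

# To read the data back, the device presents the wrapped key for unwrapping.
plaintext = Fernet(kek.decrypt(wrapped_key)).decrypt(ciphertext)
assert plaintext == b"records at rest"
```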
- NAS @ IBM
Rich Swain, IBM Field Technical Sales Specialist for NAS solutions, provided an overview of IBM's NAS strategy and the three products: Scale-Out Network Attached Storage (SONAS), Storwize V7000 Unified, and N series.
- IBM System Networking Convergence CEE/DCB/FCoE
Mike Easterly, IBM Global Field Marketing Manager for IBM System Networking, presented on network convergence. He emphasized that "Convergence is not just FCoE!" Rather, it is bringing together FCoE with iSCSI, CIFS, NFS and other Ethernet-based protocols. In his view, "All roads lead to Ethernet!"
There are a lot of new standards that didn't exist a few years ago, such as PCI-SIG's Single Root I/O Virtualization [SR-IOV], Virtual Ethernet Port Aggregator [VEPA], [VN-Tag], Data Center Bridging [DCB], Layer-2 Multipath [L2MP], and my favorite: Transparent Interconnect of Lots of Links [TRILL].
Last year, IBM acquired Blade Network Technologies (BNT), the company that made IBM BladeCenter's Advanced Management Module (AMM) and BladeCenter Open Fabric Manager (BOFM). BNT also makes Ethernet switches, so it has been merged with IBM's System Storage team, forming the IBM System Storage and Networking team. Most of today's 10GbE is either fiber optic, Direct Attach Copper (DAC) that supports cable lengths up to 8.5 meters, or 10GBASE-T, which provides longer distances over twisted-pair cabling. IBM's DS3500 uses 10GBASE-T for its 10GbE iSCSI support.
Last month, IBM announced 40GbE! I missed that one. The IT industry also expects to deliver 100GbE by 2013. For now, these will be used as uplinks between switches, as most servers don't have the capacity to pump this much data through their buses. With 40GbE and 100GbE, it would be hard to ignore Ethernet as the common network standard to drive convergence.
Fibre Channel protocols, such as FCP and FICON, are still the dominant storage networking technology, but their use is expected to peak around 2013 and start declining thereafter in favor of iSCSI, NAS and FCoE technologies. Enhancements made to Ethernet to support FCoE, such as "Priority-based Flow Control", have already helped iSCSI and NAS deployments as well.
The iSCSI protocol is being used with Microsoft Exchange, PXE boot, server virtualization hypervisors like VMware and Hyper-V, as well as large database and OLTP workloads. IBM's SVC, Storwize V7000, XIV, DS5000, DS3500 and N series all support iSCSI.
IBM's [RackSwitch] family of products can help offload traffic at roughly $500 per port, compared to the traditional $2,000 per port for IBM SAN32B or Cisco Nexus 5000 converged top-of-rack switches.
IBM's System Networking strategy has two parts. For Ethernet, offer its own IBM System Networking product line as well as continue its partnership with Juniper Networks. For Fibre Channel and FCoE, continue strategic partnerships with Brocade and Cisco. IBM will lead the industry, help drive open standards to adopt Converged Enhanced Ethernet (CEE), provide flexibility and validate data center networking solutions that work end-to-end.
Well, that marks the end of this week in Auckland, New Zealand. I am off now to Melbourne, Australia for the [IBM System Storage Technical Symposium] next week.
technorati tags: IBM, EKM, TKLM, SKLM, SONAS, SAN32B-E4, Storwize+V7000, CEE, DCB, FCoE, iSCSI, NAS, CIFS, NFS, Ethernet, PCI-SIG, SR-IOV, VEPA, VN-Tag, L2MP, TRILL, BNT, BOFM, AMM, DAC, 10GBASE-T, DS3500, 40GbE, FCP, FICON, PXE, SVC, Cisco, Nexus5000, RackSwitch
Tags: 
cee
iscsi
nas
fcp
dcb
40gbe
storwize+v7000
pxe
svc
trill
l2mp
vepa
vn-tag
dac
sr-iov
cisco
pci-sig
rackswitch
ibm
nexus5000
ekm
nfs
10gbase-t
amm
ds3500
sklm
fcoe
ficon
sonas
tklm
bnt
bofm
cifs
ethernet
san32b-e4
|
Mastering the art of stretching out a week-long event into two weeks' worth of blog posts, I continue my
coverage of the [Data Center 2010 conference]. On Tuesday afternoon, I attended several sessions that focused on technologies for Cloud Computing.
(Note: It appears I need to repeat this. The analyst company that runs this event has kindly asked me not to mention their name on this blog, display any of their logos, mention the names of any of their employees, include photos of any of their analysts, include slides from their presentations, or quote verbatim any of their speech at this conference. This is all done to protect and respect their intellectual property that their members pay for. The pie charts included on this series of posts were rendered by Google Charting tool.)
- Converging Storage and Network Fabrics
The analysts presented a set of alternative approaches to consolidating your SAN and LAN fabrics. Here were the choices discussed:
- Fibre Channel over Ethernet (FCoE) - This requires 10GbE with Data Center Bridging (DCB) standards, what IBM refers to as Converged Enhanced Ethernet (CEE). Converged Network Adapters (CNAs) support FC, iSCSI, NFS and CIFS protocols on a single wire.
- Internet SCSI (iSCSI) - This works on any flavor of Ethernet, is fully routable, and was developed in the 1990s by IBM and Cisco. Most 1GbE and all 10GbE Network Interface Cards (NIC) support TCP Offload Engine (TOE) and "boot from SAN" capability. Native support for iSCSI is widely available in most hypervisors and operating systems, including VMware and Windows. DCB Ethernet is not required for iSCSI, but can be helpful. Many customers keep their iSCSI traffic in a separate network (often referred to as an IP SAN) from the rest of their traditional LAN traffic. (See the sketch after this list.)
- Network Attached Storage (NAS) - NFS and CIFS have been around for a long time and work with any flavor of Ethernet. Like iSCSI, DCB is not required but can be helpful. NAS went from being used for files only, to email and databases, and is now viewed as the easiest deployment option for VMware. VMotion is able to move VM guests from one host to another within the same LAN subnet.
- Infiniband or PCI extenders - This approach allows many servers to share a smaller number of NICs and HBAs. While Infiniband was once limited to short copper cable distances, recent advances now allow fiber optic cables for distances up to 150 meters.
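As promised above, here is a tiny Python sketch that underlines why iSCSI is so easy to route: an iSCSI target portal is just something listening on a TCP port (3260 by default), reachable over any IP network. The address below is a documentation placeholder, not a real target.

```python
# Illustrative only: iSCSI rides on ordinary TCP (port 3260 by default),
# which is why it works on any flavor of Ethernet and routes like any IP traffic.
import socket

def iscsi_portal_reachable(host: str, port: int = 3260, timeout: float = 2.0) -> bool:
    """Return True if something is listening on the target's iSCSI portal."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

print(iscsi_portal_reachable("192.0.2.10"))   # placeholder address from the documentation range
```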
An interactive poll of the audience offered some insight on plans to switch from FC/FICON to Ethernet-based storage:
An interactive poll of the audience offered some insight on what portion of storage is FCP/FICON attached:
An interactive poll of the audience offered some insight on what portion of storage is Ethernet-attached:
An interactive poll of the audience offered some insight on what portion of servers are already using some Ethernet-attached storage:
Each vendor has its own style. HP provides homogeneous solutions, having acquired 3COM and broken off relations with Cisco. Cisco offers tight alliances over closed proprietary solutions, publicly partnering with both EMC and NetApp for storage. IBM offers loose alliances, with IBM-branded solutions from Brocade and BNT, as well as reselling arrangements with Cisco and Juniper. Oracle has focused on Infiniband instead for its appliances.
The analysts predict that IBM will be the first to deliver 40 GbE, thanks to its BNT acquisition. They predict that by 2014, Ethernet approaches (NAS, iSCSI, FCoE) will be the core technology for all but the largest SANs, and that iSCSI and NAS will be more widespread than FCoE. As for cabling, the analysts recommend copper within the rack, but fiber optic between racks. They also recommend considering SAN management software, such as IBM Tivoli Storage Productivity Center.
The analysts felt that the biggest inhibitor to merging SANs and LANs will be organizational issues. SAN administrators consider LAN administrators undisciplined "cowboys," unwilling to focus on 24x7 operational availability, redundancy or business continuity. LAN administrators consider SAN administrators "Luddites," afraid or unwilling to accept FCoE, iSCSI or NAS approaches.
- Driving Innovation through Innovation
Mr. Shannon Poulin from Intel presented Intel's advancements in Cloud Computing. Let's start with some facts and predictions:
Today:
- About 25 percent of the [world's population] is connected to the Internet
- There are over 2.5 billion photos on Facebook, which runs on 30,000 servers
- 30 billion videos viewed every month
- Nearly all Internet-connected devices are either computers or phones
|
By 2015:
- An additional billion people on the Internet
- Cars, televisions, and households will also be connected to the Internet
- The world will need 8x more network bandwidth, 12x more storage, and 20x more compute power
|
To avoid confusion between on-premise and off-premise deployments, Intel defines "private cloud" as "single tenant" and "public cloud" as "multi-tenant". Clouds should be automated, efficient, simple, secure, and interoperable enough to allow federation of resources across providers. He also felt that Clouds should be "client-aware" so that they know what devices they are talking to and optimize the results accordingly. For example, if you are watching video on a small 320x240 smartphone screen, it makes no sense for the Cloud server to push out 1080p. All devices are going through a connected/disconnected dichotomy: they can do some things while disconnected, but other things only while connected to the Internet or Cloud provider.
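Here is a small Python sketch of what "client-aware" could mean in practice: pick the rendition that fits the requesting device instead of always pushing out the largest one. The rendition list and screen sizes are invented for illustration; they are not from Intel's presentation.

```python
# Hypothetical "client-aware" delivery: choose a video rendition that fits the
# requesting device rather than always pushing the full 1080p stream.
RENDITIONS = [(320, 240), (640, 360), (1280, 720), (1920, 1080)]   # (width, height)

def pick_rendition(screen_w: int, screen_h: int) -> tuple:
    """Return the largest rendition that still fits the client's screen."""
    fitting = [r for r in RENDITIONS if r[0] <= screen_w and r[1] <= screen_h]
    return max(fitting) if fitting else min(RENDITIONS)

print(pick_rendition(320, 240))     # smartphone -> (320, 240); no point sending 1080p
print(pick_rendition(1920, 1200))   # desktop    -> (1920, 1080)
```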
An internal Intel task force investigated what it would take to beat MIPS and IBM POWER processors and found that their own Intel chips lacked key functionality. Intel plans to address some of these shortcomings with a new chip called "Sandy Bridge" sometime next year. They also plan a series of specialized chips that support graphics processing (GPU), network processing (NPU) and so on. He also mentioned that Intel released "Tukwila" earlier this year, the latest version of the Itanium chip. HP is the last major company to still use Itanium for its servers.
Shannon wrapped up the talk with a discussion of two Cloud Computing initiatives. The first is [Intel® Cloud Builders], a cross-industry effort to build Cloud infrastructures based on the Intel Xeon chipset. The second is the [Open Data Center Alliance], comprised of leading global IT managers who are working together to define and promote data center requirements for the cloud and beyond.
- Fabric-Based Infrastructure
The analysts feel that we need to switch from thinking about "boxes" (servers, storage, networks) to thinking about "resources". To this end, they envision a future datacenter with an any-to-any fabric that connects compute, memory, storage, and networking resources as commodities. They feel the current trend towards integrated system stacks is just a marketing ploy by vendors to fatten their wallets. (Ouch!)
A new concept, "disaggregation," caught my attention. When you make cookies, you disaggregate a cup of sugar from the sugar bag, a teaspoon of baking soda from the box, and so on. When you carve a LUN from a disk array, you are disaggregating the storage resources you need for a project. The analysts feel we should be able to do this with server and network resources as well, so that when you want to deploy a new workload, you just disaggregate the bits and pieces in the amounts you actually plan to use and combine them accordingly. IBM calls these combinations "ensembles" of Cloud computing.
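To show what disaggregation could look like in code, here is a rough Python sketch: draw just the amounts you need from shared pools and combine them into an ensemble, the same way you carve a LUN from a disk array. The pool sizes and resource names are invented for illustration; this is not an IBM API.

```python
# Illustrative only: carve ("disaggregate") just what a workload needs from
# shared resource pools and combine the pieces into an ensemble.
pools = {"cores": 512, "memory_gb": 4096, "storage_tb": 900, "bandwidth_gbps": 400}

def disaggregate(pools: dict, request: dict) -> dict:
    """Subtract the requested amounts from the pools, like carving a LUN."""
    if any(request[k] > pools.get(k, 0) for k in request):
        raise ValueError("not enough resource left in the pool")
    for resource, amount in request.items():
        pools[resource] -= amount
    return dict(request)

web_ensemble = disaggregate(pools, {"cores": 16, "memory_gb": 64, "storage_tb": 5})
print(web_ensemble)   # what this workload got
print(pools)          # what remains for the next workload
```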
Very few workloads require "best-of-breed" technologies. Rather, this new fabric-based infrastructure recognizes the reality that most workloads do not. One thing that IT Data Center operations can learn from Cloud Service Providers is their focus on "good enough" deployment.
This means, however, that IT professionals will need new skill sets. IT administrators will need to learn a bit of application development, systems integration, and runbook automation. Network admins need to enter 12-step programs to stop using Command Line Interfaces (CLI). Server admins need to put down their screwdrivers and focus instead on policy templates.
Whether you deploy private, public or hybrid cloud computing, the benefits are real and worth the changes needed in skill sets and organizational structure.
technorati tags: IBM, FCoE, iSCSI, NAS, NFS, CIFS, DCB, CEE, CNA, TOE, SAN, LAN, Convergence, FICON, Ethernet, BNT, Cisco, HP, 3COM, Intel, ODCA, CLI
Tags: 
cli
intel
nfs
fcoe
dcb
ethernet
odca
hp
lan
nas
bnt
ibm
convergence
3com
cee
san
ficon
cna
iscsi
cifs
toe
cisco
|
Continuing my saga regarding my [New Laptop], I managed on
[Wednesday afternoon] to prepare my machine with separate partitions for programs and data. I was hoping to wrap things up on day 2 (Thursday), but nothing went smoothly.
Just before leaving late Wednesday evening, I thought I would try running the "Migration Assistant" overnight by connecting the two laptops with a REGULAR Ethernet cable. The instructions indicated that in "most" cases, two laptops can be connected using a regular "patch cord" cable. These are the kind everyone has, the cable that connects their laptop to the wall socket for a wired connection to the corporate intranet, or their personal computers to their LAN hubs at home. Unfortunately, the connection was not recognized, so I suspected that this was one of the exceptions not covered.
(There are two types of Ethernet cables. The ["patch cord"] connects computers to switches. The ["crossover" cable] connects like devices, such as computers to computers, or switches to switches. Four years ago, I used a crossover cable to transfer my files over, and assumed that I would need one this time as well.)
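For anyone curious about what the crossover actually does, here is a quick sketch of the 10/100BASE-T wiring. A patch cord wires every pin straight through and relies on the switch to connect transmit to receive; a crossover cable does the swap itself (pins 1 and 2 swapped with 3 and 6), which is what two like devices need. Many newer gigabit ports perform Auto-MDI/MDI-X and make the swap electronically, which is presumably why the instructions said a patch cord works in "most" cases. The little Python model below is my own illustration, not anything from the Migration Assistant documentation.

```python
# Illustrative model of why laptop-to-laptop needs a crossover (10/100BASE-T pins).
straight_through = {1: 1, 2: 2, 3: 3, 6: 6}   # patch cord: pins wired straight through
crossover        = {1: 3, 2: 6, 3: 1, 6: 2}   # crossover: TX pair swapped with RX pair

def link_works(cable: dict, auto_mdix: bool) -> bool:
    """Two like devices need the TX/RX swap to come from somewhere."""
    swapped_by_cable = (cable == crossover)
    return swapped_by_cable or auto_mdix       # modern GbE ports can swap internally

print(link_works(straight_through, auto_mdix=False))   # False -> no link recognized
print(link_works(crossover, auto_mdix=False))          # True  -> link comes up
```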
Thursday morning, I borrowed a crossover cable from a coworker. It was bright pink and only about 18 inches long, just enough to have the two laptops side by side; if the pink crossover cable were any shorter, the two laptops would be back to back. I kept the old workstation in the docking station, which allowed it to remain connected to my big flat screen, mouse and keyboard, and to use the docking station's RJ45 port to connect to the corporate intranet. That left the RJ45 port on the left side of the old system to connect via crossover cable to the new system. But that didn't work, of course, because the docking station overrides the side port, so we had to completely "undock" and go native laptop to laptop.
Restarting the Migration Assistant, I unplugged the corporate intranet cable from the old laptop and put one end of the pink cable into the Ethernet port of each laptop. On the new system, Migration Assistant asked me to set up a password and provided an IP address like 169.254.aa.bb with a netmask of 255.255.0.0, and I was supposed to type this IP address on the old system for it to reach out and connect. It still didn't connect.
We tried a different pink crossover cable, no luck. My colleague Harley brought over his favorite "red" crossover cable, which he has used successfully many times, but that didn't work either. The helpful diagnostic advice was to disable all firewall programs on one or both systems.
I disabled Symantec Client Firewall on both systems. Still not working. I even tried booting both systems in "safe" mode, using MSCONFIG to set the reboot option to "safe with networking". Still not working. At this point, I was afraid that I would have to use the alternate approach, which was to connect both systems to our corporate 100 Mbps network, which would be painfully slow. I only have one active LAN cable in my office, so the second computer would have to sit outside in the lobby.
Looking at the IP address on the old system, it was 9.11.xx.yy, assigned by our corporate DHCP, so not even in the same subnet as the new computer. So, I created profiles in ThinkVantage Access Connections on both systems, with 192.168.0.yy netmask 255.255.255.0 on the old system, and 192.168.0.bb on the new system. This worked, and a connection between the two systems was finally recognized.
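In hindsight, the failure is easy to reproduce with Python's ipaddress module: the link-local address that Migration Assistant handed out and the corporate DHCP address sit on entirely different subnets, so neither machine had a route to the other until I put both on 192.168.0.0/24. The specific octets below are placeholders, not my actual addresses.

```python
# Why the two laptops could not see each other (placeholder addresses).
import ipaddress

new_pc = ipaddress.ip_interface("169.254.10.20/16")   # link-local address from Migration Assistant
old_pc = ipaddress.ip_interface("9.11.1.2/8")         # corporate DHCP address
print(old_pc.ip in new_pc.network)    # False -> different subnets, no direct route

# After assigning both machines static addresses in 192.168.0.0/24:
new_pc = ipaddress.ip_interface("192.168.0.11/24")
old_pc = ipaddress.ip_interface("192.168.0.10/24")
print(old_pc.ip in new_pc.network)    # True -> the connection is finally recognized
```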
Since I had 23GB of system files and programs on my old C: drive, and 80GB of data on my old D: drive, I didn't think I would run out of space on my new 40GB C: drive and 245GB D: drive, but I did! The Migration Assistant wanted to put my old D:\Documents onto my new C: drive and refused to continue. I had to remove D:\Documents from the list so that it could continue, processing only the programs and system settings on the C: drive. It took 61 minutes to scan 23GB on my C: drive and identify 12,900 files to move, representing 794MB of data. Seriously? Less than 1GB of data moved!
It then scanned all of the programs I had on my old system, and decided that there were none that needed to be moved or installed on the new system. The closing instructions explained there might be a few programs that need to be manually installed, and some data that needed to be transferred manually.
Given the performance of Migration Assistant, I decided to just set up a direct network mapping of the new D: drive as Y: on my old system, and drag and drop my entire folder over. Even at 1000 Mbps, this still took the rest of the day. I also backed up C:\Program Files using [System Rescue CD] to my external USB drive, and restored it as D:\prog-files, just in case. In retrospect, I realize it would have been faster to have dumped my D: drive to my USB drive and restored it on the new system.
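Some back-of-envelope arithmetic explains the "rest of the day": at a pure gigabit line rate, 80GB would move in about eleven minutes, but dragging and dropping thousands of small files over Windows file sharing rarely gets anywhere near line rate. The effective throughput below is an assumed figure for illustration, not a measurement from my transfer.

```python
# Rough transfer-time arithmetic for moving 80 GB over a gigabit link.
data_gb = 80
line_rate_mbps = 1000                                   # 1000 Mbps link, best case
ideal_minutes = data_gb * 8 * 1000 / line_rate_mbps / 60
print(f"ideal line rate: about {ideal_minutes:.0f} minutes")      # ~11 minutes

assumed_effective_mbps = 30                             # small files + SMB overhead (assumption)
realistic_hours = data_gb * 8 * 1000 / assumed_effective_mbps / 3600
print(f"more realistic: about {realistic_hours:.1f} hours")       # most of a workday
```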
I'll leave the process of re-installing missing programs for Friday.
technorati tags: , Ethernet, Patchcord, Crossover, firewall, SysRescCD, USB
Tags: 
patchcord
ethernet
usb
sysresccd
crossover
firewall
|
It seems everyone is talking about stacks, appliances and clouds.
On StorageBod, fellow blogger Martin Glassborow has a post titled [Pancakes!] He feels that everyone from Hitachi to Oracle is turning into the IT equivalent of the International House of Pancakes [IHOP], offering integrated stacks of software, servers and storage.
Cisco introduced its "Unified Computing System" about a year ago, [reinventing the datacenter with an all-Ethernet approach]. Cisco does not offer its own hypervisor software or storage, so there are two choices. First, Cisco has entered a joint venture, called Acadia, with VMware and EMC, to form the Virtual Computing Environment (VCE) coalition. The resulting stack was named Vblock, which one blogger hyphenated as Vb-lock to call attention to the proprietary, vendor lock-in nature of this stack. Second, Cisco, VMware and NetApp issued a similar set of [Barney press releases] to announce a viable storage alternative for those not married to EMC.
On StorageMojo, fellow blogger Robin Harris presents [A deep dive into Cisco’s UCS]. Here is an excerpt:
"Only when it makes sense. Oracle/Sun has the better argument: when you know exactly what you want from your database, we’ll sell you an integrated appliance that will do exactly that. And it’s fine if you roll your own.
But those are industry-wide issues. There are UCS/VCE specific issue as well:
- Cost. All the integration work among 3 different companies costs money. They aren’t replacing existing costs – they are adding costs. Without, in theory, charging more.
- Lock-in. UCS/Vblock is, effectively, a mainframe with a network backplane.
- Barriers to entry. Are there any? Cisco flagged hypervisor bypass and large memory support as unique value-add – and neither seems any more than a medium-term advantage.
- BOT? Build, Operate, Transfer. In theory Vblocks are easier and faster to install and manage. But customers are asking that Acadia BOT their new Vblocks. The customer benefit over current integrator practice? Lower BOT costs? Or?
- Price. The 3 most expensive IT vendors banding together?
- Longevity. Industry “partnerships” don’t have a good record of long-term success. Each of these companies has its own competitive stresses and financial imperatives, and while the stars may be aligned today, where will they be in 3 years? Unless Cisco is piloting an eventual takeover."
Similar questions and concerns were raised by others, including Greg Ferro's [Questions I Have About Cisco] and Gestalt IT's [One Year Later: Questioning Cisco UCS].
Fellow blogger Bob Sutor (IBM) has an excellent post titled
[Appliances and Linux]. Here is an excerpt:
"In your kitchen you have special appliances that, presumably, do individual things well. Your refrigerator keeps things cold, your oven makes them hot, and your blender purees and liquifies them. There is room in a kitchen for each of these. They work individually but when you are making a meal they each have a role to play in creating the whole.
You could go out and buy the metal, glass, wires, electrical gadgets, and so on that you would need to make each appliance but it is faster, cheaper, and undoubtably safer to buy them already manufactured. For each device you have a choice of providers and you can pay more for additional features and quality.
In the IT world it is far more common to buy the bits and pieces that make up a final solution. That is, you might separately order the hardware components, the operating system, and the applications, and then have someone put them all together for you. If you have an existing configuration you might add more blades or more storage devices.
You don’t have to do this, however, in every situation. Just from a hardware perspective, you can buy a ready-made machine just waiting for the on switch to be flicked and the software installed. Conversely, you might get a pre-made software image with operating system and applications in place, ready to be provisioned to your choice of hardware. We can get even fancier in that the software image might be deployable onto a virtual machine and so be a ready made solution runnable on a cloud.
Thus in the IT world we can talk about hardware-only appliances, software-only appliances (often called virtual software appliances), and complete hardware and software combinations. The last is most comparable to that refrigerator or oven in your kitchen."
If your company was a restaurant, how many employees would you have on hand to produce your own electricity from gas generators, pump your own water from a well, and assemble your own toasters and blenders from wires and motors? I think this is why companies are re-thinking the way they do their own IT.
Rather than business-as-usual, perhaps a mix of pre-configured appliances, consisting of software, server and storage stacked to meet a specific workload, connected to public cloud utility companies, might be the better approach. By 2013, some analysts feel that as many as 20 percent of companies might not even have a traditional IT datacenter anymore.
Fellow blogger David Salgado (Microsoft) rips into the IT industry for [marketing these "stacks" of components as "private clouds"]. Fellow blogger Mary-Jo Foley asks ['Private cloud' = just another buzzword for on-premise datacenter?"], adding more attention to the confusion over the terms private and public cloud. Here's an excerpt that shows Microsoft's thinking in this area:
- "The private cloud:
- “By employing techniques like virtualization, automated management, and utility-billing models, IT managers can evolve the internal datacenter into a ‘private cloud’ that offers many of the performance, scalability, and cost-saving benefits associated with public clouds. Microsoft provides the foundation for private clouds with infrastructure solutions to match a range of customer sizes, needs and geographies.”
- The public cloud:
- “Cloud computing is expanding the traditional web-hosting model to a point where enterprises are able to off-load commodity applications to third-party service providers (hosters) and, in the near future, the Microsoft Azure Services Platform. Using Microsoft infrastructure software and Web-based applications, the public cloud allows companies to move applications between private and public clouds.”
Finally, I saw this from fellow blogger Barry Burke (EMC), aka the Storage Anarchist, titled [a walk through the clouds], which is really a two-part post.
The first part describes a possible future for EMC customers written by EMC employee David Meiri, envisioning a wonderful world with "No more Metas, Hypers, BIN Files...."
The vision is a pleasant one, and not far from reality. While EMC prefers to use the term "private cloud" to refer to both on-premises and off-premises-but-only-your-employees-can-VPN-to-it-and-your-IT-staff-still-manages-it flavors, the overall vision is available today from a variety of Infrastructure-as-a-Service (IaaS), Platform-as-a-Service (PaaS) and Software-as-a-Service (SaaS) providers.
A good analogy for "private cloud" might be a corporate "intranet" that is accessible only within the company's firewall. This allowed internal websites where information to be disseminated to employees could be posted, using standard HTML and standard web browsers that are already deployed on most PCs and workstations. Web pages running on an intranet can easily be moved to an external-facing website without too much rework or trouble.
The second part has Barry claiming that EMC has made progress towards a "Virtual Storage Server" that might be announced at next month's EMC World conference.
Seriously?
When people hear "Storage Virtualization" most immediately think of the two market leaders, IBM SAN Volume Controller and Hitachi Data Systems (HDS) Universal Storage Platform (USP) products. Those with a tape bent might throw in IBM's TS7000 virtual tape libraries or Oracle/Sun's Virtual Storage Manager (VSM). And those focused on software-only solutions might recall Symantec's Veritas Volume Manager (VxVM), DataCore's SANsymphony, or FalconStor's IPStor products.
But what about EMC's failed attempt at storage virtualization, the Invista? After five years of failing to deliver value, EMC has so far publicized only ONE customer reference account, and I estimate that perhaps only a few dozen actual customers are still running on this platform. Compare that to IBM selling tens of thousands of SAN Volume Controllers, and HDS selling thousands of their various USP-V and USP-VM products, and you quickly realize that EMC has a lot of catching up to do. EMC first delivered Invista about 18 months after the IBM SAN Volume Controller, similar to their introduction of Atmos coming 18 months after our Scale-Out File Services (SoFS), and their latest CLARiiON-based V-Max coming out 18 months after IBM's XIV storage system.
So what will EMC's Invista follow-on "Virtual Storage Server" product look like? No idea. It might be another five years before you actually hear about a customer using it. But why wait for EMC to get their act together?
IBM offers solutions TODAY that can make life as easy as envisioned here. IBM offers integrated systems sold as ready-to-use appliances, customized "stacks" that can be built to handle particular workloads, residing on-premises or hosted at an IBM facility, and public cloud "as-a-service" offerings on the IBM Cloud.
technorati tags: StorageBod, Martin Glassborow, IHOP, Hitachi, Oracle, Cisco, UCS, Ethernet, VMware, VCE, NetApp, Barney, StorageMojo, Robin Harris, IBM, Bob Sutor, Linux, Appliances, Stacks, Private Cloud, Public Cloud, Cloud Computing, IaaS, PaaS, SaaS, Barry Burke, EMC, Invista
Tags: 
appliances
invista
stacks
hitachi
public+cloud
bob+sutor
oracle
ethernet
emc
vce
robin+harris
barry+burke
iaas
saas
storagebod
vmware
cloud+computing
ihop
paas
ibm
private+cloud
storagemojo
barney
linux
cisco
netapp
martin+glassborow
ucs
|
Well, it's Tuesday again, and that means IBM announcements! Today we had a major launch, with so many products, services and offerings
that I can't fit them all into a single post, so I will split them up into several posts to give them the attention they deserve. So, in this
post, I will focus on just the networking gear.
- IBM Converged Switch B32
The "Converged" part of this switch refers to Converged Enhanced Ethernet (CEE), which is just a lossless Ethernet that meets certain standards to allow Fibre Channel over Ethernet (FCoE) that are still being discussed between Brocade and Cisco. Thankfully, IBM demanded both Brocade and Cisco stick to open agreed-upon standards, and the rest of the world gets to benefit from IBM's leadership in keeping everything as open and non-proprietary as possible.
The B32 ("B" because it was made by Brocade) starts with 24 10Gb Converged Enhanced Ethernet (CEE) ports, and then you can add eight Fibre Channel ports, for a total of 32 ports, hence the name B32. These are designed to be Top-of-Rack (TOR) switches. Basically, instead of having expensive optical cables for Ethernet and/or Fibre Channel out of each server, you have cheap twinax copper cables connecting the server's Converged Network Adapters (CNA) to this TOR switch, and then you can have the 10Gb Ethernet go to your regular Ethernet LAN, and your 8Gbps FC traffic go to your regular FC SAN. In other words, the CNA serves both the role of an Ethernet Network Interface Card (NIC) as well as a Fibre Channel Host Bus Adapter (HBA) card.
(You might see 8Gbps Fibre Channel represented as 8/4/2 or 2/4/8; this is just to remind you that these 8Gbps FC ports can auto-negotiate down to 4Gbps or 2Gbps for legacy hardware, but not to 1Gbps. If you are still using 1Gbps FC, you need 4Gbps SFP transceivers instead, often shown as 1/2/4 or 4/2/1.)
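Here is a tiny Python model of how that auto-negotiation plays out: each end advertises the speeds its transceiver supports, and the link settles on the fastest speed both ends have in common. The speed sets are the standard 8/4/2 and 4/2/1 combinations mentioned above; the negotiation function itself is just my illustration, not how the switch firmware actually works.

```python
# Illustration of Fibre Channel speed negotiation between transceiver types.
SFP_8G = {8, 4, 2}    # 8 Gbps SFP+ can fall back to 4 or 2 Gbps, but not 1 Gbps
SFP_4G = {4, 2, 1}    # 4 Gbps SFP covers 1 Gbps legacy devices

def negotiate(port_a: set, port_b: set):
    """Settle on the fastest speed (in Gbps) both ends support, or None for no link."""
    common = port_a & port_b
    return max(common) if common else None

print(negotiate(SFP_8G, {4}))    # 4    -> a legacy 4 Gbps device links up fine
print(negotiate(SFP_8G, {1}))    # None -> a 1 Gbps device needs the 4/2/1 transceiver
print(negotiate(SFP_4G, {1}))    # 1    -> works with the 4 Gbps SFP
```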
Here is the IBM [Press Release].
- New SSN-16 module for Cisco directors and switches
When I present SAN gear to sales reps, I often get the question, "What is the difference between a switch and a director?" My quick and simple answer is that switches have fixed ports, but directors have slots into which you can slide different blades or expansion modules. The Cisco MDS9500 series are directors with slots, and the three models provide a hint of their capacity. The last two digits represent the number of total slots, but the first two slots are already taken. In other words, model 9513 has 11 slots, model 9509 has seven slots, and model 9506 has four slots. You can have a 48-port blade in a slot, so in theory, you can have a maximum of 528 ports on the biggest model, the 9513.
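The arithmetic behind those numbers is simple enough to jot down; the quick Python sketch below reproduces them. The note that the two reserved slots hold the supervisor modules is my own addition.

```python
# Quick arithmetic behind the MDS 9500 model numbers described above:
# the last two digits give total slots, two of which are already taken
# (by the supervisor modules), leaving the rest for port blades.
for model, total_slots in [("MDS 9506", 6), ("MDS 9509", 9), ("MDS 9513", 13)]:
    payload_slots = total_slots - 2
    max_ports = payload_slots * 48            # using 48-port Fibre Channel blades
    print(f"{model}: {payload_slots} usable slots, up to {max_ports} FC ports")
# MDS 9513: 11 usable slots, up to 528 FC ports -- matching the numbers above
```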
However, if you want FCIP for disaster recovery, or I/O Acceleration (IOA) for remote e-vaulting of tape libraries, you need a special 18/4 blade. This has 18 FC ports, four 1GbE ports and a special service processor that speaks FCIP or IOA. If you wanted two service processors for FCIP and two for IOA, you would need four of these blades, and that takes up slots that could have been used for 48-port blades instead. The solution? The new SSN-16 has sixteen 1GbE ports and four service processors, so with one slot you can handle the FCIP and IOA processing for which you previously needed four blades, giving you three slots back to use for higher port-density cards.
Even better, you can put this new SSN-16 in the Cisco 9222i. The model 9222i is a "hybrid" switch with 22 fixed ports (18 FC ports, four fixed 1GbE ports, and a service processor, so basically the fixed port version of the 18/4 blade above), but it also has one slot! That one slot can be used for the SSN-16 to give you added FCIP or IOA capability.
Here are the IBM press releases for the [2054 models], and for the
[2062 models].
- FICON package for Cisco 9513 directors
For our mainframe clients, the FICON package includes four 24-port FICON blades and 96 SFP 4Gbps transceivers to fully populate them. Here is the IBM [Press Release].
- Cisco Nexus 5000 series for IBM System Storage
The Cisco Nexus 5000 series is Cisco's entry into the Converged Enhanced Ethernet world; although Cisco sometimes refers to this as Data Center Ethernet (DCE), IBM will continue to use CEE when referring to either Brocade or Cisco gear. These are also Top-of-Rack aggregators that support CNA connections over cheaper twinax copper wires. Model 5010 has 10 ports that can be configured for either 1GbE or 10Gb CEE, 10 ports that are 10Gb CEE, and a slot for an expansion module. Model 5020 has basically twice as much of everything, including two slots instead of one. Since 10Gb Ethernet does not auto-negotiate down to 1GbE, half the ports can be configured to run 1GbE instead. Frankly, that can be seen as wasting your precious Nexus ports on 1GbE connections, so you might use a 1GbE-to-10GbE aggregator that combines a dozen or more 1GbE connections into a few 10GbE links instead.
Today's announcement is that in addition to 10GbE and 4Gbps FC expansion modules, there is now an expansion module that supports 8Gbps Fibre Channel. Here is the IBM [Press Release].
Whether you choose Brocade or Cisco, nearly all of IBM System Storage disk and tape products can work today with Converged Enhanced Ethernet environments, either directly using iSCSI, NFS or CIFS, or using the FCoE methodology.
As you can see, it took me a whole post just to cover our networking gear announcements, and I haven't even covered our disk, tape and cloud storage offerings. I'll get to these in later posts.
technorati tags: IBM, B32, CEE, FCoE, Brocade, Cisco, iSCSI, NFS, CIFS, TOR, CNA, LAN, Fibre Channel, SAN, Ethernet, SFP, transceivers, DCE
Tags: 
tor
brocade
cifs
transceivers
b32
fibre+channel
cisco
dce
sfp
fcoe
iscsi
ethernet
lan
cee
san
cna
ibm
nfs
|