
Day5 Hospitality Suites Data Center Conference 2009
Visits (13638)
Tags: san768b computer+associates brocade gdc09 datadomain vmware lsc28 cisco
Day5 Solution Showcase Data Center 2009
Visits (11908)
Tags: lsc28 gdc09
Day4 Solid State Evolution
Visits (12580)
Continuing my coverage of last week's Data Center Conference 2009, my last breakout session of the week was an analyst presentation on Solid State Drive (SSD) technology. There are two different classes of SSD: consumer-grade multi-level cell (MLC), currently running about US$2 per GB, and enterprise-grade single-level cell (SLC), running about US$4.50 per GB. Roughly 80 to 90 percent of SSD production goes to consumer use cases, such as digital cameras, cell phones, mobile devices, USB sticks, camcorders, media players, gaming devices and automotive. While the two classes are different, the large R&D budgets spent on consumer-grade MLC carry forward to help out enterprise-grade SLC as well. SLC means there is a single charge level per cell, so each cell can hold only a single bit of data, a one or a zero. MLC means the cell can hold multiple levels of charge, each representing a different value; typically MLC can hold 3 to 4 bits of data per cell. Back in 1997, SLC enterprise-grade SSD cost roughly $7,870 per GB. By 2013, consumer-grade 4-bit MLC is expected to be only 24 cents per GB. Engineers are working on the trade-offs between endurance cycles and retention periods. FLASH management software, such as clever wear-leveling algorithms, is the key differentiator.
SSD is 10-15 times more expensive than spinning hard disk drives (HDD), and this price difference is expected to continue for a while because of production volumes: in 4Q09, manufacturers will manufacture 50 Exabytes of HDD, but only 2 Exabytes of SSD. The analyst thinks that SSD will be only roughly 2 percent of the total SAN storage deployed over the next few years. How well did the audience know about SSD technology?
SSD does not change the design objectives of disk systems. We want disk systems that are more scalable and have higher performance. We want to fully utilize our investment. We want intelligent self-management similar to caching algorithms. We want an extensible architecture. What will happen to fast Fibre Channel drives? Take out your Mayan calendar. Already, 84mm 10K RPM drives are end of life (EOL) in 2009. The analyst expects 67mm and 70mm 10K RPM drives to reach EOL in 2010, and 15K RPM drives to reach EOL by 2012. A lot of this is because HDD performance has not kept up with CPU advancements, resulting in an I/O bottleneck. SSD is roughly 10x slower than DRAM, and some architectures use SSD as a cache extension; the IBM N series PAM II card and the Sun 7000 series are two examples. Let's take a look at a disk system with 120 drives, comparing 73GB HDDs versus 32GB SSDs.
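As a rough back-of-the-envelope illustration (my own arithmetic, not the analyst's chart, which also covered dimensions like IOPS and power), here is a sketch comparing raw capacity and media cost for such a 120-drive system, using only the per-GB figures quoted above and an assumed HDD price near the middle of the 10-15x gap:

```python
# Back-of-the-envelope comparison of a 120-drive system: 73GB HDDs vs 32GB SSDs.
# Assumptions (mine, not the analyst's): enterprise SLC SSD at $4.50/GB as quoted,
# and HDD priced at roughly 1/12th of that, consistent with the 10-15x gap above.

DRIVES = 120
HDD_GB, SSD_GB = 73, 32
SSD_PER_GB = 4.50               # enterprise-grade SLC, quoted above
HDD_PER_GB = SSD_PER_GB / 12    # assumed midpoint of the 10-15x price gap

hdd_capacity = DRIVES * HDD_GB  # 8,760 GB raw
ssd_capacity = DRIVES * SSD_GB  # 3,840 GB raw

hdd_cost = hdd_capacity * HDD_PER_GB
ssd_cost = ssd_capacity * SSD_PER_GB

print(f"HDD: {hdd_capacity:,} GB raw, ~${hdd_cost:,.0f} in media")
print(f"SSD: {ssd_capacity:,} GB raw, ~${ssd_cost:,.0f} in media")
print(f"SSD costs ~{ssd_cost / hdd_cost:.1f}x more for ~{ssd_capacity / hdd_capacity:.0%} of the capacity")
```

Under those assumptions, the SSD configuration delivers less than half the raw capacity at several times the media cost, which is why the analyst expects SSD to remain a small slice of total SAN storage for now.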
There are various use cases for SSD. These include internal DAS, stand-alone Tier 0 storage, replacing or complementing HDD in disk arrays, and extending read or write cache. The analyst believes there will be mixed MLC/SLC devices that allow for mixed workloads. His recommendations:
Tags: mlc hdd sata das ssd fc slc disk+systems
Day4 Rapid Access Computing Environment
Visits (14634)
Continuing my coverage of last week's Data Center Conference 2009, I attended another "User Experience" session that was very well received. This time, it was Henry Sienkiewicz of the Defense Information Systems Agency (DISA) presenting a real-world example of the business model behind a private cloud implementation. DISA is the US government agency that develops and runs software for the Army, Navy and Air Force.
Being part of the military presents its own unique set of challenges:
Using Cloud Computing simplifies provisioning, encourages the use of standards, and provides self-service. DISA has several solutions.
In their traditional approach, a software project would take six months to procure the hardware, another 6-12 months to code and test, and then another six months in certification, for a total of 18-24 months. With the new Cloud Computing approach that DISA adopted, procurement was down to 24-72 hours with RACE, code and test took only 2-6 months with Forge.mil, and certification could be done in days on RACE, resulting in a new total of only 3-6 months. Some challenges they found:
Some lessons learned from this two-year experience:
Tags: itil disa forge.mil race mips gcds ibm henry+sienkiewicz mainframe
Day4 Tapping the Cloud for Storage Infrastructure
Visits (15090)
Continuing my coverage of last week's Data Center Conference 2009, held Dec 1-4 in Las Vegas, I find some of the best sessions are the "user experiences" given by CIOs or IT directors who successfully completed a project and showed the benefits and pitfalls. Matt Merchant, CTO of General Electric (GE), gave an awesome presentation on tapping Cloud Storage to reduce their backup and archive costs.
They were concerned about their lack of e-Discovery tools, the high fixed cost and heavy administrative workload of their Veritas NetBackup software environment, the possibility of corrupted tape media, new compliance and regulatory issues, and the risk of moving unencrypted cartridges to remote vaulting facilities like Iron Mountain. I found it interesting that in their backup/archive approach, backups are re-classified as archives after they are 35 days old.
GE's Disk
General Electric had a long list of requirements:
The end result? They now have Cloud-based backup and archive for their GE Corp, NBC Universal and GE Asset Management divisions running at only 32 cents per GB per month, representing a 40-60 percent savings over their previous methods. This includes backups of their external Web sites, archives of their digital and production assets, and RMAN backups, including development/staging databases. They plan to add an out-of-region compliance archive in 2010. They also plan to monetize their intellectual property by making "CloudStorage Manager" available as a software offering for others.
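To put the 40-60 percent figure in perspective, here is a small sketch (my own arithmetic, not from the presentation) that back-computes the implied per-GB cost of their previous methods from the quoted cloud rate:

```python
# Back-compute the implied per-GB/month cost of the previous backup methods,
# assuming the quoted 32 cents/GB-month represents a 40-60 percent savings.
cloud_rate = 0.32  # USD per GB per month, as quoted

for savings in (0.40, 0.60):
    previous = cloud_rate / (1 - savings)
    print(f"At {savings:.0%} savings, the previous cost was ~${previous:.2f} per GB/month")
```

In other words, under those stated figures the prior environment would have been running somewhere around 53 to 80 cents per GB per month.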
Tags: emc ge nirvanix md5 s3 nbc
Day4 Return of OS Wars
Visits (13772)
Continuing my coverage of last week's Data Center Conference 2009, held Dec 1-4 in Las Vegas, I attended an interesting session related to the battles between Linux, UNIX, Windows and other operating systems. Of course, it is no longer just a contest between general-purpose operating systems; there are also thin appliances and "Meta OS" environments such as cloud or Real Time Infrastructure (RTI).
One big development is "context awareness". For the most part, Operating Systems assume they are one-to-one with the hardware they are running on, and Hypervisors like PowerVM, VMware, Xen and Hyper-V have worked by giving OS guests the appearance that this is the case. However, there is growing technology for OS guests to be "aware" they are running as guests, and to be aware of other guests running on the same Hypervisor. The analyst divided up Operating Systems into three categories:
The analyst indicated that what really drove the acceptance or decline of Operating Systems were the applications available. When Software Development firms must choose which OS to support, they typically have to evaluate the different categories of marketplace acceptance:
For the UNIX world, there is a three-legged stool. If any leg breaks, the entire system falls apart.
Of these, the analyst considers IBM POWER running AIX to be the safest investment. For those who prefer HP Integrity, consider waiting for the "Tukwila" project, which will introduce a new Itanium chipset in 2Q2010. For Sun SPARC, the European Union (EU) delay could impact user confidence in this platform; the future of SPARC now rests in the hands of Fujitsu and Oracle. What platform will the audience invest in most over the next 5 years?
A survey of the audience about current comfort level of Solaris:
The analyst mentioned Microsoft's upcoming Windows Server 2008 R2, which will run only on 64-bit hardware but support both 32-bit and 64-bit applications. It will provide scalability up to 256 processor cores. Microsoft wants Windows to get into the High Performance Computing (HPC) marketplace, but this is currently dominated by Linux and AIX. The analyst's advice to Microsoft: System Center should manage both Windows and Linux. Has Linux lost its popularity? The analyst indicated that companies are still running mission-critical applications on non-Linux platforms, primarily z/OS, Solaris and Windows. What does help Linux are aging UNIX legacy applications, the existence of OpenSolaris x86, Oracle's Enterprise Linux, VMware and Hyper-V support for Linux, Linux on the System z mainframe, and other legacy operating systems that are growing obsolete. One issue cited with Linux is scalability: performance on systems with more than 32 processor cores is unpredictable, while more mature operating systems like z/OS and AIX have stronger support for high-core environments. A survey of the audience on which Linux or UNIX OS was most strategic to their operations resulted in the following weighted scores:
The analyst wrapped up with an incredibly useful chart that summarizes the key reasons companies migrate from one OS platform to another:
Certainly, all three types of operating system have a place, but there are definite trends and shifts in this marketspace.
Tags: rti windows linux aix ibm z/os nonstop solaris hp-ux
Day3 Mountains Hiding in the Mist
Visits (14724)
Continuing my coverage of the Data Center Conference 2009, held Dec 1-4 in Las Vegas, the title of this session refers to the mess of "management standards" for Cloud Computing.
The analyst quickly reviewed the concepts of IaaS (Amazon EC2, for example), PaaS (Microsoft Azure, for example), and SaaS (IBM LotusLive, for example). The problem is that each provider has developed its own set of APIs. (One exception is [Eucalyptus], which adopts the Amazon EC2, S3 and EBS style of interfaces. Eucalyptus is an open-source infrastructure whose name stands for "Elastic Utility Computing Architecture Linking Your Programs To Useful Systems". You can build your own private cloud using the new cloud APIs included in Ubuntu Linux 9.10 Karmic Koala, termed Ubuntu Enterprise Cloud (UEC). See the instructions in the InformationWeek article [Roll Your Own Ubuntu Private Cloud].) The analyst went into specific Virtual Infrastructure (VI) and public cloud providers.
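Because Eucalyptus and UEC expose EC2-compatible endpoints, existing EC2 tooling can usually be pointed at a private cloud simply by changing the endpoint and credentials. As a minimal, hedged sketch (my own illustration, not from the session), using the Python boto library against an assumed UEC front end:

```python
# Minimal sketch: talk to a private Eucalyptus/UEC cloud through its
# EC2-compatible API using boto. The endpoint address and credentials
# below are placeholders for illustration only.
import boto
from boto.ec2.regioninfo import RegionInfo

region = RegionInfo(name="eucalyptus", endpoint="192.168.1.100")
conn = boto.connect_ec2(
    aws_access_key_id="YOUR-EC2-ACCESS-KEY",
    aws_secret_access_key="YOUR-EC2-SECRET-KEY",
    is_secure=False,
    region=region,
    port=8773,                      # default Eucalyptus Web services port
    path="/services/Eucalyptus",    # default Eucalyptus API path
)

# The same calls you would make against Amazon EC2 work against the private cloud.
for image in conn.get_all_images():
    print(image.id, image.location)

for reservation in conn.get_all_instances():
    for instance in reservation.instances:
        print(instance.id, instance.state, instance.public_dns_name)
```

The appeal of this style of compatibility is precisely what the analyst was getting at: it avoids locking your management scripts to one provider's proprietary API.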
If you prefer a common management system independent of cloud provider, or perhaps spanning multiple cloud providers, you may want to consider one of the "Big 4" instead. These are the top four systems management software vendors: IBM, HP, BMC Software, and Computer Associates (CA). A survey of the audience found the number one challenge was "integration": how to integrate new cloud services into an existing traditional data center. Who would attendees trust to deliver new tools for remote management of external cloud services? The survey showed:
The analyst offered some final thoughts. First, nearly a third of all IT vendors disappear after two years, and the cloud market will probably have a similar, if not worse, track record. Traditional server, storage and network administrators should not view Cloud technologies as a death knell for in-house, on-premises IT. Companies should probably explore a mix of private and public cloud options.
Tags: citrix ibm aws xen amazon vmware+go ec2 eucalyptus lotuslive ubuntu c3 eucalyptis cloudwatch vcloud+express ebs hp s3 vmware paas saas uec microsoft ca linux iaas azure bmc
Day3 Emerging Storage Technologies
Visits (12362)
Continuing my coverage of the Data Center Conference, held Dec 1-4 in Las Vegas, an analyst presented the challenges of managing the rapid growth in storage capacity. Administrators' ability to manage storage is not keeping up with the growth. His recommendations:
A survey of the audience found:
Throughout the industry, storage vendors are following IBM's example of using commodity hardware parts. This is because custom ASICs are expensive, and changes take a minimum of three months of development time, whereas software-based implementations can be updated more quickly. In terms of technologies deployed, covering SAN, NAS, Compliance Archive (such as the IBM Information Archive), and Virtual Tape Library (VTL) such as the IBM TS7650 ProtecTIER data deduplication solution, here was the survey of the audience:
Cost reduction techniques include thin provisioning, compression, data deduplication, Quality of Service tiers, and archiving. To reduce power and cooling requirements, switch from FC to SATA disk wherever possible, and move storage out of the data center, such as onto tape cartridges or cloud storage. For emerging technologies, the survey showed:
My take-away is that many companies are still "exploring" the different options available to them. Fortunately, IBM offers a broad portfolio of complete end-to-end solutions that make it possible to acquire the right mix of technologies optimized for your workloads.
Tags: protectier information+archive vtl cloud+storage nas san xiv
Day3 Reshaping the Data Center
Visits (11088)
Continuing my coverage of the Data Center Conference 2009, we had a keynote session on Wednesday, Dec 2 (Day 3) that focused on the key technologies to watch for the data center.
Tags: lsc28 data+center gdc09
Day2 Data Protection Strategies
Visits (10550)
Continuing my coverage of the Data Center Conference, Dec 1-4, 2009 here in Las Vegas, this post focused on data protection strategies.
Two analysts co-presented this session, which provided an overview of various data protection techniques. A quick survey of the audience found that 27 percent have only a single data center, 13 percent have load sharing of their mission-critical applications across multiple data centers, and the rest use a failover approach to either development/test resources, standby resources or an outsourced facility. There are basically five ways to replicate data to secondary locations:
A question came up about the confusion between "Disaster Recovery Tiers" and the Uptime Institute's "Data Center Facilities Tiers". I agree this is confusing. Many clients refer to their most mission-critical applications as Tier 1, less critical as Tier 2, and least critical as Tier 3. In 1983, the IBM user group GUIDE came up with "Business Continuity Tiers", where Tier 1 was the slowest recovery from manual tape, and Tier 7 was the fastest recovery with completely automated site, network, server and storage failover. However, for data center facility tiers, Uptime has the simplest, least available (99.3 percent uptime) data center as Tier 1, and the most advanced, redundant, highest available (99.995 percent) data center as Tier 4. This just goes to show that when one person starts using "Tier 1" or "Tier 4" terminology, it can be misinterpreted by others.
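To make those facility-tier availability percentages concrete, here is a small sketch (my own arithmetic, using only the uptime figures quoted above) converting them into allowable downtime per year:

```python
# Convert the quoted facility-tier availability percentages into downtime per year.
HOURS_PER_YEAR = 365 * 24  # 8,760 hours

for tier, availability in (("Uptime Tier 1", 0.993), ("Uptime Tier 4", 0.99995)):
    downtime_hours = HOURS_PER_YEAR * (1 - availability)
    print(f"{tier}: {availability:.3%} uptime allows ~{downtime_hours:.1f} hours "
          f"({downtime_hours * 60:.0f} minutes) of downtime per year")
```

Roughly 61 hours a year of downtime at one end versus about 26 minutes at the other, which is why mixing up the two tiering vocabularies can lead to very different expectations.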
Tags: data+replication svc data+protection