Safe Harbor Statement: The information on IBM products is intended to outline IBM's general product direction and it should not be relied on in making a purchasing decision. The information on the new products is for informational purposes only and may not be incorporated into any contract. The information on IBM products is not a commitment, promise, or legal obligation to deliver any material, code, or functionality. The development, release, and timing of any features or functionality described for IBM products remains at IBM's sole discretion.
Tony Pearson is an active participant in local, regional, and industry-specific interests, and does not receive any special payments to mention them on this blog.
Tony Pearson receives part of the revenue proceeds from sales of books he has authored listed in the side panel.
Tony Pearson is not a medical doctor, and this blog does not reference any IBM product or service that is intended for use in the diagnosis, treatment, cure, prevention or monitoring of a disease or medical condition, unless otherwise specified on individual posts.
Tony Pearson is a Master Inventor and Senior Software Engineer for the IBM Storage product line at the IBM Executive Briefing Center in Tucson, Arizona, and a featured contributor to IBM's developerWorks. In 2016, Tony celebrates his 30th anniversary with IBM Storage. He is the author of the Inside System Storage series of books. This blog is for the open exchange of ideas relating to storage and storage networking hardware, software and services. You can also follow him on Twitter @az990tony.
(Short URL for this blog: ibm.co/Pearson)
Well, it's Tuesday again, and that means more IBM announcements!
Today, IBM announced the enhanced IBM System Storage DS3200 disk system. In our DS3000 series, the DS3200 is SAS-attach, the DS3300 is iSCSI-attach, and the DS3400 is FC-attach. All of them support up to 48 drives, which can be a mix of SAS and SATA drives.
The DS3200 supports the following operating environments (see IBM's [Interop Matrix] for details):
Linux (both Linux-x86 and Linux on POWER)
With today's announcements, the DS3200 can be used as a boot device as well as for data. This makes it ideal to combine with IBM BladeCenter. With the IBM BladeCenter you can have 14 blades, with either x86 or POWER processors, attached to a DS3200 via SAS switch modules in the back of the chassis.
Let's take an example of how this can be used for a Scale-Out File Services [SoFS] deployment.
First, we start with servers. We could use three [IBM System x3650] servers, but this would use up all six of the direct-attach ports. Instead, we'll choose the [BladeCenter H chassis], with three HS21 blades for SoFS. That leaves us eleven empty blade slots for a management node, or for other blades to run applications.
SAS connectivity modules
The IBM BladeCenter [SAS Connectivity Module] allows the blade servers to connect to a DS3200. Two of them fit right in the back of the BladeCenter chassis, providing full redundancy without consuming additional rack space.
DS3200 and EXP3000 expansion drawers
We'll have one DS3200 controller with twelve internal drives, and three expansion EXP3000 drawers with twelve drives each, for a total of 48 drives. Using 1TB SATA, this would be 48 TB raw capacity.
The end result? You get a 48TB NAS scalable storage solution, supporting up to 7,500 concurrent CIFS and NFS users, with up to 700 MB/sec on large-block transfers. By using BladeCenter, you can expand performance by adding more blades to the chassis, or give blades running SAP or Oracle RAC direct read/write access to the SoFS data.
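As a back-of-the-envelope check, the raw capacity in this example is simple arithmetic. A minimal sketch (not an official configurator; it just restates the drawer and drive counts from the configuration above):

```python
# Sketch of the SoFS example's raw capacity: one DS3200 controller with
# twelve internal drives, plus three EXP3000 expansion drawers of twelve
# drives each, all populated with 1 TB SATA drives.
DRIVES_PER_ENCLOSURE = 12
ENCLOSURES = 1 + 3            # DS3200 controller + three EXP3000 drawers
DRIVE_TB = 1                  # 1 TB SATA drives

total_drives = DRIVES_PER_ENCLOSURE * ENCLOSURES
raw_tb = total_drives * DRIVE_TB
print(total_drives, "drives,", raw_tb, "TB raw")   # 48 drives, 48 TB raw
```

Note this is raw capacity; usable capacity after RAID protection and spares would be lower.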
Just another example of how IBM can bring together all the components of a solution to provide customer value!
Fellow blogger Chuck Hollis from EMC has a post titled [Whither Frankenstorage] causing quite a stir in the [Stor-o-Sphere]. He is not the first EMC blogger to use this phrase; I credit [BarryB] for coining the term back in September 2008. Frankenstein serves as the ideal icon for EMC's FUD machine. In the novel, Dr. Frankenstein was attempting to do something nobody else had ever attempted, to create human life from various dead body parts, a process full of uncertainty and doubt, with frightful results.
Perhaps it was a coincidence that I discussed IBM's storage strategy in my post [Foundations and Flavorings] on January 28, shortly followed by NetApp's announcement of V-series gateway [support of Texas Memory Systems' RamSan-500] on February 3. These two events might have been the trigger that pushed ChuckH over the edge to put pen to paper... er, finger to keyboard.
Flinging FUD in all directions was ChuckH's not-so-subtle way to remind the world that EMC is the only major storage vendor not to offer a successful storage virtualization product. Without first-hand experience with well-designed storage virtualization, ChuckH conjectures that a configuration matching intelligent front-ends to reliable back-ends might be more expensive, might be more difficult to manage, or might be harder to support.
(Note: Rest assured, IBM can demonstrate that a modular approach, combining intelligent front-ends with reliable back-ends, can help reduce costs, be easier to manage, and be fully supported. Contact your local IBM Business Partner or storage sales rep for details.)
My favorite was from Nigel Poulton's post on [Ruptured Monkey]. Here's an excerpt:
In fact, I'm fairly certain that EMC don't back away from customers who run HP or IBM servers and say "sorry we cant help you here, an end to end HP or IBM solution would be much better for you when it comes to troubleshooting……. putting our storage in would only add extra layers of complexity and make things messy….."
On most other days, ChuckH has well-written, insightful blog posts that show that EMC brings some value to the industry. I could have made a snarky reference to [Dr Jekyll and Mr Hyde], or suggested that this post proves nobody at EMC is editing or reviewing Chuck's thoughts before they get posted. But it's too late; Chuck already got the message, and added the following to bring the discussion back to civility:
When considering the broad range of storage media service levels available today (flash, FC, SATA, spin-down, etc.) what's the best way to offer these media choices in an array? Is the answer (a) combine smaller arrays from different vendors together behind a virtualization head, or (b) invest the time and effort to build arrays that can directly support all of these media types?
Would anyone like to try a cogent response to the question posed, please?
To address ChuckH's question, Nigel's post gave me the idea to use today's celebration of the 200th birthday of [Charles Darwin].
Over millions of years, Charles Darwin argued, evolution results in changes in the inherited traits of a population of organisms from one generation to the next. A key component of this is a biological process called [mitosis] that allows a single cell to split and become two cells. In some cases, these individual daughter cells can then specialize to specific functions, such as nerve cells, muscle cells or bone cells. Over time, adaptations that work well carry forward, and those that don't get left behind.
I find it interesting that before [On the Origin of Species] was published in 1859, works of fiction like Mary Shelley's [Frankenstein] had monsters being "created", and afterward, monsters were the result of mutation or selective adaptation.
Nigel compares EMC's monolithic approach against pairing an intelligent front-end with a reliable back-end as a "one-man band, where one guy is trying to play all the instruments himself" versus a "philharmonic orchestra". I would take it one step further, comparing single-cell organisms to multi-cell life forms.
Innovative companies like Google and Amazon can't wait for a completely integrated solution from a major IT vendor to meet their needs. Why should they? There are open standards, and ways to interconnect the best intelligence into a [Dynamic Infrastructure®]. You don't need to wait another million years to see which way the IT marketplace considers the better approach. Just look at the last 60 years. Back then, computer systems were all integrated: server, storage, and the wires that connected them were all inside a huge container. Then, mitosis happened, and IBM created external tape storage in 1952, and external disk storage in 1956. Open standards for interfaces allowed third-party manufacturers like HDS, StorageTek and EMC to offer plug-compatible storage devices.
On the server side, it didn't take long for functionality in mainframes to split off. Mitosis happened again, with front-end UNIX systems processing incoming data, and mainframes handling the back-end databases and printing. The client-server era replaced dumb terminals with more intelligent desktops and workstations, which could handle the front-end processing to display information, with the back-end storage and number-crunching handled by the UNIX and mainframe systems they connected to. Connections between desktops and servers, and from servers to storage, have also evolved, from thousands of direct-attach cables to networks of switches and directors.
Charles Darwin was particularly interested in cases where evolution happened faster or slower than in other cases. While IBM and Microsoft encouraged third-party innovations on the PC side, Apple resisted mitosis, trying to keep its machines pure single-cell, integrated solutions. For the same reasons that you can't fight the laws of nature, Apple ended up having to support I/O ports to external devices. Thanks to open standards like USB and FireWire, you can connect third-party storage to Apple computers. My little Mac Mini at home has more devices hanging off it than any of my Windows or Linux boxes! And Apple's iPod is successful because its iTunes software runs on both Windows and Mac OS operating systems.
Every time mitosis happens in the IT industry, it opens up opportunities to specialize, to innovate, to adapt to a dynamically changing world. When mitosis is suppressed, you get limiting products and frustrated engineers leaving to form their own start-up companies. But when mitosis is encouraged, you get successful products, solutions and partnerships positioned for a smarter planet.
Now that IBM XIV has proven that 1TB SATA drives are safe for high-end, tier-1, enterprise-class use, we have extended the DS8000 to support SATA drives as well. The DS8000 supports RAID-6 and RAID-10 for these.
Intelligent Write Caching
IBM Research conducts extensive investigations into improved algorithms for cache management. Intelligent Write Caching boosts performance by exploiting both temporal and spatial locality.
Remote Pair FlashCopy®
This allows you to FlashCopy volume A to volume B, with volume B remotely mirrored to volume C at a secondary location via Metro Mirror. The result is a consistent copy of your data at both locations.
IBM was the first in the industry to deliver tape-drive encryption, so it makes sense that IBM is also the first in the industry to deliver disk-drive encryption. These are 15K rpm drives in standard 146GB, 300GB and 450GB capacities. As with tape, encrypting at the disk device eliminates the huge overhead from server-based encryption methods.
Solid State Drive (SSD)
You can also have Solid State Disk drives in your DS8000, in 73GB and 146GB capacities, protected by RAID-5. If you are wondering what data to put on these much-faster drives, IBM has taken the work and worry out by having intelligence in DB2 to optimize what gets placed on SSD to get the most performance improvement.
IBM System Storage XIV
Continuing the incredible marketplace excitement over its Cloud-Optimized Storage [XIV series], IBM has now announced [new capacity options]. The IBM XIV R2 that we announced in August 2008 was a fixed 15-module configuration. In the new configurations, you can start with as little as six modules, representing a 40% partial rack of the original full model. Here is a table that shows the details:
(Table: usable capacity in TB, Fibre Channel ports, and cache memory in GB for each module configuration.)
IBM System Storage N series
And last, but not least, we have two new models in IBM's [N6000 series]. The [N6060] has model A12 (single controller) and model A22 (dual controller). These are disk-less controllers that you can configure in either appliance mode or gateway mode. In appliance mode, you can attach disk drawers such as the EXN1000, EXN2000 or EXN4000. In gateway mode, you attach external disk systems, such as the IBM DS8000 or XIV above.
It's ruggedized to handle earthquakes. IBM brings to the N series a feature we've had for a while on other disk systems: a collection of bolts and anchors to secure the rack against physical tremors.
It's instrumented for IBM Active Energy Manager, a component of IBM Systems Director. New iPDUs are designed to help measure and monitor energy management components. As companies get more concerned about the fate of the planet, monitoring energy consumption can help reduce carbon footprint.
I'll cover the rest of the announcements tomorrow!
These disk capacities can offer up to 25 times their physical capacity as effective capacity, with IBM's HyperFactor in-line deduplication capability. So the smallest 7TB model could be as effective as 175TB of traditional disk storage.
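The effective-capacity claim is straightforward multiplication. A small sketch (the 25x factor is the quoted HyperFactor maximum; real-world ratios depend entirely on the redundancy in your backup data):

```python
# Sketch: effective capacity of a deduplicating store is physical
# capacity times the deduplication ratio. 25x is the quoted maximum
# for HyperFactor; actual ratios vary with the data.
def effective_tb(physical_tb, dedupe_ratio):
    return physical_tb * dedupe_ratio

print(effective_tb(7, 25))   # smallest 7 TB model -> 175 TB effective
```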
IBM Tivoli Storage Manager (TSM) v6
After years and years in development, IBM announces [TSM v6]. Here's a quick summary of the key features:
DB2 instead of an internal database
For years, people have complained that IBM used its own internal relational database. This was because when TSM was first launched back in 1993, DB2 did not have all the features TSM needed on all of the various server platforms. Today, DB2 is the leading relational database on all the key platforms that the TSM server runs on, and is therefore a good fit for use within Tivoli Storage Manager. If you don't already have DB2, it is included for use with TSM v6.1 at no additional charge. Do you have to become a DB2 expert to use TSM? No! The TSM administration commands have been updated to hide the complexity of DB2 behind the scenes. You just use TSM commands to administer the database, as you did before. IBM will provide conversion utilities to help existing TSM customers migrate to this new database environment.
Better Operational Reporting
Another big complaint was that TSM had fixed reporting, and administrators who wanted customized reports often had to resort to purchasing third-party products. With the changeover to DB2, TSM now enables you to create your own reports using Eclipse's Business Intelligence and Reporting Tools [BIRT]! If you haven't used BIRT, you can download a free open source copy and start playing around with its capabilities. This is combined with a revamped GUI that provides a customizable dashboard using IBM's Integrated Solutions Console (ISC) infrastructure.
Lastly, IBM has incorporated deduplication capability within the TSM v6.1 software for its own disk storage pools. This is done in a post-process manner so as to dedupe all of your legacy backup data as well, not just the new stuff, without impacting current TSM server performance.
At this point, you might be thinking "Wait, what about IBM TS7650 ProtecTIER deduplication?" which is really two questions.
Can I use TSM v6.1 with IBM TS7650 ProtecTIER?
Yes. However, since TSM's progressive incremental method is vastly more efficient than other backup products like Veritas NetBackup or EMC Legato NetWorker, the TS7650 may only get 10x reduction of TSM backups, versus up to 25x with full-backups-every-night backup schemes. TSM only dedupes its disk storage pools, so it won't dedupe data directed at tape systems like the TS7650 or other tape libraries. This avoids the "double dedupe" concern.
When should I use TSM's software version versus TS7650's hardware deduplication?
This is a positioning question. For now, the cut-over point is about 10TB of backup processing per night. If you back up more than 10TB per night, TS7650 hardware may be the better approach. If you are a smaller customer nowhere near that volume of data, then using TSM v6.1 software deduplication may be a more cost-effective solution. If you start small, and grow beyond 10TB per night, it is easy to bring a TS7650 into an existing TSM environment and migrate the data over.
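This rule of thumb can be written as a trivial decision helper. Purely illustrative (the 10 TB/night cut-over is the guideline quoted above, not a hard product limit):

```python
# Illustrative only: encode the positioning rule of thumb. Below about
# 10 TB of nightly backup, TSM v6.1 software deduplication is suggested;
# above it, TS7650 hardware deduplication may be the better fit.
CUTOVER_TB_PER_NIGHT = 10

def suggested_dedupe(nightly_backup_tb):
    if nightly_backup_tb < CUTOVER_TB_PER_NIGHT:
        return "TSM v6.1 software deduplication"
    return "TS7650 hardware deduplication"

print(suggested_dedupe(3))    # TSM v6.1 software deduplication
print(suggested_dedupe(25))   # TS7650 hardware deduplication
```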
If you run the TSM server on a logical partition (LPAR) or a virtual guest OS under VMware ESX, Xen or Microsoft's Hyper-V environment, why should you have to license it for the whole box? With TSM v6.1, you now can pay for only the processors you use, down to even a single core. If you currently run TSM v5 on z/OS, you can migrate over to the TSM v6.1 server for Linux on System z to take advantage of cost savings using IFL engines.
IBM Tivoli Key Lifecycle Manager (TKLM) v1.0
Don't let the "v1.0" scare you; this is the successor to IBM's Encryption Key Manager (EKM), which thousands of clients use today with IBM encrypting tape drives. The new TKLM adds support for full disk encryption (FDE) drives--like those for the DS8000 I mentioned in [yesterday's post]--as well as new features to support key rotation for compliance and business controls.
IBM Tivoli Storage Productivity Center
Last, but not least, we have IBM Tivoli Storage Productivity Center [TSPC]. No, that is not a typo. IBM is renaming IBM TotalStorage Productivity Center to Tivoli Storage Productivity Center to avoid trademark conflicts with the [Professional Golfer's Association].
This is not just a renaming of an existing product. Here are some key improvements:
TSPC brings Productivity Center Standard Edition (Disk, Tape, SAN and Data) back together with Productivity Center for Replication, which were separated a few years ago.
TSPC adds support for IBM's Storage Enterprise Resource Planner [SERP] from the NovusCG acquisition.
End-to-end view for EMC storage devices connected to supported servers via EMC Powerpath multipathing driver. As customers switch away from EMC Control Center over to IBM's Productivity Center, IBM can continue to provide support for existing EMC gear.
Of course, IBM will still offer the IBM System Storage Productivity Center [SSPC], which is a piece of hardware pre-installed with Productivity Center software.
Hopefully, you can now see why I had to split up all these announcements into separate posts across multiple days!
An avid reader of this blog pointed me to a blog post, [A Small Tec DIGG on IBM XIV], by Gowri Ananthan, a System Engineer in Singapore. Basically, she covers past battles, er.. discussions between me and fellow blogger BarryB from EMC, and [blegs] for answers to three questions.
Gowri, here are your answers:
Q1. Does IBM offer a Pay-as-you-Go [PAYGO] upgrade path for its IBM XIV disk storage system?
The concern was expressed as:
PAYGO also requires the customer to purchase the remaining capacity within 12 months of installation. So it is More of a 12-month installment plan than pay-as-you-grow.
A1. Actually, IBM offers several methods for your convenience:
With IBM's Capacity on Demand (CoD) plan, you get the full frame with 15 modules installed on your data center floor, but only pay for the first four modules (21 TB), then pay for 5.3TB module increments as you need them over the next 12 months. This is ideal for companies that don't know how fast they will grow, but do not want to wait for new modules to be delivered and installed when needed.
With IBM's Partial Rack offering, you can get a system with as little as six modules (27TB), and then add more modules over time as you need them. This does not have to be done within 12 months; you can stay at six modules for as long as you like, and take as long as you want to add more modules. When you are ready for more capacity, the drawer or drawers can be delivered and installed non-disruptively.
Neither of these are "payment installment plans", but certainly if you want to spread your costs into regularly-scheduled monthly payments across multiple years, IBM Global Financing can probably work something out.
Q2. Does IBM consider the XIV as green storage?
The concern was expressed as:
You are powering (8.4KW) and cooling all 180 drives for the whole duration, whether you're using the capacity or not. is it what you called Greener power usage..?
A2. Yes, IBM considers the IBM XIV green storage. The 8.4KW per frame is less than the 10-plus KW that a comparable two-frame EMC DMX-950 system would consume. The energy savings in IBM XIV come from delivering FC-like speeds using SATA disks that rotate more slowly, and therefore take less energy to spin.
In the fully-populated or Capacity on Demand configuration, you would spin all 180 disks. However, using the partial rack configuration, the 6-module system has only 40 percent of the disks, and therefore consumes only 40 percent of the energy. If you don't plan to store at least 20-30 TB, you might consider the DS3000, DS4000, DS5000, or DS8000 disk systems instead.
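The energy argument scales linearly with the number of populated modules. A sketch using only the figures above (a full 15-module frame has 180 drives and draws about 8.4 kW; real draw varies with workload):

```python
# Sketch: XIV partial-rack power scales with populated modules.
# Figures from the text: full frame = 15 modules, 180 drives, ~8.4 kW.
FULL_MODULES, FULL_DRIVES, FULL_KW = 15, 180, 8.4

def partial_rack(modules):
    """Return (drive count, approximate kW) for a partial rack."""
    fraction = modules / FULL_MODULES
    return round(FULL_DRIVES * fraction), round(FULL_KW * fraction, 2)

drives, kw = partial_rack(6)                # six-module partial rack
print(drives, "drives, about", kw, "kW")    # 72 drives, about 3.36 kW
```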
Q3. How do you connect more than 24 host ports to an IBM XIV?
The concern was expressed as:
And finally do not forget my question on 24-FC Ports… Up to 24 Fiber Channel ports offering 4 Gbps, 2Gbps or 1 Gbps multi-mode and single-mode support.Stop.. stop.. how you gonna squeeze existing bunch of FC cables in 24 ports?
A3. Best practices suggest that if you have ten or more physical servers, each with two separate FC ports, then you should use a SAN switch or director in between. If you require four ports per server, then you would need a SAN switch beyond six servers to connect to the IBM XIV. If you consider that 24 FC ports, at 4Gbps, represents nearly 10 GB/sec of bandwidth, you will recognize that this is not a performance bottleneck for the system.
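The aggregate bandwidth figure is a unit conversion. A sketch under the common assumption that 4 Gbps Fibre Channel delivers roughly 400 MB/s of data per port after 8b/10b encoding overhead:

```python
# Sketch: aggregate host bandwidth of 24 Fibre Channel ports.
# 4 Gbps FC uses 8b/10b encoding, so each port carries roughly
# 400 MB/s of usable data.
PORTS = 24
MB_PER_SEC_PER_PORT = 400   # ~4 Gbps FC after encoding overhead

aggregate_gb_per_sec = PORTS * MB_PER_SEC_PER_PORT / 1000
print(aggregate_gb_per_sec, "GB/s aggregate")   # 9.6 GB/s aggregate
```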
While the rest of America was glued to the television watching President Obama explain his plan for recovery, my colleagues and I had dinner with clients from Canada.
One in particular claimed her father was known as the kingpin of [Flin Flon]. She lives in Ontario now, but she grew up in this small mining town in Manitoba, made famous for winning a government contract to grow crops for medicinal purposes.
Shown at left is the town's mascot, Flinty. Yes, apparently the town was named after a fictional character in a paperback novel.
Of course, in conversations with clients, it is best to avoid topics like politics or drugs, but the intersection of government health care and its implications for IT can't be disregarded. Since Canada has a more efficient healthcare process, the government enjoys a lower cost per citizen. President Obama has suggested that the United States should adopt reforms to make the American system more efficient, including electronic medical records.
Not surprisingly, [smarter healthcare] is part of IBM's latest set of strategic initiatives. Digitizing medical information has a variety of benefits:
Information isn't stranded on islands
If there is any situation that needs to deliver the right information, to the right people, at the right time, healthcare is certainly one of them. Having the right information can help reduce medical mistakes.
Physicians spend time with their patients, not paperwork
I personally know some doctors here in Tucson, and they are the first to admit that they would prefer to focus on their core strengths, which they spent many years in medical school developing, and leave the administrative details to someone else. Focusing on core strengths is a common theme for successful businesses, and this is no different.
Expertise needs no passport
Medical emergencies do not always happen near the hospital or clinic where your medical records are stored. An exciting feature of digital information is that it is easy to transport to where it is needed, unlike paper records or X-ray film.
To learn more about IBM's strategy and vision, see IBM's [Smarter Planet] Web site.
With mixed emotions, Jon Peake announced he will retire from IBM next week. Jon is known as the father of the IBM Virtual Tape Server (VTS), the industry's first virtual tape system, announced in 1996 and generally available in 1997. One of my 19 patents was for the VTS pre-migration capability, and as lead architect for DFSMS, I worked closely with Jon and his tape systems team to ensure its success.
From left to right:
Chris Telford, IBM Development manager for Enterprise Tape Integration
Jon Peake, IBM Distinguished Engineer and Master Inventor
Annette Estelle, Jon's global admin assistant
At his retirement celebration, Jon was awarded the coveted "Project Bulldog" jacket, which has an interesting history.
In response to IBM's 1996 VTS announcement, the top StorageTek (STK) tape sales teams and most of the dedicated tape technicians were invited to a global assembly at a fancy resort in Winter Park, CO (about 90 miles west of STK's Louisville headquarters) in early 1997. The gathering was named Project Bulldog, after Ron Korngiebel, STK's director of competitive marketing, who I am told had voice and facial resemblance to justify the project moniker. Ron had recruited Fred Moore, Steve Blenderman, and other prized engineers as speakers. I have seen both Fred and Steve speak at various conferences such as SHARE and GUIDE, and agree they are high quality speakers.
The goal was to have STK's brightest in Louisville go down in the trenches, work the field guys into a frenzy, defend STK Tape at any cost, and send IBM packing. At the end of the two day fest, many participants received the coveted Project Bulldog jacket.
Former STKers who now work at IBM remember that this meeting involved:
Bashing of the [IBM Seascape] architecture approach. The use of commodity servers and components to build storage systems continues today in the IBM System Storage DS8000, SAN Volume Controller, XIV, and TS7650 Deduplication solutions.
Explanations of how and why IBM's VTS would never work, and how only STK virtual tape would make it in the market. Today, IBM is the leader in storage virtualization, both for disk and tape.
Mock interview videos with claims that IBM could never figure out how to attach IBM drives to the STK Silo. I was a big proponent of this, having visited customers who specifically asked for IBM to sell its better, faster IBM drives into their existing STK silos. At first, upper management was hesitant to do this, but the IBM engineers worked out what changes were needed, and today many STK tape automation libraries run with IBM tape drives.
While some analysts frowned on Sun's [2005 acquisition of StorageTek], IBM was delighted, given Sun's previous track record in storage-company acquisitions. I joke that we are still picking up confetti in the hallways of IBM's Tucson lab. I was in New York City when I heard Sun's announcement, and it didn't take long for STK employees to start offering me their resumes. Since then, many STK engineers, technicians and sales teams have left Sun, many coming over to IBM. Back then, there were many intelligent and talented people working for StorageTek, and IBM is glad to have hired them.
With the resurgence of interest in tape systems, from dealing with new legislation for long term retention of electronic data to a focus on energy efficiency, Jon leaves much like a champion retiring at the top of his game.
Jon, I am going to miss you! Enjoy your retirement!
I’ve just returned from the IBM Tivoli Pulse conference in Las Vegas – a meeting of over 4000 customers, partners, and IBM employees. ... There was a lot to digest, but three of the major themes caught my attention, and my imagination. ... First, IBM put a huge push behind their Dynamic Infrastructure initiative. Sounds like so many other automation and autonomic initiatives of the past, right? Well, things are getting better, and “dynamic” is becoming more of a realistic possibility, especially with the emergence of cloud computing and cloud services models. ... Second, a lot of time was spent on IBM’s Service Management Industry Solutions. When I first heard of this, my thought was that IBM was creating solutions for the Service Management industry (i.e. food services, janitorial services, hospitality services). But this is much larger than that – much, much larger. IBM is taking their unique ability to pair business (non-IT) expertise with IT consulting, planning, and technology delivery, and constructing (careful – here comes the “f” word) frameworks for several vertical industry segments. ... IBM is perhaps the only organization in the world that can take this on fully and hope to deliver a meaningful result. But beyond that, this represents a huge opportunity for IT professionals to become the transformation agents within their own organizations, contributing at a whole new level. ... Lastly, I was really impressed by IBM’s Smarter Planet initiative. The primary thought here was that the key to a greener planet is to take inefficiencies out of just about every form of business through the intelligent application and deployment of technology. At first I was thinking this was just another marketing initiative, but in the course of this event, listening to the keynotes and talking to a number of IBM execs, it became apparent that this is a substantial cultural shift within IBM itself. 
Just think about that for a moment – when 400,000 employees all change their direction and focus, their sheer mass is going to make a noticeable difference. ... Magic (Johnson) gave an excellent talk, and reminded the audience that you should do two things no matter what your job or role. First, service starts with knowing your customers – not just who they are, but what they do and what is important to them. And second – always over-deliver. Go that extra step. Exceed expectations. The boost in loyalty, goodwill, and improved customer relationships will be well worth the effort. Good thoughts to keep with us….
If you missed Pulse 2009, perhaps because your company has put a clamp down on travel expenses, you are in luck! IBM is hosting the "Dynamic Infrastructure Forum" March 3-4, 2009, on your computer. This is an IBM Virtual event, no travel required! [Register Today!]
Wrapping up this week's theme on IBM's Dynamic Infrastructure® strategic initiative, we have a few more goodies in the goody bag.
First item: Dave Bricker shows off the XIV cloud-optimized storage at Pulse 2009
Second item: Rodney Dukes discusses the latest features of the DS8000 disk system at Pulse 2009
Third item: IBM launches the [Dynamic Infrastructure Journal]. You can read the February 2009 edition online, and if you find it useful and interesting, subscribe to learn from IBM's transformation experts how to reduce cost, manage risk and improve service.
Whether or not you attended the IBM Pulse 2009 conference, you might enjoy looking at the rest of the series of videos on [YouTube] and photographs on [Flickr].
It seems like [only yesterday] I was talking about IBM's strategic initiatives for the New Enterprise Data Center, including the launch of asset and service management at [Pulse 2008] in Orlando, Florida.
This week, my colleagues are at [Pulse 2009] in Las Vegas, Nevada. (I'm not there this time, so stop asking all my colleagues where I am!) Obviously, a lot has changed in the last 12 months: the world's financial economy has collapsed, our delicate environment continues to unravel, and a new US President was elected to fix all that was broken by the former occupant. As a result, IBM's strategy has evolved beyond just data centers for large enterprises.
I can't think of a better time to emphasize the need for a more dynamic infrastructure. And this is not just focused on IT operations, but on smarter business infrastructure as well, as the two are now very much intertwined. Everything from smarter healthcare, smarter telecom, smarter retail, smarter distribution, smarter transportation, and smarter financial services. IBM's [Dynamic Infrastructure®] is one of four strategic initiatives to help build a smarter planet.
Let's take a quick look at the key benefits:
Do you remember back to the days that the IT department was like the accounting department in the back office, merely recording what happened in a series of transactions? Not anymore! Today, IT is front and center of most businesses, helping to generate revenue, drive innovation, and provide better customer service. We are finding a convergence between the physical world of running business with the digital world of IT. Intelligence is everywhere, embedded in systems and operations throughout, not just in a data center.
Imagine that only 10-15 years ago the primary concern for IT operations was the cost of hardware. Now, thanks to [Moore's law], hardware is cheaper, but other IT budget costs like labor, management software, power and cooling are growing faster and becoming more predominant factors. IBM recognizes that you must consider the total cost of ownership, not just the acquisition cost of new hardware. But again, this isn't just about reducing the costs of IT; it's about making more effective use of IT resources to reduce costs everywhere else, in scheduling transportation, in managing manufacturing assets, and so on.
While the world feels much safer now that Barack Obama has taken over, there are still risks and threats out there, and businesses large and small have to manage them. Economic swings like we have experienced lately help weed out those companies that had fixed costs and static infrastructures, in favor of those with more variable costs and dynamic infrastructures. When the marketplace slows down, can your business "dial down" its operations to match? And when the recession is over and business is booming again, can your business "ramp up" fast enough to take on new opportunity? With IBM's Cloud Computing, companies can minimize their fixed investments and use a variable amount of computing as business needs change dynamically.
To learn more about Dynamic Infrastructure, read the IBM [Press Release].
When I was a kid, I used to love old spy movies where they would hide a small microchip or microfiche behind the stamp on a letter or postcard. "Yeah right," I would think to myself, "how much information could that little thing possibly hold?" In their post [Bringing the "New Intelligence" Down to Earth: Intro to Semantic Web, Internet-of-Thing], my fellow IBM bloggers Jack Mason and Adam Christensen pointed me to a crazy new product called "Mir:ror" that connects to your PC or laptop.
At first, I thought it was another product spoof, like Onion News Network's video of the [Apple MacBook Wheel] that eliminates the need for a keyboard. But no, this product is real, from a company called [Violet], which makes the mir:ror, the internet-connected rabbits, and the tiny postage stamps called "ztamps" with embedded RFID chips that allow everything to be interconnected. I can see a lot of interesting uses for the ztamps. Squishing CD-ROMs or memory sticks inside presentation folders was always awkward, but these are small, flat and discreet. I don't know how many GBs of storage each ztamp holds, but they look cool, don't they?
Just another example of becoming a smarter planet!
IBM's emphasis on "Information Infrastructure" is to help organizations get the right information, to the right people at the right time. This helps them to have the right insights, make the right decisions, and develop the right innovations needed for the challenges at hand.
As the planet got smaller and flatter, IBM led the way. Now, as the planet needs to get smarter--with more efficient health care, energy distribution, financial institutions, and IT infrastructures--IBM will once again take the lead.