This blog is for the open exchange of ideas relating to IBM Systems, storage and storage networking hardware, software and services.
(Short URL for this blog: ibm.co/Pearson )
Tony Pearson is a Master Inventor, Senior IT Architect and Event Content Manager for [IBM Systems Technical University] events. With over 30 years at IBM, Tony is a frequent traveler, speaking to clients at events throughout the world.
Lloyd Dean is an IBM Senior Certified Executive IT Architect in Infrastructure Architecture. Lloyd has held numerous senior technical roles during his 19-plus years at IBM. Most recently, he has been leading efforts across the Communication/CSI market as a senior Storage Solution Architect/CTS covering the Kansas City territory. In prior years, Lloyd supported industry accounts as a Storage Solution Architect, and before that as a Storage Software Solutions specialist in the ATS organization.
Lloyd currently supports North America storage sales teams in his Storage Software Solution Architecture SME role on the Washington Systems Center team. His current focus is IBM Cloud Private, and he will be delivering and supporting sessions at Think 2019 and Storage Technical University on the value of IBM storage in this high-value solution, part of the IBM Cloud strategy. Lloyd maintains Subject Matter Expert status across the IBM Spectrum Storage software solutions. You can follow Lloyd on Twitter @ldean0558 and on LinkedIn as Lloyd Dean.
Tony Pearson's books are available on Lulu.com! Order your copies today!
Safe Harbor Statement: The information on IBM products is intended to outline IBM's general product direction and it should not be relied on in making a purchasing decision. The information on the new products is for informational purposes only and may not be incorporated into any contract. The information on IBM products is not a commitment, promise, or legal obligation to deliver any material, code, or functionality. The development, release, and timing of any features or functionality described for IBM products remains at IBM's sole discretion.
Tony Pearson is an active participant in local, regional, and industry-specific interests, and does not receive any special payments to mention them on this blog.
Tony Pearson receives part of the revenue proceeds from sales of books he has authored listed in the side panel.
Tony Pearson is not a medical doctor, and this blog does not reference any IBM product or service that is intended for use in the diagnosis, treatment, cure, prevention or monitoring of a disease or medical condition, unless otherwise specified on individual posts.
At IBM, our standard is to have a limit of 200MB per user mailbox. A few of us get exceptions and have up to a 500MB limit because of the work we do. By comparison, my personal Gmail account is now up to 6500MB. When this limit is exceeded, you are unable to send out any mail until it is brought down below the limit, and a request to be "re-enabled for send" is approved, a situation we call "mail jail".
The biggest culprit is attachments. Only 10 percent of emails have attachments, but those that do take up 90 percent of the total space! People attach a 15MB presentation or document, and copy the world on a distribution list. Everyone saves their notes with these attachments, and soon, the limits are blown. Not surprisingly, deduplication has been cited as a "killer app" to address email storage, exactly for this reason. If all the users have their mailboxes stored on the same deduplication storage device, it might find these duplicate blocks, and manage to reduce the space consumed.
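To see why deduplication helps here, consider a toy sketch of a block-level deduplicating store (a Python illustration with invented numbers, not any IBM product's actual algorithm): each chunk is stored once, keyed by its hash, so a hundred copies of the same attachment consume roughly the space of one.

```python
import hashlib
import os

class DedupStore:
    """Toy block-level deduplicating store: one physical copy per unique chunk."""

    def __init__(self, chunk_size=4096):
        self.chunk_size = chunk_size
        self.chunks = {}          # SHA-256 digest -> chunk bytes
        self.logical_bytes = 0    # total bytes written by all users

    def write(self, data):
        """Store data; return the list of chunk digests (the 'recipe')."""
        recipe = []
        for i in range(0, len(data), self.chunk_size):
            chunk = data[i:i + self.chunk_size]
            digest = hashlib.sha256(chunk).hexdigest()
            self.chunks.setdefault(digest, chunk)   # keep only the first copy
            recipe.append(digest)
        self.logical_bytes += len(data)
        return recipe

    def physical_bytes(self):
        return sum(len(c) for c in self.chunks.values())

store = DedupStore()
attachment = os.urandom(1_000_000)        # a ~1MB pretend presentation
for _ in range(100):                      # 100 recipients save the same file
    store.write(attachment)
print(store.logical_bytes // store.physical_bytes())   # 100
```

One hundred mailboxes each "hold" the megabyte attachment, but only one megabyte of unique chunks actually lands on disk, which is exactly the effect cited for email storage.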
A better practice would be to avoid this in the first place. Here are the techniques I use instead:
Point to the document in a database
We are heavy users of Lotus Notes databases. These can be encrypted and controlled with Access Control Lists (ACLs) that determine who can create or read documents in each database. Annually, all the database ACLs are validated so that people can confirm that they continue to have a need-to-know for the documents in each database. Sending a confidential document as a "document link" to a database entry takes only a few bytes, and all the recipients that are already on the ACL have access to that document.
Point to the document on a web page
If the document is available on an internal or external website, just send the URL instead of attaching the file. Again, this takes only a few bytes. We have websites accessible to all internal employees, websites that can be accessed only by a subset of employees with special permissions and credentials based on their job role, and websites that are accessible to our IBM Business Partners.
In my case, if I happen to have a blog posting that answers a question or helps illustrate an idea, I will send the "permalink" URL of that blog post in my email.
Point to the document on shared NAS file system
Internally, IBM uses a "Global Storage Architecture" (GSA) based on IBM's Scale-Out File Services [SoFS], with everyone initially getting 10GB of disk space to store files, with the option to request more if needed. The system has policy-based support for placing and migrating older data to tape to reduce actual disk usage, and combines a clustered file system with a global name space.
My SoFS space is now up to 25GB, and I store a lot of presentations and whitepapers that are useful to others. A URL with "ftp://" or "http://" is all you need to point to a file in this manner, and greatly reduces the need for attachments. I can map my space as "Drive X:" on my Windows system, or as an NFS mount point on my Linux system, which allows me to easily drag files back and forth.
Departments that don't need to offer "worldwide access" use NAS boxes instead, such as the IBM System Storage N series.
Pointing to files in a shared space, rather than as attachments in email, may take some getting used to. I've had a few recipients send me requests such as "can you send that as an attachment (not a URL)" because they plan to read it on the airplane or train, where they won't have online connectivity.
"Have you invested in the latest and greatest in collaboration technology but still feel people are not collaborating? How many Microsoft Sharepoint servers and IBM Quickplaces remain relatively untouched or only used by the organization's technorati? I think it's a big problem because this narrow view of collaboration starts to get the concept a bad name: "yeah, we did collaboration but no one used it." And then there's the issue of the vast amount of money wasted and opportunities lost. We can't afford to lose faith in collaboration because the external environment is moving in a direction that mandates we collaborate. The problems we face now and into the future will only increase in complexity and it will require teams of people within and across organizations to solve them."
Well, sending pointers instead of attachments works for me, and has kept me out of "mail jail" for quite some time now.
Well, tomorrow is the Winter solstice, at least for those of us in the Northern hemisphere of the planet. As often happens, I have more vacation days left than I can physically take before they evaporate at the end of the year, so next week I will be off, going to see movies like the new ["Golden Compass"] or perhaps read the latest book from [Richard Dawkins].
Next week, I suspect some of the kids on my block will be playing with radio-controlled cars or planes. If you are not familiar with these, here's a [video on BoingBoing] that shows Carl Rankin's flying machines that he made out of household materials.
Which brings me to the thought of scalability. For the most part, the physics involved with cars, planes, trains or sailboats apply at the toy-size level as well as the real-world level. One human operator can drive/manage/sail one vehicle. While I have seen a chess master play seven opponents on seven chess boards concurrently, it would be difficult for a single person to fly seven radio-controlled airplanes at the same time.
How can this concept be extended to IT administrators in the data center? They have to deal with hundreds of applications running on thousands of distributed servers. In a whitepaper titled [Single System Image (SSI)], the three authors write:
A single system image (SSI) is the property of a system that hides the heterogeneous and distributed nature of the available resources and presents them to users and applications as a single unified computing resource.
IBM has some offerings that can help towards this goal.
Even in the case where your vehicle is being pulled by eight horses (or eight reindeer?), a single operator can manage it, holding the reins in both hands. In the same manner, IBM has invested a lot of research into supercomputers, where hundreds of individual servers all work together towards a common task. The operator submits a math problem, for example, and the single system image takes care of the rest, dividing the work up into smaller chunks that are executed on each machine.
When done with IBM mainframes, it is called a Parallel Sysplex. The world's largest business workloads are processed by mainframes, and connecting several together to work in concert makes this possible. In this case, the tasks are typically just single transactions; there is no need to divide them up further, just balance the workload across the various machines, with shared access to a common database and storage infrastructure so they can all do the work equally.
Last August, in my post [Fundamental Changes for Green Data Centers], I mentioned that IBM consolidated 3900 Intel-based servers onto 33 mainframes. This not only saves lots of electricity, but makes it much easier for the IT administrators to manage the environment.
Parallel Sysplex configurations often require thousands of disk volumes, which would be quite a headache to deal with individually. With DFSMS, IBM was able to create "storage groups", where a few groups hold all the data. You might have reasons to separate some data from other data, putting them in separate groups. An IT administrator can handle a handful of storage groups much more easily than thousands of disk volumes. As businesses grow, there is more data in each storage group, but the number of storage groups remains flat, so an IT administrator can manage the growth easily.
IBM System Storage SAN Volume Controller (SVC) is able to accomplish this for other distributed systems. All of the physical disk space assigned to an SVC cluster is placed into a handful of "managed disk groups". As the system grows in capacity, more space is added to each managed disk group, but the number of groups stays small, so IT administrators can continue to manage the environment easily.
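The "manage a few groups instead of thousands of volumes" idea behind both DFSMS storage groups and SVC managed disk groups can be sketched in a few lines of Python. The volume names and tiering policy below are invented purely for illustration:

```python
from collections import defaultdict

def assign(volumes, policy):
    """Map thousands of volumes into a handful of groups via a policy."""
    groups = defaultdict(list)
    for vol in volumes:
        groups[policy(vol)].append(vol)   # the group, not the volume, is managed
    return groups

# 2000 pretend volumes: every tenth one on SSD, the rest on spinning disk
volumes = [{"name": f"vol{i:04d}", "tier": "ssd" if i % 10 == 0 else "hdd"}
           for i in range(2000)]

groups = assign(volumes, policy=lambda v: v["tier"].upper() + "_GROUP")
print({g: len(vs) for g, vs in groups.items()})
```

However many volumes the business adds, the administrator still sees only two groups; growth lands inside the groups, not in the number of objects to manage.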
The new IBM System Storage Virtual File Manager (VFM) is able to aggregate file systems into one global name space, again simplifying heterogeneous resources into a single system image. End users have a single drive letter or mount point to deal with, rather than many to connect to all the disparate systems.
Lastly, we get to the actual management aspect of it all. Wouldn't it be nice if your entire data center could be managed by a hand-held device with two joysticks and a couple of buttons? We're not quite there yet, but last October we announced the [IBM System Storage Productivity Center (SSPC)]. This is a master console that has a variety of software pre-installed to manage your IBM and non-IBM storage hardware, including SAN fabric gear, disk arrays and even tape libraries. It lets the storage admin see the entire data center as a single system image, displaying the topology in a graphical view that can be drilled down using semantic zooming to look at or manage a particular device or component.
Customers are growing their storage capacity on average 60 percent per year. They could do this by having more and more things to deal with, and gripe about the complexity, or they can try to grow their single system image bigger, with interfaces and technologies that allow the existing IT staff to manage it.
Technology Review has a great 6-minute video showing how the PowerTune system works in the ['self-tuning' guitar].
As with any self-tuning equipment, there are three essential parts.
Measurement. In the case of the guitar, small sensors identify the current note based on string tension.
Response. Based on the measurement, the self-tuning system either decides that there is no more to do, or to take specific action. In the case of this guitar, the action would be to loosen or tighten the string.
Action. The action taken is the one expected to get closer to the desired result. In this case, tiny motors inside the handle turn the thumbscrews to loosen or tighten the strings accordingly.
These are part of a "closed-loop design", as it is called in [Control Theory]. After the action in step 3 is taken, the system goes back to step 1, takes a new measurement, and determines a new response. This could mean that the string is tightened and loosened by ever smaller amounts until it is close enough to the desired accuracy, in this case an impressive two [cents].
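A closed loop like this is easy to sketch in code. The following Python toy runs the measure/respond/act cycle until the error is within the two-cent goal; the pitch model, gain, and numbers are all invented for illustration, not the guitar maker's actual algorithm:

```python
IN_TUNE_TENSION = 100.0   # pretend tension at which the string is in tune

def measure(tension):
    """Pretend sensor: map string tension to pitch error in cents."""
    return (tension - IN_TUNE_TENSION) * 2.0

def tune(tension, target_accuracy=2.0, gain=0.25):
    """Measure, respond, act -- then loop until within the accuracy goal."""
    for _ in range(50):
        error = measure(tension)           # 1. Measurement
        if abs(error) <= target_accuracy:  # 2. Response: in tune, stop
            break
        tension -= gain * error            # 3. Action: turn the peg
    return tension

final = tune(tension=140.0)
print(abs(measure(final)))   # within the two-cent goal
```

Each pass through the loop roughly halves the remaining error, so the corrections get smaller and smaller until the loop decides the string is close enough, just as described above.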
On the server side, IBM has offered this for years. For example, for z/OS applications on System z mainframes, the [Workload Manager (WLM) offers a "goal mode"] that allows you to set desired results for your business applications, for example, how quickly they respond in processing transactions. WLM measures the response time of the transactions, determines an appropriate response if any, and takes action to shift processor cycles (MIPS) or RAM to help out the workloads with the highest priority, in some cases stealing cycles and RAM away from lower-priority tasks.
For storage, we have IBM TotalStorage Productivity Center. It can scan for file systems over 90 percent full, for example, determine an appropriate response based on policies, and take action to expand the file system to a larger size. This may involve dynamically expanding the LUN that the file system sits on, a feature available on IBM SAN Volume Controller, DS8000 series, DS4000 series and N series disk systems. This is the kind of closed-loop design that can help eliminate those pesky phone calls at 3am.
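The policy logic can be illustrated with a small Python sketch. This is a simplification, not TotalStorage Productivity Center's actual code; the `expand` callback stands in for the product-specific LUN and file system expansion:

```python
THRESHOLD = 0.90   # act when a file system is more than 90 percent full

def check_and_respond(filesystems, expand):
    """Closed-loop sketch: measure utilization, decide, act.

    `filesystems` maps a mount point to (used, total) capacity;
    `expand` is a stand-in callback for whatever actually grows the
    underlying LUN and file system (product-specific, not shown here).
    """
    expanded = []
    for path, (used, total) in filesystems.items():
        utilization = used / total          # 1. Measurement
        if utilization > THRESHOLD:         # 2. Response: over threshold?
            expand(path)                    # 3. Action: grow it
            expanded.append(path)
    return expanded

demo = {"/data1": (95, 100), "/data2": (40, 100)}
print(check_and_respond(demo, expand=lambda path: None))   # ['/data1']
```

Run on a schedule, a loop like this quietly grows the one file system that crossed the threshold and leaves the rest alone, which is the closed-loop behavior that replaces the 3am phone call.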
But why focus on just storage alone? Combining servers and storage into a higher-level closed-loop design is accomplished with [IBM Tivoli Intelligent Orchestrator] and [IBM Tivoli Provisioning Manager]. In this combo, Orchestrator measures and responds, and can invoke Provisioning Manager workflows to take action. Workflows are like scripts on steroids. Unlike normal scripts, which run on a single machine, workflows can communicate with multiple servers, storage and even networking gear to take the appropriate actions on each of those machines, like installing updated software, carving a new LUN, or defining a new SAN zone.
The products are well integrated with TotalStorage Productivity Center for the storage aspects.
Today I spoke at the IBM Think Green Roadshow in Phoenix, Arizona. This is just one stop of a 15-city tour to help make people aware of Green data center issues. Here is the schedule for the remaining cities. Contact your local IBM rep for details.
Victor Ferreira was our moderator and host. He is the site-level executive for the 2000 IBM employees in the Phoenix area, and manages the Public Sector for our Western region.
The first speaker was Dave McCoy, IBM principal in our Data Center services group. He explained IBM's Project Big Green and the Energy Efficiency Initiative, and went into detail on how IBM can act as general contractor to design, plan and build the ideal Green Data Center for you. IBM can also retrofit existing buildings with new technologies like stored cooling, optimized airflow assessments, and modular data center floorspace. Not related to energy, but still important to our environment, is IBM Asset Recovery Services, through which IBM can take all those old PC monitors, keyboards and other outdated equipment, refurbish them or melt them down to recapture useful metals and plastics, and dispose of the rest in an environmentally friendly, non-toxic manner.
I was the second speaker, covering "How to get it done". While Dave covered the issues and technologies available, I explained how to put it all into practice. This includes IT systems assessments, health audits, and thermal profiling. Using server and storage virtualization, you can increase resource utilization and reduce energy waste. I also covered IBM's Cool Blue product line, which includes the IBM PowerExecutive software to monitor your IT environment, and the "Rear Door Heat Exchanger" that uses chilled water to remove as much as 60% of the heat coming out of the back of a server rack, greatly reducing hot spots on the data center floor and allowing you to run the entire room at warmer, less expensive temperatures.
On the server side, I covered IBM's System z mainframe and the BladeCenter as examples of how innovative technologies can be used to run more applications with less energy. The new System p570, based on the energy-intelligent POWER6 processor, has twice the performance for the same amount of power as its POWER5 predecessor. On the storage side, I explained how Information Lifecycle Management (ILM), storage virtualization, and the use of a blended disk and tape environment can greatly reduce energy costs.
Reps from our many technology partners Eaton, APC, Schneider Electric, Liebert, and Anixter were there to support this event.
The session ended with a Q&A panel with Dave McCoy, myself, and Greg Briner from IBM Global Financing. IBM is able to offer creative "project financing" that can often match the actual monthly savings, resulting in net zero cost to your operational budget, with payback periods as short as 2.5 years.
To learn more about IBM's efforts to help clients create "Green" data centers, click Green Data Center.
This week, I presented at the "IBM TechU Comes to You" event in beautiful Dubai, United Arab Emirates. This was a three-day event, so here is my recap of Day 3.
IBM Spectrum Control Family - The right products for your storage management needs
Mike Griese (IBM Spectrum Storage Evangelist) presented the IBM Spectrum Control family. There are now four editions of IBM Spectrum Control:
IBM Spectrum Control Based Edition -- comes included with specific IBM Storage products to provide Cloud APIs such as those required for VMware.
IBM Spectrum Control Standard Edition -- Includes "Based Edition" and adds monitoring, provisioning and troubleshooting for IBM and non-IBM storage devices. Also includes IBM Copy Services Manager for select IBM storage devices.
IBM Spectrum Control Advanced Edition --Includes "Standard Edition" and adds Spectrum Protect Snapshot to take application-aware snapshots, and the Storage Analytics Engine to optimize data placement.
IBM Spectrum Control Storage Insights -- A "Software-as-a-Service" subset of "Advanced Edition" for IBM storage products.
Implementation of Incremental Forever Backup and Deduplication with Spectrum Protect
This was a combination of an overview of IBM Spectrum Protect plus an update of the latest v7.1.5 release. For those who use alternative backup software like Veritas NetBackup or Commvault Simpana, I explained how to implement "Incremental Forever" backup, which has been shown repeatedly by analysts and studies to be far more efficient than traditional backup methods like Full+Incremental or Full+Differential.
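A quick back-of-envelope calculation shows why Incremental Forever moves so much less data. The figures below (10TB of data, a 2 percent daily change rate, a 28-day month) are assumed for illustration, not measured results:

```python
# Assumed numbers, for illustration only: 10 TB of primary data,
# 2% of it changing per day, over a 28-day month.
DATA_TB, CHANGE_RATE, DAYS = 10.0, 0.02, 28

# Traditional weekly full + daily incrementals: a full backup every
# 7th day, and only the changed data on the other days.
full_plus_incr = sum(
    DATA_TB if day % 7 == 0 else DATA_TB * CHANGE_RATE
    for day in range(DAYS)
)

# Incremental Forever: one initial full, then only changed data each day.
incr_forever = DATA_TB + DATA_TB * CHANGE_RATE * (DAYS - 1)

print(round(full_plus_incr, 1), "TB vs", round(incr_forever, 1), "TB")
```

Under these assumptions, the traditional scheme moves 44.8TB in a month against 15.4TB for Incremental Forever, roughly a threefold difference, and the gap widens as the change rate drops.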
For those who may be using an earlier version of IBM Tivoli Storage Manager, I presented the new "Dedupe 2.0" features, including the new concept of "Container Pools" that can either be "Directory Pools" on SAN or NAS-based disk storage, or "Cloud Pools" on object storage, like IBM Cleversafe or IBM SoftLayer.
Spectrum Control Storage Insights - Redefining storage management simplicity
Mike Griese presented the newest member of the IBM Spectrum Control family. Storage Insights is a Software-as-a-Service offering that was recently reduced in price: only $250 per month for the first 50TB. Increasing amounts of monitored storage capacity are tiered at lower and lower prices.
Real-time Compression in Database environment
When it comes to compression, should you compress at the database level, or in the storage device? Database management systems like IBM DB2 and Oracle DB offer row-level or page-level compression.
IBM Real-time Compression is available in IBM XIV and all of the latest Spectrum Virtualize products: SAN Volume Controller, Storwize V7000, Storwize V7000 Unified, Storwize V5000, FlashSystem V9000, as well as any of these in the VersaStack converged system from IBM and Cisco.
IBM ran tests comparing four configurations: an uncompressed database, database-level compression, compression on IBM Storwize V7000 with IBM Real-time Compression, and both database and storage-based compression together. The results might surprise you!
I explained the pros and cons of each method of compression, and why you might choose one or the other.
Be Ready for Object Storage with CleverSafe
Eric Forestier (IBM Montpelier) presented a quick overview of IBM Cleversafe, then did a live demo of the PUT and GET features. For example, he used the Linux CURL command to upload a video file as an object in his IBM Cleversafe cluster back in France. Then he used a regular browser to stream the video back.
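The PUT/GET flow Eric demonstrated with curl follows the plain HTTP object pattern. Here is a self-contained Python sketch that stands up a toy in-process object store and exercises the same two verbs; the real demo, of course, ran against a Cleversafe cluster's S3-style interface, not this stand-in:

```python
import http.server
import threading
import urllib.request

objects = {}   # path -> object bytes (our toy "object store")

class ObjectHandler(http.server.BaseHTTPRequestHandler):
    def do_PUT(self):
        length = int(self.headers["Content-Length"])
        objects[self.path] = self.rfile.read(length)   # store the object
        self.send_response(201)
        self.end_headers()

    def do_GET(self):
        body = objects.get(self.path)
        if body is None:
            self.send_response(404)
            self.end_headers()
            return
        self.send_response(200)
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)                          # stream it back

    def log_message(self, *args):   # keep the demo output quiet
        pass

server = http.server.HTTPServer(("127.0.0.1", 0), ObjectHandler)
threading.Thread(target=server.serve_forever, daemon=True).start()
base = f"http://127.0.0.1:{server.server_address[1]}"

# PUT an object, then GET it back -- the same flow as, roughly:
#   curl -X PUT --data-binary @video.mp4 https://<cluster>/bucket/video.mp4
req = urllib.request.Request(f"{base}/bucket/demo.txt",
                             data=b"hello object storage", method="PUT")
urllib.request.urlopen(req)
print(urllib.request.urlopen(f"{base}/bucket/demo.txt").read())
```

The point of the demo was exactly this simplicity: any HTTP client, from `curl` to a web browser, can write and read objects with nothing more than PUT and GET.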
Was Dubai too far away for you to attend? Want to hear the latest technical information about IBM Storage, but not willing to wait until the big [IBM Edge Conference] this September? We will have several more "IBM TechU Comes to You" events in May and June.
Twenty years ago, I flew to Atlanta for the semi-annual SHARE conference. I was a lead architect for DFSMS, the storage management software for mainframe servers. When I got to the hotel, I realized that I had forgotten to pack my saline solution for my contact lenses. I went to the hotel gift shop, and picked the first one I found. I took my contacts in the solution and went to bed.
The next morning, I put on my contacts, got dressed, and participated in meetings. One of my colleagues noticed my eyes were quite red, and suggested I switch from contact lenses to glasses. I went back to my hotel room, saw to my horror that what I thought was saline solution was actually hydrogen peroxide intended for hard lenses. When I removed the lenses, all I could see was white light.
I managed to find my way to the elevator, and feel for the button with the star that indicated the lobby on the ground floor. I asked a hotel staffer to call me an ambulance, but instead, they put me in a cab, and sent me to Emory Hospital. On arrival, all I could do was hand over my wallet to my cabbie, and let him take out what he felt was fair, since I could not see him, the meter, or his license number.
After bumping my knees into dozens of cars in the parking lot, I finally made it to the ER, only to have the receptionist give me a form to fill out and a pen. At this point, I lost it. I gave her my wallet and said that any information she may need should be in there.
Thankfully, a doctor noticed this exchange, and took care of me right away. I had chemically burned off both corneas. He injected some green fluid into both eyeballs, and sent me off in a cab to the pharmacy. At least both my eyes were bandaged in gauze, so people were kind enough to help me get to the counter for my painkillers, Percocet.
The pharmacist provided me the pills, and warned me NOT to operate any heavy machinery under the influence of this medication. Seriously? I can't see, both eyes covered, and he tells me that?
I got back to the hotel, got ready for bed, took the pills and brushed my teeth. I woke up the next morning on the bathroom floor, still clutching the toothbrush, with vertical and horizontal lines across my right cheek made by the one-inch tiles of the bathroom floor. These pills really knocked me out.
That day, I had to present a full hour in front of hundreds of people. I had a colleague flip my transparencies for me, while I spoke to each one, my eyes still covered in gauze. That evening, I was one of the experts on the panel for a "Birds of a Feather", or BOF session, answering a variety of questions. People could see that I was blind, but I could still hear the questions, and I could still answer them as well.
If you are going to Edge 2013 in Las Vegas, please consider attending my BOF session on Security for PureSystems, System x and Storage products, scheduled for Thursday afternoon, June 13. I will be moderating a distinguished panel of experts to answer your questions! I have listed them here alphabetically:
Jack Arnold, US Federal. Jack has worked decades in the storage industry, and will provide insight into security issues related to the government.
Tom Benjamin, Development Manager for Key Lifecycle Management and Java Cryptography. Tom will bring his expertise in both TKLM and ISKLM for managing encryption keys, and how to communicate these between security and storage administrators.
Paul Bradshaw, Chief Storage Architect for Clouds. A research scientist from IBM's Almaden Research Lab, Paul will provide insight in how to deal with security issues related to private, hybrid and public cloud deployments.
Ajay Dholakia, Solution Center of Excellence. Ajay will cover server-side considerations for security deployments, including System x and PureSystems.
Jim Fisher, Advanced Technical Skills. Jim brings expertise related to deploying data-at-rest encryption.
Not sure what kind of questions to ask? Here is a series of Questions and Answers we had at a Storage event in 2011 that might give you a good idea: [2011 Storage Free-for-All].
This week, I am attending the [InterConnect Conference] in Las Vegas, Feb 21-25, 2016. This is IBM's premier Cloud & Mobile conference for the year.
Here is my recap of the lunch-time sessions Wednesday afternoon.
5663A Beyond Hyperconvergence to a Hyperscale Converged Infrastructure
Bernard "Bernie" Spang, IBM, presented. Organizations continue to face challenges with efficiently managing unprecedented volumes and varieties of data. Meanwhile, new frameworks such as Spark and Hadoop are emerging to efficiently exploit that data. These offerings have the potential to deliver significant benefits, but they can also increase data center complexity and cluster sprawl.
Bernie covered the evolution of Hyperconvergence to a Hyperscale converged technology. By extending software-defined infrastructure concepts to a converged application- and data-optimized fabric, IBM is enabling organizations to reduce costs and accelerate time to insight by efficiently storing, analyzing and protecting their data.
Hyperconvergence is the concept of running hypervisor software on storage-rich servers. Software-only versions include IBM Spectrum Accelerate and VMware VSAN, whereas pre-built systems are available from Nutanix, Simplivity and others.
But not everything is x86 or Hypervisor based. Some applications are better served on bare metal, while others might be better served on containers like Docker or LXC. IBM Spectrum Scale provides for all of these additional platforms, works on both x86 and POWER systems, and can handle storage tiering from flash to disk to tape. It can work across locations, representing any mix of on-premises and off-premises facilities.
1841A IBM Cloud Storage Options
I was pleased to have a standing-room only crowd attend my session!
The term "Cloud Storage" can be misleading. I spelled out four distinct types of storage:
Ephemeral Storage - storage that exists only as long as the Virtual Machine using it is running. This is ideal for boot volumes and temporary work space.
Persistent Storage - typically block/transactional/high-speed storage that continues to live beyond the life of the Virtual Machine.
Hosted Storage - files, documents and backup copies that are read/written in the Cloud
Reference Storage - files and objects that are written once, and never modified thereafter, such as archives, financial records, and photographs. Since the term Write-Once-Read-Many (WORM) applies only to tape and optical media, the IT Industry now uses Non-Erasable-Non-Rewriteable (NENR) to include flash and disk media protected in some manner through software to avoid tampering.
The first two I refer to as "Storage for the Computer Cloud" and the latter two I refer to as "Storage as the Storage Cloud".
I also discussed the differences between block, file and object access, and why different Cloud storage types use different access methods.
I wrapped up the session covering the various IBM storage solutions that we offer for all four Cloud Storage types.
Forrester analysts kicked off the keynote sessions for Day 1 of the Forrester IT Forum 2009 event. The theme for this conference is "Redefining IT's value to the Enterprise." Rather than focusing on blue-sky futures that are decades away, Forrester wants to present a blend of pragmatic information that is actionable in the next 90 days, along with some forward-looking trends.
If you ask CEOs how well their IT operations are doing, 75 percent will say they are doing great. However, if you dig down and ask how their companies are leveraging IT to help generate revenues, reduce costs, improve employee morale, drive profits, improve customer service, or manage risks, then the percentage drops to 30 to 35 percent.
What are the root causes of this "perception gap" in value between business and IT? Several ideas come to mind:
Some CEOs still consider IT departments as "cost centers". Rather than exploiting technology to help drive the rest of the business, they are seen as a necessary evil, an extension of the accounting department, for example.
Some CEOs consider IT's role as basically "keeping the lights on". They only notice IT when the lights go out, or other business outages caused by disruptions in IT.
IT departments measure themselves in technology terms, not business terms. CEOs and the rest of the senior management team may not be "tech savvy", and the CIO and IT directors may not be "business savvy", resulting in failure to communicate IT's role and value to the rest of the business.
This conference is focused on CIOs and IT professionals, and how they can bridge the tech/business gap. The first two executive keynote presentations emphasized this point.
Bob Moffat, Senior VP and Group Executive, IBM
Bob Moffat (my fifth-line manager, or if you prefer, my boss's boss's boss's boss's boss) is the Senior VP and Group Executive of IBM's Systems and Technology Group, which manufactures storage and other hardware. He presented how IBM is helping our clients deploy smarter solutions. Globalization has changed world business markets, the reach of information technology, and our clients' needs. To support that, IBM is focused on making the world a smarter planet: instrumented with appropriate sensors, interconnected over converging networks, and intelligent enough to provide visibility, control and automation.
It's time to rethink IT in light of these new developments, to think about IT in client terms, with business metrics. Bob gave several internal and customer examples, here's one from the City of Stockholm:
Covering nine square miles of Stockholm, Sweden, IBM led [the largest project of its kind] to address traffic congestion in Europe. To reduce congestion caused by 300,000 vehicles, the City of Stockholm enacted a "congestion fee" with real-time recognition of license plates and a Web infrastructure to collect payments. The analytics, metrics and incentives have paid off. Since August 2007, traffic is down 18 percent, travel time on inner streets is reduced, and "green" vehicles are up 9 percent.
In addition to smarter traffic, IBM has initiatives for smarter water, smarter energy, smarter healthcare, smarter supply chain, and smarter food supply.
Dave Barnes, Senior VP and CIO, United Parcel Service (UPS)
Dave Barnes must act as the "trusted advisor" to the rest of the senior management team. UPS delivers packages worldwide. They put sensors on all of the vehicles, not just to know how fast they were driving, but also how often they drove in reverse gear, and sensors on the engines to determine maintenance schedules. Analytics found that driving in reverse was the most dangerous, and by providing this information to the drivers themselves, the drivers were able to come up with their own innovative ways to minimize accidents. This is one role of IT: to provide employees the information they need to be better at their own jobs.
Dave also mentioned the importance of collaborating across business units. Their "Information Technology Steering Committee (ITSC)" has 15 members, of which only three are from the IT department. This helped deploy social media initiatives within UPS. For example, Twitter has been adopted so that senior management can get unfiltered customer feedback. This is perhaps another key role of IT, to flatten an organization from cultural hierarchies that prevent top brass up in the ivory tower from hearing what is going wrong down on the street. Too often, a customer or client complains to the nearest employee, and this may or may not get passed up accurately along the chain of command. Twitter allowed executives to see what was going on for themselves.
Dave also covered the "Best Neighbor" approach. If you were going to build a deck in your back yard, you might ask your neighbors that have already done this, and learn from their experience. Sadly, this does not happen enough in IT. To address this, UPS has a "Tech Governance Group" that focuses on business process across the organization. For example, they improved "package flow", eliminating 100 million driving miles over the past few years.
Lastly, he mentioned that many technologists are "loners". They have a few like that, but try to hire techies who look to team across business units instead. Likewise, they try to hire business people who are somewhat tech savvy. For example, they have encouraged business employees to write their own reports, rather than requesting new reports to be developed by the IT department. The end result: the business people get exactly the reports they want, faster than waiting for IT to do it. Another role for IT is to provide end users the tools to make their own reports.
(Dave didn't mention what tools these were, but it sounded like the Business Intelligence and Reporting Tools [BIRT] that IBM uses.)
These two sessions were a great one-two punch to the audience of 600 CIOs and IT professionals. First, IBM lays the groundwork for what needs to be done. Then, UPS shows how they did exactly that, adopting a dynamic infrastructure and getting great results. This is going to be an interesting week!
[R&D Magazine] recently conducted a survey that prompted readers to identify the world's most successful Research and Development (R&D) companies. The results are in: IBM was recognized as the best R&D company in the world when several different categories were evaluated, including:
R&D spending as a percentage of revenue
the number of patents
new products in development
The survey considered additional information on more than 130 companies such as data on intellectual property, community service and financial growth trends. Readers were also asked five distinct questions, including the following:
Which company would you like to work for, based on its R&D?
What companies have the most improved R&D in the past five years?
What companies are the leaders in R&D?
Which company's R&D has the strongest influence on society?
Which company's R&D is the most proactive in high tech challenges?
Since it is often 5 to 15 years between when a scientist in one of our many research labs comes up with a clever idea and when it becomes a market success, it is good to have external recognition for the R&D efforts we are doing right now. Here is a link to a [four-page PDF] of the magazine article.
Take, for example, IBM's recent breakthrough in silicon photonics. Supercomputers that consist of thousands of individual processing nodes, typically running Linux on dual-core or quad-core processors, connected by miles of copper wires, could one day fit into a laptop PC. And while today's supercomputers can use the equivalent energy required to power hundreds of homes, these future tiny supercomputers-on-a-chip would expend the energy of a light bulb, so this solution is more "green" for the environment. According to the [IBM Press Release]:
The breakthrough -- known in the industry as a silicon Mach-Zehnder electro-optic modulator -- performs the function of converting electrical signals into pulses of light. The IBM modulator is 100 to 1,000 times smaller in size compared to previously demonstrated modulators of its kind, paving the way for many such devices and eventually complete optical routing networks to be integrated onto a single chip. This could significantly reduce cost, energy and heat while increasing communications bandwidth between the cores more than a hundred times over wired chips.
“Work is underway within IBM and in the industry to pack many more computing cores on a single chip, but today’s on-chip communications technology would overheat and be far too slow to handle that increase in workload,” said Dr. T.C. Chen, vice president, Science and Technology, IBM Research. “What we have done is a significant step toward building a vastly smaller and more power-efficient way to connect those cores, in a way that nobody has done before.”
Today, one of the most advanced chips in the world -- IBM’s Cell processor which powers the Sony Playstation 3 -- contains nine cores on a single chip. The new technology aims to enable a power-efficient method to connect hundreds or thousands of cores together on a tiny chip by eliminating the wires required to connect them. Using light instead of wires to send information between the cores can be 100 times faster and use 10 times less power than wires.
This week, I am attending the [InterConnect Conference] in Las Vegas, Feb 21-25, 2016. This is IBM's premier Cloud & Mobile conference for the year.
4955A IBM and Box: Delivering Hybrid Solutions for Enterprise Content Management
Rich Howarth, IBM VP of Enterprise Content Management, and Rand Wacker, Vice President at Box, co-presented this session on the [IBM-and-Box partnership], which integrates content management, social and analytics products with the Box cloud content management offering. The goal is to enable enterprise customers to deploy hybrid solutions that leverage the best of their existing on-premise technologies along with new cloud technologies.
IBM and Box are partnering to re-imagine content management, case management and governance in the cloud. For example, IBM StoredIQ, which scans various data sources to find documents and evidence needed to defend against lawsuits, can now be run against files uploaded to Box.
On a personal note, the IBM Tucson Executive Briefing Center where I work now uses Box to upload presentation files that are then sent to the client attendees.
6524A The Role of Tape in a Cloud-Based World for Economical and Secure Data Retention
This was a 50/50 session. The first half was presented by Shawn Brume, IBM, who covered the Linear Tape File System (LTFS) and IBM Spectrum Archive.
Like the cloud, tape has made great strides -- evolving independently in capacity, durability and data access capability while maintaining its economic benefits. As a result, today's tape is just as well suited to cloud service providers as it is to the enterprises and midsize organizations that rely on it to support their production and data protection strategies.
If a cloud service provider does not use tape, the provider and its customers are almost guaranteed to face a higher long-term cost outlay than necessary, and a disk-only MSP model puts their oldest, most compliance-sensitive data at risk. See how incorporating tape into your storage strategy can reduce costs and improve MSP margins.
How does tape compare to disk for cloud providers? A [Zettabyte] of data would cost $41 billion per year on disk, but only $8 billion per year on tape. Powering a zettabyte of data requires 1.2 gigawatts for disk, but only 300 megawatts for tape.
For files that require a tape mount, the time to first byte averages 45 seconds, with a worst case of around 75 seconds. After that, tape can stream data as fast as the Internet can deliver it, so performance is not an issue beyond first-byte access.
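Those economics and latency figures can be sanity-checked with a quick back-of-envelope model. Here is a minimal sketch; the per-zettabyte costs and the 45-second time to first byte come from the session, while the example file size and network bandwidth below are hypothetical:

```python
# Back-of-envelope model of the tape-vs-disk numbers quoted in the session.
# The $41B/$8B per-zettabyte-per-year costs and the 45-second average time
# to first byte come from the talk; file size and bandwidth are made up.

ZETTABYTE_PB = 1_000_000  # 1 ZB = 1,000,000 PB

def annual_cost_per_pb(cost_per_zb_usd: float) -> float:
    """Annual storage cost for one petabyte, derived from the per-ZB figure."""
    return cost_per_zb_usd / ZETTABYTE_PB

def tape_retrieval_seconds(file_gb: float, net_gbps: float,
                           first_byte_s: float = 45.0) -> float:
    """Estimated retrieval time: mount/seek delay plus streaming time.

    After the first byte, tape streams as fast as the network delivers,
    so transfer time is bounded by network bandwidth.
    """
    transfer_s = (file_gb * 8) / net_gbps  # GB -> gigabits / (Gbit/s)
    return first_byte_s + transfer_s

disk_pb = annual_cost_per_pb(41e9)   # ~$41,000 per PB per year
tape_pb = annual_cost_per_pb(8e9)    # ~$8,000 per PB per year
print(f"disk: ${disk_pb:,.0f}/PB/yr, tape: ${tape_pb:,.0f}/PB/yr "
      f"({disk_pb / tape_pb:.1f}x)")
print(f"100 GB file over 1 Gbit/s: {tape_retrieval_seconds(100, 1):.0f} s")
```

Scaled down this way, disk works out to roughly $41,000 per petabyte per year versus $8,000 for tape, about a 5x gap; and a 100 GB restore over a 1 Gbit/s link spends only 45 of its roughly 845 seconds waiting on the mount.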
The second half was presented by Michael Piltoff, from value-added reseller Champion Solutions Group, covering their latest product, EchoLeaf. It runs on Windows or Linux, attaches to any IBM tape library, and exports the files on those cartridges over NFS or CIFS/SMB.
In other words, the entire library appears as a single mount point or drive letter, and each tape cartridge appears as a sub-directory. This uses IBM Spectrum Archive Library Edition under the covers.
4759A Cloud Storage Success: MSPs and Enterprises Reveal their Secrets
How do you distinguish fact from fiction when it comes to claims made by vendors about storage for cloud? Eric Herzog, IBM Vice President Marketing for IBM Storage Systems, served as emcee for a panel of experts using IBM Storage solutions across different industries for their Hybrid Cloud deployments.
The panel shared their experiences in using various technologies to get the most out of their private and hybrid clouds, discussed how they are building out their next-gen data centers to cope with today's business needs, and talked about how they are using flash and software-defined storage to position themselves to succeed in the future.
On the panel were:
Richard Spurlock, Cobalt Iron, using PB of storage on Spectrum Scale and Cleversafe
Paul Rafferty, IBM Silverpop, using Spectrum Accelerate with different Cloud providers
Johnny Oldenburg, Tieto Sweden AB, using SVC, Storwize V7000 and FlashSystem
Keith Dobbins, Time Warner Cable/Navisite, over 30 fully-populated XIV storage systems
Here were some of the nuggets of wisdom:
Eliminate the debate between private or public cloud. Consider everything to be a unique shade of Hybrid Cloud.
Get the network right; in the cloud, all data and management control flows through the network.
Take an "Outside-In" approach, focusing on the business problems being solved, rather than trying to exploit specific technologies.
Workloads are unpredictable in the cloud, and clouds can sometimes respond unreliably to workload changes. Partner with vendors like IBM to provide the support and scalability to handle the unexpected.
Ensure that you comply with government and industry regulations. For example, Payment Card Industry Data Security Standard [PCI-DSS] for credit card transactions.
Use VMware Storage vMotion and VVols to migrate data from one cloud to another.
Software defined network (SDN) and Software Defined Storage (SDS) greatly automate the provisioning process, pushing many storage admin tasks down to NOC personnel.
Use tools like Spectrum Control to provide a single-pane-of-glass management of your entire environment.
Build abstraction layers at touch points to avoid being impacted by external changes, and use documented reference architectures to ensure success.
Educate your clients and end-users on what is possible, and what is probable, in the Cloud.
Use "Flash Cache" technologies, such as IBM XIV, Oracle, Spectrum Scale, and VMware.
Analytics can help with "data rationalization", which identifies the business value of the data.
Object Store is a first-class citizen and should seriously be considered for new projects.
5467A My Data is Out of Control! Managing the Lifecycle of Your Data with "Big Storage" Cloud Archive
Jeff Karmiol and Quaid Nasir, both from IBM, presented a technology preview of a deep archive to be launched later this year.
A staggering 80 percent of data is never touched after 90 days of capture or creation. However, the data may need to be kept for business, compliance or regulatory reasons.
"Big Storage" offers cloud storage for customers who need to store large amounts of data and retrieve it on-demand at the lowest cost possible. This easy-to-use cloud service provides fast retrieval times with affordable, transparent pricing and retrieval rates.
This service uses standard OpenStack Swift and POSIX interfaces so you don't need to learn any new APIs. Files and objects remain visible while archived, making it easy and affordable to continue to extract business value from your archived data.
This deep archive is located in a secure, IBM-managed data center. How deep? The facility is 350 feet under a mountain, which allows tape cartridges to be kept at constant humidity and a constant 40 degrees Fahrenheit.
Multiple resiliency and data protection options will be available. The data can be part of a global namespace, with some data on premises, connected to data migrated to the archive. Data movement can be either manually-initiated or policy-managed.
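Because files remain visible through the POSIX interface, a manually-initiated migration could be driven by something as simple as a script that walks the tree and picks out files untouched for 90 days, the threshold suggested by the statistic above. A minimal sketch; the directory layout and what you do with the resulting candidate list (hand it to the archive mover, log it, etc.) are hypothetical:

```python
import os
import time

COLD_AFTER_S = 90 * 24 * 3600  # the "untouched after 90 days" threshold

def find_cold_files(root, now=None):
    """Walk a POSIX tree and return files not accessed in 90+ days.

    The returned paths are candidates for the archive tier; handing
    them to an actual data mover is left to the caller.
    """
    now = time.time() if now is None else now
    cold = []
    for dirpath, _dirnames, filenames in os.walk(root):
        for name in filenames:
            path = os.path.join(dirpath, name)
            if now - os.stat(path).st_atime > COLD_AFTER_S:
                cold.append(path)
    return sorted(cold)
```

A policy-managed setup would run the same selection on a schedule instead of by hand; the point is that no new API is involved, just ordinary filesystem calls.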
7256: Blogging 301: The Art of Opinion
"Turbo" Todd Watson and I started blogging 10 years ago, and we have both been ranked among the top-10 bloggers for IBM. His earlier sessions in the series covered the basics of blogging. This session was a deeper dive into best writing practices and structures for being confident, engaging, and convincing in your writing.
Here are some of his bits of wisdom:
Base your opinions on facts and well-researched information.
Educate your readers, without being "preachy"
Generate interest and enthusiasm, and encourage readers to participate
Don't equivocate, pick a position or side of a debate and stick with it
Leave your reader with the next logical step, a call to action, or pointer to additional information
It seems like [only yesterday] I was talking about IBM's strategic initiatives for the New Enterprise Data Center, including the launch of asset and service management at [Pulse 2008] in Orlando, Florida.
This week, my colleagues are at [Pulse 2009] in Las Vegas, Nevada. (I'm not there this time, so stop asking all my colleagues where I am!) Obviously, a lot has changed in the last 12 months: the world's financial economy has collapsed, our delicate environment continues to unravel, and a new US President was elected to fix all that was broken by the former occupant. As a result, IBM's strategy has evolved beyond just data centers for large enterprises.
I can't think of a better time to emphasize the need for a more dynamic infrastructure. And this is not just focused on IT operations, but on smarter business infrastructure as well, as the two are now very much intertwined: everything from smarter healthcare, smarter telecom, smarter retail, smarter distribution, smarter transportation, and smarter financial services. IBM's [Dynamic Infrastructure®] is one of four strategic initiatives to help build a smarter planet.
Let's take a quick look at the key benefits:
Do you remember back when the IT department was like the accounting department in the back office, merely recording what happened in a series of transactions? Not anymore! Today, IT is front and center in most businesses, helping to generate revenue, drive innovation, and provide better customer service. We are seeing a convergence between the physical world of running a business and the digital world of IT. Intelligence is everywhere, embedded in systems and operations throughout, not just in a data center.
Imagine: only 10-15 years ago the primary concern for IT operations was the cost of hardware. Now, thanks to [Moore's law], hardware is cheaper, but other IT budget items like labor, management software, and power and cooling costs are growing faster and becoming the predominant factors. IBM recognizes that you must consider the total cost of ownership, not just the acquisition cost of new hardware. But again, this isn't just about reducing the costs of IT, but about making more effective use of IT resources to reduce costs everywhere else: in scheduling transportation, in managing manufacturing assets, and so on.
While the world feels much safer now that Barack Obama has taken over, there are still risks and threats out there, and businesses large and small have to manage them. Economic swings like we have experienced lately help weed out those companies that had fixed costs and static infrastructures, in favor of those with more variable costs and dynamic infrastructures. When the marketplace slows down, can your business "dial down" its operations to match? And when the recession is over and business is booming again, can your business "ramp up" fast enough to take on new opportunity? With IBM's Cloud Computing, companies can minimize their fixed investments and use a variable amount of computing as business needs change dynamically.
To learn more about Dynamic Infrastructure, read the IBM [Press Release].
But ITSM is more than just a better way to manage operational tasks; it is focused on the best practices of the IT Infrastructure Library (ITIL), which has been adopted by the European Union, and is now being adopted worldwide by both government agencies and private enterprises as a smart way to run your IT environment.
Of course, we've designed our solutions to apply to your entire IT environment, supporting both IBM and non-IBM equipment, so even if not all of your servers and storage come from IBM, at least your software can be. [Read More]
Today we watched Barack Obama get inaugurated as the 44th President of the United States, and he reminded all Americans that the power and strength of this country come through its diversity. To some extent, this is also what gives IBM its power and strength. While not quite the orator that President Obama is, IBM's own CFO, Mark Loughridge, gave a rousing speech about IBM's 4Q08 and year-end financial results.
In 2008, IBM was successful not just because it had a wide diversity of servers and storage hardware products, but also a diversity of software and a diversity of service offerings. And lastly, IBM sells to a diversity of clients in different industries, throughout a diversity of markets. While the current economic meltdown might have affected businesses focused on the US and other major markets, IBM did particularly well last year in growth markets, including the so-called BRIC countries (Brazil, Russia, India and China).
IBM's approach to invest in R&D and its nearly 400,000 employees for long-term success continues to pay off. Where "Cash is King", IBM can also afford all those acquisitions and strategic initiatives, positioning the company for a brighter future.
Where there are challenges, IBM finds opportunity.
IBM doesn't publicly report subset numbers on individual product lines, but we are growing, albeit with single-digit growth, on the high end with our IBM System Storage DS8000 and DS6000 series products. Single-digit growth is not "booming", but it is what we expected in this space, so it is not like we are "feeling the chill" as Robin stated. Obviously, if the U.S. market overall is doing poorly, then it must be from something else. IBM's success appears to come from organic growth in our Asia and Europe markets, and from taking market share away from the top two contenders, EMC and HDS. Here are my thoughts why:
EMC is remodeling its kitchen
Not happy with its status as the #1 disk hardware specialty shop, EMC is admirably trying to redefine itself as an ["information infrastructure"] company, buying up software companies and introducing new storage services. [Byte and Switch] reports on EMC's recent acquisitions:
EMC is the latest vendor to pin its colors to the SaaS mast, revealing its plan to offer SaaS-based archiving services during its recent Innovation Day in Boston.
EMC gave another clear indication of its SaaS intentions last month, when it spent $76 million to acquire online backup specialist Mozy.
IBM has offered [Managed Storage Services] for years through our Global Technology Services (GTS) division. Gartner recognized IBM as the #1 leader in storage services, with three times more revenue than EMC in this space.
As with a restaurant that is remodeling its kitchen, it can expect a temporary drop in revenue. If it is done right, customers will come back to a bigger, brighter restaurant. If not, the restaurant re-opens as a much smaller, lesser version of itself. Recent events this year might incent EMC to get that kitchen done quickly:
A recent [class-action lawsuit] might result in EMC's "86 percent male" sales force going to sexual harassment sensitivity training, taking time away from selling high-end storage arrays in the field. Analysts consider "high-end" boxes as those costing over $300,000 US dollars. Because of the money involved, there is a lot of competition for high-end storage, so face-to-face time with prospective customers is crucial to making the sale. Anytime any vendor is mentioned in a lawsuit (and certainly IBM has had its share in the past, as Chuck Hollis correctly points out in the comment below), priorities get shifted, and there is a potential dip in revenues.
Dell acquired EMC's rival EqualLogic. Dell resold EMC midrange storage, like CLARiiON, so this should not impact EMC's high-end storage sales. While Dell will be allowed to sell EMC until 2011, this new acquisition might mean Dell leads with the EqualLogic offerings, and that could potentially reduce EMC revenues in the midrange space.
IBM went through a similar phase in the 1990s, redefining itself from an "IT Technology" company into a "Systems, Software and Services" company. These transitions can't be done in a quarter, or even a year; they take several years. IBM lost business to EMC in the 1990s, but came back with a stronger portfolio in the 2000s, so IBM's kitchen remodeling effort appears to be paying off. We will see what happens with EMC in a few years.
HDS puts on the white lab coats
Meanwhile, HDS appears interested in taking over as the #1 disk hardware specialty shop. For years, Hitachi was the stereotypical JCM (Japanese IBM-compatible manufacturer) that made well-engineered "me, too" storage arrays. They would see what innovators like IBM and EMC were doing, and copy them. Recently, however, they seem to have changed strategy, introducing new features and functions on their high-end USP-V device, like [Dynamic Provisioning].
The problem is that customers don't want to feel like [Guinea pigs] in an experimental lab, especially with mission-critical data that they entrust to their most-available, most-reliable high-end disk storage systems. Like IBM and EMC and the rest of the major storage vendors, Hitachi has top-notch engineers making quality products, but new features scare people, and so there is a lag in the adoption of new technologies.
In our youth, we might have preferred beer with recent born-on dates, and tequila aged less than 90 days. But as we get older, we switch to drinks like wine and whiskey, aged years, not weeks. The same is true for the marketplace. New start-ups and other "early adopters" might be willing to try fresh new features and functions on their storage systems, but more established enterprises prefer storage with more mature and stable microcode. Storage admins want to leave at the end of the day knowing that the data will still be there the next morning. In tough financial times, many established companies want the technological equivalent of ["comfort food"]: nothing spicy or exotic, but simple, hearty fare that fills the belly and keeps you satisfied.
Recognizing this, IBM often introduces new features and functions on its midrange lines first, and positions them accordingly. Once customers are comfortable with the concepts, IBM can then consider moving them into the high-end lines. For example, dynamic volume expansion was introduced on the DS4000 and SAN Volume Controller first, and once proven safe and effective, was brought over to the DS8000 series. This strategy has served us well.
Well, those are my theories. If you have a different explanation of why storage vendors are not doing well in the high end, drop me a comment!
Well, this is completely off-topic, but now that I have a Bluetooth-enabled ThinkPad T60, I have been interested in this new wireless technology. I have a Bluetooth cell phone, a Bluetooth wireless headset, and my ThinkPad, and they all work together seamlessly. I am able to speak on my cell phone through my headset, listen to music and videos on my laptop through my headset, and even dial in to the IBM network through my cell phone, all without any cables!
A variation of the Wi-Fi soup-cantenna has emerged for intercepting Bluetooth signals. Check out this cool BlueSniper Rifle.
I am saddened to learn that one of my favorite comedians, [George Carlin], passed away yesterday. He was famous for a skit about "seven words" you could not say on television. A few of those came to mind in the response I got to my post [Yes, Jon, There is a mainframe that can help replace 1500 x86 servers], which attempted to provide an answer to a simple question about the IBM System z10 Enterprise Class (EC) mainframe.
Jon: So, where is the 1500 number coming from? Tony: I’ll investigate and get back to you.
My post tried to explain how IBM estimated that number. However, my fellow blogger from Sun, Jeff Savit, posted on his blog [No, there isn't a Santa Claus] in response. (If Sun's shareholders are expecting anything other than a [lump of coal] under the tree this year, they should probably read Sun's press release about their last [financial results].) A few others contacted me about this also, from a bunch of rather different angles, from reverse-engineering emulation of other companies' chipsets to my use of internal codenames. (There are now MORE than seven words I can't type in this blog!) Jon is just trying to gather information, but his [head hurts] from all of this debate.
This week I will try to clarify some of the confusion.
Did you miss your chance to attend Storage Networking World last week? IBM has some upcoming conferences that might be of interest to you.
IBM Systems Conference 2009
In this inaugural event, IBM executives, developers and industry experts reveal the latest innovations, trends and directions. Over three full days, you will see demonstrations of the technologies needed to transform your infrastructure and respond effectively in these economic times.
There will be three tracks:
IBM Systems -- Including storage, mainframe, POWER and x86 systems
Solutions for a Dynamic Infrastructure
Professional Development -- including negotiation skills, project management and TCO analysis
IBM System Storage and Storage Networking Symposium
If the above conference is too broad, we have a more storage-specific conference. The [IBM System Storage and Storage Networking Symposium] brings IBM storage developers, architects, technical experts, solution providers and customer speakers together in one place to show you how to address the growing challenge of managing and securing retention-managed data. You'll also learn about the latest IBM System Storage™ portfolio product announcements.
I have spoken at these events in perhaps 12 of the last 14 years. The list of presenters has not yet been finalized, so I do not yet know if I will actually be there this year.
Two exciting things are new this year. First, instead of being in San Diego or Las Vegas, it will be held in Chicago, Illinois! Second, you get a two-for-one with the [IBM System x and BladeCenter Technical Conference]. That's right, they are co-located in Chicago so that you can attend sessions from both! Perhaps you spend 80 percent of your time on storage and 20 percent on x86 servers, or 80 percent servers and 20 percent storage; now you can register for one price, and decide when you get there.
If you act soon, you can save money with the early-registration discount, which runs through May 31.
Hopefully, this will give you enough time to plan and make travel arrangements!
Today in the USA, we honor [Martin Luther King, Jr.] This year marks the 50th anniversary of the largest political demonstration to date in American history. Over 250,000 people went to Washington DC to hear Dr. King give his now famous "I have a dream" speech.
This week, I am attending the [InterConnect Conference] in Las Vegas, Feb 21-25, 2016. This is IBM's premier Cloud & Mobile conference for the year.
The last day of the conference had fewer people; many stayed for the Elton John concert, then left. I am glad to be one of the few who squeezed every last bit of learning out of the money it cost my employer to send me here.
2419A Enhance the Agility of Your Cloud with IBM FlashSystem
Kristy Ortega and Shaluka Perera, IBM FlashSystem Solutions team, presented. Cloud Service Providers (CSP) and Managed Service Providers (MSP) are leveraging flash technology for a variety of reasons:
To meet Service Level Agreements (SLAs)
To handle unpredictable workloads
To minimize noisy neighbor interference
To offer premium performance as an up-sell feature
To be able to scale faster to meet incoming requests
To reduce server count
To keep customers delighted and reduce customer churn
To offer data-rich features without sacrificing performance
Kristy gave three practical client use cases:
IP-Only -- an MSP in the Nordic countries -- employed IBM FlashSystem and Storwize V5000. They achieved five times the VMware density on their servers and 300 percent improved application performance. Nearly all of the cost of the new storage hardware was offset by the savings in VMware license costs!
Cageka -- an MSP in Europe -- employed IBM FlashSystem and SAN Volume Controller. They achieved 66 percent reduced SAP ERP response time, a 97 percent reduction in floorspace, and 95 percent reduced power and cooling costs.
COCC -- formerly the Connecticut On-Line Computer Center, a CSP for banks and credit unions -- employed IBM FlashSystem with IBM POWER servers. They achieved 10x faster OLTP transaction processing times and an 80 percent reduction in power and cooling costs. The payback period was less than 3 months!
IBM sells SAN switches featuring Brocade Gen5 "Fabric Vision" technology, and resells Cisco MDS switches like the 9396S model. Both of these have been enhanced to handle the lower latency and higher throughput that IBM FlashSystem provides.
IBM Data Engine for NoSQL employs Redis with the Coherent Accelerator Processor Interface (CAPI), which allows POWER8 servers to connect directly to IBM FlashSystem as an extension of memory rather than as bus-attached external storage. This reduces the code path length to read/write to IBM FlashSystem by 97 percent, resulting in solutions that use one-sixth the rack space at one-third the cost. This solution reduces CPU core requirements by 20-30 cores for every 1M IOPS of workload!
Spectrum Scale supports IBM FlashSystem in a variety of configurations. First, IBM FlashSystem can serve as a high-speed cache when Spectrum Scale virtualizes other NFS storage devices. Second, IBM FlashSystem can serve as a low-latency storage pool to direct new or hot data to. Third, Spectrum Scale can separate its metadata from the content of files and objects, putting the metadata on IBM FlashSystem. This greatly improves searching through directory structures or for specific object attributes.
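For readers unfamiliar with how such tiering is expressed, Spectrum Scale data placement and migration are driven by SQL-like ILM policy rules. A rough sketch follows, with hypothetical pool names ("flashpool", "datapool") and file-selection criteria; note this illustrates data placement only, since metadata placement itself is configured by marking the flash disks as metadata-only, not through these rules:

```
/* Hypothetical Spectrum Scale ILM policy sketch -- pool names and
   selection criteria are examples, not from the session. */

/* Direct new hot data (here, log files) to the flash pool. */
RULE 'hot' SET POOL 'flashpool' WHERE LOWER(NAME) LIKE '%.log'

/* Everything else lands on the capacity tier. */
RULE 'default' SET POOL 'datapool'

/* Cool data off flash after 30 days without access. */
RULE 'cooloff' MIGRATE FROM POOL 'flashpool' TO POOL 'datapool'
     WHERE (DAYS(CURRENT_TIMESTAMP) - DAYS(ACCESS_TIME)) > 30
```

With metadata on FlashSystem, directory scans and attribute searches hit the flash tier regardless of where file contents live, which is what makes the search speed-up possible.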
Last year, IBM, Hewlett-Packard, and VMware launched Project Capstone to "leave no application behind". They made a concerted effort to make sure that all relevant applications that run on bare metal can also run on VMware hypervisor. IBM FlashSystem has support for VMware features, including VAAI, VASA, and VVols.
IBM has partnered with Atlantis ILIO to offer in-line data deduplication for Virtual Desktop Infrastructure (VDI). A single 2U IBM FlashSystem can support 5,000 users and 10,000 virtual desktops, running at 382 IOPS per desktop.
Lastly, healthcare provider TriZetto has used IBM FlashSystem to reduce OPEX by 90 percent, shrinking from a 20U disk system to a 2U IBM FlashSystem device.
4331A Leverage z/OS and Cloud Storage for Backup/Archive Efficiency and Cost Reduction
Eddie Lin, IBM Senior Technical Staff Member on the DS8000 development team, presented this technology preview. Taking advantage of cloud storage is not limited to the distributed storage world. The ability to connect existing z/OS archive and backup solutions to on-premises object storage platforms provides huge efficiency gains, enabling clients to do more during their critical batch windows.
IBM is integrating cloud gateway software into its DS8870 and DS8880 Enterprise Disk Systems in conjunction with DFSMShsm and DFSMSdss for a complete end-to-end solution to optimize this space. We will show a live demonstration of this capability during this session.
This solution uses the Storage-as-the-Storage-Cloud methodology I mentioned in my session yesterday. The DS8000 is the #1 storage system for mainframe environments. Eddie explained the current, inefficient process of moving cold data to tape using 37-year-old DFSMShsm functionality.
A new approach involves moving data directly from DS8870 storage systems to object storage, either on-premises or off-premises. This eliminates the MIPS used for data movement and reduces the record-keeping normally done by DFSMShsm. z/OS data sets migrated to the Cloud will continue to be designated as MIGRAT in the ICF Catalog, and recall times from the Cloud are similar to those from tape.
There will also be options for DFSMSdss to invoke the function. However, you will need to provide in the DFSMSdss command parameters all of the information needed to connect to the Cloud that DFSMShsm would normally handle.
To make this all happen, you will need a certain level of DFSMS and a certain level of DS8000 firmware. No new hardware is required, as the function uses the 1GbE Ethernet ports that already exist in DS8870 and DS8880 models. If you still have DS8100, DS8300, DS8700 or DS8800 models, now is a good time to start an upgrade!
Internal tests migrating a 5GB data set were done to compare MIPS consumption. DFSMShsm consumed 0.127 CPU, while the new "Transparent Cloud Storage Tiering" method consumed only 0.068 CPU, a 46 percent reduction in MIPS. DFSMShsm is often the #2 biggest consumer of MIPS (DB2 is #1), so any reduction here is a big deal.
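Those figures are easy to sanity-check; the numbers below are exactly the ones quoted in the session:

```python
# MIPS-reduction check using the CPU figures quoted for the 5GB test.
dfsmshsm_cpu = 0.127          # CPU consumed by the DFSMShsm migration
cloud_tiering_cpu = 0.068     # CPU consumed by Transparent Cloud Storage Tiering
reduction = (dfsmshsm_cpu - cloud_tiering_cpu) / dfsmshsm_cpu
print(f"{reduction:.0%} reduction")  # → 46% reduction
```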
IBM plans to support Spectrum Scale, Cleversafe, IBM SoftLayer, Amazon S3, Rackspace, and Microsoft Azure as targets. Full encryption of data-in-flight is included, with keys managed using IBM SKLM. This capability will be fully supported by z/OS security products (RACF, Top Secret, etc.) and z/OS audit logging.
Eddie wrapped up with a live demo.
7341A IBM Storage and Catalogic: Software Defined Solutions for Hybrid Cloud and DevOps
Third-party Catalogic ECX software supports IBM, NetApp and EMC storage devices. I was hoping to hear how it works specifically with IBM storage models, but instead the speaker explained why Copy Data Management (CDM) is helpful for bi-modal environments.
Basically, copies of data taken to protect production data sit idle until needed. With Copy Data Management, those copies are made available to development and test personnel. While traditional production IT operations are like marathon runners, the new DevOps teams are like short-distance sprinters, needing to be agile in developing and testing new applications. Having ready access to copies of production data can speed this process.
4921A Radical Storage Simplicity for Your Cloud and How it Can Impact Your Customers
Diane Benjuya and Yafit Sami, both from IBM, presented IBM Spectrum Accelerate, the software "de-coupled" from traditional XIV hardware.
The XIV grid architecture automatically distributes data, eliminates hot spots, and provides enterprise-class features like thin provisioning, VMware support, snapshots and remote mirroring. Its "Distributed RAID-10" capability can rebuild after the failure of a 6TB disk drive in less than an hour.
Spectrum Accelerate has nearly the same set of features, minus Microsoft Hyper-V integration, FCP host access support, VMware vSphere v6 VVol support, Real-time Compression, and Encryption. Spectrum Accelerate adds a feature not available to XIV called Hyperconvergence. This allows application Virtual Machines to run on the same servers used for Spectrum Accelerate. Spectrum Accelerate can run on-premises on customer-choice hardware, or in the Cloud, such as IBM SoftLayer.
In response to complaints that IBM XIV was a single-frame storage array, IBM introduced Hyper-Scale, a series of features that allow up to 144 XIV Gen3 frames to be managed as a single system. With the introduction of Spectrum Accelerate, Hyper-Scale Manager can now manage any combination of XIV Gen3 frames and Spectrum Accelerate clusters, on-premises or off-premises, up to 144 total.
Hyper-Scale Mobility can migrate volumes from one XIV to another without the need for external virtualization such as IBM SAN Volume Controller. For iSCSI volumes, Hyper-Scale Mobility can migrate data between XIV and Spectrum Accelerate, or from one Spectrum Accelerate cluster to another, whether on-premises or off-premises.
Hyper-Scale Consistency allows snapshots to be taken of a group of volumes across multiple XIV frames. Now, snapshots can also be taken of a group of volumes spanning both XIV and Spectrum Accelerate clusters.
Remote Mirroring is fully supported. You can replicate data from XIV to Spectrum Accelerate, Spectrum Accelerate to XIV, or from one Spectrum Accelerate cluster to another.
The IBM XIV Mobile Dashboard for Apple and Android phones can support any mix of XIV and Spectrum Accelerate clusters. This includes monitoring your environment, as well as push notifications.
IBM has also introduced flexible licensing options. With newly purchased XIV boxes and Spectrum Accelerate, you can choose to buy the software license as "perpetual", allowing you to move it to new hardware when your old hardware kicks the bucket. This license can be moved to new XIV hardware or to a Spectrum Accelerate cluster deployment.
For Spectrum Accelerate, an additional license option is "monthly", allowing you to elastically add or reduce the amount of storage you manage, either on-premises or off-premises.
Like the idea of Spectrum Accelerate but don't want to build it yourself? Third party SuperMicro offers hardware pre-certified and pre-installed with Spectrum Accelerate. You license Spectrum Accelerate directly from IBM, and SuperMicro will take care of the rest.
Spectrum Accelerate is a component of the Spectrum Storage suite, which offers a single flat per-TB price for all six Spectrum Storage products.
Want to try IBM Spectrum Accelerate yourself? Here are three options:
Free 90-day trial with self-destruct. After 90 days, the code stops working. You can download this and try it out.
90-day evaluation copy. Your authorized IBM Seller works with you to install, and if you like it, you buy it after 90 days to continue to use it.
Special promotion before June 30, 2016 -- Purchase IBM Spectrum Accelerate for production, and your first 20TB are free. No strings attached.
IBM's Silverpop uses IBM Spectrum Accelerate to deploy its marketing analytics solution. It can spin up a new customer with 250TB of capacity in 24-48 hours on IBM SoftLayer, and found it needs half as many storage admins with IBM Spectrum Accelerate as with its previous method.
Well, that's the end of the conference. I have to go back and submit all of my survey responses, which I should have done every day all along, but was too busy writing blog posts!
The presentations are also now available for download for those who attended the conference. (Go to Session Preview on the IBM InterConnect attendee website and hit the Download Presentation button)
IBM announced the industry's first corporate-led initiative to enable clients to earn energy efficiency certificates for reducing the energy needed to run their data centers. For the first time, this provides a way for businesses to attain a certified measurement of their energy use reduction, a key emerging business metric. The certificates can be traded for cash on the growing energy efficiency certificate market, or retained to demonstrate reductions in energy use and associated CO2 emissions. The Efficiency Certificates initiative engages Neuwing Energy Ventures, a leading verifier of energy efficiency projects and marketer of energy efficiency certificates.
How it works:
The Neuwing Energy assessment is a two-part evaluation: 1) determine the initial energy draw of the data center or IT equipment identified for consolidation, based on industry-accepted energy estimates for the servers in use and the power and cooling profiles of the data center, and 2) conduct a second review of energy draw after steps designed to reduce energy consumption have been taken.
Neuwing Energy will issue customers an Efficiency Certificate for the total megawatt-hours of energy no longer needed to power and cool their data center or operate IT equipment. In exchange for the assessment, Neuwing Energy will keep a portion of each customer's earned certificates or charge a fee per MWh saved.
Customers can trade earned Efficiency Certificates on the energy efficiency certificate market or they can retain their certificates, using them to demonstrate reductions in energy use and associated CO2 emissions.
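The certificate arithmetic itself is straightforward. Here is a minimal sketch with entirely hypothetical before/after power-draw numbers; the actual Neuwing Energy methodology also factors in cooling profiles and industry-accepted server estimates:

```python
# Hypothetical illustration of the MWh figure behind an Efficiency Certificate.
def annual_mwh_saved(baseline_kw, reduced_kw, hours_per_year=8760):
    """MWh of energy no longer needed per year, given the average draw
    in kW before and after a consolidation project."""
    return (baseline_kw - reduced_kw) * hours_per_year / 1000

# Hypothetical data center: 500 kW average draw reduced to 350 kW.
print(annual_mwh_saved(500, 350))  # → 1314.0
```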
IBM intends to make the Efficiency Certificates program available across its entire line of server and storage offerings.