The Internet of Things - Rambling Thoughts by Jim Fletcher
Well, despite months of desire and a total lack of spare time, I finally became an official blogger today. Thanks to Jeff Jenkins for his help in getting this going.
Over the last two years, as I have pioneered the energy management space for Tivoli, I have seen leading organizations begin to recognize that the historical organizational structure around datacenters does not serve the needs of improved energy management well. Unfortunately, in most datacenters, the team responsible for cooling and power and the team responsible for IT (servers, applications, storage, etc.) report into different lines of the business. Even more unfortunately from an energy management perspective, neither organization is responsible for paying the power bill, and in most organizations, neither team is even aware of it.
As a result, there is no natural incentive to reduce overall power consumption unless some external factor, like the availability of power, comes into play. This "green organizational dysfunction" results in wasted spending on energy, and in operational inefficiencies, given the limited integration between the multiple organizations responsible for the datacenter. Even when knowledge about power consumption does exist within the IT organization, I have yet to see a datacenter where the IT team is measured in any way on power consumption -- instead, availability and performance are the two measurements that matter.
So how can we expect energy-efficient datacenters if organizationally there is little focus, and no incentives are provided to reduce spending on energy? That's the challenge that organizations need to address. I am seeing an emergence of limited discussions between these multiple teams, and I am seeing an occasional "incentive" from a C-level executive to begin looking at how to reduce energy costs, but only occasionally. Instead, most energy reduction today is coming from "tangential" changes such as virtualization.
For those customers who have focused on the entire energy consumption lifecycle, significant cost reductions -- sometimes approaching 40% -- have been seen.
Many customers continue to measure the temperature of their server racks using the "back of the hand" method. Unfortunately this is exactly what it sounds like -- they walk the aisle with the back of a hand extended, and when they feel a warmer-than-normal area, that is an area to be looked at further. Well, yes, that's not exactly scientific, but it has worked for years. Likewise, power consumption was pre-determined from manufacturers' specs, which generally means it was grossly overstated.
But as we look at better optimizing our overall energy consumption, even a degree or two difference can make a big difference in our overall energy efficiency. The "back of the hand" method cannot provide that level of accuracy, so newer methods need to be implemented. Over the last few years, IBM has introduced direct measurement within their server family for both power consumption, as well as temperature reporting. With the direct reporting of this information, immediate and accurate information can be available and leveraged.
With the availability of more accurate information in a timely manner, datacenters can reduce their "energy buffer". Typically, customers have over-cooled and over-powered. With the ability to detect even small deltas quickly and accurately, these buffers can be reduced, and therefore overall energy consumption can be reduced.
But how does one get access to this information? Tivoli's ITM for Energy Management collects this server information from its embedded Active Energy Manager component. The data can be thresholded with events generated automatically when measured values exceed expected values. Reports can be generated or the information can be visualized in an operations console.
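The thresholding idea described above can be sketched in a few lines of Python. To be clear, this is a hypothetical illustration of threshold-based event generation, not the actual ITM for Energy Management or Active Energy Manager interface; all names and limit values are assumptions.

```python
# Illustrative sketch of threshold-based event generation for server
# power and temperature readings. Metric names and limits are
# hypothetical; the real Tivoli products expose their own interfaces.

THRESHOLDS = {"power_watts": 450.0, "inlet_temp_c": 27.0}

def check_readings(server, readings):
    """Return an event for each metric that exceeds its expected value."""
    events = []
    for metric, value in readings.items():
        limit = THRESHOLDS.get(metric)
        if limit is not None and value > limit:
            events.append({
                "server": server,
                "metric": metric,
                "value": value,
                "limit": limit,
            })
    return events

# Example: one reading over its power limit, one within its temperature limit.
events = check_readings("rack07-slot3",
                        {"power_watts": 480.2, "inlet_temp_c": 25.5})
for e in events:
    print(f"ALERT {e['server']}: {e['metric']}={e['value']} exceeds {e['limit']}")
```

The same events could feed a report generator or an operations console, which is essentially the workflow the product automates.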
Having accurate and detailed information is just one element of an effective overall datacenter energy strategy -- but a very important one for sure.
Today was the beginning of a new era for IBM - we've been working with our industry partners to improve the energy and operational efficiency of buildings, and today we announced the availability of a bundled software solution that allows us to "listen to the building, and hear what it is telling us". From there, analytics predict problems before they occur, or recognize problems as they occur, while a mashup-based dashboard visualizes the state of the monitored buildings.
Here's some press from the announcement:
IBM Unleashes Advanced Software Solution for Smarter Buildings
IBM formally introduces its Intelligent Building Management software today -- an advanced solution that's being put to work at Tulane University's School of Architecture, The Cloisters of the Metropolitan Museum of Art in New York, and the company's 35-building facility in Minnesota.
The software is designed to be an analytics and automation powerhouse that can help ramp up the environmental performance of any building, even ones that are 100 years old or more.
The product is the latest in a steady stream of solutions that IBM has unleashed in recent months to make the management of buildings, the energy and resources they use, and the transportation and virtual networks that connect them more efficient, more effective and more intelligent.
The software and its applications, which are being detailed today in an IBM Smarter Buildings Forum in New York, also are the results of the company's steadily increasing collaborative projects, partnerships and acquisitions -- all of which are aimed at positioning IBM as a dominant player in a nascent field that brings together IT, the built environment, vehicles and energy.
Here is an early look at the projects that will be featured during the forum:
While technology advancements in building management systems have made it possible to cull an immense amount of data on structures, the challenge has been to organize, analyze and present it swiftly to building owners and operators so they can proactively manage their properties -- as IBM Smarter Buildings Vice President David Bartlett said at GreenBiz Group's State of Green Business Forum this year.
The new software, which is supposed to be the most comprehensive product thus far in IBM's smarter buildings arsenal, is intended to address that need.
Earlier this week, IBM introduced its Intelligent Operations Center for Smarter Cities. The plug-and-play, smarter-cities-in-a-box solution is expected to deliver high-powered systems and network management capabilities to communities without the high price tag usually attached to such technology.
As I watch more attention being focused on the "green datacenter", I was amazed that I had not yet seen someone talk about a "Smarter Datacenter". A smarter datacenter would reflect the wide range of improvements that one can make within the datacenter, whether it be improved processes, more efficient equipment, facilities improvements, or virtualization. While many of these areas are not even mentioned under energy management solutions, they are all part of making a datacenter smarter, and a "Smarter Datacenter is a Greener Datacenter".
So as we look at quantifying the energy impact of datacenter improvements, it isn't just about more efficient servers or improved cooling -- it's an aggregation of all we do as we work to improve datacenter efficiency, and thus reduce our overall energy impact.
Today is a great day for the datacenter. IBM and Emerson have announced a partnership which combines IBM's IT Service Management (ITSM) with Emerson's Trellis offering which was recently recognized as the industry leader in Data Center Infrastructure Management (DCIM). Gartner has said that the DCIM market is an estimated $450 million market today, and expected to grow to $1.7 billion by 2016.
But why all the excitement about this announcement? Anyone who has been in this industry recognizes that the datacenter has often been operated as a series of seemingly disconnected silos. One team manages power distribution, another manages the cooling infrastructure, another manages the physical placement of machines into racks, and so on. When an operational problem occurs, we fall back into that siloed mentality, with "twelve people on a bridge call" trying to determine where the problem actually originated. There is little automated "root cause" analysis, and even less automated action across the silos. I recently heard of a major customer who had a chiller issue on a Sunday afternoon -- the IT team discovered the issue only when applications began to fail because the servers were overheating.
Why weren't the systems connected? Why didn't a chiller failure signal the IT team and indicate which racks would be impacted and perhaps automate an action to move the workloads or throttle down the servers until the chiller issue was resolved? The answer unfortunately is simply because the operations of the systems were not connected.
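The kind of cross-silo automation described above can be sketched simply: map each chiller to the racks it cools, and translate a facilities alarm into a protective IT-side action. Everything here -- the mapping, the event shape, the throttle callback -- is a hypothetical illustration, not an actual Emerson or IBM API.

```python
# Hypothetical sketch of cross-silo automation: when a chiller fails,
# look up which racks it cools and throttle their servers until the
# facilities issue is resolved. The mapping and action are invented
# for illustration; real integrations would use the vendors' APIs.

CHILLER_TO_RACKS = {
    "chiller-2": ["rack07", "rack08", "rack09"],
}

def on_facility_event(event, throttle):
    """Translate a facilities alarm into IT-side protective actions."""
    if event["type"] == "chiller_failure":
        impacted = CHILLER_TO_RACKS.get(event["source"], [])
        for rack in impacted:
            throttle(rack)  # e.g. cap CPU power or migrate workloads away
        return impacted
    return []

# Simulate the Sunday-afternoon chiller failure from the anecdote.
throttled = []
racks = on_facility_event({"type": "chiller_failure", "source": "chiller-2"},
                          throttle=throttled.append)
print("Racks throttled pending chiller repair:", racks)
```

The point is not the few lines of code but the connection: the facilities event reaches the IT side with enough context (which racks are impacted) to act before the servers overheat.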
With the IBM-Emerson partnership, we've established a base system for interlocking power management, cooling management, and traditional IT management. We're now providing a system which enables automated awareness of each slot in a rack -- what is in the slot? What is its power draw? What applications are running in that slot? We're now getting information that can be integrated and leveraged to turn the datacenter into a "smarter datacenter".
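To make "automated awareness of each slot in a rack" concrete, here is a minimal data model tying a slot to the device installed, its measured power draw, and the applications running there. The field names and figures are assumptions for illustration only, not the Trellis or ITSM schema.

```python
# A minimal, hypothetical per-slot data model: what is installed,
# its current power draw, and the applications running there.
# Field names and numbers are illustrative assumptions only.

from dataclasses import dataclass, field

@dataclass
class Slot:
    device: str                      # server model installed in the slot
    power_watts: float               # current measured power draw
    applications: list = field(default_factory=list)

# One rack, keyed by slot number.
rack = {
    3: Slot("x3650-M4", 312.5, ["billing-db"]),
    4: Slot("x3650-M4", 287.0, ["web-frontend", "cache"]),
}

# With slot-level data, rack-level questions become simple aggregations.
total_power = sum(s.power_watts for s in rack.values())
print(f"Rack total: {total_power:.1f} W")
```

Once power, placement, and workload live in one integrated model like this, the cross-silo questions in the post (which applications does this rack's power draw serve?) become straightforward queries.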
Lots of great possibilities ...