Data Center Transformation
One of the topics I regularly discuss is security as it pertains to virtualization. This is, of course, an extremely hot topic, and there is a multitude of different aspects to it. Rather than try to cover them all (if that's even possible), I will hit some of them one at a time.
Virtualization comes largely in two flavors (yes, this is a simplistic view, but for our purposes here we don't need more detail). The first is add-on software (such as VMware's products, Xen, VirtualBox... the list goes on), by which I mean any product that one acquires and INSTALLS as a separate entity. Just as one buys an operating system and installs it on a platform, these products follow the same method. I won't bother going into the differentiation of hypervisor types, i.e., those which install a hypervisor as an extension to the OS (I regularly use one of these on my Mac to work with Linux) versus those which get installed natively. The key element here is that these products are NOT part of the platform (i.e., provided by the system manufacturer) on which they execute. The second flavor, what I'll call "intrinsic" virtualization, covers those cases where the hypervisor is built into the system as part of the native firmware. Examples of this are IBM's POWER architecture and the IBM mainframe (System z).
So why is this relevant to the security of the virtualized environment? Both models have to be maintained: firmware maintenance has to be applied and managed in the intrinsic case, and software maintenance in the "add-on" case. The concern from a security perspective is that the updates (or the original code) somehow get tainted such that the fundamental separation function of the hypervisor is affected (we're not talking about bugs found in the software/firmware, but malicious code planted deliberately). How easy is it for an attacker to get a malicious hypervisor loaded onto the system, or to patch the system with code that opens back doors? I'll assert that it all depends on the vendor and how they manage their code. Within IBM, we've made efforts to ensure that only IBM-developed firmware can be installed on the machine, which is done through various techniques, as well as through a controlled set of interfaces to where the firmware is loaded. Is this mechanism perfect? Anyone who claims perfection with respect to anything related to security should be immediately discounted; there are no absolutes in this business. Does this mean you are more secure? That's a matter of your perception, not for me to judge. I know what we in IBM have done, but as I have only a passing involvement, at best, with other virtualization products, I really can't do anything more than draw an analogy. Note that I'm focusing on SERVERS, or Enterprise Data Center questions; the desktop/client are entirely separate issues.
What's the analogy? One way to view this is to look at the "add-on" model as an operating system (OS) in a non-virtualized environment. In my experience, enterprises treat every OS patch and update very carefully, usually with strict policies for deploying into test environments and evaluating the overall impact of the "fix" on their existing applications and business processes. Only then do they schedule a phased deployment of the new patch. As part of this process, they often validate that the updates are indeed from the vendor of the OS, validating hash values (if the vendor provides them) and ensuring they download them via an SSL connection with a validated certificate for that particular vendor. One would expect that an enterprise would treat any "add-on" virtualization product in the same manner (note that I'm focusing on those enterprise virtualization products which run on the "bare metal", since those which run as part of a normal OS have to contend with the hosting OS in addition to the virtualization layer).
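To make that validation step concrete, here is a minimal sketch in Python of checking a downloaded patch against a vendor-published SHA-256 checksum. The file name and the expected digest are hypothetical placeholders, not real vendor artifacts; the point is simply that the check happens before anything is staged for test deployment.

```python
# Minimal sketch: verify a downloaded update against a published SHA-256 digest.
# File name and digest below are placeholders for illustration only.
import hashlib

def sha256_of(path, chunk_size=1 << 20):
    """Compute the SHA-256 digest of a file without loading it all into memory."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()

# Digest as published on the vendor's TLS-protected download page (placeholder value).
EXPECTED = "9f86d081884c7d659a2feaa0c55ad015a3bf4f1b2b0b822cd15d6c15b0f00a08"

if sha256_of("hypervisor-patch-1.2.3.bin") != EXPECTED:
    raise SystemExit("Checksum mismatch: do NOT deploy this update.")
print("Checksum verified; safe to stage for test deployment.")
```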
In my opinion, there is one case where the "intrinsic" model *might* be considered better. Again, I can only speak to what I know. On the IBM platforms referred to above, one has no choice but to run the hypervisor. It's part of the firmware; it gets loaded as part of the power-on cycle of the system. There is no booting something off of a disk, and there are no processor instructions that allow one to load a hypervisor after an OS has been booted. One has no choice, just as on an x86 platform one has no choice but to let the BIOS run. In this case, if you are concerned with things like "bluepill" or virtualization rootkits, then the only way to get one onto the system would be to somehow get it into the IBM code base, get IBM to build the code, and ship an update to the system that would be accepted by the firmware update process. Is this really an issue? That's for you to judge. I know I'm not losing sleep over such "rootkits". Why? Quite honestly, if an attacker wants to gain access to systems (and the data tracked on attacks bears this out) to obtain data (or for some other nefarious purpose), there are other, easier ways to go about achieving these goals. Yes, there are certain classes of environments for which this is of great concern, but those environments have multiple layers of controls, defenses, and processes which layer together to mitigate this type of attack to an acceptable level of risk. To my knowledge (please correct me and give references), the "bluepill" type of rootkit research has been focused on the fact that it "could" be installed (though I expect that the enterprise hypervisors shipped as software products would disable the mechanisms immediately upon booting), not on the mechanisms by which an attacker would somehow get it to run (again, I'm ignoring those virtualization products which run as extensions to a general-purpose OS). Note that I do try to keep up with this particular avenue of research, largely because I find it extremely interesting from an academic perspective.
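As a small aside for the curious: the hardware mechanisms this class of rootkit relies on (Intel VT-x and AMD-V) are at least visible from software. Here is a hedged, Linux-only sketch of checking whether the extensions are exposed at all; it assumes nothing beyond a readable /proc/cpuinfo.

```python
# Illustration only: check whether hardware virtualization extensions are
# visible to the running OS. Linux-specific; reads /proc/cpuinfo.
def virt_extensions():
    flags = set()
    with open("/proc/cpuinfo") as f:
        for line in f:
            if line.startswith("flags"):
                flags.update(line.split(":", 1)[1].split())
    # 'vmx' = Intel VT-x, 'svm' = AMD-V
    return flags & {"vmx", "svm"}

exts = virt_extensions()
if exts:
    print("Hardware virtualization extensions visible:", ", ".join(sorted(exts)))
else:
    print("No vmx/svm flags visible (absent, disabled in BIOS, or masked).")
```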
Ultimately, I believe that system vendors will end up incorporating virtualization as intrinsic components on the platform. How they do this, while still enabling consumers to choose remains a question for another day (likely for another blogger).
---Steven Bade
STSM, Systems and Technology Group
"All opinions expressed are my own, and do not necessarily reflect those of the IBM Corporation"
Hello, everyone in the blog community. I wanted to introduce myself and chat a little about where I can help. My name is Rodrigo Samper; I am a Senior Technical Staff Member (STSM) in the IBM Design Center. My primary focus is to help clients become more energy efficient by helping them implement best practices and introducing them to IBM's vision of the New Enterprise Data Center. My background includes over 25 years of experience in packaging and cooling. Today I focus on helping clients meet their power and cooling needs in their data centers. IBM is offering clients a new free workshop called "New Enterprise Data Center (NEDC)". The purpose of this workshop is to provide clients with a new model for efficient IT delivery in a changing landscape. This transformation spans people, process, and technology. The workshop is run by SMEs in IBM STG Lab Services, the Design Center, and GTS. In addition, our Design Center offers an extensive three-day workshop, the Enterprise Infrastructure Transformation Assessment (EITA), to assist clients with their data center facilities and IT infrastructure; it takes a holistic approach to improving energy efficiency. Here are a couple of links with regard to the NEDC and the Green Initiative.
IBM Systems Director Active Energy Manager™ (AEM) V3.1.1, an extension of IBM Director, helps monitor and manage the power and thermal usage of systems in IT environments and is available on Linux® on POWER™, Linux on x86, Linux on System z™, and Microsoft Windows®. Originally designed to support IBM BladeCenter® and System x™, this enhanced energy management technology now supports additional IBM systems, as well as storage devices and non-IBM servers through IBM Power Distribution Unit (PDU+) support. This energy management software tool can provide a single view of the actual power usage across multiple platforms in the infrastructure, as opposed to the benchmarked power consumption. Active Energy Manager can effectively allocate, match, and cap power limits in the data center at the system, chassis, or rack level while retrieving temperature and power information via wireless sensors and collecting alerts and events from facility providers related to power and cooling equipment. By enabling energy management technologies, users can effectively manage their data center while improving their cost of computing. TAKE ADVANTAGE OF THE FREE 60-DAY TRIAL. Go to: http://www.ibm.com/systems/management/director/extensions/actengmrg.html
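AEM's own interfaces are not shown here, but the core idea of measuring actual draw instead of trusting nameplate figures can be illustrated with a small sketch. This assumes a Linux host with ipmitool installed and a BMC that supports DCMI power readings; it is not part of the AEM product itself.

```python
# Sketch: poll a server's *measured* power draw via the DCMI power-reading
# command, rather than relying on benchmarked or nameplate consumption.
import re
import subprocess

def instantaneous_watts():
    """Parse the instantaneous reading out of 'ipmitool dcmi power reading'."""
    out = subprocess.run(
        ["ipmitool", "dcmi", "power", "reading"],
        capture_output=True, text=True, check=True,
    ).stdout
    match = re.search(r"Instantaneous power reading:\s*(\d+)\s*Watts", out)
    return int(match.group(1)) if match else None

watts = instantaneous_watts()
print(f"Measured draw right now: {watts} W")
```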
IBM Systems Director Virtual Availability Manager™ V1.1, an extension of IBM Director, provides a simplified availability solution for customers using Xen-based virtualization. Virtual Availability Manager can be used to help manage and respond to planned and unplanned system outages, leveraging virtual server checkpoint/restart and mobility to avoid or recover from virtual server and host system failures. It predicts and reacts to server hardware failures before they occur and reacts to unplanned host system and virtual server failures. Virtual Availability Manager then takes actions as defined within simplified high availability policies. The management server runs on AIX®, Windows on x86 platforms, and Linux® on x86 and Power™ Systems, managing the availability of Xen virtualized host systems. To take advantage of the FREE 90-DAY TRIAL, go to: http://www.ibm.com/systems/management/director/extensions/vam.html
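The "mobility" piece rests on Xen's live migration. The following toy sketch shows the kind of evacuation action a policy might trigger on a predictive hardware alert; the host and domain names are hypothetical, and real HA products layer fencing, quorum, and retry logic on top of this.

```python
# Toy sketch of a policy action: live-migrate Xen guests off an unhealthy host.
# Uses the classic Xen 'xm migrate --live' tool; names below are hypothetical.
import subprocess

def live_migrate(domain, target_host):
    """Evacuate one Xen domain to another host using live migration."""
    subprocess.run(["xm", "migrate", "--live", domain, target_host], check=True)

def evacuate(domains, target_host):
    for dom in domains:
        print(f"Migrating {dom} -> {target_host} ...")
        live_migrate(dom, target_host)

# Hypothetical usage: triggered by a predictive hardware alert on 'xenhost1'.
evacuate(["webvm1", "dbvm1"], "xenhost2")
```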
My name is Mike Buzzetti. I currently work for the IBM Design Center, based in the wonderful town of Poughkeepsie, New York. I am an IT specialist and a bit of a "techie". My experiences and interests in virtualization are as diverse as the technologies behind it. Over the last few years I have been exposed to many different approaches to solving the power, cooling, and capacity needs of the IT industry. I have been focused more on machine/operating system virtualization, as opposed to network or application virtualization. IBM and other vendors offer different technologies to accomplish this, like Xen, VMware, z/VM, the System p hypervisor, and others. I often direct people to Wikipedia when they are starting out with virtualization. One of my favorite charts is located here:
It depicts the differences among a very large number of virtual machine technologies. Lately, though, I have been interested in some of the gaps when dealing with a virtualized environment. The security, monitoring, and metering of "virtualized" resources is still very difficult for even a small number of items. I hope to provide some helpful tips, tricks, and general best practices as I investigate possible solutions to these gaps.
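As one small example of the metering gap: pulling per-guest CPU time and memory is possible through libvirt, but it is only a starting point compared with real metering. The sketch below assumes a host with the libvirt Python bindings installed and a running libvirt daemon (which fronts Xen, among others).

```python
# Sketch: enumerate running guests via libvirt and report basic usage figures.
import libvirt

conn = libvirt.openReadOnly(None)  # connect to the local hypervisor, read-only
for dom_id in conn.listDomainsID():
    dom = conn.lookupByID(dom_id)
    # info() returns [state, maxMem(KB), memory(KB), nrVirtCpu, cpuTime(ns)]
    state, max_mem, mem, ncpu, cpu_time_ns = dom.info()
    print(f"{dom.name():<16} vcpus={ncpu} mem={mem // 1024} MB "
          f"cpu_time={cpu_time_ns / 1e9:.1f} s")
```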
Just a short introduction here. I am Steven (Steve) Bade, Senior Technical Staff Member and one of the lead security architects for the Systems and Technology Group. While from an org-chart perspective I sit in the Design Center, my role extends across all of STG and many other parts of the IBM organization. My primary customer-facing efforts center around helping customer security teams understand how virtualization technologies affect their security posture, understand how to leverage cryptography within the enterprise, and deploy various hardware cryptography technologies with different applications and middleware.
I'd like to encourage people to post their questions related to any aspect of security. I will do my best to either answer or find the appropriate party to answer. Of course, security is a rapidly changing area both in IBM and in the industry, so I may not always be able to answer in such a public forum.
Security is not a Binary State
The security world is not black and white. The states of "secure" vs. "insecure" are relative to one's perception of risk. Consider this very simple example from outside the IT industry. In a metropolitan area, everyone locks their doors. Why? Because the threat/risk of an unlocked door is not acceptable to their policy. On the other hand, in many small rural towns people never lock their doors. Why? Because they perceive the risk of a break-in to be inconsequential. Ask either person if they are "secure", and they will both likely say yes. So what the heck does this example have to do with IT security? Simply that an enterprise's perception of its state is relative to its willingness to accept risk.
Products which say they make you "secure" really provide controls to manage risks associated with a set of threats, and one should understand those threats. There are no silver bullets in security; rather, an enterprise's ability to manage risks is based on interlocking and layered controls (the traditional defense-in-depth concept) that protect against or attenuate risks and threats. Security is about people, process, and technology. The people and process components define security policies, and the technology implements those policies. Policies need to be based on the business processes and the requirements of those processes.
When someone says XYZ will make you "more secure", unless that person has intimate knowledge of your policies and other security controls, the statement carries no weight. "More secure" is a value judgment, and others should not be speaking to your values; rather, products and technologies should be presented on the basis of the risks that they mitigate and how they operate to mitigate those risks, not on vague assertions.
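To show just how relative "secure" is, here is a deliberately simple sketch: the same threat scores come out "acceptable" or "needs a control" depending entirely on the risk appetite you choose. The threats, scores, and threshold below are made up for illustration.

```python
# Illustration: "secure" is a comparison against a threshold you choose,
# not an absolute. All figures below are invented for the example.
RISK_APPETITE = 6  # max (likelihood x impact) this hypothetical business accepts

threats = {
    "unpatched hypervisor": (2, 5),  # (likelihood 1-5, impact 1-5)
    "stolen backup tape":   (3, 4),
    "lobby tailgating":     (4, 1),
}

for name, (likelihood, impact) in threats.items():
    score = likelihood * impact
    verdict = "acceptable" if score <= RISK_APPETITE else "needs a control"
    print(f"{name:<22} score={score:>2} -> {verdict}")
```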
--Steve Bade
STSM, IBM Systems and Technology Group
All opinions are my own, and do not necessarily represent the views of the IBM Corporation.
I wanted to take a second to do a short introduction (an elevator pitch) on who I am and what my team does. I'll also link to a forum posting where I'd love some help and discussion. I'll try to be brief and stay out of my "sales pitch" mode.
My name is Michael Daubman and I'm one of the three System z managers within IBM's Systems and Technology Group Lab Services. Our group (and my peers across all IBM brands) has a focus on supporting emerging and niche technologies for our clients. Additionally, we enable, are subcontracted by, and transfer skills to our business partners to allow them to support clients in areas where skills are sometimes hard to find.
One of my personal goals has been to find gaps in the support available to our clients and business partners and to build my team to fill those gaps in the areas of security and consolidation. As a result, our "Compliance Mitigation Services", "Green", and "Consolidation" business has grown well, with significant demand. Now I'm trying to make sure we are ready with support for areas that are newly emerging (or that may just be showing signs of needing support). This blog represents a GREAT way for me to explain what I'm doing and where we're growing, and to look for some specific feedback from the "world". I have some interesting topics that I'm planning to post about in the coming weeks. Stay tuned!
If you’re ambitious, take a hop over to our forum. There is already a question out and I’d love to hear from everyone and anyone!
Enterprise Systems Infrastructure Offerings Forum
Michael Daubman
Manager: IBM STG Lab Services for System z
I have been working with IBMers and our clients on energy efficiency full time since June of 2007. This week at a large data center conference, AFCOM, I presented The New Enterprise Data Center to two well-attended sessions. NEDC was well received, as 100% of the participants are going green. The urgency of the need for green is high. All participants viewed green as a cost-improvement catalyst. The steps of adoption (Simplify, Share, Dynamic) resonated with the audiences. Almost 80% of the conference's presentations had some aspect of greening the data center in them. NEDC, with its cost savings and flexibility, is a vision for data center evolution that everyone can begin implementing NOW! The key technologies that all data center managers were looking to implement in the next year were virtualization and measuring energy within their data centers. The new ASHRAE standards, which now recommend data centers running at up to 81 degrees F, have shaken data center managers into the realization that they do not know the energy consumption or the temperatures within their data centers.
David Anderson
dfa@us.ibm.com
Hi – I'm Alan Robertson – I've been working on business resilience issues for over 10 years. I founded the Linux-HA project back in 1998, and have had a variety of roles in IBM since I joined in 2001.
The computing industry has lots of trends, numerous buzzwords, and a number of hot topics. Sometimes these are in conflict with each other, or at least start out that way... But, in the end, there are often good ways to harmonize all these various things. Let's wander into virtual machine territory today. If you have gone to the trouble to create a bunch of virtual machines, the chances are you hope to do a little server consolidation - because when that's properly done, it can save you some money. This sounds good, and indeed has lots of good things going for it. It's buzzword compliant, it's green, it saves you green (money). What's not to like?
To see what you might not like if this is all you do, let's take an example to make it obvious... If you put all your virtual machines on one physical server, then if that server fails, you lose all your virtual machines. If you put ten virtual machines on one server, then the impact of that server crashing is roughly ten times as great as if a single server crashed. If you work at it, you might be able to consolidate the ten most critical virtual machines onto a single server - and bring your entire data center to a halt with just one crash.
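Some quick back-of-the-envelope arithmetic makes the point. All figures below are illustrative assumptions, not measurements:

```python
# Illustration of the "ten eggs, one basket" math. Figures are assumptions.
vms_per_host = 10
host_failures_per_year = 0.5   # assumed hardware failure rate per host
minutes_per_outage = 90        # assumed recovery time without HA automation

# Each host failure now takes down every consolidated guest at once.
vm_outage_minutes = vms_per_host * host_failures_per_year * minutes_per_outage
print(f"Expected VM-outage minutes per host per year: {vm_outage_minutes:.0f}")
# With 10 guests per host, one crash costs 10x the service impact of the
# unconsolidated case -- which is exactly why the basket needs watching.
```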
This is not typically what people are looking for in their data center - and could easily be one of those career-limiting mistakes that you'd like to avoid - unless you already have your next job lined up. This falls under the "putting all your eggs into one basket" way of doing business. That is part of a famous quote - but not the whole quote. Mark Twain said, "Put all your eggs in the one basket and --- WATCH THAT BASKET". So, to follow Mark Twain's advice, we need to not just put our eggs into one basket; we also need to watch that basket. Of course, if you have the chance, you'd choose the most reliable basket you can get - like a mainframe (System z) or System p. Nevertheless, things happen no matter how you plan, and it isn't always just the basket that fails.
As some of you already know, watching servers and services is most commonly done by high-availability software - something like Linux-HA, HACMP, or GDPS. A properly configured HA system will watch the basket for you, and keep the worst from happening to your basket, your servers or your career.
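To give a feel for what "watching the basket" means, here is a toy loop - a drastically simplified stand-in for what Linux-HA, HACMP, or GDPS actually do (membership, quorum, fencing, policy engines). The host names, probes, and restart action are all hypothetical.

```python
# Toy "watch that basket" loop: probe each guest, restart on failure.
# A real HA stack adds membership, quorum, fencing, and richer policies.
import subprocess
import time

SERVICES = {
    "webvm1": ["ping", "-c", "1", "-W", "2", "webvm1"],  # liveness probe
    "dbvm1":  ["ping", "-c", "1", "-W", "2", "dbvm1"],
}

def alive(probe_cmd):
    return subprocess.run(probe_cmd, capture_output=True).returncode == 0

while True:
    for name, probe in SERVICES.items():
        if not alive(probe):
            print(f"{name} failed its probe; restarting (placeholder action)")
            # Hypothetical recovery: recreate the Xen guest from its config.
            subprocess.run(["xm", "create", f"/etc/xen/{name}.cfg"])
    time.sleep(10)
```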
As you can see, doing virtualization for reasons of consolidation doesn't make much sense unless you also add management software (HA software or otherwise) to watch your basket of virtual machines for you. In the end, it's easy to see that all these things are connected - virtualization, server consolidation, power savings (green computing), availability management - and you want to manage them all at once. This is the vision of IBM's New Enterprise Data Center initiative – to integrate them all. And as you can see, it actually makes sense – lots of sense.