Here are a few basic steps to get the most out of the VMware VI agent:
1) Install the latest vCenter from VMware.
The best release is the latest release, vCenter 4.1, and it has shown superior performance over earlier releases. If you are still using vCenter v2.5, be sure to install the latest level, Update 6 (build 227637). The early update levels of vCenter v2.5 will not work with the agent, and VMware currently supports only Update 6.
2) Install the latest VMware VI agent.
If you haven't already, install the new v6.2.3, available since February 2011. Details are here:
VMware Agent v6.2.3 Details
If you are still running v6.2.2, upgrade to v6.2.3 or the latest Interim Fix, which fixes a number of APARs:
VMware VI Interim Fixes
3) Set Validate SSL Certificates to No.
When configuring the agent, most users want to communicate via SSL, but getting the right certificate can be a challenge. Most users set "Use SSL Connection to Data Source" to Yes, but set "Validate SSL Certificates" to No. The data still uses SSL, but certificates are ignored.
4) Install Microsoft hotfix on Windows 2003 SP2.
If you run the agent on the same Windows 2003 SP2 machine as the vCenter, you may need a Microsoft hotfix to avoid TCP problems. Go here to find out more:
TCP Hotfix Information
5) Increase the heap size for large environments.
You might need to increase the java heap size for the agent's data provider if you have a large VMware environment, such as more than 100 ESX hosts and more than 1000 virtual machines. For details, see the User's Guide (the recommendation applies to v6.2.2 and v6.2.3):
Setting Java Heap for VMware VI Agent
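As a rough illustration only (the data provider's actual environment file and variable name are documented in the User's Guide linked above; `JAVA_OPTS` here is a stand-in), increasing the heap amounts to raising the JVM's maximum heap option:

```shell
# Hypothetical sketch -- JAVA_OPTS is a stand-in; check the User's Guide
# for the data provider's real environment file and variable name.
# The idea is simply to raise the JVM maximum heap (-Xmx):
export JAVA_OPTS="${JAVA_OPTS} -Xmx1024m"   # e.g. 1 GB for >100 hosts / >1000 VMs
echo "${JAVA_OPTS}"
```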
Still having a problem with the VMware VI agent? Look at this section on the support wiki on how to search the logs for information that might help: VMware VI Agent Common Tracing
And you can always contact IBM Support - we'll be glad to help!
Since February, I have been speaking to many customers about our new capacity management and planning reports that are available in ITM for Virtual Servers v6.2.3 (click here for demos & whitepapers on the subject). One common theme keeps popping up in the form of this question: how do I take this information and automate the change I need to make to optimize my system? For example, one customer said they loved the report that shows a list of LPARs that could be right-sized (LPARs that have been over-allocated CPU or memory), but they have 3,500 LPARs in their environment. Who is going to take the time to make those adjustments? Even if only 10% of the workloads need to be adjusted, that is 350 LPARs to reconfigure. Good point, I thought. Computers are supposed to do the work for us, aren't they?
Thank goodness our smart PhDs are actively working on that problem now - by taking information and transforming it into recommendations. Using the above example, this customer wants to right size their VMs to optimize their environment - i.e. the goal is to free up space to add additional workloads in the future, or to lengthen the time to purchase additional hardware. Taking this business policy into account, the software will provide recommendations for configuration changes and where to place workloads in the environment to optimize hardware utilization.
This approach to solving the problem will require different thinking on the part of our IT personnel: moving from "give me the data" to thinking in terms of compliance with business policies. Is the goal to reduce hardware costs or energy costs, or to ensure application A does not sit on the same server as application B?
If this topic is interesting, I invite you to join the conversation by adding to the list of business policies for which you would like actionable recommendations. If you are really, really interested in this topic, I invite you to join our beta program (contact our beta manager Gary @ email@example.com).
IBM has submitted several papers to be presented at VMworld 2011. If you are attending VMworld and would like to hear what IBM has to offer to manage, automate, and optimize VMware virtual and cloud environments, go to this link and vote for the IBM sessions (search "IBM"): www.vmworld.com
The new capacity planning tool, now available in Beta, will unlock the value of Tivoli Monitoring and Warehouse and enable a rich set of analytics on the existing data. This new capability will enable you to
- Understand and plan your virtual environments proactively, initially for VMware
- Minimize risks in your plan
- Optimize how you use capacity in the environment with intelligent workload sizing and placement
- Apply business and technical policies to keep your environment efficient and risk-free
- Make changes in a what-if analysis framework and view the impact of change.
The tool leverages Tivoli Integrated Portal (TIP) and Tivoli Common Reporting (TCR) with an embedded Cognos reporting engine. It integrates with the ITM and TDW infrastructure to get configuration and usage data from your virtual infrastructure.
Here's a quick overview of the advanced planning scenarios you can now implement in your virtual environment using this tool.
Key Scenarios for a Capacity Analyst
- Planning for capacity growth: Let's suppose your business provides a forecast that will increase the load on the IT infrastructure in the coming months. The capacity analyst can model the increase in resource requirements from the existing VMs in the what-if planning tool, scope the part of the infrastructure to analyze, and automatically generate a plan to fit the increasing demand. If required, new servers can be added to handle the growth.
- Ensure compliance with defined capacity planning policies: The LOB and application owners often provide the capacity analyst with a list of requirements for how their workloads should be placed on the IT infrastructure. These are typically business guidelines to improve efficiency, reduce cost, respect organizational boundaries, or cut risks on a virtual infrastructure. For example, the Finance and Payroll apps may not share common hosts, or apps with different downtime requirements may not share hosts. There may also be technical policies that guide planning: for example, reduce license cost by putting OS images on fewer hosts, or the DBA may want to keep some headroom for the database VMs. The tool can help to centralize the creation of such policies and select a subset to guide a what-if planning scenario.
- Avoid bottlenecks in your environment: IT Administrators can predict a bottleneck in a VMWare cluster that may not be fixed by dynamic allocation within the cluster. These are often long term issues as the cluster may be running VMs that are not the right combination to share resources dynamically. The planning capability may be used to recommend how VMs can be moved “across” clusters or clusters can be restructured to remove bottlenecks and optimize resources in a broader scope.
- Plan for new users in a Cloud environment: Cloud administrators are often challenged with planning for new users on the shared infrastructure and performing what-if analysis. With this tool, they can simulate new VMs on the discovered Cloud, add information regarding users, create policies specific to such users, and create a recommended new environment plan. The policies may simulate users that want dedicated hosts for their VMs, images that need specific types of hardware, etc. The recommended plan can help them understand how and where to add new hardware, or how to consolidate VMs to free up fragmented Cloud resources.
- Plan for retiring or re-purposing hardware: The planning capability enables the user to add new information for the discovered environment. For example, a user can add warranty date information about the discovered hardware, often contained in spreadsheets or other tools, and then select hosts that are more than 5 years old in the planning tool. They can add new hardware from the catalog for a what-if scenario. The tool can then automatically generate an optimized plan on how the workloads from the old hardware will fit on the new hardware and how many new machines of what type are required.
One can come up with several other scenarios on top of this tool framework.
The planning tool also provides a workflow-driven UI with both fast-path and expert-mode options. The main workflow page is shown below with a 5-step approach to create optimized virtual environment plans with default options for several steps. One can iterate through these steps to reach the desired results.
1. Load the latest configuration data of the virtual environment for analysis
2. Set the time period to analyze historical data
3. Define the scope of hosts to analyze in the virtual environment
4. Size the virtual machines in scope
5. Generate a placement plan for the virtual machines on the physical infrastructure in scope
An example recommendation output of the tool is shown below, with interactive topology navigation capability, summary views, and risk scores assigned to the infrastructure elements. This is an actionable recommendation: one can take this structured XML output and write an adapter to trigger automation workflows that implement the recommendations. The example screen shows how we analyzed a cluster with 4 hosts and recommended a consolidation onto 3 hosts.
The topology view is interactive as it allows the user to click on various nodes and visualize the summary of the infrastructure levels below the node. Risk levels of the nodes are shown as node colors.
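Because the recommendation output is structured XML, it lends itself to a small adapter script. The `<plan>`/`<move>` element and attribute names below are hypothetical (the real schema is defined by the tool); the sketch only illustrates the pattern of turning such output into lines a downstream automation workflow could consume:

```shell
# Hypothetical plan format -- the real XML schema ships with the tool.
cat > /tmp/plan.xml <<'EOF'
<plan>
  <move vm="vm01" from="host4" to="host2"/>
  <move vm="vm02" from="host4" to="host3"/>
</plan>
EOF

# Extract each recommended move as "vm source target" lines that an
# automation workflow (e.g. a migration script) could act on:
grep -o '<move [^/]*/>' /tmp/plan.xml |
  sed 's/.*vm="\([^"]*\)" from="\([^"]*\)" to="\([^"]*\)".*/\1 \2 \3/'
```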
We hope this will be an exciting set of functions to start with, and we look forward to suggestions on feature improvements and scenarios. Please contact Gary Forghetti (firstname.lastname@example.org) to schedule a demo or sign up for the Beta version of the tool. We'll keep updating this forum with more details, such as demo videos, white papers, etc.
Anindya Neogi, PhD | Senior Technical Staff Member | Tivoli Software | IBM Master Inventor
Ok, I admit, I was among the early adopters of the late nineties to
get hooked on VMWare. In fact, as an open source advocate I remember
playing with "freemware", qemu, bochs, openVZ, and several
other x86 virtualization technologies. Likewise, I was among the first
to start using Amazon's Elastic Compute Cloud (EC2). I've been hooked
by x86 commodity hardware virtualization for a long time, and I thank
VMWare and Ed Bugnion in particular for that. But why choose VMWare now?
Ten years ago, when the CPUs of the day made it hard to virtualize
efficiently, VMWare was great. After
2003 if you were mostly interested in linux (king of the cloud) Xen was
an excellent open source alternative to virtualize x86 commodity
servers. In 2006 Amazon launched their EC2 service which would become
the de facto cloud standard. EC2 is built on Xen and is probably the
single biggest x86 virtualization environment in the world. Several
hundred thousand of my closest friends have found EC2 to be a fantastic
compute platform that goes beyond server virtualization, all without a
trace of VMWare. So why choose VMWare now?
Today, modern CPUs include specific support for virtualization, making it
easier to deliver efficient virtualization without Xen's paravirt trick
or VMWare's innovative code patching. Current linux kernels include
support for KVM and I believe upstream kernels will again support Xen
natively. I remember when RedHat bought Qumranet,
developer of KVM, SPICE, and SolidICE (a desktop virtualization
technology) in 2008. Back then KVM didn't compare to VMWare. It
certainly was not "good enough" back then. Three years later, KVM has matured extremely well. I think it really is "good enough"
for commodity OS virtualization. In my cloud development efforts I've
run hundreds of thousands of VMs on Xen and KVM during the past 2 1/2
years. While I really respect Xen, I've come to like and appreciate KVM
on modern CPUs since it's just so simple and easy to use. Today there
are so many "good enough" choices for x86 virtualization, from
Xen, KVM, and VirtualBox to Hyper-V, which Microsoft is practically
giving away just to keep Windows relevant in the datacenter. So why
choose VMWare now?
Is low end disruption
a threat for VMWare? Linux and Apache are certainly well established
in the datacenter, preventing Microsoft's dominance over the desktop from
spilling into the datacenter. Ten years ago, when Windows had 90-something
percent market share of desktop computers, I myself considered Microsoft
an untouchable giant. Today, however, I think they're doomed because
Apple is cooler; all the kids have 'em, along with iPhones and
iPads. By analogy, VMWare should be very concerned. IMHO, they can and
will lose their dominance, and I think they'll do so via the classic Innovator's Dilemma:
VMWare continues to cater to their traditional high end customers.
Meanwhile, nearly three quarters of a million developers are using
Amazon's cloud as their platform for new software applications and
services. And the best part is Amazon's cloud doesn't even need or use
VMWare. In fact, neither does Google's AppEngine or Microsoft's Azure.
Sense a pattern? If you believe, as I do, that we're on the cusp of a
new platform war to deliver the next generation of applications and
services, then the key to success is the application development
community. VMWare may have operations teams sold, but developers love
the cloud. Interestingly, they may not even have the ops guys sold
after all. Here's a forum thread titled "VMWare, a falling giant":
"According to Ars Technica, 'A new survey seems to show that VMware's iron grip on the enterprise virtualization market is loosening,
with 38 percent of businesses planning to switch vendors within the
next year due to licensing models and the robustness of competing
hypervisors.' What do IT-savvy Slashdotters have to say about moving
away from one of the more stable and feature rich VM architectures?" The
survey found that VMware is the primary hypervisor for server
virtualization in 67.6 percent of shops, followed by Microsoft's Hyper-V
with 16.4 percent and Citrix with 14.4 percent. Wow, this doesn't even
compare to Microsoft's former dominance for which I recall seeing
numbers as high as 98% market share!
So why choose VMWare now? Maybe the question should be,
"Have you tried an open source hypervisor lately?"
Or better yet, "Have you tried a public cloud yet?"
Frankly, I don't even like using hypervisors directly anymore as I find
clouds much more powerful and easier to use. Why don't you give ISAAC a try?
You can see what a real cloud is like while also trying out open source hypervisors.
New VMware Additional Disk Extension gives the customer the ability to:
* Map one or more VMware datastores to a TSAM Cloud Storage Pool for provisioning additional disks
* Associate those Storage Pools with one or more customers
* Apply a per customer quota at a Storage Pool level
* Control whether disks are thin provisioned at a Storage Pool level
* Create and automatically format/mount extra disks (Windows drives or Linux mount points) from a Cloud Storage Pool when provisioning a new virtual machine
* Backup and restore server images including any additional disks
The extension was released in October and is available free of charge for download in the IBM Service Management Library.
Download VMware Additional Disk Extension here - http://www.ibm.com/software/ismlibrary?NavCode=1TW10TS0B
Exciting news!! We announced this week the upcoming availability of IBM Tivoli Monitoring for Virtual Environments v7.1 (formerly known as ITM for Virtual Servers). Why did we change the name? Previously, our focus was on ensuring the health of the virtual server environment - VMs & hosts and virtual storage and network elements like data store capacity, etc. With this release, we are now focused across the virtual environment to include physical network and storage performance, thus providing a holistic view of all physical and virtual shared resources across the virtual environment. This offering will be generally available on November 23rd. Enhanced capabilities include:
- New capacity planning reports for recommendations on workload
placement, highlighting potential energy and server costs savings while
adhering to co-location policies. You can now use benchmarking data,
results simulation, and a policy framework to more intelligently assess
where workloads should be placed, instead of relying solely on resource
availability in the virtual host farm.
- Busy administrators can make rapid assessments of server, storage, and
network components, viewing physical and virtual performance and
change history through the default settings of a new Web 2.0 dashboard.
- If you have diversified virtualization investments, you can extend Tivoli
virtual environment performance and availability monitoring to Citrix
XenApp and XenDesktop via newly released monitoring agents.
- If you have invested in the Cisco Unified Computing System (UCS)
platform, you can now monitor performance and availability attributes
of UCS systems, including chassis and blade health, network fabric
health, and storage management integration.
Check out the official announcement:
IBM Tivoli Monitoring for Virtual Environments V7.1 is now available.
At a glance:
IBM® Tivoli® Monitoring for Virtual Environments V7.1 extends the benefits of end-to-end performance monitoring in a virtualized environment by providing additional hypervisor support and new capacity planning reports.
• New Web 2.0 dashboard
• New Cisco UCS monitoring agent
• New Citrix XenDesktop and XenApp performance and availability monitoring
• New capacity analytics and workload placement guidance for VMware
IBM Tivoli Monitoring for Virtual Environments V7.1 extends the benefits of end-to-end performance monitoring in a virtualized environment by providing sophisticated capacity analytics and workload placement guidance that allow you to safely maximize the density of virtual hosts. By using this insight and guidance, you are more likely to realize the promised cost savings of virtualization by eliminating the uncertainty that often accompanies a migration from physical application servers to virtual ones. Policy-driven analytics for VMware environments do not simply place virtual machines on the "least busy" hosts, but rather place them where those workloads will function best across a range of performance and compliance conditions. New dashboards will then allow operators to conveniently assess the overall health of the newly tuned virtual infrastructure.
Service Health for IBM SmartCloud Provisioning has officially GA'ed and is now available on IBM Integrated Service Management Library ( ISML ).
Service Health provides pre-built integrations between IBM SmartCloud Provisioning and IBM SmartCloud Monitoring, utilizing a custom agent, OS agents, and the ITMfVE agents. A product-provided navigator offers a concise overview of the health of the IBM SmartCloud Provisioning infrastructure, enabling you to identify and react quickly to issues in your environment, such as an unresponsive compute node, high disk usage on storage nodes, or key kernel services not responding, and so minimize their impact. It also provides visibility into the KVM and ESXi hypervisors.
This solution can be downloaded from the IBM Integrated Service Management Library( ISML ) following this link -> Service Health for IBM SmartCloud Provisioning
Two new white papers are available on the IBM Integrated Service Management
Library ( ISML ) that explain how to use Tivoli Storage Manager to back
up different areas within IBM SmartCloud Provisioning.
The first white paper provides information on how to use the Tivoli Storage
Manager Backup-Archive client to back up and restore the boot volume of
an IBM SmartCloud Provisioning persistent virtual machine, and how to
make periodic backups of a normal volume and select and restore a
backup. The white paper can be downloaded from the IBM Integrated Service
Management Library ( ISML ) following this link -> Backing up IBM SmartCloud Provisioning's Persistent Volumes with Tivoli Storage Manager Client
The second white paper provides information on how to use the Tivoli Storage
Manager Backup-Archive client to back up and restore the following
components of the IBM SmartCloud Provisioning infrastructure: the
Preboot Execution Environment ( PXE ) server, the web console
configuration, and the HBase data store.
The white paper can be downloaded from the IBM Integrated Service
Management Library( ISML ) following this link -> Backing up IBM SmartCloud Provisioning's Infrastructure with Tivoli Storage Manager Client
There is a new white paper available on the IBM Integrated Service Management Library ( ISML ) that explains how to use Tivoli Storage Manager to back up a VMware virtual machine that was deployed by the Workload Deployer in IBM SmartCloud Provisioning version 2.1.
The white paper explains how to locate, and back up the virtual machine in VMware using IBM Tivoli Storage Manager, and how to restore the virtual machine to the Workload Deployer environment.
Interim Fix 2 for the ITM VMware VI agent version 7.1.0 is available. This interim fix is cumulative so customers will not need to install Interim Fix 1. For a list of APARs fixed in IF 1 see this list.
Interim fix 2 includes fixes for problems described by APARs
- IV19978 Abstract: EFFECTIVE SERVERS AND TOTAL SERVERS HAVE A WRONG VALUES.
- IV22056 Abstract: VM AGENT SHOWING NAA.ID AS LARGE NUMBER.
In addition to the APAR fixes, this interim fix includes new
attributes that were requested by customers. These attributes provide further insight into the memory demands of executing virtual machines and the CPU utilization on the host server. For virtual machines, usage, active, and shared
memory attributes have been added. For the host, CPU core utilization (vSphere 5.0 or higher is required) has been added.
Interim Fix 2 may be downloaded from IBM Support Fix Central. More information may be found here
Join us on the upcoming Tivoli User Community webcast,
with an opportunity for questions,
Tuesday, September 18th at 11:00 AM ET, USA
Reserve Your Webcast Seat Now
The benefits of virtualization
have spurred many organizations to move toward a virtual infrastructure. While
these organizations enjoy accelerated service delivery and resource
optimization that reduce the cost of IT resources, they have also frequently experienced
a new set of management complexities. The nature of the virtual environment,
which calls for a vast network of shared resources, is best handled with
cloud-computing capabilities on the virtualized infrastructure. In this webcast, the IBM team will address these complexities head
on, showing how to increase the ROI of your current virtualized
infrastructure by building a workload-optimized cloud.
About The Speaker: Shawn Jaques, Program Manager IBM SmartCloud Foundation
Shawn Jaques is a
program manager in IBM marketing focused on technical marketing of the IBM
SmartCloud Foundation portfolio. Prior
to this role, Shawn led the Tivoli Cloud product management and strategy team,
responsible for identifying market opportunities and setting cloud product and
portfolio direction. Shawn's prior
experience includes numerous product management, market management and strategy
roles within IBM as well as consulting and financial auditing for other
firms. Shawn has a Master of Business
Administration from The University of Texas at Austin and a Bachelor of Science
from the University of Montana. He lives
in Boulder, Colorado and is a fitness and outdoor enthusiast.
The Official Tivoli User Community is the largest online and offline
organization of Tivoli professionals in the world – home to over 160 local User
Communities and dozens of virtual/global groups from 29 countries – with more
than 26,000 members. The TUC community offers Users blogs and forums for
discussion and collaboration, access to the latest whitepapers, webinars, presentations
and research for Users, by Users and the latest information on Tivoli
products. The Tivoli User Community offers the opportunity to learn and
collaborate on the latest topics and issues that matter most. Membership
is complimentary. Join today!
1. On the VM console menu, navigate to VM > Guest > Install/Upgrade VMware Tools.
2. Open the folder containing the VMware Tools installables in a terminal. The installer is a tarball, so extract it (tar -xzf VMwareTools-4.0.0-261974.tar.gz) and run the bundled vmware-install.pl script; rpm -i applies only if you are using the .rpm variant of the installer.
3. Run /usr/bin/vmware-config-tools.pl
The steps mentioned above usually work fine for a 64-bit OS. However, today I had to create a 32-bit RHEL 6.1 OS and faced a couple of issues:
1. gcc was not installed. Install the gcc rpm that matches the OS.
2. Kernel header files were not found in /usr/include. After a little googling, I found the solution to this issue:
i. Run the command uname -r to find the running kernel version.
ii. Install the matching headers: rpm -ivh kernel-devel-<version found in the command above>.
iii. ls -d /usr/src/kernels/$(uname -r)*/include gives us the kernel header files path, which we can then feed to the vmware-config-tools.pl prompt.
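Those three sub-steps can be sketched as a short script (RHEL assumed; the yum line needs root and configured repositories, so it is left commented):

```shell
# i.  Find the running kernel version:
KVER=$(uname -r)

# ii. Install the matching kernel-devel package (needs root / yum repos):
#     yum -y install "kernel-devel-${KVER}"

# iii. Print the include path to feed to the vmware-config-tools.pl prompt:
ls -d /usr/src/kernels/"${KVER}"*/include 2>/dev/null ||
  echo "/usr/src/kernels/${KVER}/include (install kernel-devel first)"
```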
The challenges of
virtualized environments are driving the shift to greater integration of
service management capabilities such as image and patch management, high-scale
provisioning, monitoring, storage and security. Join us for this webcast to learn how
organizations can realize the full benefits of virtualization to reduce
management costs, decrease deployment time, increase visibility into
performance and maximize utilization.
If you're in North America, register here for the April 16th session:
If you're in Asia Pacific, register for the April 23rd session: