
Virtual desktop environment sizing


Just a reminder from my previous blog entries:

  • “Always be sure to consider any user objections before deploying desktop virtualization and thin client technology.”
  • “Best practices don’t mean best performances.”
  • “The virtual desktop experience must be equally as good as the physical desktop experience for the user.”
  • “It’s not guesswork.”

We currently have three types of users:

  • Knowledge workers
  • Task workers
  • High-performance workers

Applying the standard rule of thumb of about 40 virtual desktops per server is very often a bad choice. The next step is to gain better knowledge of your existing desktop environment.

The main questions are:

  • What applications are in use?
  • What are the resource demands?
  • Which applications pose virtualization problems?

SysTrack® from Lakeside Software, Inc. combines comprehensive system monitoring with sophisticated statistical analysis of applications and users to create reports; it can help you discover and visualize your environment.

Discover:

  • Full inventory of PCs, peripherals, and applications
  • All applications in use, regardless of installation method
  • Identification of web applications and browser compatibility (for example, IE 6)
  • PC and attached-peripheral inventory, including age

Visualize:

  • Assessment status of applications, machines, and users in a dashboard view

From this data, we must determine:

  • What systems and users to virtualize
  • What the resource demands will be
  • How to best map physical to virtual for optimum results
  • What results we will achieve

The main point I want to address in this blog entry is capacity:

  • How much CPU, memory, and IO am I currently consuming?
  • How should I define the virtual machines based on the resource demands?
  • How do I account for workload, user need, time of day, geography, and other factors?
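
To make these questions concrete, here is a minimal back-of-the-envelope sizing sketch in Python. It is not part of SysTrack; the per-host capacities and per-desktop demand figures are hypothetical placeholders standing in for the numbers you would actually collect over the monitoring period. The point is simply that the number of VMs a host can carry falls out of measured demand, not out of a flat 40-VM rule.

    # Rough VDI sizing sketch: estimate VMs per host from measured per-desktop
    # demand instead of a flat "40 VMs per server" rule.
    # All numbers are hypothetical placeholders; substitute the averages you
    # collect during the 3-4 week monitoring period.

    HOST = {
        "cpu_mips": 50_000,   # per-host CPU capacity
        "memory_gb": 96,      # per-host RAM
        "disk_iops": 4_000,   # per-host sustainable IOPS
    }

    # Average demand per desktop, by user type (hypothetical values).
    PER_DESKTOP = {
        "task worker":             {"cpu_mips": 300,  "memory_gb": 1.0, "disk_iops": 6},
        "knowledge worker":        {"cpu_mips": 600,  "memory_gb": 2.0, "disk_iops": 12},
        "high-performance worker": {"cpu_mips": 1500, "memory_gb": 4.0, "disk_iops": 30},
    }

    HEADROOM = 0.30  # keep 30% spare capacity for peaks and growth


    def vms_per_host(user_type: str) -> int:
        """How many desktops of one type fit on a host.

        The binding constraint is whichever resource runs out first.
        """
        demand = PER_DESKTOP[user_type]
        usable = {k: HOST[k] * (1 - HEADROOM) for k in HOST}
        return int(min(usable[k] / demand[k] for k in HOST))


    for user_type in PER_DESKTOP:
        print(f"{user_type}: ~{vms_per_host(user_type)} VMs per host")

With these placeholder numbers the answer ranges from roughly 16 VMs per host for high-performance workers to roughly 67 for task workers, which is exactly why a single fixed ratio is misleading.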

The main steps using SysTrack are as follows:

  1. SysTrack installation and configuration
  2. Candidate workstation selection
  3. SysTrack agent installation
  4. Monitoring of the data collection effort (3 to 4 weeks)
  5. Data collection
  6. Data analysis using Virtual Machine Planner and report creation

The Virtual Machine Planner and report creation tool can provide a lot of information; more than 40 reports are available, answering questions such as:

  • What applications are in use?
  • What are the resource demands?

The most important point for the hypervisor design is that the results prescribe mappings from existing desktops and servers to virtual servers for optimal balance.
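
The Virtual Machine Planner does this mapping for you. Purely to illustrate the idea, here is a naive greedy sketch of my own (it is not Lakeside's algorithm, and the measured demand values below are synthetic): desktops are placed first-fit-decreasing by CPU demand so that no host exceeds an assumed CPU or memory budget.

    # Illustrative greedy placement of desktops onto hypervisors.
    # This is NOT the SysTrack / Virtual Machine Planner algorithm, only a toy
    # sketch of the "map physical desktops to virtual hosts" idea.

    from dataclasses import dataclass, field

    HOST_CPU_MIPS = 50_000 * 0.8  # keep 20% headroom (assumption)
    HOST_MEM_GB = 96 * 0.8


    @dataclass
    class Host:
        name: str
        cpu_used: float = 0.0
        mem_used: float = 0.0
        desktops: list = field(default_factory=list)

        def fits(self, cpu: float, mem: float) -> bool:
            return (self.cpu_used + cpu <= HOST_CPU_MIPS
                    and self.mem_used + mem <= HOST_MEM_GB)

        def place(self, name: str, cpu: float, mem: float) -> None:
            self.cpu_used += cpu
            self.mem_used += mem
            self.desktops.append(name)


    def plan(desktops):
        """Greedy first-fit-decreasing placement by CPU demand."""
        hosts = []
        for name, cpu, mem in sorted(desktops, key=lambda d: d[1], reverse=True):
            target = next((h for h in hosts if h.fits(cpu, mem)), None)
            if target is None:
                target = Host(f"Hypervisor {len(hosts) + 1}")
                hosts.append(target)
            target.place(name, cpu, mem)
        return hosts


    # Synthetic measured demand: (desktop name, avg CPU MIPS, avg memory GB).
    measurements = [(f"PC-{i:03d}", 300 + (i % 5) * 250, 1.0 + (i % 3))
                    for i in range(150)]

    for host in plan(measurements):
        print(f"{host.name}: {len(host.desktops)} VMs, "
              f"{host.cpu_used:,.0f} MIPS, {host.mem_used:.0f} GB")

A real planner also has to weigh disk I/O, network, time of day, and geography, which is exactly what the collected data enables.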

All of this information is given with the permission of Lakeside.

More information is available from http://www.lakesidesoftware.com/index.html, or you may contact me.

Hypervisor section from executive summary:

Predicted Enterprise Demand

The tables below show average statistics for the enterprise according to the current allocation strategy. In this example, you can see that each of the three servers is assigned a different number of VMs, based on the data that was collected.

Another report lists the users assigned to each hypervisor.

The objective isn’t to reproduce all the reports in this blog entry, but I want to show that 40 VMs per hypervisor is not always the best practice.

Hypervisor Specs

Name          Hypervisor
MIPS          50,000
Memory        96 GB
I/O Capacity  SAN/NFS
Network       600 Mbits/sec
Growth Rate   30%
Quantity      3

Predicted Enterprise Demand

The following table shows the average, minimum, and maximum statistics of the enterprise according to the current allocation strategy.

                          Avg       Min       Max
CPU (MIPS)                50,929    19,148    112,669
Memory (GB)               149.679   63.188    225.563
Disk (IO Ops/sec)         1794      746       10138
Network I/O (MBits/sec)   13.17     1.4       45.6
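
Reading the specs and the demand table together, a quick sanity check of my own (not a SysTrack report) shows why simple averages already argue against an even 40-VM split: once the 30% growth rate is applied, memory, not CPU, is the resource closest to the aggregate limit of the three hosts.

    # Sanity check: compare predicted enterprise demand against the aggregate
    # capacity of the three hypervisors from the "Hypervisor Specs" table.
    # Demand figures are the averages quoted above; applying the 30% growth
    # rate directly to them is a simplifying planning assumption.

    HOSTS = 3
    HOST_MIPS = 50_000
    HOST_MEM_GB = 96
    GROWTH = 0.30

    demand = {                       # enterprise-wide averages
        "CPU (MIPS)": 50_929,
        "Memory (GB)": 149.679,
    }
    capacity = {
        "CPU (MIPS)": HOSTS * HOST_MIPS,
        "Memory (GB)": HOSTS * HOST_MEM_GB,
    }

    for resource, avg in demand.items():
        grown = avg * (1 + GROWTH)
        pct = 100 * grown / capacity[resource]
        print(f"{resource}: avg {avg:,.0f}, with growth {grown:,.0f} "
              f"({pct:.0f}% of the {capacity[resource]:,.0f} total)")

CPU lands at roughly 44% of the combined capacity while memory reaches roughly 68%, and the quoted maximum of 225.563 GB is already close to the 288 GB total; that is the kind of constraint the planner balances when it assigns 84, 83, and 42 VMs rather than 40 each.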

Allocation Statistics

The following table shows the predicted statistics for all suggested hypervisors according to the current allocation plan, as an example. Note that maximum values may exceed logical limits (for example, more than 100% CPU). Loads that exist only for short time spans can safely be deferred to maximize cost-effectiveness and the efficient use of hardware. A quick check of these peak figures follows the table.

Hypervisor Name   VMs   Statistic            Avg       Max

Hypervisor 1       84   CPU                  36.45%    69.20%
                        MEM                  28.28%    55.51%
                        Disk (Reads/sec)     254       861
                        Disk (Writes/sec)    297       728
                        Disk (IO Ops/sec)    551       1371
                        Net                  0.49%     4.46%

Hypervisor 2       83   CPU                  45.28%    77.58%
                        MEM                  30.96%    62.20%
                        Disk (Reads/sec)     265       1019
                        Disk (Writes/sec)    318       718
                        Disk (IO Ops/sec)    583       8719
                        Net                  1.31%     5.20%

Hypervisor 3       42   CPU                  23.88%    60.55%
                        MEM                  17.17%    36.96%
                        Disk (Reads/sec)     198       728
                        Disk (Writes/sec)    180       473
                        Disk (IO Ops/sec)    378       1083
                        Net                  0.49%     5.20%
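
As one last illustration (again my own check, not a SysTrack report), the CPU and memory figures from the table above can be scanned for hypervisors whose predicted peaks approach saturation; the 85% warning threshold is an arbitrary planning assumption.

    # Flag hypervisors whose predicted peak utilization approaches saturation.
    # (avg, peak) pairs are the CPU and MEM percentages from the table above;
    # the 85% threshold is an arbitrary assumption.

    allocation = {
        "Hypervisor 1": {"CPU": (36.45, 69.20), "MEM": (28.28, 55.51)},
        "Hypervisor 2": {"CPU": (45.28, 77.58), "MEM": (30.96, 62.20)},
        "Hypervisor 3": {"CPU": (23.88, 60.55), "MEM": (17.17, 36.96)},
    }
    THRESHOLD = 85.0  # warn above this peak percentage

    for host, stats in allocation.items():
        for resource, (avg, peak) in stats.items():
            status = "WARN" if peak >= THRESHOLD else "ok"
            print(f"{host} {resource}: avg {avg:.2f}%, peak {peak:.2f}% [{status}]")

With this plan none of the hosts crosses the threshold at peak, which is the kind of confirmation you want before committing to the mapping.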