Right-sizing Virtual Machines

This post is part of the Operationalize Your World program. Do read that first to get the context.

Over-provisioning is a common malpractice in real-life SDDCs. P2V migration is a common reason, as the VM was simply sized to match the physical server. Another reason is conservative sizing by the vendor, which is then padded further by the Application Team.

I’ve seen large enterprise customers try to do a mass adjustment, downsizing many VMs, only to have the effort backfire when performance suffers.

Since performance is critical, you should address right-sizing from that angle. Educate VM Owners that right-sizing actually improves performance. The carrot is a lot more effective than the stick, especially for those with money. Saving money is a weak argument in most cases, as VM Owners have already paid for their VMs.

Why oversized VMs are bad

Use the points below to explain why oversized VMs are bad for the VM Owner:

  • Boot time
    • If a VM does not have a memory reservation, vSphere creates a swap file the size of the configured RAM. The bigger the VM, the longer it takes to create this file, especially on slow storage.
  • vMotion
    • The bigger the VM, the longer it takes to do vMotion and Storage vMotion.
  • NUMA
    • A NUMA penalty happens when the VM cannot fit into a single socket.
    • It also happens when the VM’s active vCPUs exceed what is available on a single socket at that point in time (see the sketch after this list). Example:
      • You have a 12-vCPU VM on a 12-core socket. This is fine if it is the only VM running on the box. In reality, other VMs compete for resources. If the 12 vCPUs want to run, but Socket 0 has only 6 free cores and Socket 1 has 6 free cores, the VM will be spread across both sockets.
  • Co-Stop and Ready
    • A large VM experiences higher Co-Stop and Ready time. Even if the application does not use all the vCPUs, the Guest OS still demands that all of them be scheduled by the hypervisor.
  • Snapshot
    • It takes longer to snapshot, especially if the memory snapshot is included.
  • Processes
    • The Guest OS is not aware of the NUMA topology of the physical motherboard and thinks it has a uniform memory structure. It may move processes among its vCPUs, assuming there is no performance impact. If the vCPUs are spread across NUMA nodes, for example a 20-vCPU VM on a 2-socket box with 10 cores per socket, it can experience this ping-pong effect.
  • Visibility
    • There is no performance visibility at the individual vCPU or virtual-core level. The majority of counters are at the VM level, which is an aggregate of all its vCPUs. It does not matter whether you use virtual sockets or virtual cores.
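
To make the NUMA point above concrete, here is a minimal sketch, with purely hypothetical core counts, of the decision the scheduler effectively faces: a wide VM only stays on one NUMA node if a single socket has enough free cores for all the vCPUs that want to run.

```python
# Minimal sketch: does a wide VM fit into one socket right now?
# The core counts below are hypothetical.

def fits_in_one_socket(vcpus_to_run: int, free_cores_per_socket: list[int]) -> bool:
    """Return True if some single socket can host all runnable vCPUs right now."""
    return any(free >= vcpus_to_run for free in free_cores_per_socket)

# 12-vCPU VM on a 2-socket box with 12 cores per socket.
# Idle host: Socket 0 has all 12 cores free, so the VM stays on one NUMA node.
print(fits_in_one_socket(12, [12, 12]))   # True

# Busy host: other VMs leave only 6 free cores on each socket,
# so the 12 vCPUs get spread across both sockets.
print(fits_in_one_socket(12, [6, 6]))     # False
```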

Impacts of Large VMs

  • Large VMs are also bad for other VMs, not just for themselves. They can impact other VMs, large or small. The ESXi VMkernel scheduler has to find available cores for all their vCPUs, even when those vCPUs are idle. Other VMs may be migrated from core to core, or socket to socket, as a result. There is a counter in esxtop that tracks this migration.
  • Large VMs tend to perform more slowly, as ESXi may not have enough free physical cores to schedule all their vCPUs at once. The CPU Co-Stop counter tracks this.
  • Large VMs reduce the consolidation ratio. You can pack more vCPUs with smaller VMs than with big VMs.

As a Service Provider, this actually hits your bottom line. Unless you have progressive pricing, you make more money with smaller VMs because you sell more vCPUs. For example, with 2 sockets, 20 cores, and 40 threads, you can have either:

  • 1x 20-vCPU VM with moderately high utilisation
  • 40x 1-vCPU VMs with moderately high utilisation

In the above example, you sell 20 vCPUs versus 40 vCPUs.
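
A quick back-of-the-envelope sketch of that arithmetic, assuming flat (non-progressive) pricing; the price per vCPU is purely hypothetical:

```python
# Hypothetical flat pricing: revenue scales with the number of vCPUs sold.
price_per_vcpu = 50  # illustrative monthly price per vCPU

vcpus_one_large_vm   = 1 * 20   # 1x 20-vCPU VM  -> 20 vCPUs sold
vcpus_many_small_vms = 40 * 1   # 40x 1-vCPU VMs -> 40 vCPUs sold

print(vcpus_one_large_vm * price_per_vcpu)    # 1000
print(vcpus_many_small_vms * price_per_vcpu)  # 2000
```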

Approach to Right-sizing

Focus on large VMs

  • Every downsize is a battle, because you are changing the paradigm to “less is more”. Plus, it requires downtime.
  • Downsizing from 4 vCPUs to 2 does not buy much nowadays with >20-core Xeons.
  • No one likes to give up what they have been given, especially if they were given little. By focusing on the large ones, you spend 20% of the effort to get 80% of the result.

Focus on CPU first, then RAM

  • Do not change both at the same time.
    • It is hard enough to ask the apps team to reduce CPU, so asking for both will be even harder.
    • If there is a performance issue after you reduce both CPU and RAM, you have to bring both back up, even though the issue was caused by just one of them.
  • RAM in general is plentiful, as RAM is cheap.
  • RAM usage is hard to measure, even with agents. If the app manages its own memory, you need application-specific counters. Apps like the Java VM and databases do not expose to Windows/Linux how they manage their RAM.

Disk right-sizing needs to be done at the Guest OS partition level

  • VM Owners won’t agree to you resizing their partitions.
  • Windows and Linux have different partition naming. This can make reporting across OSes difficult.

Technique

The technique we use for CPU and RAM is the same, so I’ll use CPU as the example.

The first thing you need to do is create a dynamic group that captures all the large VMs. Create one group for CPU and one for RAM.
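
As a rough illustration of what the group membership boils down to, here is a minimal sketch; the inventory, VM names, and thresholds are hypothetical, and in practice you would express the same criteria in the dynamic group’s membership rules rather than in code:

```python
# Minimal sketch: pick every VM whose configured size crosses a threshold.
# All data below is hypothetical.

inventory = [
    {"name": "db-01",  "vcpu": 24, "ram_gb": 128},
    {"name": "web-01", "vcpu": 2,  "ram_gb": 8},
    {"name": "app-01", "vcpu": 16, "ram_gb": 64},
]

LARGE_CPU_THRESHOLD = 8    # vCPUs; pick what "large" means in your environment
LARGE_RAM_THRESHOLD = 32   # GB

large_cpu_group = [vm["name"] for vm in inventory if vm["vcpu"] >= LARGE_CPU_THRESHOLD]
large_ram_group = [vm["name"] for vm in inventory if vm["ram_gb"] >= LARGE_RAM_THRESHOLD]

print(large_cpu_group)  # ['db-01', 'app-01']
print(large_ram_group)  # ['db-01', 'app-01']
```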

Once you have created the group, the next step is to create super metrics. You should create two; a sketch of what they compute follows the list below:

  1. Maximum CPU Workload among these large VMs
    • You expect this number to hover around 80%, as it only takes one VM among all the large VMs for the line chart to spike.
    • If you have many large VMs, one of them tends to have high utilisation at any given time.
    • If this number is low, that means severe wastage!
  2. Average CPU Workload among these large VMs
    • You expect this number to hover around 40%, indicating the sizing was done correctly.
    • If this chart stays below 25% for an entire month, then all the large VMs are oversized.
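
Here is a minimal sketch, on hypothetical values, of what the two super metrics compute at each collection cycle: the Max and the Average of CPU Workload (%) across the members of the large-VM group.

```python
# Minimal sketch of the two super metrics at one collection cycle.
# The workload samples are hypothetical.

cpu_workload_percent = {
    "db-01":  78.0,
    "app-01": 31.5,
    "erp-01": 12.2,
}

max_workload = max(cpu_workload_percent.values())
avg_workload = sum(cpu_workload_percent.values()) / len(cpu_workload_percent)

print(round(max_workload, 1))  # 78.0 -> expect this line to hover around 80%
print(round(avg_workload, 1))  # 40.6 -> below ~25% all month means oversizing
```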

You do not need to create the Minimum: there is bound to be a VM that is idle at any given time.

In a perfect world, where all the large VMs are right-sized, what would you expect the two charts to show?

All good. The two line charts show us the degree of over-provisioning. Can you spot the limitation?

It lies in the counter itself. We cannot distinguish whether the CPU usage is due to real demand or not. Real demand comes from the application. Non-real demand comes from the infrastructure, such as:

  • Guest OS reboot
  • Antivirus full scan
  • A hung process. This can result in 100% CPU demand. How do you distinguish a runaway process?

If your Maximum line is constantly ~100%, you may have a runaway process.
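
One way to make that hunch more systematic is sketched below, on hypothetical data: flag the group when the Maximum line stays pinned near 100% for a sustained stretch of collection cycles.

```python
# Minimal sketch: flag a possible runaway process when the Max super metric
# stays near 100% for N consecutive samples. Threshold, sample count, and
# the series below are all hypothetical.

def looks_like_runaway(max_workload_series: list[float],
                       threshold: float = 95.0,
                       min_consecutive_samples: int = 12) -> bool:
    """True if the Max line stays at or above threshold for N samples in a row."""
    streak = 0
    for sample in max_workload_series:
        streak = streak + 1 if sample >= threshold else 0
        if streak >= min_consecutive_samples:
            return True
    return False

# With 5-minute collection cycles, 12 samples pinned near 100% is one hour.
series = [99.1, 99.8, 100.0, 99.5] * 4
print(looks_like_runaway(series))  # True
```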

Now that you’ve got the technique, you are ready to implement it. Follow these two blog posts:

  1. CPU right-sizing
  2. RAM right-sizing