
vCenter and vRealize counters – part 2


The following diagram shows how a VM gets its resource from ESXi. Unlike a physical server, a VM has dynamic resources given to it. It is not static. Contention, Demand and Entitlement are concepts that do not exist in the physical world. It is a pretty complex diagram, so let me walk you through it.

[Figure: how a VM gets its resources from ESXi — Provisioned, Limit, Entitlement, Usage, and Demand]

The tall rectangular area represents a VM. Say this VM is configured with 8 GB of virtual RAM. The bottom line represents 0 GB and the top line represents 8 GB. We call this configured amount Provisioned. This is what the Guest OS sees, so if it is running Windows, you will see 8 GB of RAM when you log into Windows.

Unlike a physical server, a VM can be configured with a Limit and a Reservation. These are set outside the Guest OS, so Windows or Linux is not aware of them. You should minimize the use of Limit and Reservation, as they make operations more complex.

Entitlement means what the VM is entitled to. In this example, the hypervisor entitles the VM to a certain amount of memory. I did not use a solid line, and I used an italic font style, to mark that Entitlement is not a fixed value but a dynamic one determined by the hypervisor. It varies every minute, determined by the Limit, Shares, and Reservation of the VM itself, and by the demand of the other VMs running on the same host.

Obviously, a VM can only use what it is entitled to at any given point in time, so the Usage counter does not go higher than the Entitlement counter. The green line shows that Usage ranges from 0 to the Entitlement value.

In a healthy environment, the ESXi host has enough resources to meet the demands of all the VMs on it, with sufficient overhead. In this case, you will see that the Entitlement, Usage, and Demand counters are similar to one another when the VM is highly utilized. This is shown by the green line, where Demand stops at Usage, and Usage stops at Entitlement. The numerical values may not be identical, because vCenter reports Usage as a percentage averaged over the sample period, reports Entitlement in MHz using the latest value in the sample period, and reports Demand in MHz averaged over the sample period. This also explains why you may see Usage slightly higher than Entitlement on a highly utilized vCPU. If the VM has low utilization, you will see the Entitlement counter much higher than Usage.

An environment in which the ESXi host is resource constrained is unhealthy: the host cannot give every VM the resources it asks for. The VMs demand more than they are entitled to use, so the Usage and Entitlement counters will be lower than the Demand counter. Demand can naturally exceed Limit. For example, if a VM is limited to 2 GB of RAM but wants to use 14 GB, Demand will exceed Limit. Demand cannot, however, exceed Provisioned, which is why the red line stops at Provisioned; that is as high as it can go.

The difference between what the VM demands and what it gets to use is the Contention counter. Contention is a special counter that tracks all this competition for resources. It is a counter that only exists in the virtual world.

So Contention, simplistically speaking, is Demand – Usage. I say simplistically because that is not the actual formula. The actual formula does not really matter for practical purposes, as it is all relative to the expectation you have set with your customers (the VM owners).
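To make the relationship concrete, here is a minimal sketch of that simplified formula. This is an illustration only; the actual vR Ops formula differs, and the function name is my own.

```python
def contention_estimate(demand_mhz, usage_mhz):
    """Simplified contention: the gap between what a VM demands and what
    it actually gets to use. Floored at 0, since Usage can occasionally
    read slightly above Demand due to sampling differences."""
    return max(demand_mhz - usage_mhz, 0.0)

# A VM demanding 4000 MHz but only allowed to use 3000 MHz:
print(contention_estimate(4000, 3000))  # 1000.0
# A VM whose demand is fully met has no contention:
print(contention_estimate(2000, 2000))  # 0.0
```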

Contention happens when a VM demands more than it gets to use. So if Contention is 0, the VM can use everything it demands. This is the ultimate goal, as performance will match the physical world. The Contention value is useful to demonstrate that the infrastructure provides good service to the application team. If a VM owner comes to you and says your shared infrastructure is unable to serve his or her VM well, both of you can check the Contention counter.

The Contention counter should become a part of your Performance SLA or Key Performance Indicator (KPI). It is not sufficient to track utilization alone. When there is contention, it is possible that both your VM and ESXi host have low utilization, and yet your customers (VMs running on that host) perform poorly. This typically happens when the VMs are relatively large compared to the ESXi host. Let me give you a simple example to illustrate this. The ESXi host has two sockets and 20 cores. Hyper-threading is not enabled to keep this example simple. You run just 2 VMs, but each VM has 11 vCPUs. As a result, they will not be able to run concurrently. ESXi VMkernel will schedule them sequentially as there are only 20 physical cores to serve 22 vCPUs. Here, both VMs will experience high contention.
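The sizing arithmetic in that example can be sketched as a toy check. This assumes strict co-scheduling for simplicity; the real ESXi scheduler uses relaxed co-scheduling, but the conclusion for two 11-vCPU VMs on 20 cores is the same: they cannot all be placed at once.

```python
def fits_concurrently(vcpus_per_vm, physical_cores):
    """True if every VM's vCPUs can be placed on a physical core at the
    same time (strict co-scheduling assumed, hyper-threading off)."""
    return sum(vcpus_per_vm) <= physical_cores

# Two 11-vCPU VMs on a 20-core host: 22 vCPUs > 20 cores,
# so they take turns and both experience contention.
print(fits_concurrently([11, 11], 20))  # False
# Two 10-vCPU VMs would fit:
print(fits_concurrently([10, 10], 20))  # True
```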

Hold on! You might say, “There is no Contention counter in vSphere and no memory Demand counter either.”

This is where vR Ops comes in. It does not just regurgitate the values in vCenter. It has implicit knowledge of vSphere and a set of derived counters with formulae that leverage that knowledge.

You need to have an understanding of how the vSphere CPU scheduler works.
The following diagram shows the various states that a VM can be in:

[Figure: the various states a VM can be in, from the vSphere CPU scheduler study]

The preceding diagram is taken from The CPU Scheduler in VMware vSphere® 5.1: Performance Study. This is a whitepaper that documents the CPU scheduler with a good amount of depth for VMware administrators. I highly recommend you read this paper, as it will help you explain to your customers (the application team) how your shared infrastructure juggles all those VMs at the same time. It will also help you pick the right counters when you create your custom dashboards in vRealize Operations.

Capacity Management based on Business Policy – Part 1

In the book, I discussed why Capacity Management changes drastically once you virtualize. In this article, I'll provide a summary of how you can manage it.

A new model for capacity is required because the existing model has 2 limitations:

  • It does not consider Availability Policy. There is concentration risk when you place too many VMs in a cluster or datastore. Just because you can, does not mean you should.
  • It is not aware of Performance. The correlation between performance and utilization is not perfect. You can have poor performance at low utilization.

The new model is driven by the 3 factors shown below. Capacity Remaining is the lowest of the 3 factors.

Capacity is defined:

  • First by Availability SLA. If concentration risk is reached, capacity is full, regardless of how fast it performs.
  • Second by Performance. If it can’t serve existing workload, capacity is full.
  • Third by IaaS Utilization. Yes, your cluster utilization is the last thing you check.
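As a sketch, the model reduces to taking the minimum of the three headrooms. This is illustrative only; the parameter names are my own, and each headroom is expressed here in "VMs that can still be added" for comparability.

```python
def capacity_remaining(availability_headroom, performance_headroom,
                       utilization_headroom):
    """Capacity is exhausted as soon as ANY factor is exhausted, so the
    remaining capacity is the lowest of the three headrooms."""
    return min(availability_headroom, performance_headroom,
               utilization_headroom)

# Availability policy allows 40 more VMs, performance allows only 10,
# and raw utilization would allow 80 -> you can add just 10 VMs.
print(capacity_remaining(40, 10, 80))  # 10
```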

Let's use an example. You have 100 TB of space left in your storage: lots of room for new VMs. But latency is bad; VMs are getting 100 ms latency. Will adding VMs make the situation worse? Should you add more VMs? No. Every VM consumes IOPS, not just space.

For both SLAs, you naturally have different service tiers. The Availability SLA of a mission-critical VM is certainly higher than that of a development VM. The same goes for performance. You will not accept any form of resource contention for a mission-critical VM, but you will accept contention in a development environment, where cost is more important.

Ideally, your IaaS should have 3 service tiers, with clear differentiation:

  • Tier 1.
    • This is the highest, most important tier.
    • This is the Physical Tier, because Performance and Availability are on par with a physical server.
    • You can and should guarantee Performance.
    • Suitable for: mission critical VMs
    • This should be around 5 – 20% of your Prod environment.
  • Tier 2.
    • Do not promise that performance will match physical. In fact, educate your customers that it will not from time to time, and that you cannot control when.
    • The majority of Production VMs are placed here. If not, there is something wrong with your definition of critical: it is not granular enough. Yes, I understand that all VMs in production are important. However, some are more important than others 🙂
    • No mission critical VM should be placed here.
  • Tier 3.
    • This is the cost-optimized and lowest tier.
    • Some Production VMs are here; perhaps around 30%.
    • The majority of Test & Development VMs are placed here.

Avoid having more than 3 tiers. Even in a large environment (>100,000 VMs), keep it at 3 tiers.

  • The more tiers, the more confusing it is for your customers (the Application Team).
  • The relative positioning of each tier must be clear. Too many tiers blurs this differentiation.
  • Your costs go up, as inter-tier sharing should be discouraged; it undermines your higher tiers.

Performance: Service Definition

I recommend a sample Performance SLA here. It is a separate post, as we first need to rethink the word performance.

Please review it first.

Done? Good. I’m copying the table from that link below for your convenience:


  • Tier 1 has no oversubscription. There is enough CPU and RAM for every VM on the host. No VM needs to wait or contend for resources. As a result, Reservation is not applicable.
  • Notice I do not have an oversubscription ratio. I do not define something like "1.5x CPU oversubscription" or "2x RAM oversubscription", because oversubscription is an incomplete policy: it fails to take utilization into account. I've seen this at customers, where a higher tier performs worse than a lower tier. Once you oversubscribe, you can no longer guarantee consistent performance. Contention can happen even if your ESXi utilization is not high.
  • Use Contention to quantify the SLA. The chance of contention goes up as the tier gets lower, so Tier 3 has a higher threshold.
  • Specify 1% for CPU Contention, because technically the value is not 0. Look at the vCenter CPU Ready counter for a VM: even if it is the only VM on the ESXi host, contending with no one, CPU Ready is not 0. In a healthy environment, CPU Contention will be less than 0.5%.
  • All the hosts in a Tier 1 cluster should be identical: the CPU generation and speed are the same. This makes performance predictable. Do not make such a guarantee in Tier 3. The cluster may start with 6 identical nodes but grow over time into 20 nodes. Those 20 nodes will certainly not be identical in terms of performance, as the newer nodes will sport faster CPUs.
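A side note on reading CPU Ready: vCenter reports it in milliseconds summed over the sample interval, so you normally convert it to a percentage before comparing it against a threshold like 1%. A minimal sketch, assuming the real-time chart's 20-second interval (adjust `interval_s` for other rollups):

```python
def cpu_ready_percent(ready_summation_ms, interval_s=20.0):
    """Convert vCenter's CPU Ready summation (ms accumulated over the
    sample interval) to a percentage of that interval. For a per-vCPU
    view, divide the result by the VM's vCPU count."""
    return ready_summation_ms / (interval_s * 1000.0) * 100.0

# 100 ms of Ready time in a 20 s sample:
print(cpu_ready_percent(100))  # 0.5 -> within a 1% SLA
```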


  • The SLA is set on a 5-minute average. To see actual IO latency at a more granular level, you need to go down to the VMkernel. Review this article.
  • In Tier 1, the disk is thick provisioned, so there is no performance penalty on first write. I do not provide the same service quality in the lower tiers.


  • I do not distinguish the Service Tiers here, to keep the service simple.
  • You should not expect dropped packets in your DC.

With the above definition, you have a clear set of 3 service tiers based on Performance. Let's now cover the Availability service.

Availability: Service Definition

Mission-critical VMs have to be better protected than development VMs. You would go the extra mile to ensure redundancy. In the event of failure, you also want to cap the degree of damage, and you want to minimize the chance of human error. The table provides such an example. I specify both the maximum number of VMs per host and per cluster. You can choose just one if that is good enough for you.

[Table: Availability service definition per tier — maximum VMs per host and per cluster]

Capacity Management

Based on the above 2 service definitions (Performance and Availability), you can already tell that Capacity Management becomes simpler.

SDDC Capacity Management is still complex, though. Instead of solving it as 1 big entity, break it into 4. It is much easier this way, and it lets you work with different teams.

  • For Storage, work with the Storage team. Use the same dashboard, so you are looking at the same information.
  • For Network, work with the Network team.
  • For VM, work with the Tenants or Apps team.

The table below shows the 4 components. For each, the major subcomponents are listed.

Capacity requires a longer time frame: minimum 1 month, preferably 3 months. I think more than 3 months is too long, as it is too far into the past. You should also project 1-3 months into the future. This allows you to spot trends early, as it takes weeks to buy hardware.

As a result, a line chart is required. It lets you see the trend over time.
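For the 1-3 month projection, a simple linear trend over the historical samples is often enough. A minimal sketch, assuming one capacity-remaining sample per day; the function name and sampling cadence are my own choices:

```python
def project_remaining(samples, days_ahead):
    """Least-squares linear trend over daily samples of capacity
    remaining, extrapolated days_ahead from the latest sample."""
    n = len(samples)
    mean_x = (n - 1) / 2.0                 # mean of 0..n-1
    mean_y = sum(samples) / n
    num = sum((x - mean_x) * (y - mean_y) for x, y in enumerate(samples))
    den = sum((x - mean_x) ** 2 for x in range(n))
    slope = num / den                      # change per day
    return samples[-1] + slope * days_ahead

# Shrinking by ~10 VM slots per day; where will we be in 3 days?
print(project_remaining([100, 90, 80], 3))  # 50.0
```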

Tier 1

From the above, you can see that capacity management for Tier 1 is simple. Your ESXi hosts will have low utilization most of the time. Even if all VMs are running at 100%, the ESXi host will not hit contention, as there is enough physical resource for everyone. As a result, there is no need to track contention, as there won't be any. You do not need to check capacity against the Performance SLA; you just need to check against the Availability SLA.

To help you monitor, you can create the following:

  • A line chart showing the number of VMs remaining in the cluster.
  • A line chart showing the number of vCPUs remaining in the cluster.
  • A line chart showing the amount of vRAM remaining in the cluster.

Look at the above 3 charts as 1 group and take the one with the lowest number. When that number approaches your threshold, it is time to add capacity. How low the threshold should be depends on how fast you can procure hardware.

Lastly, consider ESXi network utilization. Make sure the NICs are not saturated.

Tier 2 and 3

This is more complex than Tier 1, as contention is now possible. You need to check against both the Performance SLA and the Availability SLA. In practice, it will be driven by Performance.

To help you monitor, you need the following:

  • A line chart showing the number of VMs remaining in the cluster.
  • A line chart showing the maximum & average CPU Contention experienced by any VM in the cluster.
  • A line chart showing the maximum & average RAM Contention experienced by any VM in the cluster.
  • A line chart showing the remaining capacity, with a 1-month projection.
    • This is Demand over Net Usable Capacity.
    • Demand is the sum of all ESXi Demand.
    • Net Usable Capacity is the physical capacity – HA reserve – buffer.
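Those bullets can be expressed as simple arithmetic. A sketch for the RAM dimension; the HA reserve and buffer sizes below are my own example figures, not a recommendation:

```python
def net_usable_capacity(physical_gb, ha_reserve_gb, buffer_gb):
    """Net Usable Capacity = physical capacity minus the HA reserve
    and an operational buffer."""
    return physical_gb - ha_reserve_gb - buffer_gb

def capacity_remaining_pct(total_demand_gb, physical_gb,
                           ha_reserve_gb, buffer_gb):
    """Remaining capacity as a percentage of Net Usable Capacity,
    where total_demand_gb is the sum of all ESXi Demand."""
    usable = net_usable_capacity(physical_gb, ha_reserve_gb, buffer_gb)
    return (1.0 - total_demand_gb / usable) * 100.0

# 256 GB cluster RAM, 32 GB HA reserve, 25.6 GB (10%) buffer, 150 GB demand:
print(round(capacity_remaining_pct(150, 256, 32, 25.6), 1))  # 24.4
```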

The model's main limitation is that it does not tell you how many VMs remain. That is actually a difficult question, as there are too many variables.

What are your thoughts? Please comment below. I'll go deeper into the super metric formula in Part 2 of this series.