
CPU Ready vs CPU Contention

Folks like Daniel in Hong Kong, Sajag in Thailand, and Ramandeep in the US have noticed that I shifted my recommendation from CPU Contention to CPU Ready as the Performance SLA. The reason is essentially change management. Moving from complaint-based operations to SLA-based operations is a transformation. It’s not something you do in a month. You need to enlighten your boss and your customers. It’s a paradigm shift that can take months.

As a result, CPU Ready is a better start than CPU Contention. Your IaaS business is not ready for Contention, pun intended.

CPU Ready is more stable than CPU Contention, as it’s not affected by Hyper-Threading (HT) and Power Management.

  • Hyper-Threading. Running both HT threads on a core halves the CPU cycles each thread would nominally get. Since HT gives only a ~1.25x total boost, each thread gets 62.5% of a dedicated core when both are running. That reduction is accounted for in CPU Contention, which is why it can spike to >35% when Ready is not even 1%. Test this by running 2 large VMs on 1 ESXi host. If the ESXi has 16 cores and 32 threads, run 2x 16-vCPU VMs, both at 100%. Set Power Management to Maximum Performance so you eliminate frequency scaling from impacting CPU Contention. Both VMs should experience minimal CPU Ready but high CPU Contention. My guess is CPU Ready will be <1%, while CPU Contention will be >35%. (See the arithmetic sketch after this list.)
  • Power Management. As you can see here, in general you should take advantage of power savings: the performance degradation is minimal while the savings are substantial. CPU Contention accounts for this frequency drop. My guess is that a frequency drop of 25% will result in a CPU Contention of 25%. I wrote “guess” as I have not seen a test.
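Here is the arithmetic behind the Hyper-Threading bullet as a minimal sketch, assuming the ~1.25x HT boost quoted above (the real ratio varies by workload and CPU generation):

```python
# Back-of-the-envelope math for the HT effect on CPU Contention.
# Assumption: both HT threads on a core are busy, and HT yields ~1.25x
# the throughput of a single thread running alone.

ht_total_boost = 1.25                      # core throughput with both threads busy
per_thread_share = ht_total_boost / 2      # each thread gets 0.625 of a full core
contention_from_ht = 1 - per_thread_share  # the cycles the vCPU did not get

print(f"Each HT thread gets {per_thread_share:.1%} of a core")            # 62.5%
print(f"Expected CPU Contention from HT alone: {contention_from_ht:.1%}") # 37.5%
```

This is why the 2x 16-vCPU test above should show CPU Contention approaching 37.5% while CPU Ready stays near zero.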

Considering the above, Ready is a lot less volatile. This makes it more suitable as an SLA. Operationally, it’s easier to implement, and it’s easier to explain to folks less familiar with the VMkernel CPU scheduler.

If you use CPU Contention as a formal SLA, you may spend a lot of time troubleshooting degradation that the business doesn’t even notice.

Where do you use CPU Contention then?

  • If the value is low, then you don’t need to check CPU Ready, Co-Stop, Power Management or CPU overcommit. The reason is they are all accounted for in CPU Contention.
  • If the value is high (my take is >37.5%), then follow these steps (sketched in code after this list):
    1. Check CPU Run Queue, CPU Context Switch, “Guest OS CPU Usage”, CPU Ready and CPU Co-Stop. Ensure all the CPU counters are good. If they are all low, then the cause is Frequency Scaling and/or HT. If they are not low, check the VM’s CPU Limit and CPU Shares.
    2. Check ESXi power management. If it is correctly set to Maximum Performance, then Frequency Scaling is ruled out and you’re left with HT as the factor; otherwise, Frequency Scaling could be at play. A simple solution for apps that are sensitive to frequency scaling is to set power management to Maximum Performance.
    3. Check CPU overcommit at the time of the issue. If there are more running vCPUs than physical cores on that ESXi, then HT could be impacting the VM; otherwise HT is not a factor. IMHO, it’s rare that an application does not tolerate HT, as HT is transparent to it. While HT reduces each thread’s throughput by 37.5%, a core that is 60% faster makes up for it (0.625 x 1.6 = 1).
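The triage above can be expressed as simple decision logic. A minimal sketch, assuming the counters have already been collected (the field names and thresholds are illustrative, not a real vRealize Operations API):

```python
# Hypothetical triage for a VM whose CPU Contention is high (>37.5%),
# following steps 1-3 above. Inputs are plain dicts of pre-collected counters.

def diagnose_high_contention(vm: dict, esxi: dict) -> str:
    # Step 1: if the classic contention counters are high, look at Limit/Shares.
    if vm["cpu_ready_pct"] > 1 or vm["cpu_costop_pct"] > 1:
        return "check VM CPU Limit and CPU Shares"
    # Step 2: with power management on Maximum Performance, frequency scaling is out.
    freq_scaling = esxi["power_policy"] != "Maximum Performance"
    # Step 3: HT only competes when there are more running vCPUs than physical cores.
    ht = esxi["running_vcpus"] > esxi["physical_cores"]
    if ht and freq_scaling:
        return "HT and/or frequency scaling"
    if ht:
        return "HT"
    if freq_scaling:
        return "frequency scaling"
    return "unexplained; re-check the Guest OS counters"

# Example: 16-core host running 32 busy vCPUs on the Balanced power policy.
print(diagnose_high_contention(
    {"cpu_ready_pct": 0.5, "cpu_costop_pct": 0.2},
    {"power_policy": "Balanced", "running_vcpus": 32, "physical_cores": 16},
))  # -> HT and/or frequency scaling
```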

Unfortunately, there is no way to check the individual impact of HT and Frequency Scaling directly, as there is no separate counter for each. You can see it indirectly by checking CPU Demand or CPU Usage: if there is a dip at the same time CPU Contention went up, but CPU Run does not dip, then HT or Frequency Scaling impacted the VM.

Hope that clarifies. If your observations in production differ from the above, do email me.

Large Scale vSAN Monitoring

Large-scale VMware vSAN operations raise the need for easier and faster monitoring. With many, and large, vSAN clusters, monitoring and troubleshooting become more challenging. To illustrate, let’s take a single vSAN cluster with the following setup:

Here are some of the questions you want to ask in day to day operations:

  • Is any ESXi host running high CPU utilization?
  • Is any ESXi host running high memory utilization?
  • Is any NIC running at high utilization?
    • With 4 NICs per ESXi, you have 40 TX + 40 RX metrics.
  • Is the vSAN vmkernel network congested?
  • Is the Read Cache used?
  • Is the Write Buffer sufficient?
  • Is the Cache Tier performing fast?
    • Each cache disk has 4 metrics: Read Cache Read Latency, Read Cache Write Latency, Write Buffer Write Latency, Write Buffer Read Latency.
    • Since there are 20 cache disks, you need to check 80 counters.
  • Are the Capacity Disks performing fast?
    • Check both Read and Write latency.
    • With 120 capacity disks, that’s 120 x 2 = 240 counters.
  • Is any Disk Group running low on space?
  • Is any Disk Group facing congestion?
    • You want to check both the max and the number of occurrences > 60.
  • Is there outstanding IO on any Disk Group?

If you add up the above, you are looking at 530 metrics for this vSAN cluster. And that’s just 1 point in time. At the default 5-minute collection interval, in 1 month you’re looking at 530 x 8,766 = 4.6+ million data points!
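Where the 8,766 comes from deserves a line: it is the number of 5-minute samples in an average month, which matches vRealize Operations’ default collection cycle (an assumption; adjust to your own interval):

```python
# The data-point math behind the 4.6 million figure.
metrics_per_cluster = 530                  # the tally above
samples_per_month = 365.25 / 12 * 24 * 12  # 5-minute samples in an average month
data_points = metrics_per_cluster * samples_per_month

print(f"{samples_per_month:.0f} samples/month")                 # 8766
print(f"{data_points / 1e6:.1f} million data points per month") # 4.6
```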

How do you monitor millions of data points so you can be proactive?

vRealize Operations 6.7 sports vSAN KPIs. We collapsed each of those questions into a KPI, so you only have 12 metrics to check instead of 530, without losing any insight. In fact, you get better early warning, as we hide the average. Early warning is critical, as buying hardware is more than a trip to the local DIY hardware store.

The KPIs achieve this simplification by using supermetrics:

Using Min, Max and Count, they pick up the early warning.
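Conceptually, the collapse looks like the sketch below. The data is made up for illustration, and the real implementation is a vRealize Operations supermetric, not Python:

```python
# Instead of charting 80 per-NIC utilization metrics, keep only the worst
# value across the cluster (Max) plus how widespread the problem is (Count).

nic_utilization = {          # % utilization of each NIC port, one sample
    "esxi-01/vmnic0": 12.0,
    "esxi-01/vmnic1": 48.5,
    "esxi-02/vmnic0": 91.3,  # the early warning a cluster-wide average would hide
    "esxi-02/vmnic1": 7.9,
}

worst = max(nic_utilization.values())                      # Max
hot_count = sum(v > 80 for v in nic_utilization.values())  # Count above threshold

print(f"Worst NIC: {worst}% | NICs above 80%: {hot_count}")
```

Note how an average of these four samples (~40%) would look healthy, while the Max immediately flags the hot NIC.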

The KPIs have been a hit with customers. But they fall short when you have many vSAN clusters. If you have, say, 25 Hybrid clusters and 25 All Flash clusters, you need to check 50 clusters. While you can click 50 times, what you want is to see all 50 at the same time.

This means we need to aggregate the metrics further. There should be 1, and only 1, metric per cluster.

The challenge is that the KPIs have different units and scales. How do we normalize them into Green, Yellow, Orange and Red?

We do it by defining a normalization table. We need 1 table for All Flash and 1 for Hybrid, as they have different KPIs and thresholds. Here is the table for All Flash:

I’m including Utilization even though it does not impact performance. An ESXi host running at 99% is not slower than one running at 1%, so long as there is no contention or latency. The reason is convenience: it’s hard to monitor when there is more than 1 counter, so you need to bring it down to 1 counter.

I’m setting the CPU Ready, CPU Co-Stop and RAM Contention thresholds at low numbers, so we can catch early warning. You can adjust them after you import.

Here is the table for Hybrid. It adds Read Cache Hit Rate (%), since only hybrid clusters have a read cache:

Once you have the table, you can map each KPI into a threshold.

vSAN Performance is the average of all these. We do not take the worst, to prevent 1 value from keeping the score red all the time. If you take the worst, the value will likely remain constant. That’s not good, as pattern is important in monitoring; the relative movement can be more important than the absolute value.
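A minimal sketch of the normalization and the final score. The KPI names and threshold values here are examples patterned on the tables above, not the actual shipped numbers:

```python
# Map each KPI onto a 1 (Green) .. 4 (Red) scale, then average.
THRESHOLDS = {  # KPI -> (green_max, yellow_max, orange_max); above orange = red
    "CPU Ready (%)":      (1.0, 2.5, 5.0),
    "CPU Co-Stop (%)":    (0.5, 1.0, 3.0),
    "RAM Contention (%)": (0.5, 1.0, 2.0),
}

def to_score(kpi: str, value: float) -> int:
    green, yellow, orange = THRESHOLDS[kpi]
    if value <= green:  return 1  # Green
    if value <= yellow: return 2  # Yellow
    if value <= orange: return 3  # Orange
    return 4                      # Red

sample = {"CPU Ready (%)": 0.8, "CPU Co-Stop (%)": 1.2, "RAM Contention (%)": 0.3}
scores = [to_score(kpi, value) for kpi, value in sample.items()]

# Average, not worst, so one bad KPI cannot pin the cluster at red forever.
print(f"vSAN Performance score: {sum(scores) / len(scores):.2f}")  # 1.67
```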

You implement the above using supermetrics. Yup, heaps of them 🙂

Hope you find it useful. I will share how the above is implemented in a future post.

Purpose-driven Architecture

When you architect IaaS or DaaS, what end goals do you have in mind? I don’t mean the design considerations, such as best practices. I mean the business result that your architecture has to deliver. A sign that your architecture has failed to deliver is when you get into this situation:

The goal of IaaS is to ensure the VMs are running well. The goal of DaaS is to ensure end users are getting a good desktop experience. Have you defined what well or good means?

Let’s zoom in and discuss IaaS. Say you’re architecting for 10K VMs in 2 datacenters. You envisage 2K VMs in the first month, then a ramp-up to 10K within the first year. Do you know the basic info about each of these 10K VMs, so that you can architect an infrastructure that serves them well?

  • How big are they? vCPU, RAM, disk.
  • How intense are they? CPU utilization, RAM utilization, disk IOPS, network throughput.
  • What is their workload pattern? Daily, weekly, monthly, etc.

You don’t. Even the application teams don’t know. Their vendors don’t know either, as you’re talking about the future.

So why then, do you promise that your IaaS will serve them well?

That’s a mistake you make as a Systems Architect. It’s akin to promising that the highway you architect will serve all the cars, buses and motorcycles well, when you have no idea how many there are and how often they will use it.

Can you do something about it?

Yes. You simply provide a good set of choices. The principle you share with your customers is the common sense used in every service industry:

You want it cheap, it won't be fast.
You want it fast, it won't be cheap.

You then offer a few classes of service. Give 2-3 good choices, at different price points; the highest price gets the best performance. (See the sketch after the list below.)

  • Your price has to be cheaper than VMware on AWS, else what’s the point? VMware on AWS has an identical architecture to yours, as it’s using the same software and providing the same capabilities. This assures your customers that they are getting a good price.
  • Your performance is well defined. It is not subject to interpretation. You put a Performance SLA on the table, assuring your customers that you’re confident of delivering as promised.
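To make the idea concrete, here is a hypothetical set of classes. The tier names, prices and SLA numbers are made up; the point is that each class pins performance to a measurable counter, such as the CPU Ready SLA discussed earlier:

```python
# Illustrative classes of service. Every number here is an assumption.
service_classes = {
    "Gold":   {"cpu_ready_sla_pct": 1.0, "price_per_vcpu_month": 30},
    "Silver": {"cpu_ready_sla_pct": 2.5, "price_per_vcpu_month": 18},
    "Bronze": {"cpu_ready_sla_pct": 5.0, "price_per_vcpu_month": 10},
}

def sla_met(tier: str, observed_cpu_ready_pct: float) -> bool:
    """The SLA is binary and measurable: no room for interpretation."""
    return observed_cpu_ready_pct <= service_classes[tier]["cpu_ready_sla_pct"]

print(sla_met("Gold", 0.7))    # True  - within the Gold promise
print(sla_met("Silver", 3.1))  # False - a breach; act before the customer complains
```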

You then architect your IaaS to deliver the above classes of service. The class of service is your business offering; it’s the purpose of your architecture. With the classes of service clearly defined, the question below becomes easy to answer.

When you know exactly the quality of service you need to deliver, the operations team will not suffer. You hand over your architecture to them with ease, as it can be operated easily: it has a clear definition of performance and capacity.

Keep the summary below in mind when you are architecting IaaS or DaaS.

For more details, review Operationalize Your World.