How to prove your IaaS Cluster is fast

You provide IaaS to your customers. A cluster, be it a VMware vSphere cluster or a Microsoft Hyper-V cluster, is a key component of this service offering. In the hyper-converged era, your cluster does storage too. You take the time to architect it, making sure all best practices are considered. You also consider performance and ensure it’s not a slow system.

Why is it then, when a VM owner complains that her VM is slow and she blames your infrastructure, you start troubleshooting? Doesn’t that show your lack of confidence in your own creation? If your cluster is indeed fast, why can’t you just show it and be done in 60 seconds?

Worth pondering, isn’t it? 😉

It’s hard to reach a formal agreement with customers quickly and consistently when the line is not clearly drawn. If you have a disagreement with your customers, especially paying customers, guess who wins 🙂

You need to be able to show something like this.

In the above chart, there is a threshold. It defines the acceptable level of performance. It quantifies exactly what you mean when you promise “fast”. It is your Performance SLA. Review this if you need more details.

You assure them it will be fast, and you’ve got it backed up with measurable metrics. You prove that with the 2nd line. That’s the actual performance. Fast or not is no longer debatable.

You measure performance every 5 minutes, not every hour. In a month, that is 12 x 24 x 30 = 8,640 proofs. Having that many data points backing you up helps in showing that you’re doing your job.

Now that you’ve got the Performance SLA, how do you implement it in vRealize Operations?

I’ll take disk latency as an example, as it’s easy to understand.

The chart below shows the various disk latency among 6 VMs, from 9:00 am until 9:35 am. What do you spot?

The average is good. The latencies are mostly below 5 ms.

The worst is bad. It is hovering around 20 ms, 4x higher than the average, indicating that at least one VM is hit. The storage subsystem is unable to serve all VMs. It’s struggling to deliver.

Let’s plot a line along the worst (highest) disk latency. The bold red line is the maximum among all the disk latencies from all the VMs. We call this Max (VM Disk Latency) in the cluster.

A cluster typically has a lot more than 6 VMs; it’s common to see >100 VMs. Plotting >100 lines would make the chart unreadable. Plus, at this juncture, you’re interested in the big picture first. You want to know if the cluster is performing fast.

This is the power of a super metric. It tracks the maximum among all VMs, creating its own metric as a result. You lose the information on which VM contributed the value, as the super metric is made up of >1 VM.
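A minimal sketch in plain Python (not actual vR Ops super metric syntax) of what this super metric computes at each collection cycle; the VM names and latency values are made up for illustration:

```python
# Conceptual sketch of the "Max (VM Disk Latency)" super metric.
# At each 5-minute collection cycle, the per-VM latencies collapse
# into one cluster-level number.

def super_metric_max(latencies_ms):
    """The cluster-level maximum across all VM disk latencies."""
    return max(latencies_ms.values())

def cluster_average(latencies_ms):
    """The cluster-level average across all VM disk latencies."""
    return sum(latencies_ms.values()) / len(latencies_ms)

# One collection cycle: latency (ms) per VM. Values are illustrative.
sample = {"vm-01": 2.1, "vm-02": 3.4, "vm-03": 1.8,
          "vm-04": 19.7, "vm-05": 2.9, "vm-06": 3.1}

print(super_metric_max(sample))            # 19.7 -> one VM is struggling
print(round(cluster_average(sample), 1))   # 5.5  -> average still looks fine
```

The max flags the one struggling VM, while the average would have hidden it.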

The next chart has all the details removed. We just have the Maximum and the Average. It’s clear now that the max is much higher than average.

We added 3 dotted lines in the above chart. They mark the 3 possible outcomes. If your Maximum is:

  • below the line, then you are good. The cluster is serving all its VMs well.
  • near the threshold, then your capacity is full. Do not add more VMs.
  • above the threshold, then your cluster is full. Move VMs to reduce demand before the VM owners complain.
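The three outcomes can be sketched as a simple check against the SLA. The 15 ms threshold and the 80% “near” band below are assumptions for illustration, not numbers from the post:

```python
# Classify the cluster's Max (VM Disk Latency) against the Performance SLA.
# Both sla_ms and near_band are example values - set your own.

def classify(max_latency_ms, sla_ms=15.0, near_band=0.8):
    if max_latency_ms > sla_ms:
        return "breach: cluster is full, reduce demand"
    if max_latency_ms >= sla_ms * near_band:
        return "near threshold: capacity is full, do not add more VMs"
    return "good: cluster is serving all its VMs well"

print(classify(6.0))    # good
print(classify(13.0))   # near threshold
print(classify(19.7))   # breach
```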

Can you see the importance of the Performance SLA?

It’s there to protect your job. Without the line, your reputation is at risk. Say you’ve been delivering disk latency at <1 ms on your all-flash array. Everyone is happy. Of course! 🙂

You then do storage maintenance for just 1 hour. During that period, disk latency went up to 4 ms. It is still a respectable number. In fact, it’s a pretty damn good number. But you got a complaint. It happened to coincide with the time you did the maintenance.

Can you guess who is held responsible for the slowness experienced by the business?

You bet. Your fault 🙁

But if you have established a Performance SLA, you’re protected. Say you promise 5 ms. You will be able to say “Yes, I knew it would go up as we’re doing maintenance. I’ve catered for this in my design. I knew we could still deliver as per agreed SLA.”

Let’s now show a real example. This is what it actually looks like in vR Ops.

Notice the Maximum is >10x higher than the average, and the average is very stable. Once the cluster is unable to cope, you’d see a pattern like this: almost all VMs are served well, but 1-2 are not. The maximum stays high because there is always at least 1 VM that isn’t served well.

Only when the cluster is unable to serve ~50% of the VMs will the average become high too.
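A quick worked example of this effect, with made-up numbers:

```python
# 99 VMs are served at 2 ms; one VM is stuck at 20 ms.
# The max screams, while the average barely moves.
latencies = [2.0] * 99 + [20.0]

print(max(latencies))                   # 20.0
print(sum(latencies) / len(latencies))  # 2.18 -> still looks healthy
```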

The above is for Disk. IaaS consists of providing the following as a service:

  1. CPU
  2. RAM
  3. Disk
  4. Network

Hence we need to display 4 line charts, showing that each service is delivered well.

As every Service Tier’s performance target is different, you need to show it per service tier. A Gold Tier delivers faster performance than a Silver Tier, but if its latency is higher than its SLA, it’s still not performing. Performance is relative to what you promise.

Since VMs move around in a cluster due to DRS and HA, we need to track at the Cluster level. Tracking at the Resource Pool level is operationally challenging. Do not mix service tiers in one cluster, as Tier 3 performance can impact Tier 1. The only way you can protect a higher tier within a shared cluster is with Reservation, which has its own operational complications.

Once I know what to display, I’d normally do a whiteboard, often with customers. It helps me to think clearly.

This is what the dashboard looks like. It starts with a list of clusters. Selecting a cluster automatically shows its performance: CPU, RAM and Disk. Network dropped packets should be 0 at all times, hence they are not shown. You can track them at the data center level, not the cluster level.

The final dashboard can be seen here. As performance has to be considered in capacity, we show how it’s done in a series of posts here.

NOC Dashboards for SDDC – Part 2

This post continues from the Operationalize Your World post. Do read it first so you get the context.

Dashboard: Performance

Is my IaaS performing?

That’s the key question that you need to answer. You need to show if the clusters are coping well. Show how the clusters are performing in terms of CPU, RAM and Disk.

The above dashboard is per Service Tier. Do you know why?

Yes, the threshold differs for each tier. What is acceptable in Tier 3 may not be acceptable in Tier 1.

The good thing about a line chart is that it provides visibility beyond the present time. You can show the last 6 hours and still get good detail. Showing >24 hours results in a visualization that is too static, not suitable for the NOC use case.

Limitation & customisation:

  • You need 1 widget per Service Tier.
  • If you only have a few clusters, you can show multiple Service Tiers in 1 dashboard. 1 row per tier results in simpler visualisation.
  • In an environment with >10 clusters, group them into Service Tiers. Focus more on the highest tier (Tier 1).
  • In an environment with >100 clusters, you need another grouping in between. Group the Tier 1 clusters by physical location.

When a cluster is unable to cope, is it because it’s having high utilization? I show CPU, RAM and Disk here. You can add Network, as you know the physical limit of the ESXi vmnics.

Disk is tricky for 2 reasons:

  • It is not in %. There is no 100% for IOPS or Throughput. The good thing is, when you architected the array or vSAN, you did have a certain IOPS number in mind, right? 😉 Well, now you can get the storage vendor to prove that it does indeed perform as per the PowerPoint 😉 If they promised fast performance that would meet the business requirement and it doesn’t deliver, you get free hardware.
  • You need to show both IOPS and Throughput. While they are related, you can have low IOPS with high throughput, and vice versa.
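The relationship is simply throughput = IOPS × I/O size, which is why one counter alone can mislead. The IOPS and block-size figures below are illustrative:

```python
# Two workloads with the same throughput can have very different IOPS,
# because throughput = IOPS x I/O size.

def throughput_mbps(iops, io_size_kb):
    """Throughput in MB/s for a given IOPS rate and I/O size in KB."""
    return iops * io_size_kb / 1024

print(throughput_mbps(8000, 4))    # 31.25 MB/s -> high IOPS, small I/O (OLTP-like)
print(throughput_mbps(125, 256))   # 31.25 MB/s -> low IOPS, large I/O (backup-like)
```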

If the cluster utilization is high, the next dashboard drills into each ESXi.

We can also see if the hosts are unbalanced. In theory, they should not be, if you set DRS to Fully Automated and pick at least the middle migration threshold (3). DRS in vSphere 6.5 considers Network too, so you can end up with unbalanced CPU/RAM.

With the dashboard above, we can tell if ESXi CPU Utilisation is healthy or not.

  • A low value does not mean VMs perform fast. A VM is concerned with the VM Contention metric, not ESXi Utilization. A low value means we over-invested. It is not healthy, as we waste money and power.
  • A high value means we risk performance (read: contention).

For ESXi, go with a higher core count. You save on licensing if you can reduce the socket count.

We can also tell if ESXi RAM Utilisation is healthy or not.

  • Customers tend to overspend on RAM and underspend on CPU. The reason is this.
  • For RAM, we have 2 metrics:
    • Active RAM
    • Mapped RAM
    • The value you want is somewhere in between Active and Mapped.
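One way to express that heuristic; the midpoint weight and the sample numbers are assumptions to illustrate the idea, not a rule from the post:

```python
# Pick a RAM sizing target between Active (lower bound) and
# Mapped (upper bound). weight=0.5 is the midpoint - tune it
# to your own risk appetite.

def ram_sizing_target_gb(active_gb, mapped_gb, weight=0.5):
    return active_gb + (mapped_gb - active_gb) * weight

print(ram_sizing_target_gb(active_gb=64, mapped_gb=192))  # 128.0
```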

In the dashboard, the 3 widgets have different range. The range I set is 30 – 90, 50 – 100 and 10 – 90.

Why not 0 – 100?

It is not 0 – 100 because you want to cater for HA. Your ESXi should not hit 100%; if a host fails and HA restarts its VMs on the remaining hosts, demand would go beyond 100%, meaning performance would be badly affected.
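A rough sketch of the HA headroom math, assuming the cluster must survive one host failure (an N+1 design):

```python
# If the cluster must survive F host failures, the surviving hosts
# absorb the failed hosts' load. So the safe average utilization
# ceiling per host is roughly (N - F) / N.

def safe_utilization_ceiling(hosts, failures_to_tolerate=1):
    """Max average utilization (%) so the cluster still fits after failures."""
    return (hosts - failures_to_tolerate) / hosts * 100

print(safe_utilization_ceiling(4))   # 75.0 -> a 4-host N+1 cluster
print(safe_utilization_ceiling(8))   # 87.5 -> larger clusters can run hotter
```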

If the cluster or ESXi utilization is high, is it because there are VMs generating excessive workloads?

The dashboard above answers if we have VMs that dominate the shared environment.

  • CPU: a heat map of all VMs, sized by CPU Demand in GHz (not %), colored by contention
  • RAM: a heat map of all VMs, sized by Active RAM, colored by contention
  • Storage: a heat map of all VMs, sized by IOPS, colored by latency

At a glance, we can tell the workload distribution among the VMs. We can also tell if they are being served well or not.

Limitation & customisation:

  • You need 1 widget per Service Tier.
  • You can change the threshold anytime. If you want a brand new storage from Finance, set the max to 1 ms 😉
  • In a larger environment, group your heat map (e.g. by cluster, host, folder).
  • We can show individual VMs, but we can’t show the history, as there is too much data to show.
  • This needs to be done per Tier. 1 dashboard per Tier, as the threshold varies per tier.

Hope you find it useful. For the product-specific implementation, review this blog. To prevent vROps session from timing out, implement this trick by Sunny.

NOC Dashboards for SDDC – Part 1

This post continues from the Operationalize Your World post. Do read it first so you get the context.

Be careful of what you’re showing on the big screen. There are many roles in an organisation. Each will have his/her own viewpoint. What you want to show for your managers is probably different to what you want your customers (App Team) to see.

  • If you create something for the big boss to see, you normally hang the big screen near her office. The complication is a lot of other people can see the info too. Would they appreciate the simplification that you do?
  • If you create for the NOC (Network Operations Center) team to see, think of what they already have. The Help Desk already has alerts and large desktop screens. Complement them, don’t duplicate them.
  • If you create for the Infra team to see, think from their viewpoint. Since they are not the help desk, they don’t watch the dashboards and receive the alerts.

Best Practices

In addition to the general best practices, a NOC dashboard has its own specific guidelines:

No interaction with the screen

  • Avoid having buttons, as no user clicks on any part of the dashboard. There is no mouse or keyboard either.

KISS (Keep it simple show)

  • Show minimal information, with large numbers.
  • Don’t show detailed charts, as they are hard to read from afar. Be aware of how far away the info needs to be readable. 9-point Calibri is clear on a laptop, but not on a projector screen.
  • Ideally, all the numbers are in %, with 0 being bad and 100 being perfect.
  • In cases like Utilisation, you should use the following markers:
    • 50% = good, balanced utilization. Ideally, this should be 75%.
    • 0% = wastage
    • 100% = highly utilized.

Use color to classify information.

  • Color is easier than text, as you don’t even need to read.
  • Easier to digest from afar and at a glance.
  • Use key colors (green, yellow, amber, and red).

Remember the 5-second test.

  • A dashboard for NOC should be easy and user friendly. It should not require an explanation.

Choose content that drives action.

  • If you display something that is red most of the time, after a while the viewer will ignore it. This defeats the very purpose of displaying on the big screen.
  • When something on the big screen is red, you want action to be taken. And the action should be immediate, not tomorrow.

Dashboards Flow

You should also think of how the dashboards flow. The dashboards are not a collection of unrelated screens. There should be logical continuation, else it can be confusing to the viewers.

There are 5 areas that you can show. For each area, you can show multiple screens so it has some depth. Here are some examples of details:


  • Show the VM Availability and IaaS Availability.
  • For Infra, you can split into Compute, Storage and Network.


  • You can split into Tier 1, Tier 2 and Tier 3.

Automatically cycle the dashboards every 1-2 minutes. If viewers wait too long, they will lose interest and move on.


These are ideas to jumpstart your Big Screen dashboard. Every customer seems to have different requirements, because everyone does operations a little differently. So what you see here needs to be tailored.

Dashboard: VM Availability

Are the VMs up and running?

The dashboard answers this question: what is the availability among all VMs? Were any of them down in the past 30 days?

Just by looking at the color, you can easily tell if any VM has had <100% uptime in the past 30 days. The red color is obvious. Notice the green has different intensities. You can tailor this setting.

We’re using a heat map because it can scale to >1,000 objects. It’s also color coded.
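For context, a sketch of how a 30-day uptime percentage maps to minutes of downtime; the threshold you color against is up to you:

```python
# Uptime % over a 30-day window. A month has 30 * 24 * 60 = 43,200 minutes,
# so even ~43 minutes of downtime already drops a VM to 99.9%.

def uptime_pct(downtime_minutes, days=30):
    total_minutes = days * 24 * 60
    return (total_minutes - downtime_minutes) / total_minutes * 100

print(round(uptime_pct(0), 3))     # 100.0 -> solid green
print(round(uptime_pct(43.2), 3))  # 99.9  -> no longer fully green
```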

Limitation & customisation

  • This heat map requires a custom group. If you do not group it, it will include VMs that you intentionally powered off (and hence do not want to show).
  • The group name is Monitored VMs. To exclude a VM, place it under the exclusion list.
  • If you want to separate Tier 1 VMs from the lower tiers, you can create 2 separate heat map widgets.

Dashboard: IaaS Availability

Are the ESXi hosts up and running? Are any of them running at a high temperature? A temperature that runs too high will trigger the BIOS to power off the box.

This heat map can scale to a few hundred hosts, so it’s good enough for most customers. For customers with >500 hosts, group them by service tier. Yes, that means you need a different dashboard per service tier.

The 2nd widget is based on the vCenter Datacenter object. The logic is that you don’t have a localized heat problem unless it’s a fan failure. Speaking of fan failures, you should add Blue Medora so you can show hardware, not just ESXi: hardware failures like power supply, fan, and disk.

That’s it for Part 1. Hope you find it useful. Part 2 has been scheduled for Thursday.