
Capacity Management based on Business Policy – Part 3

Part 1 explained a new concept, where we use Contention as the basis of Capacity Management. Part 2 provided the super metric equations for each chart. Part 3 will provide examples of the super metric formulas and dashboard screenshots.

Compute Tier 1 (no over-commit)

To recap, we are implementing the dashboard shown here. We need to create line charts showing the following:

  1. The total number of vCPU left in the cluster.
  2. The total number of vRAM left in the cluster.
  3. The total number of VMs left in the cluster.

The screenshot below shows the super metric formula to get the total number of vCPU left in the cluster.

Tier 1 - No of vCPU left in a cluster after HA

Copy-paste the formula below:

${this, metric=cpu|alloc|actual.capacity} *
(
( ${this, metric=summary|number_running_hosts} - 1 ) /
${this, metric=summary|number_running_hosts}
) -
${this, metric=summary|number_running_vcpus}

Logically, the formula is Supply – Demand, where:

  • Supply = No of Physical Cores in Cluster x ((No of Hosts – 1) / No of Hosts)
  • Demand = No of running vCPU in cluster

I have to assume there is 1 HA host in the cluster. If you have 2, replace 1 with 2 in the formula above.
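
For example, the Supply portion of the formula with 2 HA hosts would be (the Demand portion, the subtraction of number_running_vcpus, stays the same):

${this, metric=cpu|alloc|actual.capacity} *
(
( ${this, metric=summary|number_running_hosts} - 2 ) /
${this, metric=summary|number_running_hosts}
)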

I have to calculate the supply manually as vRealize Operations does not have a metric for No of Hosts – HA. Actually, it does, but the metric cannot be enabled.

If you find the formula complex, you can split it into 2 super metrics first. Work out Supply, then work out Demand. Let me use RAM as an example.

The screenshot below shows the super metric formula to get the total RAM supply. It is the total RAM in the cluster, after we take into account HA. I have to divide the number by 1024, then again by 1024, to convert from KB to GB.

Notice I always preview it. It’s important to build the habit of always verifying that your formula is correct.

Tier 1 - Total physical RAM capacity in a cluster after HA
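
If you prefer to copy-paste rather than retype from the screenshot, the Supply half on its own is simply the first half of the combined formula further below:

${this, metric=mem|alloc|actual.capacity} / 1024 / 1024 *
(
( ${this, metric=summary|number_running_hosts} - 1 ) /
${this, metric=summary|number_running_hosts}
)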

Once the Supply side was done, I worked on the Demand side. Demand here does not refer to the Demand metric in vRealize or vCenter. It is simply the dictionary meaning of demand: request/order/need/want. It’s the demand in “supply & demand.” The following screenshot shows the demand.

Tier 1 - Total VM vRAM configured in a cluster
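
Again, for copy-paste, the Demand half on its own is the second half of the combined formula below:

Sum (${adapterkind=VMWARE, resourcekind=VirtualMachine, attribute=mem|guest_provisioned, depth=2}) / 1024 / 1024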

Once I verified that both are correct, it’s a matter of combining them.

Tier 1 - Total vRAM left in a cluster after HA

You can copy-paste the formula below:

(
${this, metric=mem|alloc|actual.capacity} / 1024 / 1024 *
(
( ${this, metric=summary|number_running_hosts} - 1 ) /
${this, metric=summary|number_running_hosts}
)
) -
(
Sum (${adapterkind=VMWARE, resourcekind=VirtualMachine, attribute=mem|guest_provisioned, depth=2}) /
1024 / 1024
)

The screenshot below shows the super metric formula to get the total number of VMs left in the cluster. I have to hardcode the maximum number of VMs that I allow.

No of VM left in the cluster
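
As a rough sketch of what such a formula can look like, assuming a policy maximum of 50 VMs per cluster and using the cluster VM count metric (verify the exact metric key, summary|total_number_vms, in your environment):

50 -
${this, metric=summary|total_number_vms}

The 50 is just an illustrative maximum taken from my Availability policy; replace it with whatever limit your policy allows.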

 

Hope you find this useful. In the next post, I will cover the super metrics for Tier 2 & 3, and for Network.

Capacity Management based on Business Policy – Part 1

In this book, I discussed why Capacity Management changed drastically once you virtualized. In this article, I’ll provide a summary of how you can manage it.

A new model for capacity is required because the existing model has 2 limitations:

  • It does not consider Availability Policy. There is concentration risk when you place too many VMs in a cluster or datastore. Just because you can, does not mean you should.
  • It is not aware of Performance. The correlation between performance and utilization is not perfect. You can have poor performance at low utilization.

The new model is driven by 3 factors shown below. The Capacity Remaining is the lowest of the 3 factors.

Capacity is defined:

  • First by Availability SLA. If concentration risk is reached, capacity is full, regardless of how fast it performs.
  • Second by Performance. If it can’t serve existing workload, capacity is full.
  • Third by IaaS Utilisation. Yes, your cluster utilization is the last thing you check.

Let’s use an example to explain:

You have 100 TB of space left in your storage. Lots of space for new VMs.
But latency is bad. VMs are getting 100 ms latency.
Will adding VMs make the situation worse?
Yes!
Should you add more VMs?
No. Every VM consumes IOPS, not just space.

For both SLAs, you naturally have different service tiers. The Availability SLA of a mission critical VM is certainly higher than that of a development VM. The same goes for performance. You will not accept any form of resource contention for a mission critical VM, but will accept contention in a development environment, as cost is more important there.

Ideally, your IaaS should have 3 service tiers, with clear differentiation:

  • Tier 1.
    • This is the highest, most important tier.
    • This is the Physical Tier, because Performance and Availability are on par with a physical server.
    • You can and should guarantee Performance.
    • Suitable for: mission critical VMs
    • This should be around 5 – 20% of your Prod environment.
  • Tier 2.
    • Do not promise that performance will match physical. In fact, educate your customers that from time to time it will not, and that you cannot control when.
    • The majority of Production VMs are placed here. If not, there is something wrong with your definition of critical: it is not granular enough. Yes, I understand that all VMs in production are important. However, some are more important than others 🙂
    • No mission critical VM should be placed here.
  • Tier 3.
    • This is the cost-optimized and lowest tier.
    • Some Production VMs are here. Perhaps around 30%.
    • The majority of Test & Development VMs are placed here.

Avoid having >3 tiers. Even in a large environment (>100,000 VM), keep it at 3 tiers.

  • The more tiers, the more confusing it is for your customers (the Application Team).
  • The relative positioning of each tier must be clear. Too many tiers blurs this differentiation.
  • Your costs go up, as inter-tier sharing should be discouraged because it undermines your higher tiers.

Performance: Service Definition

I recommend a sample Performance SLA here. It’s a separate post, as we need to unlearn how we think about the word performance.

Please review it first.

Done? Good. I’m copying the table from that link below for your convenience:

Servers 

  • Tier 1 has no over subscription. There is enough CPU and RAM for every VM in the host. No VM needs to wait or contend for resources. As a result, reservation is not applicable.
  • Notice I do not have an over subscription ratio. I do not define something like “1.5x CPU Over Subscription” or “2x RAM Over Subscription”. This is because over subscription is an incomplete policy: it fails to take utilization into account. I’ve seen this at customers, where the higher tier performs worse than the lower tier. Once you oversubscribe, you are no longer able to guarantee consistent performance. Contention can happen even if your ESXi utilization is not high.
  • Use Contention to quantify the SLA. The chance of contention goes up as the tier gets lower. Tier 3 has a higher threshold.
  • Specify 1% for CPU Contention because technically the value is not 0. Look at the vCenter CPU Ready counter for a VM: even if it’s the only VM on the ESXi host, not contending with anyone, the CPU Ready is not 0 (a quick conversion from CPU Ready to a percentage is sketched after this list). In a healthy environment, the CPU Contention will be less than 0.5%.
  • All the hosts in a Tier 1 cluster should be identical. The CPU generation and speed are identical. This makes performance predictable. Do not make such a guarantee in Tier 3. The cluster may start with 6 identical nodes, but over time may grow into 20 nodes. The 20-node cluster is certainly not identical in terms of performance, as the new nodes will sport faster CPUs.
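
As a quick reference for reading that counter: vCenter reports CPU Ready as a summation in milliseconds per sample interval, not as a percentage. For the real-time chart, which uses a 20-second (20,000 ms) interval, the conversion is:

CPU Ready (%) = CPU Ready (ms) / 20,000 * 100

So 200 ms in the real-time chart is roughly 1%. For a multi-vCPU VM, the VM-level counter sums across all vCPUs, so divide by the number of vCPUs to compare against a per-vCPU threshold.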

Storage

  • The SLA is set at a 5 minute average. To see the actual IO latency at a more granular level, you need to go down to the VMkernel. Review this article.
  • In Tier 1, the disk is thick provisioned, so there is no performance penalty on the first write. I do not provide the same service quality in the lower tiers.

Network

  • I do not distinguish between Service Tiers, to keep the service simple.
  • You should not expect dropped packets in your DC.

With the above definition, you have a clear 3-tier services based on Performance. Let’s now cover Availability service.

Availability: Service Definition

Mission critical VMs have to be better protected than development VMs. You would go the extra mile to ensure redundancy. In the event of failure, you also want to cap the degree of damage. You want to minimize the chance of human error too. The table provides such an example. I specify both the maximum number of VMs in a host and in the cluster. You can choose just one if that is good enough for you.

Capacity Management 03

Capacity Management

Based on the above 2 service definitions (Performance and Availability), you can tell already that Capacity Management becomes simpler.

SDDC Capacity Management is still complex though. Instead of solving it as 1 big entity, you break it into 4. It’s much easier this way. It also lets you work with different teams.

  • For Storage, work with Storage team. Have the same dashboard, so you are looking at the same information.
  • For Network, with Network team.
  • For VM, with Tenants or Apps team.

The table below shows the 4 components. For each, we list the major subcomponents.

Capacity requires a longer time frame. Minimum 1 month, preferably 3 months. I think >3 months is too long as it’s too far into the past already. You should also project 1-3 months into the future. This allows you to spot trends early, as it takes weeks to buy hardware.

As a result, a line chart is required. It lets you see the trend across time.

Tier 1

From the above, you can see capacity management is simple. Your ESXi will have low utilisation most of the time. Even if all VMs are running at 100%, the ESXi will not hit contention as there is enough physical resource for everyone. As a result, there is no need to track contention as there won’t be any. You do not need to check capacity against Performance SLA. You just need to check against Availability SLA.

To help you monitor, you can create the following:

  • A line chart showing the total number of VMs left in the cluster.
  • A line chart showing the total number of vCPU left in the cluster.
  • A line chart showing the total number of vRAM left in the cluster.

Look at the above 3 charts as 1 group. Take the one with the lowest number. If that number is approaching your threshold, it’s time to add capacity. How low that threshold should be depends on how fast you can procure hardware.

Lastly, consider ESXi Network utilization. Make sure they are not saturated.

Tier 2 and 3

This is more complex compared to Tier 1, as contention is now possible. You now need to check against both the Performance SLA and the Availability SLA. In fact, it will be driven by Performance.

To help you monitor, you need the following:

  • Line chart showing the total number of VMs left in the cluster.
  • Line chart showing the maximum & average CPU Contention experienced by any VM in the cluster.
  • Line chart showing the maximum & average RAM Contention experienced by any VM in the cluster.
  • Line chart showing the remaining capacity, with 1 month projection (a rough super metric sketch follows this list).
    • This is Demand over Net Usable Capacity.
    • Demand is the sum of all ESXi Demand.
    • Net Usable Capacity is the physical capacity – HA – Buffer.
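
As a minimal sketch of two of these: the maximum VM CPU Contention in a cluster can be taken with a Max() super metric over the VMs, and the Net Usable Capacity portion can reuse the Tier 1 RAM metrics shown earlier. The contention metric key (cpu|capacity_contentionPct) is an assumption to verify against your vRealize Operations version, and the 0.9 assumes a 10% buffer:

Max (${adapterkind=VMWARE, resourcekind=VirtualMachine, attribute=cpu|capacity_contentionPct, depth=2})

${this, metric=mem|alloc|actual.capacity} / 1024 / 1024 *
(
( ${this, metric=summary|number_running_hosts} - 1 ) /
${this, metric=summary|number_running_hosts}
) * 0.9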

The model’s main limitation is that it does not tell you how many VMs you can still add. This is actually a difficult question, as there are too many variables.

What are your thoughts? Please comment below. I go deeper into the super metric formulas in Part 2 of this series.