
Datastore Capacity Management

This post is part of the Operationalize Your World series. Do read that post first to get the context.

This is the 2nd installment of Storage Capacity Management. The previous post covers the overall storage capacity management, where you see the big picture and know which datastores are low on capacity. This post drills down further and lets you analyze a specific datastore.

Datastore capacity is driven by 2 factors:

  • Performance: If the datastore is unable to serve its existing VMs, are you going to add more VMs? That's right: the datastore is full, regardless of how much space it has left.
  • Utilization: How much capacity is left? Thin provisioning makes this challenging.

This is what the dashboard looks like.

You start by selecting a datastore you want to check. This step is actually optional, as you would have come from the overall dashboard.

When you select a datastore, its Performance and Utilization are automatically shown.

  • Performance
    • Both actual and SLA are shown.
    • You just need to ensure that actual does not breach SLA.
  • Utilization
    • This shows the total capacity, the provisioned capacity (configured to the VMs), and what’s actually used (thin provisioned).
    • You want to be careful with thin provisioning, as the VMs can consume the space that is already allocated to them. The line chart has a 30-day projection to help you plan.

The 2 line charts are all you need. They are simple enough, yet detailed enough, and they give you room to make the judgement call. You can decide to ignore a spike because you know it was a special event.

If you want to analyze further, you can look at the individual VMs. The heatmap shows the VM distribution. You can spot the large VMs easily, because their boxes are bigger. You can see if any VM is running out of capacity, or if any VM is wasting its allocated capacity.

The heatmap configuration below shows how it’s done.

You can also check if there are VMs that you can delete. Reclamation will give you extra space. The heatmap has a filter for powered off VMs, so only powered off VMs are shown.

From there, you can drill further to check that the VM has indeed met your Powered Off definition. It shows the VM’s powered off time (%) in the past 30 days. I’ve set the threshold at 99%. Green means the VM was powered off at least 99% of the time in the past 30 days.

Logic

I hope you agree by now that datastore performance is measured on how well it serves its VMs. We can track this by plotting a line chart showing the maximum storage latency experienced by any VM in the datastore. This maximum number has to be lower than the SLA you promise at all times.

For Utilization, we will plot a line chart showing the disk capacity left in the datastore cluster.

You should be using Datastore Cluster. Beyond the other benefits you get from using it, it also makes capacity management easier:

  • You need not manually exclude local datastores.
  • You need not manually group the shared datastores, which can be complex if you have multiple clusters.

With vSAN, you only have 1 datastore per cluster and need not exclude local datastores manually. This means it’s even simpler in vSAN.

Include a buffer for snapshots. This can be 20%, depending on your environment. This is why I’m not a fan of many small datastores: you end up with pockets of unusable capacity. The buffer does not have to be hardcoded in your super metric, but you have to be mentally aware of it.
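If you do want to bake the buffer into a super metric, here is a minimal sketch. It subtracts a 20% snapshot buffer from the available space formula shown later in this post; the 0.2 factor is an assumption, so adjust it to your environment:

sum( ${adapterkind=VMWARE, resourcekind=Datastore, attribute=capacity|available_space, depth=1} ) - 0.2 * sum( ${adapterkind=VMWARE, resourcekind=Datastore, attribute=capacity|total_capacity, depth=1} )

The result is the space you can actually give to VMs while still leaving room for snapshots.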

Super Metrics

The screenshot below shows the super metric formula to get the maximum latency of all the VMs in the cluster. I’ve chosen the Virtual Disk level, so it does not matter whether the underlying storage is VMFS, NFS or vSAN.

super metric - vDisk

You can copy paste the formula below:

Max ( ${adapterkind=VMWARE, resourcekind=VirtualMachine, attribute=virtualDisk|totalLatency, depth=2} )

The screenshot below shows the super metric formula to get the total disk capacity left in the cluster. This is based on Thin Provisioning consumption.

You can copy paste the formula below:

sum( ${adapterkind=VMWARE, resourcekind=Datastore, attribute=capacity|available_space, depth=1} )
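If you prefer tracking the percentage of capacity left rather than the absolute amount, a variant of the same formula (a sketch using the same attributes; validate it in your environment) is:

sum( ${adapterkind=VMWARE, resourcekind=Datastore, attribute=capacity|available_space, depth=1} ) / sum( ${adapterkind=VMWARE, resourcekind=Datastore, attribute=capacity|total_capacity, depth=1} ) * 100

A percentage also makes it easier to compare datastore clusters of different sizes.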

For Thick Provision, use the following super metric:

super metric - Disk - space left in datastore cluster - thick

You can copy paste the formula below:

sum
(
${adapterkind=VMWARE, resourcekind=Datastore, attribute=capacity|total_capacity, depth=1}
) -
sum
(
${adapterkind=VMWARE, resourcekind=Datastore, attribute=capacity|consumer_provisioned, depth=1}
)
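Along the same lines, here is a sketch for the over-provisioning ratio of the datastore cluster, using only the attributes already shown above:

sum( ${adapterkind=VMWARE, resourcekind=Datastore, attribute=capacity|consumer_provisioned, depth=1} ) / sum( ${adapterkind=VMWARE, resourcekind=Datastore, attribute=capacity|total_capacity, depth=1} )

A value above 1 means you have promised the VMs more than the physical capacity, which is exactly when the thin provisioning caution above applies.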

Hope you find it useful. Just in case you’re not aware, you don’t have to implement all these manually. You can import this dashboard, together with 50+ others, from this set.

Storage Capacity Management

This post is part of the Operationalize Your World series. Do read that post first to get the context.

We showed the dashboards in vSphere visibility for Storage Team post. Let’s now show how they are implemented.

BTW, you don't have to know how they are implemented. You can simply import all the ~50 dashboards by following the steps here.

For convenience, here is what the dashboard looks like. From this overall dashboard, you can drill down into datastore-level capacity planning.

The Summary is just a scoreboard widget. Here is the config.

It’s using 3 groups, which you have to create as part of the import process. The 3 groups are:

  • Datastores (Shared)
  • Powered On VMs
  • Powered Off VMs

You can customize the threshold. Do note that it applies to all service tiers.

The bar chart is just a view widget. Since I already have a group for the Shared Datastores, it’s simply a matter of selecting it. That’s faster than selecting the World object and then filtering for the datastores. The following screenshot shows the name of the view.

The 3 heatmaps are documented below.

I showed you the Datastore Cluster heatmap first, as you should take advantage of Datastore Cluster. It makes operations easier compared with individual datastores.

The datastore heatmap again leverages the group. This filters out local datastores, which is what you want.

The view widget also leverages the group, for the same reason I shared above.

For the heatmap “Are the VMs using the storage given to them?”, I’m only showing the powered on VMs.

Hope that helps for overall storage capacity management. For capacity management at the datastore level, review this.

vSphere Capacity Reclamation

This post continues from the Operationalize Your World post. Do read it first so you get the context.

There are 5 levels of reclamation you can perform. Start with the easiest one first.

Let’s go through the table above:

  • Non-VM objects are the easiest, because they are not owned by someone else. They are yours! Non-VM objects, such as templates and ISOs, should be kept in 1 datastore per physical location. Naturally, you can only reclaim disk, and not CPU & RAM.
  • Orphaned VMs and orphaned vmdks are next, as they are not even registered in vCenter. If they do show up, they may appear italicized, indicating something is wrong. They may not have owners either. Take note that vR Ops 6.4 cannot check for orphaned vmdks.
  • Powered Off VMs are harder, as there is an owner of the VM. You need to deal with the VM owner before you delete them.
  • Idle VMs are a great target, as you can reclaim CPU and RAM when you power them off. You cannot reclaim disk yet, as you are not deleting them yet.
  • Active VMs are the hardest. Focus on large VMs. Take on CPU and RAM separately; they are easier to tackle when you split them. Divide and conquer.
  • Claiming CPU and RAM from small VMs can be futile, regardless of whether they are idle. An idle VM with 1 vCPU cannot be reduced further; it should be powered off instead. Powering them off first is also the safer procedure, as you can simply power a VM back on if it’s actually being used.
  • Snapshots. This is actually not as hard as CPU and RAM, hence the actual dashboard lists them separately.

Why do cars have brakes?

So they can go faster!

Take advantage of Powered Off as the brake for your Idle VMs. If you treat Idle and Powered Off as 1 continuum, you can power off the Idle VMs earlier, and you get the benefit of CPU and RAM reclamation.

What value is considered Idle?

  • It has to be defined, so it’s measurable and not subjective. Declare it as a formal policy, so you don’t end up arguing with your customers.
  • The default setting in the vR Ops policy is CPU Demand = 100 MHz. A VM using 100 MHz or less of CPU will be considered Idle.
  • While a VM uses CPU, RAM, disk and network, we only use CPU as the definition of Idle. There is no need to consider all 4, and state that all 4 must be idle, because they are inter-related. It takes CPU cycles to process network packets and perform disk activity. Data from the NIC and disk must also be copied to RAM, and the copying requires CPU cycles.
  • How long has it been under that threshold?
    A VM does not use CPU non-stop for months. There are times it’s idle, and that’s normal. A month-end VM that processes payroll can be idle for 29 days! The default value of 90% will miss this.

Because of these month-end VMs, I recommend you change the definition from 90% to 99%. Even 99% for 30 days can still wrongly mark an active VM as Idle. 1% active means the VM was active for a total of only 7.2 hours (0.3 days) in 30 days, since 1% of 30 × 24 hours is 7.2 hours. Notice it’s a total, not 1 continuous stretch of 7.2 hours; it accumulates within the 30 days.

A VM that is idle for 30 days straight, then goes active, needs only 7.2 hours of accumulated activity to be marked as non-idle. A VM that does not accumulate those hours will obviously need more time. The Idle decision logic runs only every 24 hours, so a VM may still be marked idle for days after it has gone active.

The drawback of setting it at 99% is that we wait the full 30 days before deciding. In some corner cases, the VM may never be marked as idle. Take this scenario:

  • A VM was active and served its purpose for months.
  • After 2 years, the application is decommissioned as a new version is released.
  • As a result, the VM goes idle; it is simply waiting to be deleted. But because we set the threshold at 99%, the logic will wait for the full 30 days before deciding.
  • The VM keeps consuming CPU/RAM during this period, as basic services like AV and OS patching still run. If this non-app workload adds up to more than 7.2 hours in 30 days, the VM will never be marked as Idle.

Solution: increase the threshold from 100 MHz to an amount you think is suitable. If possible, power off the VM if it’s really not used.

Powered Off is simpler than Idle, as it’s binary.

A VM that has been powered off for at least 15 of the last 30 days will, once powered back on, take up to 15 days before it is marked as Powered On again. This creates a problem: during that window it still shows up as powered off, yet it’s not a VM you can reclaim.

Solution: add “Is it Powered On now?” into the formula. If a VM is running, it immediately stops being considered powered off.
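A minimal sketch of that check as a super metric, assuming your vR Ops version exposes the VM metric sys|poweredOn (1 = running, 0 = off); verify the metric key in your environment before using it:

${this, attribute=sys|poweredOn}

Any VM whose current value is 1 can be excluded from the Powered Off group right away, regardless of its 30-day powered off percentage.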

This is where the setting is in vR Ops 6.4.

You need to modify the value in your active policy:

  • Change idle from 90% to 99%
  • Change powered off from 90% to 50%

The above is the first of a set of vR Ops dashboards for Capacity Reclamation. I added a short Read Me for 2 reasons:

  • There are 4 dashboards.
    1. The dashboard above
    2. Idle VMs and Powered Off VMs. See below.
    3. Active VM: CPU. See this.
    4. Active VM: RAM. See this.
  • Reclamation is quite complex when you look at the details. There are many things we can reclaim.

You can replace the Read Me widget with a picture if you know the target screen resolution. I didn’t use an image, as it would make your import harder.

The above is the 2nd dashboard. It shows the Powered Off VMs and Idle VMs.

The summary at the top tells you how much you can reclaim. The table shows where you can reclaim it.

For the powered off VMs, the widget gives the summary. It tells you how many VMs there are, and how much space they occupy. The table provides the details.

The numbers will not be identical due to rounding, just in case you’re wondering. The summary is shown in TB while the table is in GB. 3.7 TB is the correct rounding for 3769.36 GB, as there are 1024 GB in 1 TB: 3769.36 / 1024 ≈ 3.68 TB, which rounds to 3.7 TB.