You have a lot of ESXi hosts, and you know that traffic such as vMotion and VSAN can consume a lot of bandwidth. With vRealize Operations, you can see whether any of the vmnics (the physical NICs) across all your ESXi hosts is hitting its limit. Since the network now runs full duplex, we need to prove that neither **Transmit** (TX) nor **Receive** (RX) hits the limit. The limit can be 1 GE or 10 GE.

An ESXi host typically has 2x 10 GE NICs, or many 1 GE NICs. In my example below, each ESXi host has 4 vmnics. That means I need to check 4 x 2 = 8 counters (TX and RX per vmnic), and make sure none of them hits the limit.

So I need to have the following equation for each ESXi Host:

Max (vmnic0 TX, vmnic0 RX, vmnic1 TX, vmnic1 RX, vmnic2 TX, vmnic2 RX, vmnic3 TX, vmnic3 RX)
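To make the idea concrete, here is a small Python sketch of the same logic. The counter values are made-up sample readings in KBps, purely for illustration; in practice vRealize Operations evaluates this as a super metric.

```python
# Illustration only: hypothetical TX/RX throughput readings (KBps),
# one pair per vmnic, as vCenter would report them.
counters = {
    "vmnic0": {"tx": 52000, "rx": 48000},
    "vmnic1": {"tx": 610000, "rx": 95000},
    "vmnic2": {"tx": 12000, "rx": 30000},
    "vmnic3": {"tx": 7000, "rx": 9000},
}

# The host's peak is the single busiest direction on the busiest NIC.
peak_kbps = max(
    rate
    for nic in counters.values()
    for rate in (nic["tx"], nic["rx"])
)
print(peak_kbps)  # 610000
```

If that peak approaches the physical limit of the NIC, that host needs attention.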

I will use vRealize Operations to implement the above idea. You can use either 5.x or 6.x for this; the example below uses 6.0.1.

The first step is to create a new super metric. Since I need to apply the formula to the ESXi host in question, I need to use the **This** option. Click the little blue icon above the formula bar, then double-click the formula you want. Follow the steps in the screenshot below, as the process differs from a super metric that does not use the This option.

- Click the **This** icon. In the screenshot, I've clicked it.
- Choose the Object Type: ESXi Host.
- Choose any of your ESXi hosts. It does not matter which one, as the name will be replaced with "**This Resource**".
- Choose the counter you want to add to the super metric. Double-click on it.

I performed the above steps repeatedly, 10x to be exact. Yes, I added vmnic4, as I needed to verify whether the formula would result in an error since my ESXi host does not have a vmnic4. Why do I do this? In an environment with **hundreds** of ESXi hosts globally, you may have variances. So you want to create 1 super metric and apply it to all.

I then did the usual preview to verify that it works. Notice the actual numbers and the pattern of the chart.

I know you don’t want to double click so many times, so here it is for you to copy-paste:

    Max ([
    Max(${this, metric=net:vmnic0|received_average}),
    Max(${this, metric=net:vmnic0|transmitted_average}),
    Max(${this, metric=net:vmnic1|received_average}),
    Max(${this, metric=net:vmnic1|transmitted_average}),
    Max(${this, metric=net:vmnic2|received_average}),
    Max(${this, metric=net:vmnic2|transmitted_average}),
    Max(${this, metric=net:vmnic3|received_average}),
    Max(${this, metric=net:vmnic3|transmitted_average}),
    Max(${this, metric=net:vmnic4|received_average}),
    Max(${this, metric=net:vmnic4|transmitted_average})
    ]) * 8 / 1024

In the formula above, I manually added `* 8 / 1024` so the KBps value gets converted to Mbps.
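As a quick sanity check on that conversion, using a made-up reading: a vmnic reporting 128,000 KBps works out to exactly 1,000 Mbps, roughly the saturation point of a 1 GE link.

```python
# vCenter reports network throughput in KBps (kilobytes per second).
# * 8 converts bytes to bits; / 1024 converts kilo to mega.
def kbps_to_mbps(kbps: float) -> float:
    return kbps * 8 / 1024

print(kbps_to_mbps(128000))  # 1000.0
```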

One habit that I recommend when creating super metrics is to always verify the formula. In the screenshot below, I manually plotted each vmnic's TX and RX, so I have 8 lines. Notice that the Max of these 8 lines matches the super metric I created above. With this, I have proof that it works as expected.

As usual, do not forget to associate it, and also enable it. You should know the drill by now 🙂 If you are not sure of the steps, search my blog for them.

The above is great for 1 ESXi host. You can enable an alert, and since you have visibility at the ESXi level, you will know for sure which ESXi host was affected.

You may say that with DRS and HA, you should also apply it at the **cluster** level. OK, we can do that too. We just need to create another super metric. Since I already have the data for each host, all I need to do is apply Max to that super metric.
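As a sketch of what that cluster-level formula could look like: the `sm_xxxxxxxx` key below is a placeholder, since vRealize Operations assigns its own internal ID to each super metric, and the exact parameter names (e.g. `adaptertype`, `objecttype`) may vary between versions, so check them in your own super metric editor.

```
Max(${adaptertype=VMWARE, objecttype=HostSystem, metric=Super Metric|sm_xxxxxxxx, depth=1})
```

The `depth=1` part tells it to look one level down from the cluster, i.e. at its member ESXi hosts.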

As usual, I did a preview on the above screen, and then verified it manually on the following screen. Notice that the patterns and numbers match!

The above works for a cluster. What if you need to apply it at a higher level? You need to adjust the **depth=** parameter. In the example below, I use the World object. Notice I set **depth=4**, as the hierarchy from World to ESXi host is:

- Cluster
- vCenter Data Center
- vCenter
- World
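Under the same assumptions as the cluster-level sketch (placeholder `sm_xxxxxxxx` ID; parameter names may differ in your version), the World-level formula only changes the depth, since the ESXi hosts now sit four levels below:

```
Max(${adaptertype=VMWARE, objecttype=HostSystem, metric=Super Metric|sm_xxxxxxxx, depth=4})
```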

There you go. You now have **proof** of whether any of your vmnics has hit the limit on either TX or RX. And you can see it at both the cluster level and the ESXi level 🙂
