Monthly Archives: November 2016

vRealize Operations 6.3 Self Monitoring

vRealize Operations 6.3 sports enhanced self-monitoring. Michael Ryom covers this on his blog as part of his What's New in 6.3, so I will start where he left off. Do read his post first.

The screenshots in this blog are taken from the 6.4 release (see What's New by Roshan). I do not have 6.3, but I believe this self-monitoring feature is the same in 6.3.

vCenter Adapter Details

The vCenter Adapter collects data from vCenter. The dashboard helps answer collection questions for each vCenter, such as:

  • Is there anything wrong with collection? A big drop in the number of objects and metrics can give a clue, especially if you are not removing objects from the associated vCenter.
  • Is collection taking longer than usual?
  • Is collection failing to pick up new objects?


The lab has ~300 VMs and 30 ESXi hosts. Adding up the number of objects and metrics, I get around 160 metrics per object on average.
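The per-object average is just the total metrics divided by the total objects. A quick sketch (the totals below are hypothetical; read the real ones off the dashboard):

```python
# Hypothetical totals; in practice, read them off the vCenter Adapter
# Details dashboard for your own environment.
total_objects = 500        # VMs, hosts, datastores, port groups, etc.
total_metrics = 80_000     # all metrics collected by the adapter

metrics_per_object = total_metrics / total_objects
print(metrics_per_object)  # 160.0
```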

As you can see from the above, I have customised it. It is safe to customise, and I do encourage you to do so. Best to follow these 2 rules:

  1. Do not use the Admin account. You won't be able to track what you have changed if you do.
  2. Do not modify the existing objects. Clone them, and prefix the clones with your company name (e.g. MSFT).

If you want to know where the counters come from, go into edit mode. Notice you cannot edit unless you are using the built-in Admin account. That's a protection, so you do not accidentally modify the OOTB objects.


From the above, you can tell the metrics come from the vCenter object itself. vSphere World is chosen, as its children are the vCenter objects.

Cluster Statistics

The dashboard provides aggregate information at the cluster level, so you can see a summary before going into each node. There are interesting counters such as Objects, Metrics, Alarms and Alerts.

You can click on each scoreboard and the detailed line chart will be shown automatically. For example, I clicked on Metrics and can see that collection went up on 21 November. If this is not due to new VMs or new vSphere infrastructure, it's something I'd need to investigate.


You can also get the usual information such as CPU, RAM, Disk and Network. I've selected CPU Usage in the example below.


If your vR Ops is slow, you can use the Average IO Transaction Time to tell whether vR Ops is experiencing high disk latency. If the number here is much higher than what you see at the VM level, check whether the IO is stuck in the Guest OS.


We can also see the IOPS. From here we can see a daily pattern. There is a daily spike in Writes; the peak hit 4K IOPS sustained over 5 minutes, so the actual IOPS is higher, as that figure is a 300-second average. There is also a daily spike in Reads, but at a different time.
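To see why a 300-second average understates the real peak, consider a made-up one-minute burst inside an otherwise quiet 5-minute window:

```python
# 1 sample per second over a 5-minute (300 s) collection window.
# A 60-second burst of 12,000 IOPS followed by 240 quiet seconds
# averages down to 4,000 IOPS -- the chart shows 4K, not 12K.
window = [12_000] * 60 + [2_000] * 240
average_iops = sum(window) / len(window)
print(average_iops)  # 4000.0
```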


Performance Details

The detail dashboard covers an individual node. The lab only has 1 node, which is what I'd recommend you deploy. From what I see, a single node with 4 vCPUs running the latest Intel Xeon should be able to handle up to 4000 VMs. I'm assuming you only use the vSphere adapters.


Can you spot the customisation I made to the dashboard?

Yes, I've added an extra column. This is how it's done.


Notice you do not need more than 4 vCPUs here.


Can you guess the one-time peak around 16 November? Yes, that's when we upgraded it.


You might want to customise the dashboard further, or build your own. You may also want to set up new alerts. To do that, you need to know 2 things:

  1. The relationships among objects, such as the hierarchy.
  2. The metrics and properties of each object.

One way to study this is to click on a particular object and look at its All Metrics page. Below is an example, for the Collector service. You can see the metrics and properties you can get from this object.


You can see the full list of metrics & properties here.
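If you prefer to script this exploration, the vR Ops Suite API can list a resource's stat keys. This is only a sketch: the host name, credentials and resource id are placeholders, and the endpoint paths are from the 6.x Suite API, so verify them against your own documentation.

```python
# Sketch: list the stat keys (metrics) of one vR Ops resource via the
# Suite API. Host, credentials and resource id are placeholders.

def statkeys_url(base, resource_id):
    """URL listing all stat keys (metrics) of one resource."""
    return f"{base}/suite-api/api/resources/{resource_id}/statkeys"

if __name__ == "__main__":
    import requests  # third-party; pip install requests

    base = "https://vrops.example.com"   # assumption: your vR Ops node

    # Acquire a token first, then query the resource's stat keys.
    token = requests.post(
        f"{base}/suite-api/api/auth/token/acquire",
        json={"username": "admin", "password": "secret"},
        headers={"Accept": "application/json"},
        verify=False,
    ).json()["token"]

    resp = requests.get(
        statkeys_url(base, "<resource-uuid>"),
        headers={"Accept": "application/json",
                 "Authorization": f"vRealizeOpsToken {token}"},
        verify=False,
    )
    for key in resp.json().get("stat-key", []):
        print(key["key"])
```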

Before creating a new alert or symptom, it's wise to check whether an existing alert already covers it. For example, here are the symptoms for the Collector. Notice there are different object types; you need to know that.


Hope you find it useful. I will expand this post next week once I finish my travel.

vSphere visibility for Network Team

This post continues from the Operationalize Your World post. Do read it first so you get the context.

Similar to the problem faced between the Storage team and the Platform team, the VMware Admin needs to reach out to the Network team. A set of purpose-built dashboards will enable both teams to look at an issue from the same point of view.

The dashboards must answer the following basic questions for the Network team:

  1. What have I got in this virtual environment?
    • What is the virtual network configuration? What are the networks, and how big are they?
    • We have Distributed vSwitches, Distributed Port Groups, Datacenters, Clusters, ESXi hosts, etc. How are they related?
    • Who are the consumers of my network? Where are they located?
  2. Are they healthy?
    • Do we have any errors in our networks? Which port groups see dropped packets? If there is a problem, which VMs or ESXi hosts are affected?
    • Do we have too many special packets (broadcast, multicast and unknown)? Who generates them, and when?
  3. Are they optimized?
    • Just because something is healthy does not mean it is optimized. Look for opportunities to right-size.

Once Network Admins know what they are facing, they are in a better position to analyse:

  1. Utilization
    • Is any VM or ESXi host near its peak?
    • Who are the top consumers in each physical datacenter?
    • How is the workload distributed in this shared environment?
  2. Performance
    • When a VM Owner thinks the network is the culprit, can both the Network team and the Platform team verify that quickly?
  3. Configuration
    • Are the configs consistent?
    • Do they follow best practices?

Once we know the questions and requirements, we can plan a set of dashboards. I’ll show some sample dashboards to get you going. They follow the dashboard best practices.

What have I got? 

This first dashboard provides overall visibility to the Network team. It gives insight into the SDDC.

  • It shows the total environment at a glance. A Network Admin can see how the virtual network maps to the virtual environment (ESXi, vCenter Datacenter, vCenter).
  • It quickly shows the structure of the virtual network. For each Distributed vSwitch, you can see what its port groups are and which ESXi hosts are connected to it. You see the config of both objects (port group and ESXi), so you can spot inconsistent configuration.
  • The heatmap shows all port groups by size, so you can find your largest ones easily. The color coding also lets you see which ones are used the most.
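Spotting inconsistent configuration is easy to sketch in code. A toy example, with hypothetical port-group names and settings:

```python
# Toy port-group settings; the names and values are made up.
# The dashboard shows the same comparison visually, side by side.
port_groups = {
    "PG-Web": {"vlan": 10, "mtu": 1500},
    "PG-App": {"vlan": 20, "mtu": 1500},
    "PG-DB":  {"vlan": 30, "mtu": 9000},   # the odd one out
}

mtus = {name: cfg["mtu"] for name, cfg in port_groups.items()}
if len(set(mtus.values())) > 1:
    print("Inconsistent MTU across port groups:", mtus)
```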


Once you know your environment, you are in a position to do monitoring. I don’t feel comfortable doing monitoring unless I have the big picture. It gives me context.

Are they healthy?

The next dashboard quickly shows whether there are dropped packets or error packets in your network.

  • The first line chart shows the maximum dropped packets among all Distributed Switches, so if any switch has dropped packets, it will show up.
  • The second line chart sums all the error packets.

Both line charts are color coded, and you should expect to see green, meaning no dropped packets and no error packets.


The dashboard above has interactions, allowing you to drill down when the line charts are not showing green.

  • If you do not see green, you can drill down into each Distributed Switch.
    • As a virtual switch can span thousands of ports, it helps to be able to drill down by Port Group and ESXi host. The dashboard automatically shows the relevant Port Groups and ESXi hosts of the distributed switch.
  • If there is a need to, you can even drill down to an individual VM.
    • The table at the bottom is collapsed by default. Expand it and you'll see dropped-packet information for all VMs.

Other than dropped packets and error packets, the Network Admin can also check for multicast, broadcast, and unknown packets. You don't want too many of them zipping around your DC.


The same concept is applied, so the Network team only needs to learn the dashboard once.

The line charts show the total broadcast and total multicast packets. We are not doing an hourly average because, at the global level, the law of large numbers ensures the line is smooth. A significant deviation is required to move the number, so if there is a big jump, you know something is amiss.
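A quick sketch of why the global total is smooth: with enough VMs, even a big move by a single VM barely shifts the total. The numbers below are made up for illustration.

```python
# 1,000 VMs each sending ~100 broadcast packets per interval.
vms = 1000
baseline_total = vms * 100

# One VM bursts to 10x its normal rate; the total barely moves.
burst_total = baseline_total - 100 + 1000
change = (burst_total - baseline_total) / baseline_total
print(f"{change:.1%}")  # 0.9% -- invisible on the global chart
```

Only a broad shift across many VMs (or a truly enormous burst from one) moves the global line, which is exactly what makes a visible jump worth investigating.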

Just like the previous dashboard, this one lets you drill down too. You can check which VMs are generating broadcast packets.

Is Utilisation running high? 

Another factor that can impact performance is high utilization. The Network team can see the total utilization of the network. The following dashboard answers questions such as:

  • What's the total workload hitting our physical switches?
  • Is the total workload increasing?
  • Any odd pattern in utilization? Any sudden spike that should not have happened?


Line charts are used instead of a single number, as they let you see the pattern over time. In fact, 2 line charts are provided: detailed and big picture.

The chart gives you the total throughput hitting the physical switches, so you know how much bandwidth is being generated. The chart defaults to 1 month, as this is more of a long-term view, not really for troubleshooting.

Based on the line charts, you can drill down into a specific time period where the peak was high. The Top-N widget lists the ESXi hosts with the highest usage. Click on one, and its detailed utilization will be shown. You can see whether any of its vmnics is near the physical limit. The super metric takes into account both RX and TX, as you can hit the limit on either.
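One way such a super metric might be written (illustrative only; the `${this, metric=...}` form and the metric keys are assumptions, so verify the exact syntax and keys in your super metric editor):

```
max([ ${this, metric=net|received_average}, ${this, metric=net|transmitted_average} ])
```

Applied per vmnic, the result can then be compared against the NIC's physical speed.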

If you see that one ESXi host is saturated while others are barely used, your workload is not well distributed. Note that vSphere 6.0 DRS does not consider network, so imbalance is possible. vSphere 6.5 DRS takes network into account.


  • You can add the total line chart as above, but for VMs.
    • VM traffic does not include hypervisor traffic (vMotion, management network, IP storage), so it's pure business workload.
    • We should expect this number to rise slowly, reflecting growth.
    • A sudden spike is bad, and so is a sudden drop. We can turn on analytics on it so you get an alert.
  • For details on how to do it, see

How is the workload distributed?

A Distributed Switch does not span beyond a vSphere Data Center, so the data center is a logical place to start analysing the traffic. The following dashboard compares the workload of each data center. The color coding makes it easy to see which DC reaches a high workload.

You can drill down inside the datacenter object. Click on it, and all the ESXi hosts and port groups connected to it will be automatically shown. Click on an ESXi host, and you can drill down into a VM.


You can change the threshold (limitation: the same limit applies to every DC) to suit your needs.

Who are the Top Talkers? 

Another cause of performance problems is VMs consuming excessive network resources. Or you have a peak period, where the total is simply much higher than in a low period. The next dashboard provides 2 line charts. Again, line charts are used so you can see the pattern.

The table provides a list of VMs, sorted by their peak utilization. You can find out who the bursty users are (highest 5-minute value), and who the sustained users are (highest 1-hour and 1-day values).
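The difference between bursty and sustained users comes straight from the rollup window: a short burst dominates the 5-minute peak but fades in the 1-hour peak. A toy illustration:

```python
def peak(series, window):
    """Highest average over any contiguous run of `window` samples."""
    return max(sum(series[i:i + window]) / window
               for i in range(len(series) - window + 1))

# 12 samples of 5 minutes each = 1 hour. Toy numbers.
bursty    = [10] * 5 + [1000] + [10] * 6   # one 5-minute burst
sustained = [200] * 12                     # steady all hour

print(peak(bursty, 1), peak(bursty, 12))        # 1000.0 92.5
print(peak(sustained, 1), peak(sustained, 12))  # 200.0 200.0
```

The bursty VM tops the 5-minute column but ranks below the sustained VM over the hour, which is exactly why the table shows both.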


This example is only for VMs. We can build one for ESXi hosts if that's needed. The concept is the same.

Is Network the culprit?

Lastly, it's all about the Service, not the System or Architecture. When a VM Owner complains that IaaS is causing a problem, the Network Admin and VMware Admin can quickly look at the same dashboard to agree whether the network is the culprit.


Hope you find the material useful. If you do, go back to the Main Page for the complete coverage of SDDC Operations. It gives you the big picture so you can see how everything fits together. If you already know how it all fits, you can go straight to download here.