
Super Metrics bulk export and import

If you have a lot of super metrics, backing them up can be a challenge. There is no bulk export to back them up, and exporting each one manually makes version control error-prone.

Replicating them in another instance (e.g. your test/dev environment) is equally tedious, as you need to import them one by one.

A workaround is to use a Policy as the vehicle for bulk export/import.

  • For backup purposes, exporting is all you need.
  • For restoring into the same environment (the one you exported from), you can use the same XML file as-is. You don’t need to customise it.
  • For replicating into another environment, you will likely need to modify the XML file, because the policy file contains other settings, such as alerts. It’s safer if your exported policy does not carry those other settings.

I’ll show you how to trim the policy file so it contains only super metrics. That way it’s safe to import into any environment, as it won’t modify anything else. The XML file contains super metrics, and nothing more.

The policy file is just a long XML file. The example below has >5000 lines! Notice it contains Alerts and Custom Profile sections.

To delete an entire section, simply highlight it with the keyboard and delete it. See below, where I selected the Custom Profile section.

I’ve also deleted the alert section. The file is shorter now 🙂

We still have some irrelevant lines (lines 4 – 1187 in my case). Delete them too. You’ll end up with something like this.
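If you’d rather script the cleanup than delete thousands of lines by hand, here is a rough sketch using Python’s standard library. The file names and the tag filter are my assumptions; inspect your own exported policy first and adjust KEEP_HINTS, since your vR Ops version may require some header or metadata elements to be kept alongside the super metric section.

import xml.etree.ElementTree as ET

SOURCE = "exported-policy.xml"      # hypothetical name: the policy you exported from vR Ops
TARGET = "supermetrics-only.xml"    # trimmed copy that is safe to import elsewhere

# Keep top-level sections whose tag mentions any of these hints; drop the rest
# (Alerts, Custom Profile, analysis settings and so on). The hint is an assumption:
# check the actual tag names in your exported file and adjust accordingly.
KEEP_HINTS = ("supermetric",)

tree = ET.parse(SOURCE)
root = tree.getroot()

for child in list(root):            # list() so we can remove while iterating
    tag = child.tag.lower()
    if not any(hint in tag for hint in KEEP_HINTS):
        root.remove(child)

tree.write(TARGET, encoding="utf-8", xml_declaration=True)
print(f"Wrote {TARGET} with {len(root)} section(s) kept")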

I’ll expand the content so you know exactly what the supermetric section contains.

BTW, do not copy/paste your super metric formula from the vR Ops UI into the XML file. The expression stored in the XML is not 100% identical to what the UI shows.

Once done, you can import it safely into another vR Ops instance. The import is much faster too! Here is what it looks like:

And if you go to Super Metrics, they are all there 🙂

To enable them, go to your default policy (the one marked with a tiny D in the column) and edit it. Go to section 6 and find your super metrics. In the Operationalize Your World case, they are all prefixed with “Ops”.

Do not enable them for all objects. It will slow down your system, and it adds unnecessary complexity since the formulas don’t apply to those objects anyway.

Do I need to upsize my vRealize Operations?

One common mistake I see in the field is an oversized vRealize Operations deployment. I guess the “bigger is better” mindset is hard to let go of. It does not help that the official sizing guide is conservative, although it is conservative for a good reason: there is a wide range of vR Ops deployment permutations.

So if your deployment is a simple one, with no management packs and no End Point Operations, there is a good chance you are better off with a smaller deployment. So how do you check?

I’ll use an actual example and run through my thought process. The example below is from a real production environment, not a lab. It is mid-sized: around 3000 VMs on 300 ESXi hosts.

The environment is heavy on vCenter folders, vR Ops custom groups, super metrics and alerts. It also has integration with a ticketing system. The result is 6000 objects and 10 million metrics. The actual collection is 5500 objects.

To see the breakdown above, go to the Cluster Management screen, as shown below. What can you tell from it?

This vR Ops has 5 nodes. It’s clustered and well balanced; each node handles around 1100 objects. It also uses a Remote Collector to offload the 5-minute collection processing. As you’ll see later, that strategy pays off well.
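If you want to do the same sanity check on your own environment, the maths is simple. Here is a minimal sketch using the numbers quoted in this post; plug in your own.

objects_total     = 6000         # objects in inventory
objects_collected = 5500         # objects actually being collected
metrics_total     = 10_000_000   # metrics
nodes             = 5            # nodes in the cluster

print(f"Objects per node:   {objects_collected / nodes:.0f}")      # ~1100
print(f"Metrics per object: {metrics_total / objects_total:.0f}")  # ~1667

Compare the objects-per-node figure against the sizing guide for the node size you deployed. If you are far below what that node size supports, a smaller deployment is worth considering.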

You can see the breakdown of the objects being collected. The number 211 is made up of 205 + 6.

Now that we know what the deployment looks like, we can look at each node. From the screen below, you can see again that the Remote Collector runs a subset of a full node. It only has 4 main modules (Collector, Suite API, Watchdog and Admin UI); there is no Persistence or Database module there.

From the above, we can see the full metrics and properties of each node. We can also drill down into each module.

Remember the 5500 objects being collected? Let’s look at the history. I’m plotting since day 1; this is a new vR Ops instance, so the data only goes back to 1 March.

Notice it starts from 0, as that’s when we deployed it. It was a phased deployment: we registered more vCenter Servers over time, so the number of VMs, objects and metrics went up. CPU usage didn’t jump accordingly, indicating the cluster had more than enough CPU to handle the extra load. In other words, the additional load was too small to make a difference.

Since the 5 nodes are well balanced, let’s take one of them and dive deeper. I added Guest OS RAM this time around.

We see a similar jump in objects and metrics; that’s expected by now. The impact on CPU was also minimal.

The spikes you see in CPU actually occur daily; I’ll show later that they happen at midnight. The daily spike eventually became higher. I’m not sure exactly what causes it, but it’s a daily calculation (e.g. capacity or dynamic thresholds (DT)). It’s not super metrics or groups, as those are calculated every 5 minutes.

The additional load was actually sizable; it was in fact a 2x increase, as you can see below. I used a more detailed chart, and you can see the sharp jump as we added a few vCenter Servers. Each vCenter in turn brings in all its objects.

The sharp jump makes only a tiny difference in CPU usage. From the pattern below, you wouldn’t believe the load doubled. My take is that the extra load was absorbed by the Remote Collector.

The RAM pattern was puzzling, and I don’t know why. BTW, this counter is from the Guest OS, not from the VM level. I do expect memory to be fully used, as it’s just a form of cache; I just don’t know why Free RAM went up ahead of the addition.

Let’s look at network. The pattern matches CPU. The absolute number was low, though: 10K KBps = 10 MBps = 80 Mbps.

Let’s look at storage. The pattern matches CPU. Read is higher, because at night vR Ops does its capacity and DT calculations, which means it reads a lot of data. The absolute number was low, though: 1000 IOPS for 1100 objects means roughly 1 IOPS per object.

I said earlier we would dive into the CPU. Here is a 7-day chart. You can see there is a daily peak at midnight. But what about the 2nd peak, the one I marked with a question mark?

To answer that, we have to zoom into that period. Here is what it looks like. It turned out there was a problem: notice there was no collection. So when we rectified the problem, vR Ops had to catch up.

From the chart, we can also see that the daily calculation does not last >15 minutes. The burst was short.

Hope that helps you in right-sizing your vR Ops.

vRealize Operations Object Relationship

Relationships among objects are critical for analysis in vR Ops. Dashboards, reports, groups and super metrics rely on correct relationships. The good thing is that most relationships are provided out of the box.

You can also add your own. The 6.5 manual mentions this, but does not show how it’s done. Luckily, an older version of the manual has it. So here it is:

I’m not giving the link as the rest of the content is not relevant. Plus it’s a really old version, so other parts are likely outdated.

I’ll complement the above steps with pictures. In the screenshot below, see the big red number 1.

Click that Object Relationship. A 3-column area is shown.

  • The first column is for parents. You choose the parents here.
  • The last column is for children. You choose the children here.

The way you choose a parent is by choosing its container, then filtering it. In the example below, I chose Virtual Machine, then typed the object name. It returns all objects whose names contain the filter text.

You do the same thing for the children.

You then simply select and drag the object to the parent, and it will automatically build the relationship. In the picture below, you can see that the VM has the vR Ops node as its child.

If your vR Ops has more than 1 node, you need to repeat this for each node. I’m afraid it’s a manual job.
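If you have many nodes or many relationships to build, it may be worth scripting it against the Suite API instead of dragging objects one by one. The sketch below is only an illustration of the idea: the hostname, resource identifiers, endpoint path and payload shape are my assumptions, so verify them against the REST API documentation that ships with your vR Ops version before relying on it.

import requests

VROPS     = "https://vrops.example.local"   # hypothetical hostname
TOKEN     = "your-auth-token"               # acquired from the Suite API token endpoint
PARENT_ID = "uuid-of-the-parent-vm"         # hypothetical resource identifiers
CHILD_IDS = ["uuid-of-vrops-node-1", "uuid-of-vrops-node-2"]

# Assumed endpoint and payload shape; check your version’s API docs.
resp = requests.post(
    f"{VROPS}/suite-api/api/resources/{PARENT_ID}/relationships/children",
    headers={
        "Authorization": f"vRealizeOpsToken {TOKEN}",
        "Content-Type": "application/json",
        "Accept": "application/json",
    },
    json={"uuids": CHILD_IDS},
    verify=False,                           # lab only; use proper certificates in production
)
resp.raise_for_status()
print("Relationship request accepted:", resp.status_code)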

Once done, you will see the relationship reflected. The VM below now has the vR Ops nodes as its children. This means I can build things like super metrics on top of it.

That’s all folks. Have a good relationship! 🙂