One common mistake I see in the field is an oversized vRealize Operations deployment. I suppose the "bigger is better" mindset is hard to let go of. It does not help that the official sizing guide is conservative. It is conservative for a good reason: it has to cover a wide permutation of vR Ops deployments.
I’ll use an actual example and run through my thought process. The example below is from a real production environment, not a lab. It is a mid-size environment: around 3,000 VMs on 300 ESXi hosts.
The environment is heavy on vCenter folders, vR Ops custom groups, super metrics and alerts. It also integrates with a ticketing system. The result is 6,000 objects and 10 million metrics. The actual collection is 5,500 objects.
To see how the above breaks down, go to the Cluster Management screen, as shown below. What can you tell from it?
The vR Ops cluster has 5 nodes. It’s clustered and well balanced; each node handles around 1,100 objects. It also uses a Remote Collector to offload the 5-minute collection processing. As you’ll see later, that strategy pays off well.
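To put those numbers in perspective, here is a quick back-of-envelope sketch. The numbers come from this environment; only the variable names are mine:

```python
# Back-of-envelope sizing for this environment. All numbers are from the
# article; the variable names are my own.
vms, hosts = 3000, 300
objects_total = 6000        # objects in the vR Ops inventory
objects_collected = 5500    # objects actively being collected
metrics_total = 10_000_000
data_nodes = 5              # Remote Collector excluded

print(metrics_total / objects_total)    # ~1667 metrics per object on average
print(objects_collected / data_nodes)   # 1100 objects per data node
print(objects_collected / vms)          # ~1.8 collected objects per VM
```

The ~1,667 metrics per object average is on the high side, which is consistent with how heavy this environment is on super metrics and custom groups.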
You can also see the breakdown of the objects being collected. The number 211 is the sum of 205 + 6.
Now that we know what the deployment looks like, we can look at each node. From the screen below, you can see again that the Remote Collector runs only a subset of a full node. It has just 4 main modules (Collector, Suite API, Watchdog and Admin UI). There is no Persistence layer and no databases on it.
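Since every node runs the Suite API module, you can also pull these inventory counts programmatically instead of reading them off the screen. A minimal sketch, assuming the standard Suite API resources endpoint; the hostname is hypothetical, and you should verify the path and field names against your version’s API docs:

```python
from urllib.parse import urlencode

# Sketch: build the Suite API call for the total object count. The endpoint
# path, parameter and response field names are assumptions based on the
# vRealize Operations Suite API; the hostname is made up.
VROPS = "https://vrops.example.com"
params = {"pageSize": 1}   # we only need the total, not the resource list
url = f"{VROPS}/suite-api/api/resources?{urlencode(params)}"
print(url)
# GET this URL with "Accept: application/json" and a Suite API auth token;
# the response's pageInfo.totalCount field is the inventory object count.
```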
From the above, we can see the full metrics and properties of each node. We can also drill down into each module.
Remember the 5,500 objects being collected? Let’s see the history. I’m plotting since Day 1. This is a new vR Ops deployment, so it only goes back to 1 March.
Notice it starts from 0, as that’s when we deployed it. It was a phased deployment: as we registered more vCenter Servers, the number of VMs and objects went up. The CPU usage didn’t jump accordingly, indicating the node has more than enough CPU to handle the extra load. In other words, the additional load was too small to make a difference.
Since the 5 nodes are well balanced, let’s take 1 of them so we can dive deeper. I added Guest OS RAM this time around.
We see a similar jump in objects and metrics. That’s expected by now. The impact on CPU was also minimal.
The spike you see in CPU is actually a daily pattern. We will show later that it happens at midnight. The daily spike eventually became higher. I’m not sure exactly what causes it, but it’s a daily calculation (e.g. capacity or dynamic thresholds). It’s not super metrics or groups, as those are calculated every 5 minutes.
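One way to confirm that a spike recurs at the same time each day is to bucket the CPU samples by hour of day and find the hour with the highest average. A minimal sketch with made-up sample data; in practice you would export the real samples from vR Ops:

```python
from collections import defaultdict
from datetime import datetime

# Hypothetical (timestamp, CPU %) samples standing in for a vR Ops export.
samples = [
    (datetime(2021, 3, 1, 0, 5), 80.0),   # midnight spike
    (datetime(2021, 3, 1, 6, 0), 20.0),
    (datetime(2021, 3, 1, 12, 0), 22.0),
    (datetime(2021, 3, 2, 0, 5), 85.0),   # midnight spike again
    (datetime(2021, 3, 2, 6, 0), 21.0),
]

# Group the samples by hour of day.
by_hour = defaultdict(list)
for ts, cpu in samples:
    by_hour[ts.hour].append(cpu)

# The hour with the highest average CPU is the recurring spike.
peak_hour = max(by_hour, key=lambda h: sum(by_hour[h]) / len(by_hour[h]))
print(peak_hour)  # 0, i.e. midnight
```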
The additional load was actually substantial. It was in fact a 2x load, as you can see below. I used a more detailed chart, and you can see the sharp jump as we added a few vCenter Servers. Each vCenter in turn brings in all its objects.
The sharp jump makes a tiny difference in CPU usage. From the pattern below, you wouldn’t believe there was a 2x load. To me, the extra load was absorbed by the Remote Collector.
The RAM pattern was puzzling; I don’t know why it behaved this way. BTW, this counter is from the Guest OS, not from the VM level. I do expect memory to be fully used, as it’s just a form of cache. I just don’t know why Free RAM went up ahead of the addition.
Let’s look at network. The pattern matches CPU. The absolute number was low though: 10K KBps = 10 MBps = 80 Mbps.
Let’s look at storage. The pattern matches CPU. Read is higher because at night vR Ops does its capacity and DT calculations, which means it reads a lot of data. The absolute number was low though: 1,000 IOPS for 1,100 objects means roughly 1 object = 1 IOPS.
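Both of those back-of-envelope conversions are worth double-checking, since KBps vs Mbps trips people up:

```python
# Network: ~10K KBps at peak, converted to Mbps (1 byte = 8 bits).
kbps = 10_000
mbps = kbps / 1000 * 8      # KBps -> MBps -> Mbps
print(mbps)                 # 80.0 Mbps

# Storage: ~1000 IOPS spread across ~1100 collected objects.
iops, objects = 1000, 1100
print(iops / objects)       # ~0.9, i.e. roughly 1 IOPS per object
```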
I said earlier that we would dive into the CPU. Here is a 7-day chart. You can see there is a daily peak at midnight. But what about the 2nd peak, the one I marked with "?"
To answer that, we have to zoom into that period. Here is what it looks like. It turned out there was a problem: notice there was no collection. So when we rectified the problem, vR Ops had to catch up.
From the chart, we can also see that the daily calculation does not last more than 15 minutes. The burst was short.
I hope that helps you in right-sizing your vR Ops.