[4 Sep 2016: vRealize Operations 6.3 can retrieve Guest OS RAM without agent]
I covered how to monitor Windows RAM usage in earlier blog posts: Windows 7 x64 in this post and Windows 2008 R2 in this post. The information shows that the way Windows manages memory is not fully visible to the hypervisor. Some of the counters it uses are not visible, as that memory is acting as cache. BTW, you can still use those counters when monitoring physical machines, which is useful for sizing them before you P2V them into a VM.
vRealize Operations 6.1 delivers a good enhancement in Guest OS visibility. It uses an agent inside the Guest OS, called the End Point (EP) agent. Having this in-guest visibility improves vRealize Operations' accuracy.
Let’s take an example. I will use Windows 2012 this time around, as I have used Windows 2008 and Windows 7 in previous blogs.
The Windows VM runs small MS AD and DNS roles, and supports the VMware ASEAN lab. It does not provide any other features or services. Let's look at the system configuration. System Information shows it has 8 GB of Physical Memory + 1.25 GB of Virtual Memory. For a small MS AD server doing just DNS and AD, 8 GB is more than enough. That probably explains why the pagefile.sys is only 1.25 GB.
What about the usage (utilization)? A quick snapshot using Resource Monitor shows that Available is hovering around 6.4 GB. This indicates I have plenty of RAM, as I have 8 GB of physical RAM. The RAM usage is also quite stable, with very few hard faults per second.
I say “around” 6.4 GB because there is a difference in how vRealize Operations and Windows sample the data.
Windows PerfMon and Resource Monitor report every second. vRealize Operations, on the other hand, reports every 5 minutes. 5 minutes is 300 seconds, and the reported value is an average, so expect the data to differ because the sampling intervals differ. vRealize Operations is designed as a monitoring tool, while tools that can go down to a more granular level (Log Insight, Windows counters, esxtop) are more suitable for troubleshooting. Those tools are not so suitable for overall monitoring, as you will incur a performance penalty while monitoring. In addition, you really do not want to react to every spike that lasts only a few seconds; there is a good chance it does not impact the business.
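To illustrate why the sampling interval matters, here is a minimal Python sketch (the numbers are made up) showing how a short spike that a 1-second tool would catch almost disappears in a 5-minute average:

```python
# 300 one-second samples of RAM usage (%), with a hypothetical
# 10-second spike to 95% in an otherwise steady 20% workload.
samples = [20.0] * 300
for i in range(100, 110):  # the spike lasts only 10 seconds
    samples[i] = 95.0

peak = max(samples)                    # what a 1-second tool reports
average = sum(samples) / len(samples)  # what a 5-minute collector reports

print(f"peak: {peak:.1f}%")       # peak: 95.0%
print(f"average: {average:.1f}%")  # average: 22.5%
```

The average barely moves off the baseline, which is exactly why a monitoring tool should not alarm on every spike that lasts a few seconds.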
With that, let's take a look at the first metric. Commit Limit is important because a growing value is an early warning sign. Remember, Windows proactively increases its pagefile.sys when it's under memory pressure.
I'm expecting the Commit Limit to be a constant 9.25 GB (8 GB of RAM + 1.25 GB of pagefile), as my AD is not under memory pressure at all. In bytes, this is 9,931,640,832. So that's the value I'm expecting from the vRealize Operations EP Agent.
Bingo! The value matches. More importantly, it's a constant value, meaning Windows 2012 is not under memory pressure. I need to keep this value at 16 GB or below, i.e. twice the 8 GB of physical RAM. I'm surprised Windows 2012 uses such a small pagefile; Windows 2008 would probably have set it to 16 GB.
Tip: create a super metric that tracks the ratio of pagefile.sys size to total RAM. If it is >1.0, you need to add RAM.
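Super metrics are written in vRealize Operations' own expression syntax, but the logic is simple enough to sketch in Python (the function name and the derivation of the pagefile size from Commit Limit are my own illustration):

```python
GIB = 1024 ** 3

def pagefile_to_ram_ratio(commit_limit_bytes: int, ram_bytes: int) -> float:
    """Commit Limit = physical RAM + pagefile.sys, so the pagefile size
    can be derived by subtraction. A ratio above 1.0 means Windows has
    grown the pagefile beyond the installed RAM -- time to add RAM."""
    pagefile_bytes = commit_limit_bytes - ram_bytes
    return pagefile_bytes / ram_bytes

# The values from this VM: 9,931,640,832 bytes Commit Limit, 8 GB RAM.
ratio = pagefile_to_ram_ratio(9_931_640_832, 8 * GIB)
print(f"{ratio:.2f}")  # 0.16 -- well below the 1.0 alarm threshold
```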
The Commit Limit, while a good indicator, does not change frequently. It also does not tell you how much RAM is actually used and how much is free. In Windows (2012, 2008, 7 x64), the Available Memory counter tells you how much RAM is available. It consists of Standby RAM and Free RAM. It is possible for Free Memory to drop to near 0 for a short period; Windows manages its memory actively and will replenish it.
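As a quick sketch of that relationship (the Standby/Free split below is hypothetical, since the screenshots do not break it down):

```python
def available_gib(standby_gib: float, free_gib: float) -> float:
    """Windows' Available Memory = Standby (cache that can be instantly
    repurposed) + Free (untouched pages)."""
    return standby_gib + free_gib

# Hypothetical split that adds up to the ~6.4 GB observed on this VM.
print(round(available_gib(standby_gib=4.9, free_gib=1.5), 1))  # 6.4
```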
Performance Monitor shows in the following screenshot that I have around 6.4 GB of available RAM. This is consistent with the snapshot I saw in Resource Monitor.
Let's now look at the data from the vRealize Operations 6.1 EP agent. Since it is a 5-minute average, it can be slightly different. The good thing is that business requirements do not dictate that we be accurate to the nearest megabyte; in fact, sizing to the nearest GB is acceptable. vRealize Operations shows that I'm using around 20%. 80% of 8 GB is around 6.4 GB, which is what the Free Memory counter shows. Again, this matches what I saw inside Windows.
20% may seem on the low side of memory utilisation. As I shared earlier, Windows takes advantage of all the RAM given to it. So you need to include the Standby RAM and Modified RAM as well if you want to be more conservative in your sizing. If you add them, you will see higher utilisation. This means you have 2 choices for sizing: Conservative or Cost Effective.
Cost Effective: Memory Used. Conservative: Memory Used + Cached Memory.
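The two choices can be sketched as a simple calculation (the Used and Cached figures below are hypothetical, chosen in the spirit of this VM's roughly 20% usage with a large cache):

```python
def required_ram_gib(used_gib: float, cached_gib: float,
                     conservative: bool, target_util: float = 0.9) -> float:
    """Cost Effective sizes on Memory Used only; Conservative also counts
    Cached Memory. target_util keeps utilisation at or below 90%."""
    demand = used_gib + (cached_gib if conservative else 0.0)
    return demand / target_util

# Hypothetical figures: 1.6 GB used (~20% of 8 GB), 2.9 GB cached.
print(round(required_ram_gib(1.6, 2.9, conservative=False), 1))  # 1.8
print(round(required_ram_gib(1.6, 2.9, conservative=True), 1))   # 5.0
```

With these assumed inputs, the conservative answer lands at 5 GB, which is the kind of figure you would size the VM to rather than the raw 20% usage.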
Following the KISS principle, I'd aim for 90% utilisation (Used + Cached) as the healthy range for both server workloads and VDI workloads. In this specific case of the VMware lab, we actually know that AD does not actively use that much RAM, as the cache makes up a large share of it. In this case, I can probably live comfortably with 5 GB.
For reporting convenience, vRealize Operations also provides the Free Memory counter. The two counters add up to 100%, as you can see below. Free Memory only started to show up at 1 pm because it was not enabled by default; I enabled it manually around that time.
Memory Usage value from outside the Guest OS
At this point you may ask what the value looks like from outside the Guest OS, i.e. from the hypervisor. The value is different; as you can see, the pattern is different too. Yes, the absolute values are similar, which means you can use the hypervisor value as a rough gauge.
There are other cases where the difference is not negligible. Let me show some.
I'll show you an example where the difference is big enough that it resulted in a false alarm by the hypervisor. I use the word hypervisor, as this should apply to all hypervisors, not just vSphere.
In the chart below, vRealize Operations shows 3 values: A, B, and C.
A is the memory usage counter from the hypervisor. C is the Memory Used counter from inside the Guest OS. The delta is quite significant. The pattern is also different.
The value in vCenter is hovering around 90%, and it actually triggered an alarm.
Let's zoom in to see the various other counters, just in case we're missing something. As you can see, there is no ballooning, swapping, or compression. Memory latency is also 0. In other words, there is no hypervisor-related factor that could impact the VM's internal memory usage.
The above example shows a situation where the hypervisor was over-reporting. What about under-reporting? Can the hypervisor show a value that was too low?
Let’s look at the example below.
The counter from the hypervisor reported that the VM above was using less than 20%. The Memory Usage counter was stable, hovering around 15 – 20%.
I then installed the EP Agent. As you can see, the EP Agent started providing the metrics.
Interestingly, the Memory Usage counter rose from around 20% to 90%, and it stayed there. No, the agent does not consume excessive RAM; it's just the way Active Memory is sampled. Please review this excellent article by Mark Achtemichuk.
Installing the EP agent had a significant impact on the Memory Usage counter. The value from vCenter moved from being lower than the in-guest value to being higher. I let the VM stabilise for several hours; the values did not change. Both Memory Usage and Memory Used maintained their patterns.
I then decided to trigger a long-running task: updating Windows. This VM had not been patched for probably over 2 years, so there were a lot of patches, and the entire patching run took several hours. You can see the impact on the counters. The 2 counters went their separate ways, and you can see that they do not match. You get both under-reporting and over-reporting scenarios.
I hope this article has been useful. I do encourage you to install the EP agent. If you are using the Advanced edition or higher, it's already licensed.
This blog article covers Windows, not Linux, because I do not want to make assumptions until I have tested them. In the future, I hope to test Linux and share the results here.