eG Monitoring
 

Measures reported by VmgGPUTest

GPU-accelerated computing is the use of a graphics processing unit (GPU) together with a CPU to accelerate scientific, analytics, engineering, consumer, and enterprise applications. GPU-accelerated computing enhances application performance by offloading compute-intensive portions of the application to the GPU, while the remainder of the code still runs on the CPU.

Imagine if you could access your GPU-accelerated applications anywhere, on any device, even those requiring intensive graphics power. NVIDIA GRID makes this possible. With NVIDIA GRID, a virtualized GPU designed specifically for virtualized server environments, data center managers can bring true PC graphics-rich experiences to users.

The NVIDIA GRID GPUs are hosted in enterprise data centers and allow users to run virtual desktops or virtual applications on multiple devices connected to the internet and across multiple operating systems, including PCs, notebooks, tablets and even smartphones. Users can utilize their online-connected devices to enjoy the GPU power remotely.

In VDI/virtualized server environments, NVIDIA GRID delivers GPU resources to VMs/virtual desktops using one of the two technologies described below:

  • Dedicated GPU or GPU Pass-through Technology: NVIDIA GPU pass-through technology lets you create a virtual workstation that gives users all the benefits of a dedicated graphics processor at their desk. By directly connecting a dedicated GPU to a virtual machine through the hypervisor, you can now allocate the full GPU and graphics memory capability to a single virtual machine without any resource compromise.
  • Shared GPU or Virtual GPU (vGPU) Technology: GRID vGPU is the industry's most advanced technology for sharing true GPU hardware acceleration between multiple virtual desktops - without compromising the graphics experience. With GRID vGPU technology, the graphics commands of each virtual machine are passed directly to the GPU, without translation by the hypervisor. This allows the GPU hardware to be time-sliced to deliver improved shared virtualized graphics performance. The GRID vGPU manager allows for management of user profiles: IT managers can assign the optimal amount of graphics memory and deliver a customized graphics profile to meet the specific needs of each user. Every virtual desktop has dedicated graphics memory, just as it would at the desk, so it always has the resources needed to launch and run applications.

In GPU-enabled VMware vSphere environments, if users of VMs/virtual desktops complain of slowness when accessing graphic applications, administrators must be able to instantly figure out what is causing the slowness – is it because adequate GPU resources are not allocated to the VMs/virtual desktops? Is it because of excessive utilization of GPU memory and processing resources by a few VMs/virtual desktops? Or is it because the GPU clock frequencies are improperly set for one or more GPUs used by a VM/virtual desktop? Accurate answers to these questions can help administrators determine whether or not:
  • The VMs/virtual desktops have been allocated enough vGPUs;
  • The vGPUs are configured with enough graphics memory;
  • The vGPU clock frequencies are rightly set;
  • The GPU technology in use - i.e., the GPU Pass-through technology or the Shared vGPU technology - is ideal for the graphics processing requirements of the environment.
Measures to right-size the host and fine-tune its GPU configuration can be initiated based on the results of this analysis. This is exactly what the EsxGPUStatsTest test helps administrators achieve!

For each vGPU assigned to each VM/virtual desktop on a VMware vSphere/ESX server, this test reports the memory usage on that vGPU, thus pointing to vGPUs where memory is over-used. The test also reveals how each of these VMs/virtual desktops uses each of the allocated vGPUs, thus enabling administrators to determine whether or not the allocated vGPUs are sufficient for the current and future processing requirements of the VMs/virtual desktops. In the process, the test also pinpoints those VMs/virtual desktops that are over-utilizing the graphical processors assigned to them. Also, to make sure that the assigned GPUs are functioning without a glitch, the power consumption, temperature, and clock frequency of each GPU are checked at periodic intervals, so that abnormalities can be quickly detected.
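Metrics of this kind are typically surfaced by NVIDIA's nvidia-smi utility. The sketch below is illustrative only: the field list and sample values are assumptions, not the exact query the eG test issues, though `utilization.gpu`, `memory.total`, `memory.used`, `power.draw` and `temperature.gpu` are real nvidia-smi query fields.

```python
import csv
import io

# Hypothetical subset of nvidia-smi query fields behind these measures.
# In practice the CSV text would come from a command such as:
#   nvidia-smi --query-gpu=utilization.gpu,utilization.memory,memory.total,memory.used,power.draw,temperature.gpu --format=csv,noheader,nounits
FIELDS = ["gpu_usage", "util_mem", "total_memory", "used_memory",
          "power_consumption", "temperature"]

def parse_gpu_stats(csv_text):
    """Parse one-row-per-GPU CSV output into a list of measure dicts."""
    stats = []
    for row in csv.reader(io.StringIO(csv_text)):
        values = [float(v.strip()) for v in row]
        stats.append(dict(zip(FIELDS, values)))
    return stats

# Sample output for a single GPU (values are illustrative only).
sample = "87, 42, 8192, 3072, 95.3, 71"
stats = parse_gpu_stats(sample)
print(stats[0]["gpu_usage"])      # 87.0
print(stats[0]["used_memory"])    # 3072.0
```

Each parsed dictionary corresponds to one row of the measures table below, one dictionary per GPU of the monitored VM.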

Note:

This test will report metrics only for those Windows VMs where NVWMI is installed. The steps for installing NVWMI and configuring the eG agent to use it are detailed in the Monitoring Xen Servers document.

The measures made by this test are as follows:

Measurement Description Measurement Unit Interpretation
Cooler_rate Indicates the device cooler rate of this GPU of this VM/virtual desktop, expressed as a percentage. Percent
GPU_usage Indicates the proportion of time over the past sample period during which one or more kernels were executing on this GPU of this VM/virtual desktop. Percent A value close to 100% indicates that the GPU of the VM/virtual desktop is busy processing graphic requests almost all the time.

In a Shared vGPU environment, a vGPU may be in use almost all the time if the VM/virtual desktop it is allocated to runs graphic-intensive applications. A resource-hungry VM/virtual desktop can impact the performance of other VMs/virtual desktops on the same server. If you find that only a single VM/virtual desktop has been consistently hogging the GPU resources, you may want to switch to the Dedicated GPU mode, so that excessive GPU usage by that VM/virtual desktop has no impact on the performance of other VMs/virtual desktops on that host.

If all GPUs assigned to a VM/virtual desktop are found to be busy most of the time, you may want to consider allocating more GPU resources to that VM/virtual desktop.
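The "consistently hogging" check described above can be sketched as a simple rule over successive GPU_usage samples. The threshold and window fraction below are illustrative assumptions, not eG defaults:

```python
def is_hogging(samples, threshold=90.0, min_fraction=0.8):
    """Return True if GPU_usage exceeded `threshold` percent in at
    least `min_fraction` of the collected samples (both thresholds
    are hypothetical, chosen only for this example)."""
    if not samples:
        return False
    busy = sum(1 for s in samples if s > threshold)
    return busy / len(samples) >= min_fraction

# A vGPU that was above 90% busy in 9 of 10 measurement periods:
print(is_hogging([95, 97, 91, 99, 92, 94, 96, 93, 98, 40]))  # True
print(is_hogging([30, 45, 95, 20, 10]))                      # False
```

A VM flagged this way is a candidate for the Dedicated GPU mode discussed above.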
Power_consumption Indicates the current power usage of this GPU allocated to this VM/virtual desktop. Watts A very high value is indicative of excessive power usage by the GPU.

Compare the value of this measure across GPUs to know which VM’s/virtual desktop's GPU is consuming power excessively.
Temperature Indicates the current temperature of this GPU allocated to this VM/virtual desktop. Celsius Ideally, the value of this measure should be low. A very high value is indicative of abnormal GPU temperature.

Compare the value of this measure across VMs/virtual desktops to identify the VM/virtual desktop for which the GPU temperature has soared since the last reading.

To reduce the heat output of the GPU, and consequently its temperature, you may consider underclocking. For instance, it is possible to set a GPU to run at lower clock rates when performing everyday tasks (e.g., internet browsing and word processing), thus allowing the card to operate at a lower temperature and with lower, quieter fan speeds.
Total_memory Indicates the total size of frame buffer memory of this GPU of this VM/virtual desktop. MiB Frame buffer memory refers to the memory used to hold pixel properties such as color, alpha, depth, stencil, mask, etc.
Used_memory Indicates the amount of frame buffer memory on-board this GPU that has been allocated to this VM/virtual desktop. MiB Frame buffer memory refers to the memory used to hold pixel properties such as color, alpha, depth, stencil, mask, etc.

Properties like the screen resolution, color level, and refresh speed of the frame buffer can impact graphics performance.

Also, if Error-correcting code (ECC) is enabled, the available frame buffer memory may be decreased by several percent. This is because ECC uses up memory to detect and correct the most common kinds of internal data corruption. Moreover, the driver may also reserve a small amount of memory for internal use, even without active work on the GPU; this too may impact frame buffer memory.

For optimal graphics performance, therefore, adequate frame buffer memory should be allocated to the VM/virtual desktop.
Available_memory Indicates the amount of frame buffer memory on-board this GPU that has not been allocated to this VM/virtual desktop. MiB
Virtual_memory Indicates the virtual memory of this GPU device of this VM/virtual desktop. MB
Util_mem Indicates the proportion of time over the past sample period during which global (device) memory was being read or written on this GPU of this VM/virtual desktop. Percent A value close to 100% is a cause for concern, as it indicates that the graphics memory on a GPU is almost always in use.

In a Shared vGPU environment, memory may be consumed all the time if one or more VMs/virtual desktops utilize the graphics memory excessively and constantly. If you find that only a single VM/virtual desktop has been consistently hogging the graphic memory resources, you may want to switch to the Dedicated GPU mode, so that excessive memory usage by that VM/virtual desktop has no impact on the performance of other VMs/virtual desktops on that host.

If the value of this measure is high almost all the time for most of the GPUs, it could mean that the VM/virtual desktop is not sized with adequate graphics memory.
BAR_tot Indicates the total size of the BAR1 memory of this GPU allocated to this VM/virtual desktop. MiB BAR1 is used to map the frame buffer (device memory) so that it can be directly accessed by the CPU or by 3rd party devices (peer-to-peer on the PCIe bus).
BAR_used Indicates the amount of BAR1 memory on this GPU that is allocated to this VM/virtual desktop. MiB For a better user experience with graphic applications, enough BAR1 memory should be available to the VM/virtual desktop.
BAR_free Indicates the total size of BAR1 memory of this GPU that is still not allocated to this VM/virtual desktop. MiB
Pwr_mgmt Indicates whether or not power management is enabled for this GPU of this VM/virtual desktop.   Many NVIDIA graphics cards support multiple performance levels so that the server can save power when full graphics performance is not required.

The default Power Management Mode of the graphics card is Adaptive. In this mode, the graphics card monitors GPU usage and seamlessly switches between modes based on the performance demands of the application. This allows the GPU to always use the minimum amount of power required to run a given application. This mode is recommended by NVIDIA for best overall balance of power and performance. If the power management mode is set to Adaptive, the value of this measure will be Supported.

Alternatively, you can set the Power Management Mode to Maximum Performance. This mode allows users to maintain the card at its maximum performance level when 3D applications are running regardless of GPU usage. If the power management mode of a GPU is Maximum Performance, then the value of this measure will be Maximum.

The numeric values that correspond to these measure values are discussed in the table below:

Measure Value Numeric Value
Supported 1
Maximum 0

Note:

By default, this measure will report the Measure Values listed in the table above to indicate the power management status. In the graph of this measure however, the same is represented using the numeric equivalents only.
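The value-to-number translation described in this note can be sketched as a simple lookup; the dictionary below just encodes the table above:

```python
# Numeric equivalents of the Pwr_mgmt measure values, per the table above.
PWR_MGMT_VALUES = {"Supported": 1, "Maximum": 0}

def pwr_mgmt_numeric(measure_value):
    """Translate the reported measure value to the number shown in graphs."""
    return PWR_MGMT_VALUES[measure_value]

print(pwr_mgmt_numeric("Supported"))  # 1
print(pwr_mgmt_numeric("Maximum"))    # 0
```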
Pwr_limit Indicates the power limit configured for this GPU of this VM/virtual desktop. Watts This measure will report a value only if the value of the ‘Pwr_mgmt’ measure is ‘Supported’.

The power limit setting controls how much voltage a GPU can use when under load. It is not advisable to set the power limit at its maximum - i.e., the value of this measure should not be the same as the value of the Pwr_maxLimit measure - as it can cause the GPU to behave strangely under duress.
Pwr_dLimit Indicates the default power management algorithm's power ceiling for this GPU. Watts This measure will report a value only if the value of the ‘Pwr_mgmt’ measure is ‘Supported’.
Pwr_enfLimit Indicates the power management algorithm's power ceiling for this GPU of this VM/virtual desktop. Watts This measure will report a value only if the value of the ‘Pwr_mgmt’ measure is ‘Supported’.

The total board power draw is manipulated by the power management algorithm such that it stays under the value reported by this measure.
Pwr_minLimit Indicates the minimum value to which the power limit of this GPU of this VM/virtual desktop can be set. Watts This measure will report a value only if the value of the ‘Pwr_mgmt’ measure is ‘Supported’.
Pwr_maxLimit Indicates the maximum value to which the power limit of this GPU of this VM/virtual desktop can be set. Watts This measure will report a value only if the value of the ‘Pwr_mgmt’ measure is ‘Supported’.

If the value of this measure is the same as that of the Power limit measure, then the GPU may behave strangely.
Core_clock Indicates the current frequency of the graphics clock on this GPU of this VM/virtual desktop. MHz A GPU has many more cores than your average CPU, but these cores are much simpler and much smaller, so that many more actually fit on a small piece of silicon. These smaller, simpler cores go by different names depending upon the tasks they perform. Stream processors are cores that each execute a single thread at a slow rate; but since GPUs contain numerous stream processors, overall throughput is high. The streaming multiprocessor clock is how fast the stream processors run. The memory clock is how fast the memory on the card runs. The GPU core clock is the speed at which the GPU assigned to the VM/virtual desktop operates.

By correlating the frequencies of these clocks - i.e., the value of these measures - with the memory usage, power usage, and overall performance of the GPU, you can figure out if overclocking is required or not.

Overclocking is the process of forcing a GPU core/memory to run faster than its manufactured frequency. Overclocking can have both positive and negative effects on GPU performance. For instance, memory overclocking helps on cards with low memory bandwidth, and with games with a lot of post-processing/textures/filters like AA that are VRAM intensive. On the other hand, speeding up the operation frequency of a shader/streaming processor/memory clock, without properly analyzing its need and its effects, may increase its thermal output in a linear fashion. At the same time, boosting voltages will cause the generated heat to skyrocket. If improperly managed, these increases in temperature can cause permanent physical damage to the core/memory or even “heat death”.

Putting an adequate cooling system into place, adjusting the power provided to the GPU, monitoring your results with the right tools and doing the necessary research are all critical steps on the path to safe and successful overclocking.
Memory_clock Indicates the current memory clock frequency on this GPU of this VM/virtual desktop. MHz
Clk_sm Indicates the current frequency of the streaming multiprocessor clock on this GPU of this VM/virtual desktop. MHz
Frame_rate Indicates the rate at which frames are processed by this GPU of this VM/virtual desktop. Frames/Sec FPS is how fast your graphics card can output individual frames each second. It is the most time-tested and ideal measure of the performance of a GPU. The higher the value of this measure, the healthier the GPU.
Fan_speed Indicates the percent of maximum speed that this GPU's fan is currently intended to run at. Percent The value of this measure could range from 0 to 100%.

An abnormally high value for this measure could indicate a problem condition - e.g., a sudden surge in the temperature of the GPU that could cause the fan to spin faster.

Note that the reported speed is only the intended fan speed. If the fan is physically blocked and unable to spin, this output will not match the actual fan speed. Many parts do not report fan speeds because they rely on cooling via fans in the surrounding enclosure. By default the fan speed is increased or decreased automatically in response to changes in temperature.
Compute_proc Indicates the number of processes having a compute context on this GPU of this VM. Number Use the detailed diagnosis of this measure to know which processes are currently using the GPU. The process details provided as part of the detailed diagnosis include the PID of the process, the process name, and the GPU memory used by the process.

Note that the GPU memory usage of the processes will not be available in the detailed diagnosis, if the Windows platform on which XenApp operates is running in the WDDM mode. In this mode, the Windows KMD manages all the memory, and not the NVIDIA driver. Therefore, the NVIDIA SMI commands that the test uses to collect metrics will not be able to capture the GPU memory usage of the processes.
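Per-process details of this kind can be queried from nvidia-smi. The sketch below parses sample output of such a query; the sample process names and values are hypothetical, and the N/A handling reflects the WDDM limitation in the note above.

```python
import csv
import io

def parse_compute_procs(csv_text):
    """Parse per-process GPU details of the kind surfaced in the
    detailed diagnosis: PID, process name, and GPU memory used.
    In practice such text can be produced by a command like:
      nvidia-smi --query-compute-apps=pid,process_name,used_memory --format=csv,noheader,nounits
    On WDDM-mode Windows hosts, used_memory is reported as N/A."""
    procs = []
    for row in csv.reader(io.StringIO(csv_text)):
        pid, name, mem = [v.strip() for v in row]
        procs.append({
            "pid": int(pid),
            "name": name,
            "used_memory_mib": None if mem in ("N/A", "[N/A]") else int(mem),
        })
    return procs

# Illustrative sample: two processes, one with memory usage unavailable.
sample = "1024, render.exe, 512\n2048, encoder.exe, N/A"
procs = parse_compute_procs(sample)
print(len(procs))                   # 2  (the Compute_proc value)
print(procs[1]["used_memory_mib"])  # None
```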
Vol_sin_ecc_err Indicates the number of volatile single bit errors in this GPU of this VM/virtual desktop. Number Volatile error counters track the number of errors detected since the last driver load. Single bit ECC errors are automatically corrected by the hardware and do not result in data corruption.

Ideally, the value of this measure should be 0.
Vol_dou_ecc_err Indicates the total number of volatile double bit errors in this GPU of this VM/virtual desktop. Number Volatile error counters track the number of errors detected since the last driver load. Double bit errors are detected but not corrected.

Ideally, the value of this measure should be 0.
Agg_sin_ecc_err Indicates the total number of aggregate single bit errors in this GPU of this VM/virtual desktop. Number Aggregate error counts persist indefinitely and thus act as a lifetime counter. Single bit ECC errors are automatically corrected by the hardware and do not result in data corruption.

Ideally, the value of this measure should be 0.
Agg_dou_ecc_err Indicates the total number of aggregate double bit errors in this GPU of this VM/virtual desktop. Number Aggregate error counts persist indefinitely and thus act as a lifetime counter. Double bit errors are detected but not corrected.

Ideally, the value of this measure should be 0.
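A simple health check over the four ECC measures above can be sketched as follows. The counter dictionary is a hypothetical parsed form of these measures; nvidia-smi exposes matching query fields such as ecc.errors.corrected.volatile.total and ecc.errors.uncorrected.aggregate.total.

```python
def ecc_healthy(counters):
    """All four ECC measures should ideally be 0. Any non-zero
    double-bit (uncorrected) count is the more serious signal,
    since those errors are detected but not corrected."""
    single = counters["vol_sin_ecc_err"] + counters["agg_sin_ecc_err"]
    double = counters["vol_dou_ecc_err"] + counters["agg_dou_ecc_err"]
    return single == 0 and double == 0

print(ecc_healthy({"vol_sin_ecc_err": 0, "vol_dou_ecc_err": 0,
                   "agg_sin_ecc_err": 0, "agg_dou_ecc_err": 0}))  # True
print(ecc_healthy({"vol_sin_ecc_err": 2, "vol_dou_ecc_err": 0,
                   "agg_sin_ecc_err": 5, "agg_dou_ecc_err": 0}))  # False
```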
BAR1_mem_util Indicates the percentage of BAR1 memory on this GPU that is allocated to this VM/virtual desktop. Percent A value close to 100% is indicative of excessive usage of the BAR1 memory by a VM/virtual desktop. For best graphics performance, this value should be low. To ensure that, adequate BAR1 memory should be allocated to the VM.
FB_mem_util Indicates the percentage of frame buffer memory on-board this GPU that has been allocated to this VM/virtual desktop. Percent Ideally, the value of this measure should be low. A value close to 100% is indicative of excessive usage of frame buffer memory.

Properties like the screen resolution, color level, and refresh speed of the frame buffer can impact graphics performance.

Also, if Error-correcting code (ECC) is enabled, the frame buffer memory usage will increase by several percent. This is because ECC uses up memory to detect and correct the most common kinds of internal data corruption. Moreover, the driver may also reserve a small amount of memory for internal use, even without active work on the GPU; this too may impact frame buffer memory usage.

For optimal graphics performance, therefore, adequate frame buffer memory should be allocated to the VM/virtual desktop.
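FB_mem_util can be understood as the ratio of the Used_memory and Total_memory measures reported earlier. The formula below is an assumption consistent with those measures, not the documented eG computation:

```python
def fb_mem_util(used_mib, total_mib):
    """Frame buffer memory utilization as a percentage, derived from
    the Used_memory and Total_memory measures (assumed formula)."""
    if total_mib == 0:
        return 0.0
    return round(used_mib / total_mib * 100.0, 1)

# A VM using 3 GiB of a 4 GiB frame buffer:
print(fb_mem_util(3072, 4096))  # 75.0
```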
Mode Indicates the mode by which the GPU resources were delivered to the VMs.   The values that this measure can take and their corresponding numeric values are as follows:

Measure Value Numeric Values
Pass through 0
Shared 1
Unavailable (GPU card is not allocated to VM) 2

Note:

By default, this test reports the Measure Values listed in the table above to indicate the mode of GPU delivery. In the graph of this measure however, the same is represented using the numeric equivalents only.
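As with the Pwr_mgmt measure, the mode-to-number translation used in graphs can be sketched as a lookup that simply encodes the table above:

```python
# Numeric equivalents of the Mode measure values, per the table above.
MODE_VALUES = {"Pass through": 0, "Shared": 1, "Unavailable": 2}

def mode_numeric(measure_value):
    """Translate the GPU delivery mode to the number shown in graphs;
    'Unavailable' means no GPU card is allocated to the VM."""
    return MODE_VALUES[measure_value]

print(mode_numeric("Shared"))       # 1
print(mode_numeric("Unavailable"))  # 2
```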
Physical_gpu_util Indicates the proportion of time over the past sample period during which one or more kernels were executing on the physical GPU of this VM/virtual desktop. Percent This measure will report metrics only for VMs configured with a Tesla GPU card.

A value close to 100% indicates that the physical GPU is busy processing graphic requests from this VM almost all the time.

In a Shared vGPU environment, a vGPU may be in use almost all the time if the VM/virtual desktop it is allocated to runs graphic-intensive applications. A resource-hungry VM/virtual desktop can impact the performance of other VMs/virtual desktops on the same server. If you find that only a single VM/virtual desktop has been consistently hogging the GPU resources, you may want to switch to the Dedicated GPU mode, so that excessive GPU usage by that VM/virtual desktop has no impact on the performance of other VMs/virtual desktops on that host.

If all GPUs assigned to a VM/virtual desktop are found to be busy most of the time, you may want to consider allocating more GPU resources to that VM/virtual desktop.
Encoder Indicates the amount of the physical GPU of this VM/virtual desktop that is utilized for the encoding process. Percent These measures will report metrics only for VMs configured with a Tesla GPU card.

A value close to 100 is a cause for concern. By closely analyzing these measures, administrators can easily be alerted to situations where graphics processing is a bottleneck.
Decoder Indicates the amount of the physical GPU of this VM/virtual desktop that is utilized for the decoding process. Percent