Default Parameters for VTGPUUserTest

GPU-accelerated computing is the use of a graphics processing unit (GPU) together with a CPU to accelerate scientific, analytics, engineering, consumer, and enterprise applications. GPU-accelerated computing enhances application performance by offloading compute-intensive portions of the application to the GPU, while the remainder of the code still runs on the CPU.

In GPU-enabled virtual environments, if users of virtual applications complain of slowness when accessing graphics applications, administrators must be able to quickly determine what is causing the slowness: is it because adequate GPU resources are not available to the users, or because of excessive utilization of GPU memory and processing resources by one of the users accessing the applications on the host? Accurate answers to these questions help administrators determine whether:

  • The host is sized with sufficient GPU resources;

  • The GPUs are configured with enough graphics memory;

Measures to right-size the host and fine-tune its GPU configuration can be initiated based on the results of this analysis. This is exactly what the VTGPUUserTest helps you achieve!

To help with better utilization of resources, you can track the GPU usage rates of your instances for each user who is currently accessing the applications on the host. Once you know the GPU usage rates, you can perform tasks such as setting up managed instance groups that can be used to autoscale resources based on need.

This page depicts the default parameters that need to be configured for the VTGPUUserTest.

  • The TEST PERIOD list box lets you specify how often this test should be executed. By default, this is 15 minutes.

  • By default, Auto is selected from the GPU VENDOR drop-down list, indicating that this test automatically discovers the vendor name of the GPU card installed on the target server and collects performance metrics. However, you can select NVIDIA from this list if an NVIDIA GPU card is installed in the target server. Choosing NVIDIA from this list enables the test to use nvidia-smi commands to collect performance metrics from the NVIDIA GPU card.

  • By default, the NVIDIA Home parameter is set to none, indicating that the eG agent automatically discovers the location at which nvidia-smi is installed for collecting the metrics of this test. If nvidia-smi is installed in a different location in your virtual environment, then indicate that location in the NVIDIA Home text box.

  • By default, the REPORT BY DOMAIN NAME flag is set to Yes. This implies that, by default, this test reports metrics for every domainname\username configured for it. This way, administrators can quickly determine which user logged in from which domain. If you want the test to report metrics for the username alone, set this flag to No.

  • The DD FREQUENCY refers to the frequency with which detailed diagnosis measures are generated for this test. The default is 1:1. This indicates that, by default, detailed measures are generated every time this test runs, and also every time the test detects a problem. You can modify this frequency if you so desire. Also, if you intend to disable the detailed diagnosis capability for this test, you can do so by specifying none against DD FREQUENCY.

  • Once the necessary values have been provided, clicking the UPDATE button registers the changes.
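The NVIDIA option above relies on nvidia-smi queries. As a rough illustration of the kind of data such a test can gather, the sketch below parses per-process GPU memory usage from the CSV output of a real nvidia-smi query (`nvidia-smi --query-compute-apps=pid,process_name,used_memory --format=csv,noheader,nounits`). The parsing logic and the sample output are illustrative assumptions, not the eG agent's actual implementation.

```python
# Sketch: parse per-process GPU memory usage from nvidia-smi CSV output.
# Assumes output of:
#   nvidia-smi --query-compute-apps=pid,process_name,used_memory \
#              --format=csv,noheader,nounits
# The sample text below is hypothetical, not captured from a real host.

import csv
import io


def parse_gpu_usage(output: str) -> dict:
    """Map each PID to its GPU memory usage in MiB."""
    usage = {}
    for row in csv.reader(io.StringIO(output)):
        if len(row) != 3:
            continue  # skip blank or malformed lines
        pid, _process_name, mem_mib = (field.strip() for field in row)
        usage[int(pid)] = int(mem_mib)
    return usage


if __name__ == "__main__":
    # Illustrative sample output (hypothetical PIDs and values).
    sample = "1234, /usr/bin/app1, 512\n5678, /usr/bin/app2, 1024\n"
    print(parse_gpu_usage(sample))  # {1234: 512, 5678: 1024}
```

In practice, an agent would correlate each PID with the session owner (for example, via OS process tables) to attribute GPU usage to individual users, as this test does.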

When changing the default configurations of tests, values with a “$” prefix indicate variables that are replaced by the eG system according to the specific server being managed. For instance, $hostName is the host name/nickname of the target host, and $port is the port number of the server being monitored. For example, for a server xyz:80, the eG manager automatically replaces $hostName with “xyz” and $port with “80” when configuring a test.
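The substitution described above can be sketched with Python's standard-library Template class, which uses the same "$" placeholder syntax. The helper function and its name are assumptions for illustration only; they do not reflect how the eG manager is actually implemented.

```python
# Sketch: "$" variable substitution in a test configuration value,
# mirroring the $hostName / $port example in the text. The function
# name and logic are illustrative assumptions.

from string import Template


def fill_test_config(value: str, host_name: str, port: str) -> str:
    """Replace $hostName and $port placeholders with server-specific values."""
    return Template(value).safe_substitute(hostName=host_name, port=port)


if __name__ == "__main__":
    # For a server xyz:80, $hostName -> "xyz" and $port -> "80".
    print(fill_test_config("$hostName:$port", "xyz", "80"))  # xyz:80
```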