| Agents Administration - Tests |
|---|
Default Parameters for TermGPUAppTest

GPU-accelerated computing is the use of a graphics processing unit (GPU) together with a CPU to accelerate scientific, analytics, engineering, consumer, and enterprise applications. It enhances application performance by offloading the compute-intensive portions of an application to the GPU, while the remainder of the code continues to run on the CPU.

In GPU-enabled virtual environments, if users of virtual applications complain of slowness when accessing graphics applications, administrators must be able to quickly determine the cause: is it because adequate GPU resources are not available to the users, or because one of the users accessing the applications on the host is consuming excessive GPU memory and processing resources? Accurate answers to these questions help administrators determine whether corrective action is required.
Based on the results of this analysis, measures can then be initiated to right-size the host and fine-tune its GPU configuration. This is exactly what the TermGPUAppTest helps you achieve! To promote better utilization of resources, you can track the GPU usage rates of your instances for each application on the target RDS server. Once you know the GPU usage rates, you can perform tasks such as setting up managed instance groups that autoscale resources based on need. This page lists the default parameters that need to be configured for the TermGPUAppTest.
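The kind of per-process GPU accounting this test reports can be approximated on an NVIDIA host with `nvidia-smi`'s compute-apps query. The sketch below only parses the CSV that such a query emits; the helper names and the sample data are illustrative and are not part of the eG agent:

```python
import csv
import io

def parse_gpu_apps(csv_text):
    """Parse CSV as emitted by:
    nvidia-smi --query-compute-apps=pid,process_name,used_memory --format=csv,noheader,nounits
    into a list of {pid, process_name, used_memory_mib} dicts."""
    apps = []
    for row in csv.reader(io.StringIO(csv_text)):
        if not row or not row[0].strip():
            continue  # skip blank lines
        pid, name, mem = (field.strip() for field in row)
        apps.append({
            "pid": int(pid),
            "process_name": name,
            "used_memory_mib": int(mem),
        })
    return apps

def top_consumer(apps):
    """Return the process using the most GPU memory, or None if empty."""
    return max(apps, key=lambda a: a["used_memory_mib"], default=None)

if __name__ == "__main__":
    # Sample output from a hypothetical host running two GPU workloads.
    sample = "1234, /usr/bin/blender, 512\n5678, python3, 2048\n"
    apps = parse_gpu_apps(sample)
    print(top_consumer(apps)["process_name"])
```

A monitoring agent would run such a query periodically and aggregate the per-process figures by application and by user to answer the "who is consuming the GPU?" question described above.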
When changing the default configuration of tests, values containing "$" indicate variables that the eG system replaces according to the specific server being managed - for instance, $hostName is the host name/nickname of the target host, and $port is the port number of the server being monitored. For example, for a server xyz:80, $hostName will automatically be changed to "xyz" and $port to "80" by the eG manager when the test is configured.
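The substitution described above behaves like simple placeholder replacement over each parameter value. The following is a minimal sketch of that behavior, assuming a dictionary of default parameter values; it is illustrative only, not the eG manager's actual implementation:

```python
def expand_test_parameters(params, host_name, port):
    """Replace the $hostName and $port placeholders in a test's
    default parameter values with the managed server's details."""
    substitutions = {"$hostName": host_name, "$port": str(port)}
    expanded = {}
    for key, value in params.items():
        for placeholder, actual in substitutions.items():
            value = value.replace(placeholder, actual)
        expanded[key] = value
    return expanded

if __name__ == "__main__":
    # Hypothetical defaults for a test monitoring server xyz:80.
    defaults = {"Host": "$hostName", "Port": "$port"}
    print(expand_test_parameters(defaults, "xyz", 80))
```

For the server xyz:80, this yields `{"Host": "xyz", "Port": "80"}`, mirroring the substitution the eG manager performs when the test is configured.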