eG Monitoring
 

Measures reported by CtxXcXAAppTest

This test reports statistics pertaining to the different applications executing on a Citrix XenDesktop Apps server and their usage by Citrix clients.

Note:

This test will report metrics only if the XenApp server being monitored uses the .Net framework v3.0 (or above).

The measures made by this test are as follows:

Each measure is listed below with its description, measurement unit, and interpretation.
Number_of_processes
Description: The number of instances of the published application currently executing on this Citrix XenDesktop Apps server.
Unit: Number
Interpretation: This value indicates whether too many or too few instances of an application are executing on the host. Use the detailed diagnosis of this measure to identify all the users executing this application; comparing these users helps you determine which user is consuming the most memory, CPU, etc.
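As an illustration of this kind of per-user comparison, the sketch below aggregates memory and CPU across all instances of one application for each user and picks the top memory consumer. The sample data and the `usage_by_user` helper are hypothetical, not part of eG Enterprise; real values would come from the detailed diagnosis.

```python
from collections import defaultdict

# Hypothetical per-instance samples, as a detailed diagnosis might list them:
# (user, memory_mb, cpu_percent) for each running instance of one application.
instances = [
    ("alice", 310.0, 4.2),
    ("bob",   145.5, 1.1),
    ("alice", 280.0, 3.7),
    ("carol",  95.0, 0.6),
]

def usage_by_user(samples):
    """Aggregate instance count, memory, and CPU per user."""
    totals = defaultdict(lambda: {"instances": 0, "memory_mb": 0.0, "cpu_pct": 0.0})
    for user, mem, cpu in samples:
        totals[user]["instances"] += 1
        totals[user]["memory_mb"] += mem
        totals[user]["cpu_pct"] += cpu
    return dict(totals)

totals = usage_by_user(instances)
top_memory_user = max(totals, key=lambda u: totals[u]["memory_mb"])
print(top_memory_user)  # alice
```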
Cpu_util
Description: Indicates the percentage of CPU used by the published application.
Unit: Percent
Interpretation: A very high value could indicate that the application is consuming excessive CPU resources.

Memory_util
Description: The ratio of the application's resident set size to the physical memory of the host system, expressed as a percentage.
Unit: Percent
Interpretation: A sudden increase in memory utilization for an application may be indicative of a memory leak in the application.
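The ratio described above is straightforward to compute from the resident set size and the host's physical memory; a minimal sketch (the `memory_util_percent` helper is illustrative, not an eG Enterprise API):

```python
def memory_util_percent(rss_bytes, total_physical_bytes):
    """Resident set size as a percentage of host physical memory."""
    return 100.0 * rss_bytes / total_physical_bytes

# A 512 MB resident set on an 8 GB host:
print(memory_util_percent(512 * 2**20, 8 * 2**30))  # 6.25
```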
Handle_count
Description: Indicates the number of handles opened by this application.
Unit: Number
Interpretation: An increasing trend in this measure is indicative of a memory leak in the application.

No_of_threads
Description: Indicates the number of threads used by the application.
Unit: Number

IO_data_rate
Description: Indicates the rate at which this application reads and writes bytes in I/O operations.
Unit: KBytes/Sec
Interpretation: This value counts all I/O activity generated by each instance of the application, including file, network, and device I/Os.

IO_data_oper_rate
Description: Indicates the rate at which this application issues read and write operations to file, network, and device I/O.
Unit: Operations/Sec

IO_read_data_rate
Description: Indicates the rate at which this application reads data in file, network, and device I/O operations.
Unit: KBytes/Sec

IO_write_data_rate
Description: Indicates the rate at which this application writes data in file, network, and device I/O operations.
Unit: KBytes/Sec
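Rate measures like these are typically derived from two samples of a cumulative counter taken one measurement period apart. The sketch below shows that arithmetic under that assumption; the helper names are illustrative, not part of the product.

```python
def rate_per_second(prev_value, current_value, interval_seconds):
    """Convert two samples of a cumulative counter into a per-second rate."""
    return (current_value - prev_value) / interval_seconds

def io_data_rate_kbps(prev_bytes, current_bytes, interval_seconds):
    """Total I/O data rate in KBytes/Sec from cumulative byte counters."""
    return rate_per_second(prev_bytes, current_bytes, interval_seconds) / 1024.0

# 10 MB of I/O over a 60-second measurement period:
print(io_data_rate_kbps(0, 10 * 1024 * 1024, 60))  # ~170.67 KBytes/Sec
```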
Page_fault_rate
Description: Indicates the total rate at which page faults occur for the threads of all matching applications.
Unit: Faults/Sec
Interpretation: This measure is a good indicator of the load on the application.

A page fault occurs when a thread refers to a virtual memory page that is not in its working set in main memory. It may not cause the page to be fetched from disk if the page is on the standby list (and hence already in main memory), or if it is in use by another application with which the page is shared.

Memory_used
Description: Indicates the current size of the working set of this application.
Unit: MB
Interpretation: The working set is the set of memory pages touched recently by the threads in a process/application. If free memory on the server is above a threshold, pages are left in the working set of an application even if they are not in use. When free memory falls below a threshold, pages are trimmed from working sets; if they are then needed, they will be soft-faulted back into the working set before leaving main memory. Comparing the working set across applications indicates which application is taking up excessive memory.
Max_process_input_delay
Description: Indicates the maximum time lag detected between a user's input through any input device (e.g., mouse, keyboard) and the time at which this application detected that input.
Unit: Seconds
Interpretation: Poor application performance is one of the hardest problems for administrators to diagnose. Traditionally, diagnosis relied on collecting CPU, memory, disk I/O, and a few other metrics, but this data was often insufficient to isolate the root cause of poor application performance, since the variations measured by these metrics were large. In virtual environments where many users access an application remotely at the same time, users face difficulties whenever the user count increases: the more users accessing the application, the higher the CPU usage of the systems in the environment and the longer the user input delays, i.e., users are forced to wait longer to interact with the application. User input delay is measured as how long any user input (such as mouse or keyboard usage) stays in the queue before it is picked up by a process.

This measure and the Avg_process_input_delay measure capture such user input delays at the user session level. These insights enable administrators to accurately identify which user's Citrix experience is being marred by input delays.

These measures will be reported only on Windows Server 2019 (and above).

Ideally, the values of these measures should be 0 or very low.

Avg_process_input_delay
Description: Indicates the average time lag detected between a user's input through any input device (e.g., mouse, keyboard) and the time at which this application detected that input.
Unit: Seconds
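Both input delay measures summarize per-input queue latencies over a measurement period. The sketch below shows that summarization under the assumption that individual delay samples are available; the `input_delay_stats` helper and the sample values are illustrative only.

```python
def input_delay_stats(delay_samples_seconds):
    """Max and average input delay over a measurement period.

    Each sample is how long one user input (mouse, keyboard) waited in
    the queue before the application picked it up, in seconds.
    """
    if not delay_samples_seconds:
        return 0.0, 0.0
    return (max(delay_samples_seconds),
            sum(delay_samples_seconds) / len(delay_samples_seconds))

samples = [0.01, 0.02, 0.35, 0.04]  # hypothetical per-input delays
max_delay, avg_delay = input_delay_stats(samples)
print(max_delay, round(avg_delay, 3))  # 0.35 0.105
```

A single slow input (0.35 s here) dominates the maximum while barely moving the average, which is why both measures are reported.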