eG Monitoring
 

Measures reported by SPUserAnalyticTest

Enterprises typically use SharePoint to create web sites and web applications. The success of the SharePoint platform therefore hinges on the level of user satisfaction with the web sites and applications created on that platform. The key to ensuring high user satisfaction lies in closely tracking user requests to the web sites/web applications on SharePoint, measuring the responsiveness of the web sites/web applications to those requests, instantly detecting poor responsiveness, and accurately isolating which user's experience is being impacted by this slowness, well before that user notices! This can be achieved using the SPUserAnalyticTest test!

This test queries the SharePoint usage database at configured intervals and collects the usage metrics stored therein - these include the web sites/web applications accessed, the count and names of users of each web site/web application, the browsers used for web site/web application access, the web pages requested, the time taken for the requested pages to load, where page views spent their time and how much, error responses returned, resources consumed, and many more. Using the query results, the test auto-discovers the users accessing each of the web sites/web applications configured for monitoring. Then, for each such user, the test reports the average time taken by the corresponding site/web application to load pages. In the process, the test points administrators to slow web sites/web applications, reveals the exact user who has suffered the most owing to this slowness, and also leads them to the probable source of the slowness - is it owing to a latent web front end? Is it because of slow service calls? Or is it due to inefficient queries to the backend database?

Sometimes, poor user experience can be attributed to HTTP errors. This is why this test instantly alerts administrators to HTTP error responses, thus ensuring their timely intervention and rapid resolution of the error conditions.
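The way response status codes map to the error-related measures reported further down this page can be sketched as follows. This is a minimal illustration of standard HTTP status-code classes, not the test's actual implementation:

```python
def status_bucket(status_code):
    """Map an HTTP response status code to the buckets this test reports on."""
    if 300 <= status_code <= 399:
        return "Responses_300"   # redirects, or page caching on client browsers
    if 400 <= status_code <= 499:
        return "Errors_400"      # missing/error pages (client-side errors)
    if 500 <= status_code <= 599:
        return "Errors_500"      # server-side processing errors
    return "other"               # e.g., 2xx success responses

print(status_bucket(404))  # Errors_400
print(status_bucket(503))  # Errors_500
```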

This way, the SPUserAnalyticTest test enables administrators to proactively detect users who are experiencing or who will potentially experience performance issues with a web site/web application, helps them promptly and accurately diagnose the source of the poor user experience, and thus ensures that they initiate measures to enhance user experience and pre-empt the damage that may be caused to revenue and reputation.

Note that this test will run only if a SharePoint Usage and Health Service application has been created and configured to collect usage and health data. To know how to create and configure this application, click here.
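The per-user aggregation described above can be illustrated with a small sketch. The record layout below (site, user, page_load_secs) is a simplified, hypothetical stand-in for the columns actually stored in the SharePoint usage database, not its real schema:

```python
from collections import defaultdict

# Hypothetical, simplified usage records; the real usage database stores
# many more columns (browser, referrer, status code, durations, etc.).
records = [
    {"site": "http://portal/hr", "user": "alice", "page_load_secs": 1.2},
    {"site": "http://portal/hr", "user": "alice", "page_load_secs": 2.8},
    {"site": "http://portal/hr", "user": "bob",   "page_load_secs": 0.9},
]

def average_load_time(records):
    """Group records by (site, user) and average the page load times,
    mirroring how the test reports an average load time per user per site."""
    totals = defaultdict(lambda: [0.0, 0])
    for r in records:
        key = (r["site"], r["user"])
        totals[key][0] += r["page_load_secs"]
        totals[key][1] += 1
    return {key: total / count for key, (total, count) in totals.items()}

# alice's average on http://portal/hr works out to (1.2 + 2.8) / 2 = 2.0 seconds
print(average_load_time(records))
```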

Output of the test : One set of results for each user accessing every SharePoint SITE configured for monitoring

First-level descriptor: Site URL

Second-level descriptor: User name

The measures made by this test are as follows:

Measurement Description Measurement Unit Interpretation
Unique_browsers Indicates the number of unique browsers used for accessing this site by this user. Number To know which browsers are commonly used to access this site, use the detailed diagnosis of this measure. Here, the unique browsers will be listed, and the number of hits to the web site from each browser will be displayed alongside, so that you can instantly identify the browser most widely used to access the web site.
Unique_visitors Indicates the number of unique sessions for this user on this web site. Number Compare the value of this measure across users to identify the user who has the maximum number of open sessions on the site, and is hence probably overloading the site.

The detailed diagnosis of this measure reveals the unique client IP addresses from which the user launched his/her sessions and the number of requests received from each IP address.
Unique_destinations Indicates the number of unique destinations for this site for this user. Number To know the most popular destination URLs for a user, use the detailed diagnosis of this measure. Here, you will find the top-10 destinations in terms of the number of hits.
Unique_referrers Indicates the number of unique URLs external to this site (the parent web application is treated as external as well), from where this user navigated to this site. Number To know which referrer URL was responsible for the maximum hits, use the detailed diagnosis of this measure. The top-10 unique referrer URLs in terms of the number of hits they generated will be displayed as part of the detailed diagnostics.
Apdex_score Indicates the apdex score of this user for this site. Number Apdex (Application Performance Index) is an open standard developed by an alliance of companies. It defines a standard method for reporting and comparing the performance of software applications in computing. Its purpose is to convert measurements into insights about user satisfaction, by specifying a uniform way to analyze and report on the degree to which measured performance meets user expectations.

The Apdex method converts many measurements into one number on a uniform scale of 0-to-1 (0 = no users satisfied, 1 = all users satisfied). The resulting Apdex score is a numerical measure of user satisfaction with the performance of enterprise applications. This metric can be used to report on any source of end-user performance measurements for which a performance objective has been defined.

The Apdex formula is:

Apdext = (Satisfied Count + (Tolerating Count / 2)) / Total Samples

This is nothing but the number of satisfied samples plus half of the tolerating samples plus none of the frustrated samples, divided by all the samples.

A score of 1.0 means all responses were satisfactory. A score of 0.0 means none of the responses were satisfactory. Tolerating responses half satisfy a user. For example, if all responses are tolerating, then the Apdex score would be 0.50.

Ideally therefore, the value of this measure should be 1.0. A value less than 1.0 indicates that this user's experience with the site has been less than satisfactory.
Satisfied_page_views Indicates the number of times pages were viewed in this web site by this user without any slowness. Number A page view is considered to be slow when the average time taken to load that page exceeds the SLOW TRANSACTION CUTOFF configured for this test. If this SLOW TRANSACTION CUTOFF is not exceeded, then the page view is deemed to be ‘satisfactory’.

Ideally, the value of this measure should be high.

If the value of this measure is much lower than the values of the Tolerating_page_views and Frustrated_page_views measures, it is a clear indicator that the experience of this user is below par. In such a case, use the detailed diagnosis of the Tolerating_page_views and Frustrated_page_views measures to know which pages are slow.
Tolerating_page_views Indicates the number of tolerating page views for this user in this web site. Number If the Total_duration of a page exceeds the SLOW TRANSACTION CUTOFF configuration of this test, but is less than 4 times the SLOW TRANSACTION CUTOFF (i.e., < 4 * SLOW TRANSACTION CUTOFF), then such a page view is considered to be a Tolerating page view.

Ideally, the value of this measure should be 0. A value higher than that of the Satisfied_page_views measure is a cause for concern, as it implies that this user's overall experience with the site is less than satisfactory. To know which pages are contributing to this sub-par experience, use the detailed diagnosis of this measure.
Frustrated_page_views Indicates the number of frustrated page views for this user in this web site. Number If the Total_duration of a page is over 4 times the SLOW TRANSACTION CUTOFF configuration of this test (i.e., > 4 * SLOW TRANSACTION CUTOFF), then such a page view is considered to be a Frustrated page view.

Ideally, the value of this measure should be 0. A value higher than that of the Satisfied_page_views measure is a cause for concern, as it implies that the experience of the user has been less than satisfactory. To know which pages are contributing to this sub-par experience, use the detailed diagnosis of this measure.
Total_duration Indicates the average time taken by the pages in this site that are requested by this user to load completely. Secs This is the average interval between the time that a user initiates a request and the completion of the page load of the response in the user's browser.

If the value of this measure is consistently high for a user, it implies a degraded user experience. You may want to check the Apdex_score in such circumstances to determine whether or not user experience has already been affected. Regardless, you should investigate the anomaly and quickly determine where the bottleneck lies - is it with the web front-end? Is it owing to slow service calls? Or is it because of inefficient queries to the backend? - so that the problem can be fixed before users even notice any slowness! For that, you may want to compare the values of the Duration, Service_calls_duration, CPU_duration, IIS_latency, and Query_duration measures of this test.
Duration Indicates the average time in milliseconds it took for the web front-end server to process the requests of this user to this web site. Msecs If the Total_duration of a user is abnormally high, then you can compare the value of this measure with that of the Service_calls_duration, CPU_duration, IIS_latency, and Query_duration measures of this test to know what exactly is delaying page loading - a slow front-end web server? inefficient queries to the backend database? or slow service calls?
Service_calls_duration Indicates the time taken by the requests of this user to this web site to generate service calls. Msecs If the Total_duration of a user is abnormally high, then you can compare the value of this measure with that of the Duration, CPU_duration, IIS_latency, and Query_duration measures of this test to know what exactly is delaying page loading - a slow front-end web server? inefficient queries to the backend database? or slow service calls?
IIS_latency Indicates the average time requests from this user spent in the front-end web server after they were received, but before SharePoint began processing them. Msecs If the Total_duration of a user is abnormally high, then you can compare the value of this measure with that of the Duration, CPU_duration, Service_calls_duration, and Query_duration measures of this test to know what exactly is delaying page loading - a slow front-end web server? inefficient queries to the backend database? or slow service calls?
CPU_duration Indicates the average time for which requests from this user to this site used the CPU. Msecs If the Total_duration of a user is abnormally high, then you can compare the value of this measure with that of the Duration, IIS_latency, Service_calls_duration, and Query_duration measures of this test to know what exactly is delaying page loading - a slow front-end web server? inefficient queries to the backend database? or slow service calls?
SQL_logical_reads Indicates the total number of 8 kilobyte blocks that requests from this user to this web site read from storage on the back-end database server. Number  
CPU_mega_cycles Indicates the average number of CPU megacycles spent on the front-end web server processing requests from this user to this web site. Number  
Total_queries Indicates the total number of database queries generated by requests from this user to this web site. Number  
Query_duration Indicates the average time taken for all backend database queries generated by requests from this user to this web site. Msecs If the Total_duration of a user is abnormally high, then you can compare the value of this measure with that of the Duration, IIS_latency, Service_calls_duration, and CPU_duration measures of this test to know what exactly is delaying page loading - a slow front-end web server? inefficient queries to the backend database? or slow service calls?
Bytes_consumed Indicates the average bytes of data downloaded by the requests of this user. KB  
GET_requests Indicates the number of GET requests from this user to this site. Number  
POST_requests Indicates the number of POST requests from this user to this site. Number  
OPTIONS_requests Indicates the number of OPTIONS requests from this user to this site. Number  
Responses_300 Indicates the number of responses for requests from this user that had a status code in the 300-399 range. Number Responses in the 300-399 range could indicate page caching on the client browsers; alternatively, they could indicate redirection of requests. A sudden change in this value could indicate a problem condition.
Errors_400 Indicates the number of responses for requests from this user that had a status code in the range 400-499. Number A high value indicates a number of missing/error pages.

Use the detailed diagnosis of this measure to know when each of the 400 errors occurred, which user experienced the error, using which browser, and from which machine. This information will greatly aid troubleshooting.
Errors_500 Indicates the number of responses for requests from this user that had a status code in the range 500-599. Number Since responses with a status code in the range 500-599 indicate server-side processing errors, a high value reflects an error condition.

Use the detailed diagnosis of this measure to know when each of the 500 errors occurred, which user experienced the error, using which browser, and from which machine. This information will greatly aid troubleshooting.
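The page-view classification and the Apdex arithmetic described in the table above can be sketched as follows. This is a minimal illustration, not the test's actual implementation: slow_transaction_cutoff stands in for the test's SLOW TRANSACTION CUTOFF setting, and the handling of values exactly at the boundaries is an assumption.

```python
def classify(load_time_secs, slow_transaction_cutoff):
    """Bucket a page view as described in the table above:
    satisfied  : load time does not exceed the cutoff
    tolerating : load time exceeds the cutoff but is under 4x the cutoff
    frustrated : load time is over 4x the cutoff
    (Boundary handling at exactly the cutoff / 4x the cutoff is assumed.)"""
    if load_time_secs <= slow_transaction_cutoff:
        return "satisfied"
    if load_time_secs < 4 * slow_transaction_cutoff:
        return "tolerating"
    return "frustrated"

def apdex(satisfied, tolerating, frustrated):
    """Apdext = (Satisfied Count + (Tolerating Count / 2)) / Total Samples."""
    total = satisfied + tolerating + frustrated
    if total == 0:
        return 1.0  # no samples: treated here as fully satisfied (an assumption)
    return (satisfied + tolerating / 2) / total

# Example: four page views (load times in seconds) against a 2.0-second cutoff.
views = [0.8, 1.5, 3.0, 9.0]
counts = {"satisfied": 0, "tolerating": 0, "frustrated": 0}
for v in views:
    counts[classify(v, slow_transaction_cutoff=2.0)] += 1

print(counts)  # {'satisfied': 2, 'tolerating': 1, 'frustrated': 1}
print(apdex(counts["satisfied"], counts["tolerating"], counts["frustrated"]))  # 0.625
```

Note that, as the table states, a set of samples that are all tolerating yields a score of exactly 0.5: apdex(0, n, 0) = (n / 2) / n.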