Measures reported by SPBrowseAnalyticTest
Different users use different browsers to access and browse the web sites and web applications created on SharePoint, and user experience with a web site/application can vary with the browser being used. Using an obsolete or unsupported browser can cause users to see errors or serious performance degradation when accessing web sites or mission-critical web applications. This in turn can delay critical business operations, impair user productivity, and cause enterprises to incur heavy penalties, mounting costs, and significant losses. Administrators therefore need to identify which browsers their users are using, determine whether or not user experience changes with the browser, and in the process isolate those browsers that could be delivering a sub-par experience to their users.
This is where the SPBrowseAnalyticTest test helps. This test queries the SharePoint usage database at configured intervals and collects the browser usage metrics stored therein. For each browser used, the test then reports the average time taken by that browser to load pages. In the process, the test points administrators to slow browsers and also leads them to the probable source of the slowness: is it owing to a latent web front-end? Is it because of slow service calls? Or is it due to inefficient queries to the backend database?
The test also captures HTTP errors that occurred when using each browser, thus enabling administrators to quickly detect browser-related issues and rapidly fix them before user experience is impacted.
This way, the SPBrowseAnalyticTest test enables administrators to identify problematic browsers, helps them take steps to enhance the experience of users on such browsers, or at least conclude which browsers are not ideal for use with which web sites/web applications.
Note that this test will run only if a SharePoint Usage and Health Service application is created and is configured to collect usage and health data. To know how to create and configure this application, click here.
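The kind of per-browser aggregation this test performs can be sketched as follows. The record layout below is purely hypothetical: the real SharePoint usage database stores similar fields (browser, user, page-load duration) in its own request-usage tables, and its actual schema differs.

```python
from collections import defaultdict

# Hypothetical usage-log records; the actual SharePoint usage
# database schema differs - this layout is for illustration only.
records = [
    {"browser": "Edge", "user": "alice", "duration_secs": 0.8},
    {"browser": "Edge", "user": "bob",   "duration_secs": 1.2},
    {"browser": "IE8",  "user": "carol", "duration_secs": 6.5},
    {"browser": "IE8",  "user": "carol", "duration_secs": 7.1},
]

def summarize_by_browser(records):
    """Group usage records by browser and compute two of the measures
    this test reports: unique users and average page-load time."""
    users = defaultdict(set)
    durations = defaultdict(list)
    for rec in records:
        users[rec["browser"]].add(rec["user"])
        durations[rec["browser"]].append(rec["duration_secs"])
    return {
        browser: {
            "unique_users": len(users[browser]),
            "avg_duration_secs": sum(d) / len(d),
        }
        for browser, d in durations.items()
    }

summary = summarize_by_browser(records)
print(summary["IE8"]["avg_duration_secs"])  # 6.8
```

Comparing the resulting per-browser averages is exactly how a slow browser (here, the hypothetical "IE8" entry) stands out against the others.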
Output of the test : One set of results for each browser used to access the SharePoint web applications
The measures made by this test are as follows:
| Measurement |
Description |
Measurement Unit |
Interpretation |
| Unique_users |
Indicates the number of unique users of this browser. |
Number |
Compare the value of this measure across browsers to identify the most popular one.
The detailed diagnosis of this measure reveals the names of the unique users and the number of requests each user made using this browser. From this, you can identify those users who are actively using the browser. |
| Unique_visitors |
Indicates the number of unique visitors using this browser. |
Number |
SharePoint authenticated users and anonymous users (using IP address) are counted as visitors.
You can use the detailed diagnosis of this measure to know who the unique visitors using this browser are and the number of requests each visitor made using it. This way, you can identify the visitor who uses the browser most frequently. |
| Unique_destinations |
Indicates the number of unique destinations of this browser. |
Number |
To know the most popular destination URLs, use the detailed diagnosis of this measure. Here, you will find the top-10 destinations in terms of the number of hits. |
| Unique_referrers |
Indicates the number of unique referrer URLs from which users of this browser navigated to the SharePoint web applications (the parent web application is treated as an external referrer as well). |
Number |
To know which referrer URL was responsible for the maximum hits, use the detailed diagnosis of this measure. The top-10 unique referrer URLs in terms of the number of hits they generated will be displayed as part of the detailed diagnostics. |
| Apdex_score |
Indicates the apdex score of this browser. |
Number |
Apdex (Application Performance Index) is an open standard developed by an alliance of companies. It defines a standard method for reporting and comparing the performance of software applications in computing. Its purpose is to convert measurements into insights about user satisfaction, by specifying a uniform way to analyze and report on the degree to which measured performance meets user expectations.
The Apdex method converts many measurements into one number on a uniform scale of 0-to-1 (0 = no users satisfied, 1 = all users satisfied). The resulting Apdex score is a numerical measure of user satisfaction with the performance of enterprise applications. This metric can be used to report on any source of end-user performance measurements for which a performance objective has been defined.
The Apdex formula is:
Apdex_t = (Satisfied Count + (Tolerating Count / 2)) / Total Samples
This is nothing but the number of satisfied samples plus half of the tolerating samples plus none of the frustrated samples, divided by all the samples.
A score of 1.0 means all responses were satisfactory. A score of 0.0 means none of the responses were satisfactory. Tolerating responses half satisfy a user. For example, if all responses are tolerating, then the Apdex score would be 0.50.
Ideally therefore, the value of this measure should be 1.0. A value less than 1.0 indicates that the user experience with the browser has been less than satisfactory. |
| Satisfied_page_views |
Indicates the number of times pages were viewed in this browser without any slowness. |
Number |
A page view is considered to be slow when the average time taken to load that page exceeds the SLOW TRANSACTION CUTOFF configured for this test. If this SLOW TRANSACTION CUTOFF is not exceeded, then the page view is deemed to be “satisfactory”.
Ideally, the value of this measure should be high.
If the value of this measure is much lower than the values of the Tolerating_page_views and Frustrated_page_views measures, it is a clear indicator that the experience of the users of this browser is below par. In such a case, use the detailed diagnosis of the Tolerating_page_views and Frustrated_page_views measures to know which pages are slow. |
| Tolerating_page_views |
Indicates the number of tolerating page views in this browser. |
Number |
If the Total_duration of a page exceeds the SLOW TRANSACTION CUTOFF configuration of this test, but is less than 4 times the SLOW TRANSACTION CUTOFF (i.e., < 4 * SLOW TRANSACTION CUTOFF), then such a page view is considered to be a Tolerating page view.
Ideally, the value of this measure should be 0. A value higher than that of the Satisfied_page_views measure is a cause for concern, as it implies that the overall user experience from this browser is less than satisfactory. To know which pages are contributing to this sub-par experience, use the detailed diagnosis of this measure. |
| Frustrated_page_views |
Indicates the number of frustrated page views in this browser. |
Number |
If the Total_duration of a page is over 4 times the SLOW TRANSACTION CUTOFF configuration of this test (i.e., > 4 * SLOW TRANSACTION CUTOFF), then such a page view is considered to be a Frustrated page view.
Ideally, the value of this measure should be 0. A value higher than that of the Satisfied_page_views measure is a cause for concern, as it implies that the experience of users using this browser has been less than satisfactory. To know which pages are contributing to this sub-par experience, use the detailed diagnosis of this measure. |
| Total_duration |
Indicates the average time taken by the pages to load completely in this browser. |
Secs |
This is the average interval between the time that a user initiates a request and the completion of the page load of the response in the user's browser.
If the value of this measure is consistently high for a browser, there is reason to worry, because it implies that the web application is slow in responding to requests. If this condition is allowed to persist, it can adversely impact user experience with the web application. You may want to check the Apdex_score in such circumstances to determine whether or not user experience has already been affected. Regardless, you should investigate the anomaly and quickly determine where the bottleneck lies - is it with the web front-end? Is it owing to slow service calls? Or is it because of inefficient queries to the backend? - so that the problem can be fixed before users even notice any slowness. For that, you may want to compare the values of the Duration, Service_calls_duration, CPU_duration, IIS_latency, and Query_duration measures of this test.
If the Duration measure is the highest, it indicates that the problem is with the web front-end - i.e., the front-end web server is taking too long to process requests. If the Query_duration measure registers the highest value, the bottleneck lies with the backend database, and inefficient queries could be slowing down page loads. On the other hand, if the Service_calls_duration measure is the highest, slow service calls are the likely cause of the delay. |
| Duration |
Indicates the average time in milliseconds it took for the web front end server to process the requests to this browser. |
Msecs |
If the Total_duration of a browser is abnormally high, then you can compare the value of this measure with that of the Service_calls_duration, CPU_duration, IIS_latency, and Query_duration measures of this test to know what exactly is delaying page loading - a slow front-end web server? inefficient queries to the backend database? or slow service calls? |
| Service_calls_duration |
Indicates the time taken by this browser to generate service calls. |
Msecs |
If the Total_duration of a browser is abnormally high, then you can compare the value of this measure with that of the Duration, CPU_duration, IIS_latency, and Query_duration measures of this test to know what exactly is delaying page loading - a slow front-end web server? inefficient queries to the backend database? or slow service calls? |
| IIS_latency |
Indicates the average time that requests to this browser spent in the front-end web server after being received, but before processing of the requests began. |
Msecs |
If the Total_duration of a browser is abnormally high, then you can compare the value of this measure with that of the Duration, CPU_duration, Service_calls_duration, and Query_duration measures of this test to know what exactly is delaying page loading - a slow front-end web server? inefficient queries to the backend database? or slow service calls? |
| CPU_duration |
Indicates the average time for which requests to this browser used the CPU. |
Msecs |
If the Total_duration of a browser is abnormally high, then you can compare the value of this measure with that of the Duration, IIS_latency, Service_calls_duration, and Query_duration measures of this test to know what exactly is delaying page loading - a slow front-end web server? inefficient queries to the backend database? or slow service calls? |
| SQL_logical_reads |
Indicates the total number of 8 kilobyte blocks that this browser read from storage on the back-end database server. |
Number |
|
| CPU_mega_cycles |
Indicates the average number of CPU mega cycles spent processing the requests to this browser in the client application on the front end web server. |
Number |
|
| Total_queries |
Indicates the total number of database queries generated by requests to this browser. |
Number |
|
| Query_duration |
Indicates the average time taken for all backend database queries generated by requests to this browser. |
Msecs |
If the Total_duration of a browser is abnormally high, then you can compare the value of this measure with that of the Duration, IIS_latency, Service_calls_duration, and CPU_duration measures of this test to know what exactly is delaying page loading - a slow front-end web server? inefficient queries to the backend database? or slow service calls? |
| Bytes_consumed |
Indicates the average bytes of data downloaded by requests to this browser. |
KB |
|
| GET_requests |
Indicates the number of GET requests to this browser. |
Number |
|
| POST_requests |
Indicates the number of POST requests to this browser. |
Number |
|
| OPTIONS_requests |
Indicates the number of OPTIONS requests to this browser. |
Number |
|
| Responses_300 |
Indicates the number of responses for requests to this browser that had a status code in the 300-399 range. |
Number |
300 responses could indicate page caching on the client browsers. Alternatively, 300 responses could also indicate redirection of requests. A sudden change in this value could indicate a problem condition. |
| Errors_400 |
Indicates the number of responses for requests to this browser that had a status code in the range 400-499. |
Number |
A high value indicates a number of missing/error pages.
Use the detailed diagnosis of this measure to know when each of the 400 errors occurred, which user experienced the error, when using what browser, from which machine. This information will greatly aid troubleshooting. |
| Errors_500 |
Indicates the number of responses for requests to this browser that had a status code in the range 500-599. |
Number |
Since responses with a status code in the range 500-599 indicate server-side processing errors, a high value reflects an error condition.
Use the detailed diagnosis of this measure to know when each of the 500 errors occurred, which user experienced the error, when using what browser, from which machine. This information will greatly aid troubleshooting. |
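The Apdex arithmetic and the satisfied/tolerating/frustrated classification described above can be sketched as follows. The cutoff value used here is an assumed example; in practice it comes from the SLOW TRANSACTION CUTOFF setting configured for this test.

```python
# Assumed cutoff for illustration (seconds); in practice this is the
# SLOW TRANSACTION CUTOFF configured for the test.
SLOW_TRANSACTION_CUTOFF = 2.0

def classify(load_time_secs, cutoff=SLOW_TRANSACTION_CUTOFF):
    """Bucket a page view the way the test does: satisfied if the load
    time does not exceed the cutoff, tolerating if it is under 4x the
    cutoff, frustrated beyond that."""
    if load_time_secs <= cutoff:
        return "satisfied"
    if load_time_secs < 4 * cutoff:
        return "tolerating"
    return "frustrated"

def apdex(load_times, cutoff=SLOW_TRANSACTION_CUTOFF):
    """Apdex_t = (satisfied + tolerating / 2) / total samples."""
    buckets = [classify(t, cutoff) for t in load_times]
    satisfied = buckets.count("satisfied")
    tolerating = buckets.count("tolerating")
    return (satisfied + tolerating / 2) / len(buckets)

# All-tolerating responses yield the 0.50 score mentioned above.
print(apdex([3.0, 5.0, 7.9]))  # 0.5
```

This also makes the measure relationships concrete: a low Apdex_score always goes hand in hand with high Tolerating_page_views or Frustrated_page_views counts relative to Satisfied_page_views.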