Measures reported by MongoTicket
Tickets are an internal representation used for thread management. WiredTiger uses tickets to control the number of read/write operations processed simultaneously by the storage engine. The default value is 128, which works well for most deployments, but the count can be fine-tuned to suit the workload of the MongoDB server.
If sufficient tickets are not available for processing read/write requests, concurrency drops and request processing slows down on the MongoDB server, thus impairing the user experience. To avoid this, administrators should periodically check ticket usage and detect a ticket crunch well before it impacts database performance or user experience. This can be achieved using the MongoTicket test. This test monitors how the MongoDB server uses the read and write tickets allotted to it, and warns administrators of a probable contention for processing power. This way, the test enables administrators to optimize the server's ticket configuration, so that concurrency is not compromised and critical processing resources are conserved.
Outputs of the test: One set of results for the MongoDB server monitored.
The measures made by this test are as follows:
| Measurement | Description | Measurement Unit | Interpretation |
| --- | --- | --- | --- |
| Total_read_tickets | Indicates the maximum number of read tickets the target server can use. | Number | |
| Used_read_tickets | Indicates the number of read tickets currently in use. | Number | If the value of this measure is equal to or close to the value of the Total_read_tickets measure, it is a cause for concern. |
| Available_read_tickets | Indicates the number of read tickets currently unused. | Number | If the value of this measure is equal to or close to 0, it is a cause for concern. |
| Read_ticket_usage | Indicates the percentage of read tickets currently in use. | Percent | A value close to 100% indicates that the server is about to run out of read tickets. Subsequent read requests will then be queued, waiting for tickets, and read request processing will slow down. A common cause is long-running read operations, which typically hold on to read tickets for long periods without releasing them, thus reducing concurrency. Terminating such read operations can help free tickets. Alternatively, you can increase the maximum number of read tickets the server can use by modifying the wiredTigerConcurrentReadTransactions setting. However, be careful when increasing it: if the number of simultaneous operations gets too high, you might run out of system resources (CPU in particular). Scaling horizontally by adding more shards can help to support high throughputs. |
| Total_write_tickets | Indicates the maximum number of write tickets the target server can use. | Number | |
| Used_write_tickets | Indicates the number of write tickets currently in use. | Number | If the value of this measure is equal to or close to the value of the Total_write_tickets measure, it is a cause for concern. |
| Available_write_tickets | Indicates the number of write tickets currently unused. | Number | If the value of this measure is equal to or close to 0, it is a cause for concern. |
| Write_ticket_usage | Indicates the percentage of write tickets currently in use. | Percent | A value close to 100% indicates that the server is about to run out of write tickets. Subsequent write requests will then be queued, waiting for tickets, and write request processing will slow down. A common cause is long-running write operations, which typically hold on to write tickets for long periods without releasing them, thus reducing concurrency. Terminating such write operations can help free tickets. Alternatively, you can increase the maximum number of write tickets the server can use by modifying the wiredTigerConcurrentWriteTransactions setting. However, be careful when increasing it: if the number of simultaneous operations gets too high, you might run out of system resources (CPU in particular). Scaling horizontally by adding more shards can help to support high throughputs. |
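The measures above are derived from the `wiredTiger.concurrentTransactions` section of MongoDB's `serverStatus` command output, whose `read` and `write` subsections report `out` (tickets in use), `available`, and `totalTickets` fields. The sketch below shows how the four read and write measures map onto those fields; the sample values are illustrative assumptions, since in practice you would obtain the section from `db.serverStatus()` in the mongo shell or via a driver:

```python
# Illustrative sample of the "wiredTiger.concurrentTransactions" section
# returned by db.serverStatus(). The values here are assumptions for the
# sketch, not output from a real server.
concurrent_transactions = {
    "read":  {"out": 119, "available": 9,   "totalTickets": 128},
    "write": {"out": 12,  "available": 116, "totalTickets": 128},
}

def ticket_usage(section):
    """Percentage of tickets in use -- the Read/Write_ticket_usage measure."""
    total = section["totalTickets"]
    if total == 0:
        return 0.0
    return round(100.0 * section["out"] / total, 2)

for kind in ("read", "write"):
    s = concurrent_transactions[kind]
    # total  -> Total_*_tickets, out -> Used_*_tickets,
    # available -> Available_*_tickets, usage -> *_ticket_usage
    print(f"{kind}: total={s['totalTickets']} used={s['out']} "
          f"available={s['available']} usage={ticket_usage(s)}%")
```

With the sample read values above, usage is about 93%, which this test would flag as a near-exhaustion condition. In MongoDB versions where the parameters are runtime-settable, the ticket ceiling can be raised with, for example, `db.adminCommand({setParameter: 1, wiredTigerConcurrentReadTransactions: 256})`.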