eG Monitoring
 

Measures reported by WLJMSQueueTest

A JMS queue represents the point-to-point (PTP) messaging model, which enables one application to send a message to another. PTP messaging applications send and receive messages using named queues. A queue sender (producer) sends a message to a specific queue. A queue receiver (consumer) receives messages from a specific queue.
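The PTP flow described above can be pictured with a minimal sketch. This toy class is an in-memory stand-in, not the WebLogic JMS API (all names are illustrative); it shows the key PTP property that each message sent to a queue is received by exactly one consumer:

```java
import java.util.ArrayDeque;
import java.util.Queue;

// Toy illustration of point-to-point semantics: each message placed on a
// named queue is consumed by exactly one receiver, in order.
public class PtpDemo {
    private final Queue<String> queue = new ArrayDeque<>();

    // A queue sender (producer) puts a message on the queue.
    public void send(String message) { queue.add(message); }

    // A queue receiver (consumer) takes the next message, or null if empty.
    public String receive() { return queue.poll(); }

    public static void main(String[] args) {
        PtpDemo q = new PtpDemo();
        q.send("order-1");
        q.send("order-2");
        System.out.println(q.receive()); // order-1: delivered exactly once
        System.out.println(q.receive()); // order-2
        System.out.println(q.receive()); // null: nothing left to deliver
    }
}
```

In real JMS code the queue would be a named destination looked up via JNDI, with a QueueSender and QueueReceiver in place of the two methods above.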

This test auto-discovers the queues on a WebLogic server, and monitors each queue for the size, number, and type of messages it holds, so that impending overloads and probable delivery bottlenecks can be proactively isolated and corrected.

The measures made by this test are as follows:

Measurement Description Measurement Unit Interpretation
Msgs_count Indicates the current number of messages in this queue. Number This count does not include the messages that are pending.
Msgs_pending_count Indicates the number of pending messages in this queue. Number A message is considered to be in the pending state when it is:
  • sent in a transaction but not yet committed;
  • received but not yet acknowledged;
  • received but not yet committed;
  • subject to a redelivery delay (WebLogic JMS 6.1 or later);
  • subject to a delivery time (WebLogic JMS 6.1 or later).

While momentary spikes in the number of pending messages in a queue are normal, if the number is allowed to grow steadily over time, it will drive up the total number of messages in the queue. Typically, the sum of the MessagesCurrentCount and MessagesPendingCount measures equals the total number of messages in the queue. If this sum equals, or is very close to, the Messages Maximum setting of the quota resource mapped to this queue, the queue has filled up, or is rapidly filling up, with messages and cannot handle any more. When this happens, JMS prevents further sends with a ResourceAllocationException. Furthermore, such quota failures force multiple producers to contend for space in the queue, thereby degrading application performance. To avoid this, you can do one or more of the following:

  • Increase the Messages Maximum setting of the quota resource mapped to the queue;
  • If a quota has not been configured for the queue, then increase the quota of the JMS server where the queue is deployed;
  • Regulate the flow of messages into the queue using one or more of the following configurations:
    • Blocking senders during quota conditions: The Send Timeout feature provides more control over message send operations by giving message producers the option of waiting a specified length of time until space becomes available on a destination.
    • Specifying a Blocking Send Policy on JMS Servers: The Blocking Send policies enable you to define whether the JMS server delivers smaller messages before larger ones when multiple message producers are competing for space on a destination that has exceeded its message quota.
    • Using the Flow Control feature: With the Flow Control feature, you can direct a JMS server or destination to slow down message producers when it determines that it is becoming overloaded. Specifically, when a JMS server or one of its destinations exceeds its specified byte or message threshold, it becomes armed and instructs producers to limit their message flow (messages per second). Producers then limit their production rate based on a set of flow control attributes configured for producers via the JMS connection factory. Starting at a specified flow maximum number of messages, a producer evaluates whether the server/destination is still armed at prescribed intervals (for example, every 10 seconds for 60 seconds). If the server/destination is still armed at each interval, the producer continues to step its rate down toward its prescribed flow minimum. As producers slow themselves down, the threshold condition gradually corrects itself until the server/destination is unarmed. At this point, a producer is allowed to increase its production rate, but not necessarily to the maximum possible rate. In fact, its message flow continues to be controlled (even though the server/destination is no longer armed) until it reaches its prescribed flow maximum, at which point it is no longer flow controlled.
    • By tuning the Messaging Performance Preference: The Messaging Performance Preference tuning option on JMS destinations enables you to control how long a destination should wait (if at all) before creating full batches of available messages for delivery to consumers. At the minimum value, batching is disabled. Tuning above the default value increases the amount of time a destination is willing to wait before batching available messages. The maximum message count of a full batch is controlled by the JMS connection factory’s Messages Maximum per Session setting. It may take some experimentation to find out which value works best for your system. For example, if you have a queue with many concurrent message consumers, by selecting the Administration Console’s Do Not Batch Messages value (or specifying “0” on the DestinationBean MBean), the queue will make every effort to promptly push messages out to its consumers as soon as they are available. Conversely, if you have a queue with only one message consumer that does not require fast response times, by selecting the console’s High Waiting Threshold for Message Batching value (or specifying “100” on the DestinationBean MBean), you can ensure that the queue only pushes messages to that consumer in batches.
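The quota arithmetic described above reduces to a simple check: compare MessagesCurrentCount plus MessagesPendingCount against the Messages Maximum quota. A minimal sketch follows; the class name, method, and the 90% alert threshold are illustrative assumptions, not part of the eG test or the WebLogic API:

```java
// Sketch of the quota check described above: the queue is considered full
// (or rapidly filling up) when current + pending messages approach the
// Messages Maximum setting of the quota resource mapped to the queue.
public class QueueQuotaCheck {
    // Returns true when the total message count has reached the given
    // fraction of the quota (e.g. 0.9 = within 90% of Messages Maximum).
    public static boolean nearQuota(long current, long pending,
                                    long messagesMaximum, double threshold) {
        long total = current + pending; // total messages in the queue
        return total >= messagesMaximum * threshold;
    }

    public static void main(String[] args) {
        // 850 current + 120 pending = 970 of a 1000-message quota -> alert
        System.out.println(nearQuota(850, 120, 1000, 0.9)); // true
        // 200 current + 50 pending = 250 of 1000 -> healthy
        System.out.println(nearQuota(200, 50, 1000, 0.9));  // false
    }
}
```

The same check applies to the byte-based measures further below, with BytesCurrentCount, BytesPendingCount, and Bytes Maximum substituted.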
Bytes_count Indicates the current size, in bytes, of the messages stored in the queue destination. Number This count does not include pending bytes.
Bytes_pending_count Indicates the current size, in bytes, of the pending messages stored in the queue destination. Number While momentary spikes in the size of pending messages in a queue are acceptable, if the size is allowed to grow steadily over time, it will drive up the total size of all messages in the queue. Typically, the sum of the BytesCurrentCount and BytesPendingCount measures indicates the total size of all messages in the queue. If this sum equals, or is very close to, the Bytes Maximum setting of the quota resource mapped to this queue, the queue has filled up, or is rapidly filling up, with messages and cannot handle any more. When this happens, JMS prevents further sends with a ResourceAllocationException. Furthermore, such quota failures force multiple producers to contend for space in the queue, thereby degrading application performance. To avoid this, you can do one or more of the following:
  • Increase the Bytes Maximum setting of the quota resource mapped to the queue;
  • If a quota has not been configured for the queue, then increase the quota of the JMS server where the queue is deployed;
  • Regulate the flow of messages into the queue using one or more of the following configurations:
    • Blocking senders during quota conditions: The Send Timeout feature provides more control over message send operations by giving message producers the option of waiting a specified length of time until space becomes available on a destination.
    • Specifying a Blocking Send Policy on JMS Servers: The Blocking Send policies enable you to define whether the JMS server delivers smaller messages before larger ones when multiple message producers are competing for space on a destination that has exceeded its message quota.
    • Using the Flow Control feature: With the Flow Control feature, you can direct a JMS server or destination to slow down message producers when it determines that it is becoming overloaded. Specifically, when a JMS server or one of its destinations exceeds its specified byte or message threshold, it becomes armed and instructs producers to limit their message flow (messages per second). Producers then limit their production rate based on a set of flow control attributes configured for producers via the JMS connection factory. Starting at a specified flow maximum number of messages, a producer evaluates whether the server/destination is still armed at prescribed intervals (for example, every 10 seconds for 60 seconds). If the server/destination is still armed at each interval, the producer continues to step its rate down toward its prescribed flow minimum. As producers slow themselves down, the threshold condition gradually corrects itself until the server/destination is unarmed. At this point, a producer is allowed to increase its production rate, but not necessarily to the maximum possible rate. In fact, its message flow continues to be controlled (even though the server/destination is no longer armed) until it reaches its prescribed flow maximum, at which point it is no longer flow controlled.
    • By tuning the MessagesMaximum setting: WebLogic JMS pipelines messages that are delivered to asynchronous consumers (also known as message listeners) or prefetch-enabled synchronous consumers. The message backlog (the size of the pipeline) between the JMS server and the client is tunable via the MessagesMaximum setting on the connection factory. In some circumstances, tuning this setting can improve performance dramatically, such as when the JMS application defers acknowledgements or commits. In this case, BEA suggests setting the MessagesMaximum value to: 2 * (ack or commit interval) + 1. For example, if the JMS application acknowledges 50 messages at a time, set the MessagesMaximum value to 101. You may also need to configure WebLogic clients, in addition to the WebLogic Server instance, when sending and receiving large messages.
    • By compressing messages: You may improve the performance of sending large messages traveling across JVM boundaries and help conserve disk space by specifying the automatic compression of any messages that exceed a user-specified threshold size. Message compression can help reduce network bottlenecks by automatically reducing the size of messages sent across network wires. Compressing messages can also conserve disk space when storing persistent messages in file stores or databases.
    • By paging out messages: With the message paging feature, JMS servers automatically attempt to free up virtual memory during peak message load periods. This feature can greatly benefit applications with large message spaces.
    • By tuning the Message Buffer Size: The Message Buffer Size option specifies the amount of memory that will be used to store message bodies in memory before they are paged out to disk. The default value of Message Buffer Size is approximately one-third of the maximum heap size for the JVM, or a maximum of 512 megabytes. The larger this parameter is set, the more memory JMS will consume when many messages are waiting on queues or topics. Once this threshold is crossed, JMS may write message bodies to the directory specified by the Paging Directory option in an effort to reduce memory usage below this threshold.
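The flow-control behavior described above (for both the message and byte quotas) can be pictured as a toy simulation. The class and the step arithmetic below are illustrative assumptions, not WebLogic's actual algorithm: while the server/destination is armed, the producer steps its permitted rate down toward the flow minimum; once unarmed, it ramps back up toward the flow maximum.

```java
// Toy simulation of JMS flow control: an "armed" server/destination makes
// producers step their send rate down toward the flow minimum; when it is
// unarmed, producers ramp back up toward the flow maximum.
public class FlowControlSim {
    public final int flowMaximum;  // ceiling, in messages per second
    public final int flowMinimum;  // floor while armed
    public final int flowSteps;    // number of adjustment intervals
    public int rate;               // current permitted production rate

    public FlowControlSim(int max, int min, int steps) {
        flowMaximum = max; flowMinimum = min; flowSteps = steps;
        rate = max; // producers start at the flow maximum
    }

    // Called once per flow interval; moves the rate one step up or down.
    public void tick(boolean armed) {
        int step = (flowMaximum - flowMinimum) / flowSteps;
        if (armed)  rate = Math.max(flowMinimum, rate - step);
        else        rate = Math.min(flowMaximum, rate + step);
    }

    public static void main(String[] args) {
        FlowControlSim p = new FlowControlSim(100, 10, 9); // step = 10
        for (int i = 0; i < 4; i++) p.tick(true);  // armed for 4 intervals
        System.out.println(p.rate); // 60: stepped down from 100
        p.tick(false);                             // unarmed: recover slowly
        p.tick(false);
        System.out.println(p.rate); // 80: still below the flow maximum
    }
}
```

Note how the rate recovers gradually after the condition clears, matching the text above: production remains flow-controlled until it climbs back to the flow maximum.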
Consumers_count Indicates the current number of consumers accessing the queue destination. Number  
Msgs_moved_count Indicates the number of messages that have been moved from this queue destination to another. Number  
Msgs_deleted_count Indicates the number of messages that have been deleted from this queue. Number While you can use a QueueBrowser on your JMS server to view and delete specific queue messages, some messages are automatically deleted by the server. For instance, one-way messages that exceed quota are silently deleted without immediately throwing exceptions back to the client.