
4.7. Queues

Queues are named entities within a Virtualhost that hold/buffer messages for later delivery to consumer applications.

Messages arrive on queues either from Exchanges or, when using the AMQP 1.0 protocol, directly from the producing application. For AMQP 0-8, 0-9, 0-9-1, and 0-10, an exchange is the only way to route a message into a queue.

Consumers subscribe to a queue in order to receive messages from it.

The Broker supports different queue types, each with different delivery semantics. Queues also have a range of other features, such as the ability to group messages together for delivery to a single consumer. These additional features are also described below.

4.7.1. Types

The Broker supports four different queue types, each with different delivery semantics.

  • Standard - a simple First-In-First-Out (FIFO) queue

  • Priority - delivery order depends on the priority of each message

  • Sorted - delivery order depends on the value of the sorting key property in each message

  • Last Value Queue - also known as an LVQ, retains only the last (newest) message received with a given LVQ key value

Standard

A simple First-In-First-Out (FIFO) queue.

Priority

In a priority queue, messages on the queue are delivered in an order determined by the JMS priority message header within the message. By default Qpid supports the 10 priority levels mandated by JMS, with priority value 0 as the lowest priority and 9 as the highest.
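The resulting delivery order can be sketched with a toy model (plain Python, not Qpid code):

```python
import heapq
import itertools

class ToyPriorityQueue:
    """Toy model of priority-queue delivery: highest JMS priority (0-9)
    first, FIFO among messages of equal priority. Not Qpid code."""

    def __init__(self):
        self._heap = []
        self._seq = itertools.count()  # tie-breaker preserving FIFO order

    def publish(self, body, priority=4):  # JMS default priority is 4
        # Negate the priority so that the highest priority pops first.
        heapq.heappush(self._heap, (-priority, next(self._seq), body))

    def deliver(self):
        return heapq.heappop(self._heap)[2]

q = ToyPriorityQueue()
q.publish("low", priority=1)
q.publish("default")            # priority 4
q.publish("urgent", priority=9)
q.publish("also-default")       # also priority 4: FIFO between the two
print([q.deliver() for _ in range(4)])
# → ['urgent', 'default', 'also-default', 'low']
```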

It is possible to reduce the effective number of priorities if desired.

JMS defines the default message priority as 4. Messages sent without a specified priority use this default.

Sorted Queues

Sorted queues allow the message delivery order to be determined by the value of an arbitrary JMS message property. Sort order is alpha-numeric, and the property value must be of type java.lang.String.

Messages sent to a sorted queue without the specified JMS message property will be put at the head of the queue.

Last Value Queues (LVQ)

LVQs (or conflation queues) are special queues that automatically discard any message when a newer message arrives with the same key value. The key is specified by an arbitrary JMS message property.

An example of an LVQ might be where a queue represents prices on a stock exchange: when you first consume from the queue you get the latest quote for each stock, and then as new prices come in you are sent only these updates.

Like other queues, LVQs can either be browsed or consumed from. When browsing an individual subscriber does not remove the message from the queue when receiving it. This allows for many subscriptions to browse the same LVQ (i.e. you do not need to create and bind a separate LVQ for each subscriber who wishes to receive the contents of the LVQ).

Messages sent to an LVQ without the specified property will be delivered as normal and will never be "replaced".
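The conflation behaviour can be sketched with a toy model (plain Python, not Qpid code; the "stock" key property is illustrative):

```python
from collections import OrderedDict

class ToyLVQ:
    """Toy model of Last Value Queue conflation keyed on a message
    property: a newer message with the same key value replaces the
    older one. Not Qpid code."""

    def __init__(self, key_property):
        self.key_property = key_property
        self._messages = OrderedDict()  # key value -> latest message body

    def publish(self, body, properties):
        key = properties.get(self.key_property)
        if key is None:
            # Messages without the key property are delivered as normal
            # and never replaced - modelled here with a unique key.
            key = object()
        self._messages[key] = body

    def browse(self):
        return list(self._messages.values())

lvq = ToyLVQ(key_property="stock")
lvq.publish("IBM@100", {"stock": "IBM"})
lvq.publish("MSFT@50", {"stock": "MSFT"})
lvq.publish("IBM@101", {"stock": "IBM"})   # replaces IBM@100
print(lvq.browse())  # → ['IBM@101', 'MSFT@50']
```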

4.7.2. Message Grouping

The broker allows messaging applications to classify a set of related messages as belonging to a group. This allows a message producer to indicate to the consumer that a group of messages should be considered a single logical operation with respect to the application.

The broker can use this group identification to enforce policies controlling how messages from a given group can be distributed to consumers. For instance, the broker can be configured to guarantee all the messages from a particular group are processed in order across multiple consumers.

For example, assume we have a shopping application that manages items in a virtual shopping cart. A user may add an item to their shopping cart, then change their mind and remove it. If the application sends an add message to the broker, immediately followed by a remove message, they will be queued in the proper order - add, followed by remove.

However, if there are multiple consumers, it is possible that once one consumer acquires the add message, a different consumer may acquire the remove message. The two messages would then be processed in parallel, which could result in a "race" where the remove operation is incorrectly performed before the add operation.

Grouping Messages

In order to group messages, JMS applications can set the JMS standard header JMSXGroupId to specify the group identifier when publishing messages.

Alternatively, the application may designate a particular message header as containing a message's group identifier. The group identifier stored in that header field would be a string value set by the message producer. Messages from the same group would have the same group identifier value. The key that identifies the header must also be known to the message consumers. This allows the consumers to determine a message's assigned group.

The Role of the Broker in Message Grouping

The broker will apply the following processing on each grouped message:

  • Enqueue a received message on the destination queue.

  • Determine the message's group by examining the message's group identifier header.

  • Enforce consumption ordering among messages belonging to the same group. Consumption ordering means one of two things depending on how the queue has been configured.

    • In default mode, a group gets assigned to a single consumer for the lifetime of that consumer, and the broker will pass all subsequent messages in the group to that consumer.

    • In 'shared groups' mode (which gives the same behaviour as the Qpid C++ Broker) the broker enforces a looser guarantee: all of a group's currently unacknowledged messages are sent to the same consumer, but the consumer that receives the group's messages may change over time, even if the set of consumers does not. This means that only one consumer can be processing messages from a particular group at any given time; however, if the consumer acknowledges all of its acquired messages, the broker may pass the next pending message in that group to a different consumer.

The absence of a value in the designated group header field of a message is treated as follows:

  • In default mode, failure for a message to specify a group is treated as a desire for the message not to be grouped at all. Such messages will be distributed to any available consumer, without the ordering guarantees imposed by grouping.

  • In 'shared groups' mode (which gives the same behaviour as the Qpid C++ Broker) the broker assigns messages without a group value to a 'default group'. All such "unidentified" messages are therefore considered by the broker to be part of the same group, which is handled like any other group. The name of this default group is "", although it can be customised as detailed below.

Note that message grouping has no effect on queue browsers.

Note well that distinct message groups do not block each other from delivery. For example, assume a queue contains messages from two different message groups - say group "A" and group "B" - enqueued such that "A"'s messages are in front of "B"'s. If the first message of group "A" is in the process of being consumed by a client, then the remaining "A" messages are blocked, but the messages of group "B" are available for consumption by other consumers - even though group "B" is "behind" group "A" in the queue.
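The default-mode behaviour, including the blocking of one group while another flows, can be sketched with a toy dispatcher (plain Python, not Qpid code; consumer names are illustrative):

```python
class ToyGroupDispatcher:
    """Toy model of default-mode message grouping: a group is pinned
    to the first consumer that receives one of its messages, while
    other groups remain deliverable to any free consumer. Not Qpid code."""

    def __init__(self):
        self._assignments = {}  # group id -> consumer name

    def dispatch(self, group, available_consumers):
        owner = self._assignments.get(group)
        if owner is not None:
            # Group already pinned: only its owner may take the message.
            return owner if owner in available_consumers else None
        if available_consumers:
            owner = available_consumers[0]
            self._assignments[group] = owner
            return owner
        return None

d = ToyGroupDispatcher()
print(d.dispatch("A", ["c1", "c2"]))  # → c1 (group A pinned to c1)
print(d.dispatch("B", ["c2"]))        # → c2 (group B pinned to c2)
print(d.dispatch("A", ["c2"]))        # → None (A blocked while c1 is busy)
print(d.dispatch("B", ["c2"]))        # → c2 (group B still flows)
```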

4.7.3. Forcing all consumers to be non-destructive

When a consumer attaches to a queue, the normal behaviour is that messages sent to that consumer are acquired exclusively by it, and when the consumer acknowledges them, the messages are removed from the queue.

Another common pattern is the queue "browser", which receives all messages on the queue but does not prevent other consumers from receiving them, and does not remove them from the queue when it is done with them. Such a browser is an instance of a "non-destructive" consumer.

If every consumer on a queue is non-destructive then some interesting behaviours can be obtained. In the case of an LVQ, the queue will always contain the most up-to-date value for every key. For a standard queue, if every consumer is non-destructive then the queue behaves like a topic (every consumer receives every message), except that instead of seeing only messages that arrive after the point at which the consumer is created, the consumer sees all messages that have not been removed due to TTL expiry (or, in the case of LVQs, overwritten by newer values for the same key).

A queue can be configured to enforce that all its consumers are non-destructive.

Bounding size using min/max TTL

For queues other than LVQs, having only non-destructive consumers could mean that messages are never deleted, leaving the queue to grow without bound. To prevent this, you can set a maximum TTL for messages on the queue. To ensure all messages have the same TTL, you can also set the minimum TTL to the same value.
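For example, to give every message a TTL of exactly one minute, the queue's minimumMessageTtl and maximumMessageTtl attributes might both be set via a REST update similar to the following attribute map (a sketch; the values are in milliseconds and are illustrative):

```
{
  "minimumMessageTtl": 60000,
  "maximumMessageTtl": 60000
}
```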

The minimum/maximum TTL for a queue can be set through the HTTP Management UI or via the REST API. The attribute names are minimumMessageTtl and maximumMessageTtl, and the TTL value is given in milliseconds.

Choosing to receive messages based on arrival time

A queue with no destructive consumers will retain all messages until they expire due to TTL. A consumer may wish to receive only messages sent in the last 60 minutes together with any newly arriving messages, or alternatively only newly arriving messages and none already on the queue. This can be achieved by using a filter on the arrival time.

A special parameter x-qpid-replay-period can be used in the consumer declaration to control the messages the consumer wishes to receive. The value of x-qpid-replay-period is the time, in seconds, for which the consumer wishes to see messages. A replay period of 0 indicates only newly arriving messages should be sent. A replay period of 3600 indicates that only messages sent in the last hour - along with any newly arriving messages - should be sent.
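The effect of the replay period can be sketched with a toy filter (plain Python, not Qpid code):

```python
import time

def messages_to_deliver(messages, replay_period_seconds, now=None):
    """Toy model of the x-qpid-replay-period filter: deliver only the
    messages that arrived within the replay period. Not Qpid code."""
    now = time.time() if now is None else now
    cutoff = now - replay_period_seconds
    return [body for arrival, body in messages if arrival >= cutoff]

now = 1_000_000.0  # an arbitrary instant, in seconds
queue = [(now - 7200, "two hours old"),
         (now - 1800, "half an hour old"),
         (now, "new")]
print(messages_to_deliver(queue, 3600, now=now))  # → ['half an hour old', 'new']
print(messages_to_deliver(queue, 0, now=now))     # → ['new']
```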

When using the Qpid JMS AMQP 0-x client, the consumer declaration can be hinted using the address.

Table 4.1. Setting the replay period using a Qpid JMS AMQP 0-x address

Addressing: myqueue : { link : { x-subscribe: { arguments : { x-qpid-replay-period : '3600' } } } }

Binding URL: direct://'3600'

Setting a default filter

A common case might be that the desired default behaviour is that newly attached consumers see only newly arriving messages (i.e. standard topic-like behaviour) but other consumers may wish to start their message stream from some point in the past. This can be achieved by setting a default filter on the queue so that consumers which do not explicitly set a replay period get a default (in this case the desired default would be 0).

The default filters for a queue can be set via the REST API using the attribute named defaultFilters. This value is a map from filter name to its type and arguments. To make the default behaviour of the queue be that consumers receive only newly arrived messages, set this attribute to the value:

            { "x-qpid-replay-period" : { "x-qpid-replay-period" : [ "0" ] } }

If the desired default behaviour is that each consumer should see all messages arriving in the last minute, as well as all new messages then the value would need to be:

            { "x-qpid-replay-period" : { "x-qpid-replay-period" : [ "60" ] } }

4.7.4. Holding messages on a Queue

Sometimes it is required that, although a message has been placed on a queue, it is not released to consumers until some external condition is met.

Hold until valid

Currently Queues support the "holding" of messages until a (per-message) provided point in time. By default this support is not enabled (since it requires extra work to be performed against every message entering the queue). To enable support, the attribute holdOnPublishEnabled must evaluate to true for the Queue. When enabled, messages on the queue will be checked for the header (for AMQP 0-8, 0-9, 0-9-1 and 0-10 messages) or message annotation (for AMQP 1.0 messages) x-qpid-not-valid-before. If this header/annotation exists and contains a numeric value, it is treated as a point in time given in milliseconds since the UNIX epoch. The message will not be released from the Queue to consumers until this time has been reached.
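The release check can be sketched as follows (plain Python, not Qpid code):

```python
import time

def releasable(message_annotations, now=None):
    """Toy model of the hold-until-valid check: a message is released to
    consumers only once the x-qpid-not-valid-before time (milliseconds
    since the UNIX epoch) has been reached. Not Qpid code."""
    now_ms = int((time.time() if now is None else now) * 1000)
    not_before = message_annotations.get("x-qpid-not-valid-before")
    if not isinstance(not_before, (int, float)):
        return True  # no numeric annotation: the message is not held
    return now_ms >= not_before

now = 1_700_000_000.0  # an arbitrary instant, in seconds
print(releasable({}, now=now))                                              # → True
print(releasable({"x-qpid-not-valid-before": 1_700_000_001_000}, now=now))  # → False
print(releasable({"x-qpid-not-valid-before": 1_699_999_999_000}, now=now))  # → True
```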

4.7.5. Controlling Queue Size

Overflow Policy can be configured on an individual Queue to limit the queue size. The size can be expressed in terms of a maximum number of bytes and/or maximum number of messages. The Overflow Policy defines the Queue behaviour when any of the limits is reached.

The following Overflow Policies are supported:

  • None - The queue is unbounded and the capacity limits are not applied. This is the default policy, applied implicitly when no policy is set explicitly.

  • Ring - If a newly arriving message takes the queue over a limit, messages are deleted from the queue until the queue falls within its limits again. The oldest messages are deleted first; for a priority queue, the oldest messages with the lowest priority are deleted first.

  • Producer Flow Control - The producing sessions are blocked until the queue depth falls below the resume threshold, set via the context variable ${queue.queueFlowResumeLimit} (a percentage of the limit values; the default is 80%).

  • Flow to Disk - If the queue breaches a limit, newly arriving messages are written to disk and the in-memory representation of each message is minimised. The Broker transparently retrieves messages from disk as they are required by a consumer or by management. The Flow to Disk policy does not restrict the overall size of the queue, merely the space it occupies in memory. The Broker's other Flow to Disk feature operates completely independently of this Queue Overflow Policy.

  • Reject - A newly arriving message is rejected when a queue limit is breached.

A negative value for maximum number of messages or maximum number of bytes disables the limit.
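The Ring policy, for a standard queue bounded by message count, can be sketched as follows (plain Python, not Qpid code):

```python
from collections import deque

def ring_publish(queue, message, max_messages):
    """Toy model of the Ring overflow policy on a standard (FIFO) queue:
    when a new message takes the queue over its limit, the oldest
    messages are deleted until the queue is within the limit again.
    Not Qpid code."""
    queue.append(message)
    while len(queue) > max_messages:
        queue.popleft()  # delete the oldest message first

q = deque()
for i in range(5):
    ring_publish(q, f"m{i}", max_messages=3)
print(list(q))  # → ['m2', 'm3', 'm4']
```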

The Broker issues Operational log messages when the queue sizes are breached. These are documented at Table C.6, “Queue Log Messages”.

4.7.6. Using low pre-fetch with special queue types

Messaging clients may buffer messages for performance reasons. In Qpid, this is commonly known as pre-fetch.

When using some of the messaging features described in this section, pre-fetch can give unexpected behaviour. Once the broker has sent a message to the client, its delivery order is fixed, regardless of the special behaviour of the queue.

For example, suppose a priority queue is used with a pre-fetch of 100, and 100 messages arrive with priority 2; the broker will send these messages to the client. If a new message then arrives with a higher priority, the broker cannot leap-frog it over the messages already sent. The higher-priority message will instead be delivered at the front of the next batch of messages sent to the client.
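This fixed-order effect can be sketched as follows (plain Python, not Qpid code); here a batch of priority-2 messages has already been sent when a higher-priority message arrives:

```python
def delivery_order(prefetch):
    """Toy model: once a batch has been pre-fetched by the client its
    order is fixed, so a higher-priority message arriving later can
    only head the NEXT batch. Not Qpid code."""
    queued = [("p2", 2)] * 100            # 100 priority-2 messages
    sent_to_client = queued[:prefetch]    # order now fixed on the client
    still_on_queue = queued[prefetch:]
    late = ("p9", 9)                      # higher-priority late arrival
    # The broker can reorder only what it still holds.
    next_batch = sorted(still_on_queue + [late], key=lambda m: -m[1])
    return sent_to_client + next_batch

order = delivery_order(prefetch=100)
print(order[0], order[100])  # → ('p2', 2) ('p9', 9)
```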

Using a pre-fetch of 1 will give exact queue-type semantics as perceived by the client; however, this brings a performance cost. You could test with a slightly higher pre-fetch to trade off throughput against exact semantics.

See the messaging client documentation for details of how to configure prefetch.