The Performance Tuning Process

The performance of a messaging application depends on the interaction between the application and the message service. Hence, maximizing performance requires the combined efforts of both the application developer and the message server (ActiveMQ).

The process of optimizing performance begins with application design and continues through tuning the message service after the application has been deployed. Before tuning, it helps to be clear about what "performance" means in a messaging context.

In general, performance is a measure of the speed and efficiency with which a message service delivers messages from producer to consumer. However, there are several different aspects of performance that might be important to you, depending on your needs.

Connection load: The number of message producers, message consumers, or concurrent connections that a system can support.

Message throughput: The number of messages or message bytes that can be pumped through a messaging system per second.

Latency: The time it takes a particular message to be delivered from message producer to message consumer.

Stability: The overall availability of the message service or how gracefully it degrades in cases of heavy load or failure.

Efficiency: The efficiency of message delivery; a measure of message throughput in relation to the computing resources employed.

These different aspects of performance are generally inter-related. If message throughput is high, that means messages are less likely to be backlogged in the message server, and as a result, latency should be low (a single message can be delivered very quickly). However, latency can depend on many factors: the speed of communication links, message server processing speed, and client processing speed, to name a few.


The points above give you a basic idea of performance and the important factors in any performance test. When it comes to ActiveMQ specifically, you should concentrate on the following parameters.

# KahaDB:

  1. Disable enableJournalDiskSyncs: This is the most important factor and can improve performance by up to 20 times. However, with disk syncs disabled the broker acknowledges messages before they are synced to disk, so a broker crash can lose messages; you no longer have a 100% delivery guarantee.
  2. concurrentStoreAndDispatch: With this option enabled, the broker sends a message to the consumers and to the data store concurrently; as soon as either action succeeds, the producer receives its acknowledgement, and if the consumers acknowledge the message before the store completes, the disk write can be skipped. This can also improve performance dramatically, and it preserves guaranteed message delivery.
  3. You can also experiment with the journal file size; by default, journalMaxFileLength is 32 MB.
  4. Increase indexCacheSize (default: 10000).
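
Putting these KahaDB settings together, a tuning sketch for activemq.xml might look like the following (the attribute values are illustrative, not recommendations — measure against your own workload):

```xml
<broker xmlns="http://activemq.apache.org/schema/core" brokerName="localhost">
  <persistenceAdapter>
    <!-- enableJournalDiskSyncs="false" trades durability for speed:
         a broker crash can lose messages not yet synced to disk. -->
    <kahaDB directory="${activemq.data}/kahadb"
            enableJournalDiskSyncs="false"
            concurrentStoreAndDispatchQueues="true"
            journalMaxFileLength="64mb"
            indexCacheSize="100000"/>
  </persistenceAdapter>
</broker>
```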

# Prefetch:

  1. This is the key parameter for improving consumer (receiver) performance. Try prefetch values between 1 and 1000 and measure which one works best for your workload.
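
The prefetch limit can be set per client on the connection URI (for example, jms.prefetchPolicy.queuePrefetch=500), or broker-wide through a destination policy in activemq.xml. A sketch of the broker-side form, where the value 500 is just an example:

```xml
<destinationPolicy>
  <policyMap>
    <policyEntries>
      <!-- Override the default queue prefetch (1000) for all queues. -->
      <policyEntry queue=">" queuePrefetch="500"/>
    </policyEntries>
  </policyMap>
</destinationPolicy>
```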

# Disk speed: The throughput of a persistent broker is ultimately bounded by the speed of the disk backing the message store, so faster storage directly improves performance.

# Network: Most of the time the network plays a vital role in client-server communication, and by varying the socket buffer size we can increase ActiveMQ performance.

# Compress the message: This parameter really improves performance when you are dealing with large messages; when I tested with 1 KB messages and 1 MB messages, I got roughly the same performance, purely because of this parameter.

# Co-locate the broker: By embedding the broker in either the producer or the consumer (typically using the vm:// transport), it is possible to eliminate one of the network hops, thereby halving the load on the network.

# Socket buffer size: The default TCP socket buffer size is 64 KB; it is a good idea to try doubling it to 128 KB. For example: tcp://hostA:61617?socketBufferSize=131072

# I/O buffer size : The default I/O buffer size is 8 KB and you could try doubling this size to achieve better performance. For example: tcp://hostA:61617?ioBufferSize=16384
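
Both buffer options can also be applied on the broker side by appending them to the transport connector URI in activemq.xml (the port and values here are illustrative; note that & must be escaped as &amp;amp; in XML):

```xml
<transportConnectors>
  <!-- Double both the socket buffer (128 KB) and the I/O buffer (16 KB). -->
  <transportConnector name="openwire"
      uri="tcp://0.0.0.0:61617?socketBufferSize=131072&amp;ioBufferSize=16384"/>
</transportConnectors>
```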

# Enabling compression: If your application sends large messages and you know that your network is slow, it might be worthwhile to enable compression on your connections. For example: tcp://hostA:61617?jms.useCompression=true

# optimizeAcknowledge option: When set to true, the consumer acknowledges receipt of messages in batches, where the batch size is set to 65% of the prefetch limit. Alternatively, if message consumption is slow, the batch acknowledgment is sent after 300 ms. The default is false. For example: tcp://hostA:61617?jms.optimizeAcknowledge=true

# Optimizing prefetch limits: If you have just a single consumer attached to a queue, you can leave the prefetch limit at a fairly large value. But if you are using a group of consumers to distribute the workload, it is usually better to restrict the prefetch limit to a very small number, for example 0 or 1, so that messages are distributed evenly among the consumers.

# Optimized dispatch: When this option is enabled, the broker no longer uses a dedicated thread to dispatch messages to each destination. 

<policyEntry queue=">" optimizedDispatch="true"/>

# Reducing memory footprint of pending messages: When the reduceMemoryFootprint option is enabled, a message’s marshalled content is cleared immediately after the message is written to persistent storage. This results in approximately a 50% reduction in the amount of memory occupied by the pending messages.
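
These last two options can be combined in the broker's destination policy. A sketch applying both to all queues:

```xml
<destinationPolicy>
  <policyMap>
    <policyEntries>
      <!-- Dispatch without a dedicated per-destination thread, and drop
           marshalled message content from memory once it is persisted. -->
      <policyEntry queue=">" optimizedDispatch="true"
                   reduceMemoryFootprint="true"/>
    </policyEntries>
  </policyMap>
</destinationPolicy>
```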

For any questions or suggestions please leave a comment.
