Very low throughput using the HornetQ core bridge

We are trying to use HornetQ as a store-and-forward engine; however, forwarding messages from one standalone HornetQ instance to another over the core bridge is very slow. We could not push the throughput above roughly 200 messages per second.

The surprising part is that if we point the same client (the one that was posting messages to the forwarding HornetQ instance) directly at the destination HornetQ instance, we observe a rate of more than 1000 messages per second (the client is JMS-based). This strongly suggests that the core bridge configured between the forwarding HornetQ instance and the destination HornetQ instance is the bottleneck.

The following are the relevant sections of the core bridge configuration on the forwarding HornetQ instance:

<connectors>
   <connector name="netty-bridge">
      <factory-class>org.hornetq.core.remoting.impl.netty.NettyConnectorFactory</factory-class>
      <param key="host" value="destination.xxx.com"/>
      <param key="port" value="5445"/>
      <param key="batch-delay" value="50"/>
      <param key="tcp-send-buffer-size" value="1048576"/>
      <param key="tcp-receive-buffer-size" value="1048576"/>
      <param key="use-nio" value="true"/>
   </connector>
</connectors>

<address-settings>
   <address-setting match="jms.queue.Record">
      <dead-letter-address>jms.queue.RecordDLQ</dead-letter-address>
      <max-size-bytes>262144000</max-size-bytes>
      <page-size-bytes>10485760</page-size-bytes>
      <address-full-policy>PAGE</address-full-policy>
   </address-setting>
</address-settings>

<queues>
   <queue name="jms.queue.Record">
      <address>jms.queue.Record</address>
   </queue>
</queues>

<bridges>
   <bridge name="core-bridge">
      <queue-name>jms.queue.Record</queue-name>
      <forwarding-address>jms.queue.Record</forwarding-address>
      <retry-interval>1000</retry-interval>
      <retry-interval-multiplier>1.0</retry-interval-multiplier>
      <reconnect-attempts>-1</reconnect-attempts>
      <confirmation-window-size>10485760</confirmation-window-size>
      <static-connectors>
         <connector-ref>netty-bridge</connector-ref>
      </static-connectors>
   </bridge>
</bridges>

The following are the relevant sections of the configuration on the destination HornetQ instance:

<acceptors>
   <acceptor name="netty">
      <factory-class>org.hornetq.core.remoting.impl.netty.NettyAcceptorFactory</factory-class>
      <param key="host" value="${hornetq.remoting.netty.host:192.168.2.xxx}"/>
      <param key="port" value="${hornetq.remoting.netty.port:xxxx}"/>
      <param key="tcp-send-buffer-size" value="1048576"/>
      <param key="tcp-receive-buffer-size" value="1048576"/>
      <param key="use-nio" value="true"/>
      <param key="batch-delay" value="50"/>
   </acceptor>
</acceptors>

<address-settings>
   <address-setting match="jms.queue.Record">
      <dead-letter-address>jms.queue.RecordDLQ</dead-letter-address>
      <max-size-bytes>262144000</max-size-bytes>
      <page-size-bytes>10485760</page-size-bytes>
      <address-full-policy>PAGE</address-full-policy>
   </address-setting>
</address-settings>

<queues>
   <queue name="jms.queue.Record">
      <address>jms.queue.Record</address>
   </queue>
</queues>

All system resources (CPU / memory / disk I/O / network / etc.) are underutilized, and there are no errors in the logs.

Note: we tried both NIO and the legacy (blocking) IO. This was tested with both HornetQ-2.2.5-Final and HornetQ-2.2.8-GA (2.2.8-GA was built from source).

Any idea as to what might cause this problem and what could be the resolution?

One other observation: the messages sent over the core bridge appear to be transactional ... so is it possible to batch these transactions and make the exchange between the two HornetQ instances asynchronous?

2 answers

OK ... I figured this out myself.

When the forwarding HornetQ instance creates a bridge, it internally uses only a single thread to send messages over the bridge, and it opens only a single connection to the destination HornetQ instance. It therefore cannot take advantage of multiple processors, is bounded by the network (latency / bandwidth / RTT), and cannot parallelize the sends. So under high load you hit a ceiling (in our case, about 200 messages per second). You can raise this by tuning the HornetQ connector and acceptor parameters (such as the TCP send and receive buffer sizes) and the bridge settings (confirmation window size), but that only takes you so far (we got the throughput up to about 300 messages per second).
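For reference, the tuning mentioned above amounts to turning the following knobs. The values shown are the ones from the configuration earlier in this post, repeated here only to illustrate which parameters are involved; they are not a recommendation:

<!-- connector used by the bridge: socket buffers and write batching -->
<connector name="netty-bridge">
   <factory-class>org.hornetq.core.remoting.impl.netty.NettyConnectorFactory</factory-class>
   <param key="host" value="destination.xxx.com"/>
   <param key="port" value="5445"/>
   <param key="tcp-send-buffer-size" value="1048576"/>
   <param key="tcp-receive-buffer-size" value="1048576"/>
   <param key="batch-delay" value="50"/>
</connector>

<!-- bridge setting: how much unconfirmed data may be in flight at once -->
<confirmation-window-size>10485760</confirmation-window-size>

None of these change the fundamental limitation that a single bridge uses one thread and one connection.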

The solution is to create multiple bridges between the same pair of forwarding and destination HornetQ instances, pointing at the same queues. This effectively parallelizes the message transfer and thereby increases throughput. Creating three bridges almost tripled the throughput, to about 870 messages per second.
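As a sketch, the workaround on the forwarding instance looks like this. The bridge names are illustrative; all three bridges drain the same queue and reuse the connector already defined above, and the per-bridge settings are copied from the original single-bridge configuration:

<bridges>
   <bridge name="core-bridge-1">
      <queue-name>jms.queue.Record</queue-name>
      <forwarding-address>jms.queue.Record</forwarding-address>
      <retry-interval>1000</retry-interval>
      <retry-interval-multiplier>1.0</retry-interval-multiplier>
      <reconnect-attempts>-1</reconnect-attempts>
      <confirmation-window-size>10485760</confirmation-window-size>
      <static-connectors>
         <connector-ref>netty-bridge</connector-ref>
      </static-connectors>
   </bridge>
   <!-- each additional bridge opens its own connection and send thread -->
   <bridge name="core-bridge-2">
      <queue-name>jms.queue.Record</queue-name>
      <forwarding-address>jms.queue.Record</forwarding-address>
      <retry-interval>1000</retry-interval>
      <retry-interval-multiplier>1.0</retry-interval-multiplier>
      <reconnect-attempts>-1</reconnect-attempts>
      <confirmation-window-size>10485760</confirmation-window-size>
      <static-connectors>
         <connector-ref>netty-bridge</connector-ref>
      </static-connectors>
   </bridge>
   <bridge name="core-bridge-3">
      <queue-name>jms.queue.Record</queue-name>
      <forwarding-address>jms.queue.Record</forwarding-address>
      <retry-interval>1000</retry-interval>
      <retry-interval-multiplier>1.0</retry-interval-multiplier>
      <reconnect-attempts>-1</reconnect-attempts>
      <confirmation-window-size>10485760</confirmation-window-size>
      <static-connectors>
         <connector-ref>netty-bridge</connector-ref>
      </static-connectors>
   </bridge>
</bridges>

One caveat: with several bridges consuming from the same queue in parallel, message ordering across the bridges is no longer preserved, so this only works if the consumer can tolerate out-of-order delivery.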

Ideally, JBoss should make this parallelization configurable in the core bridge itself.


I believe you were using 2.2.5 (it is not clear from your post which version you ran), which had a bug in the bridges that caused exactly the problem you describe.

In that version, the bridge was sending messages synchronously instead of relying on asynchronous confirmations.

Check how it behaves on the latest version.


Source: https://habr.com/ru/post/910597/

