Comparing jVerbs performance with JSOR is a bit tricky: the first is a message-oriented API, while the second hides RDMA behind the standard Java socket streams.
Here are some numbers. My test used a pair of old ConnectX-2 cards and Dell PowerEdge 2970 servers, running CentOS 7.1 and Mellanox OFED version 3.1.
I was only interested in the latency test.
jVerbs
The test is a variation of the RPing sample (I can put it on GitHub if anyone is interested). It measured the latency of 5,000,000 iterations of the following call sequence over a reliable connection, with a message size of 256 bytes:

PostSendMethod.execute()
PollCQMethod.execute()
CompletionChannel.ackCQEvents()
Results (microseconds):
- Median: 10.885
- 99.0% percentile: 11.663
- 99.9% percentile: 17.471
- 99.99% percentile: 27.791
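For reference, percentiles like those above can be computed from the recorded per-iteration timings with a simple nearest-rank calculation. This is a generic sketch, not the original harness; the sample values in main are made up:

```java
import java.util.Arrays;

public class Percentiles {
    // Nearest-rank percentile of a sorted sample (0 < p <= 1): the smallest
    // value such that at least p * n samples are less than or equal to it.
    static double percentile(double[] sorted, double p) {
        int idx = (int) Math.ceil(p * sorted.length) - 1;
        return sorted[Math.max(idx, 0)];
    }

    public static void main(String[] args) {
        // In a real run, each entry would be one round-trip time in
        // microseconds, e.g. a System.nanoTime() delta divided by 1000.0.
        double[] latenciesMicros = {10.1, 10.9, 11.7, 12.0, 17.5, 27.8};
        Arrays.sort(latenciesMicros);
        System.out.printf("Median:  %.3f%n", percentile(latenciesMicros, 0.50));
        System.out.printf("99.9%%:   %.3f%n", percentile(latenciesMicros, 0.999));
    }
}
```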
JSOR
A similar test was run through a JSOR socket. The test was a textbook client/server socket sample, again with a 256-byte message size.
Results (microseconds):
- Median: 43
- 99.0% percentile: 55
- 99.9% percentile: 61
- 99.99% percentile: 217
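Since JSOR intercepts the standard java.net.Socket API, a JSOR latency test is ordinary socket code. Below is a minimal loopback sketch of such a ping-pong loop; it runs over plain TCP here, and the port choice, iteration count, and structure are my assumptions rather than the original harness:

```java
import java.io.IOException;
import java.io.InputStream;
import java.io.OutputStream;
import java.io.UncheckedIOException;
import java.net.ServerSocket;
import java.net.Socket;
import java.util.Arrays;

public class SocketLatency {
    // Measures round-trip times of msgSize-byte messages over a loopback
    // socket pair and returns the median in microseconds.
    static double medianRoundTripMicros(int msgSize, int iterations) throws Exception {
        ServerSocket server = new ServerSocket(0); // ephemeral port

        // Echo thread: read a full message, write it straight back.
        Thread echo = new Thread(() -> {
            try (Socket s = server.accept()) {
                InputStream in = s.getInputStream();
                OutputStream out = s.getOutputStream();
                byte[] buf = new byte[msgSize];
                for (int i = 0; i < iterations; i++) {
                    int off = 0;
                    while (off < msgSize) off += in.read(buf, off, msgSize - off);
                    out.write(buf);
                }
            } catch (IOException e) {
                throw new UncheckedIOException(e);
            }
        });
        echo.start();

        double[] micros = new double[iterations];
        try (Socket client = new Socket("127.0.0.1", server.getLocalPort())) {
            client.setTcpNoDelay(true); // avoid Nagle batching of small messages
            InputStream in = client.getInputStream();
            OutputStream out = client.getOutputStream();
            byte[] msg = new byte[msgSize];
            for (int i = 0; i < iterations; i++) {
                long t0 = System.nanoTime();
                out.write(msg);
                int off = 0;
                while (off < msgSize) off += in.read(msg, off, msgSize - off);
                micros[i] = (System.nanoTime() - t0) / 1000.0; // one round trip
            }
        }
        echo.join();
        server.close();
        Arrays.sort(micros);
        return micros[iterations / 2];
    }

    public static void main(String[] args) throws Exception {
        // 256-byte messages as in the tests above; far fewer iterations.
        System.out.printf("median over loopback: %.1f us%n",
                medianRoundTripMicros(256, 1000));
    }
}
```

The same code would exercise JSOR when the JVM is started with the JSOR transport enabled, since no RDMA-specific calls appear in it.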
These results are quite far from the OFED latency test: the standard ib_send_lat benchmark on the same hardware/OS produced a median of 2.77 microseconds and a maximum latency of 23.25 microseconds.