Is setting a ByteBuffer's byte order (depending on buffer usage) a safe / good optimization?

Java - big-endian; the network stack - big-endian; Intel / AMD (basically all of our computers) and ARM processors (the most common in Android and iOS devices) are little-endian.

Given all this, if I allocate a direct ByteBuffer for different purposes, is it always advisable to try to match its byte order to the native side of the interaction?

More specifically:

  • Network Buffer: Leave It Big-Endian.
  • File Buffer (on x86): Little-Endian.
  • OpenGL / Native Process Buffer: Little-Endian.

etc.
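The allocation pattern implied by the list above can be sketched as follows. This is a minimal sketch; the class and helper names (`BufferOrders`, `networkBuffer`, `nativeBuffer`) are mine, not from the question:

```java
import java.nio.ByteBuffer;
import java.nio.ByteOrder;

public class BufferOrders {
    // Network buffer: keep Java's default big-endian (network byte order).
    static ByteBuffer networkBuffer(int capacity) {
        return ByteBuffer.allocateDirect(capacity); // default order is BIG_ENDIAN
    }

    // File / OpenGL / native-interop buffer: match the host CPU's byte order.
    static ByteBuffer nativeBuffer(int capacity) {
        return ByteBuffer.allocateDirect(capacity).order(ByteOrder.nativeOrder());
    }

    public static void main(String[] args) {
        System.out.println(networkBuffer(16).order()); // always BIG_ENDIAN
        System.out.println(nativeBuffer(16).order());  // LITTLE_ENDIAN on x86 and typical ARM
    }
}
```

Note that `order()` only affects how multi-byte primitives are encoded on subsequent get/put calls; it does not transform bytes already in the buffer.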

I ask this because I had never thought about the endianness of my ByteBuffers, but after looking at some other questions on SO about the performance impact it can have, it seems worth it, or at least something I should be more aware of when using ByteBuffers.

Or maybe there is a downside here that I have missed and should worry about, and would like to know?

2 answers

(Probably not.) The article referenced shows that the difference is quite small.

The results cited do not show a consistent improvement, and newer JVMs may close the gap.


  • When I enable the native byte order (which is actually incorrect if the machine uses a different endianness):
mmap: 1.358 bytebuffer: 0.922 regular i/o: 1.387 
  • When I comment out the order statement and use the default big-endian order:
 mmap: 1.336 bytebuffer: 1.62 regular i/o: 1.467 

There is a measurable difference, but it is small in the grand scheme of things. If you want it to be much faster, the only option I found that matters a lot is to use Unsafe directly, in which case only native order is available.

Even then, it only helps in the most latency-sensitive applications.

Unsafe makes for even more interesting code and comments ;)

http://grepcode.com/file/repository.grepcode.com/java/root/jdk/openjdk/7-b147/sun/misc/Unsafe.java


If you use a ByteBuffer only to read and write bytes, then the byte order does not matter; just use the default.
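To illustrate the byte-only case: the same bytes come back identically no matter which order is set. A minimal sketch; the class and method names (`ByteOnlyDemo`, `roundTrip`) are mine:

```java
import java.nio.ByteBuffer;
import java.nio.ByteOrder;

public class ByteOnlyDemo {
    // Writes two bytes, then reads them back under the given order.
    // For single-byte get/put, the order makes no difference.
    static byte[] roundTrip(ByteOrder order) {
        ByteBuffer buf = ByteBuffer.allocate(2);
        buf.put((byte) 0x0A).put((byte) 0x0B).flip();
        buf.order(order);
        return new byte[] { buf.get(), buf.get() };
    }

    public static void main(String[] args) {
        System.out.println(roundTrip(ByteOrder.BIG_ENDIAN)[0]);    // 10
        System.out.println(roundTrip(ByteOrder.LITTLE_ENDIAN)[0]); // 10, identical
    }
}
```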

If you read and write non-byte primitive types (short, int, float, long, double), then the processor has to do extra work whenever the CPU's endianness (ByteOrder.nativeOrder()) differs from Java's default big-endianness. If you read the other questions you linked to, you can see why: the processor has to swap the bytes before it can operate on the corresponding primitive types, and that swap costs CPU cycles.
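That swap is visible from Java: the same four bytes decode to different int values under the two orders. A minimal sketch; the class and method names (`SwapDemo`, `reinterpret`) are mine:

```java
import java.nio.ByteBuffer;
import java.nio.ByteOrder;

public class SwapDemo {
    // Writes an int under one byte order, then reads the same four
    // bytes back under another.
    static int reinterpret(int value, ByteOrder writeOrder, ByteOrder readOrder) {
        ByteBuffer buf = ByteBuffer.allocate(4).order(writeOrder);
        buf.putInt(value);
        buf.flip();
        buf.order(readOrder);
        return buf.getInt();
    }

    public static void main(String[] args) {
        // 1 written big-endian is bytes 00 00 00 01; read little-endian
        // those bytes mean 0x01000000 = 16777216.
        System.out.println(reinterpret(1, ByteOrder.BIG_ENDIAN, ByteOrder.LITTLE_ENDIAN)); // 16777216
        System.out.println(reinterpret(1, ByteOrder.BIG_ENDIAN, ByteOrder.BIG_ENDIAN));    // 1
    }
}
```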

A quick example using the two short values 1 and 2, assuming your processor is an x86 (little-endian) processor.

 short A = 1;
 short B = 2;
 short C = A + B;

If you give the processor little-endian data, as it expects:

 MOV ax, short[A]   ; ax register [ 01, 00 ]
 MOV bx, short[B]   ; bx register [ 02, 00 ]
 ADD ax, bx         ; ax register [ 03, 00 ]
 MOV short[C], ax   ; C [ 03, 00 ]

But if you give it big-endian data, it has to do extra work:

 MOV ax, short[A]   ; ax register [ 00, 01 ]
 MOV bx, short[B]   ; bx register [ 00, 02 ]
 BSWAP ax           ; ax register [ 01, 00 ]
 BSWAP bx           ; bx register [ 02, 00 ]
 ADD ax, bx         ; ax register [ 03, 00 ]
 MOV short[C], ax   ; C [ 03, 00 ]

So at the lowest level it does matter, but unless you have profiled your code and found this to be the main bottleneck, just use the default.
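If you do want to measure it on your own machine, a rough timing sketch like the following can compare reading ints under the default order versus the native order. This is a crude illustration, not a proper benchmark (no JIT warm-up, single run); the class and helper names (`OrderBench`, `filled`, `sum`) are mine:

```java
import java.nio.ByteBuffer;
import java.nio.ByteOrder;

public class OrderBench {
    // Fills a direct buffer with int value 1 at every 4-byte slot,
    // using the default big-endian order.
    static ByteBuffer filled(int bytes) {
        ByteBuffer buf = ByteBuffer.allocateDirect(bytes);
        for (int i = 0; i < buf.capacity(); i += 4) buf.putInt(i, 1);
        return buf;
    }

    // Sums all ints in the buffer under whatever order is currently set.
    static long sum(ByteBuffer buf) {
        long total = 0;
        for (int i = 0; i < buf.capacity(); i += 4) total += buf.getInt(i);
        return total;
    }

    public static void main(String[] args) {
        ByteBuffer buf = filled(1 << 20); // 1 MiB = 262144 ints
        for (ByteOrder order : new ByteOrder[] {
                ByteOrder.BIG_ENDIAN, ByteOrder.nativeOrder() }) {
            buf.order(order);
            long t0 = System.nanoTime();
            long total = sum(buf);
            long t1 = System.nanoTime();
            System.out.println(order + ": sum=" + total
                    + " in " + (t1 - t0) / 1000 + " us");
        }
    }
}
```

Note that the sums differ on a little-endian machine, since the data was written big-endian; only the timing is being compared.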


Source: https://habr.com/ru/post/1396426/
