Faster implementation for Math.abs(a - b) - Math.abs(c - d)?

I have a Java method that repeatedly evaluates the following expression in a very tight loop with lots of repetitions:

Math.abs(a - b) - Math.abs(c - d) 

a, b, c and d are long values that can span the full range of their type. They are different in each iteration of the loop, and they do not satisfy any invariant that I know of.

The profiler indicates that a significant portion of the processor's time is spent in this method. Although I will pursue other optimization options first, I was wondering whether there is a cheaper way to compute the above expression.

Besides inlining the Math.abs() calls manually for a very slight (if any) performance gain, is there any mathematical trick I could use to speed up the evaluation of this expression?

+4
4 answers

I ended up using this little method:

 public static long diff(final long a, final long b, final long c, final long d) {
     final long a0 = (a < b) ? (b - a) : (a - b);
     final long a1 = (c < d) ? (d - c) : (c - d);
     return a0 - a1;
 }

I experienced a measurable performance increase - about 10-15% for the whole application. I believe this is mainly due to:

  • Eliminating a method call: instead of calling Math.abs() twice, I call this method once. Static method calls are not overly expensive, but they still have an impact.

  • Eliminating a couple of negation operations: this may be offset by the slightly increased code size, but I gladly trick myself into believing that it really made a difference.
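If the branches in the ternaries above ever turn out to be the bottleneck (e.g. due to misprediction on random data), a branch-free absolute value via a sign mask is a standard bit-twiddling alternative worth benchmarking. This is not the answer's code, just a sketch; like Math.abs, it overflows for Long.MIN_VALUE:

```java
public final class BranchlessDiff {
    // Branch-free absolute value: mask is 0 when x >= 0 and -1 (all ones)
    // when x < 0, so (x ^ mask) - mask flips the bits and adds 1 only for
    // negative x, which is two's-complement negation.
    static long abs(long x) {
        final long mask = x >> 63; // arithmetic shift: 0 or -1
        return (x ^ mask) - mask;
    }

    public static long diff(long a, long b, long c, long d) {
        return abs(a - b) - abs(c - d);
    }

    public static void main(String[] args) {
        System.out.println(diff(10L, 6L, -2L, 3L)); // |10-6| - |-2-3| = 4 - 5 = -1
    }
}
```

Whether this beats the ternary version depends on the JIT and the data distribution; only a benchmark on realistic inputs can tell.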

EDIT:

It seems to be the other way around: explicit inlining does not seem to affect performance in my microbenchmark, while changing the way the absolute values are calculated makes ...

+3

I suspect that the profiler is not giving you a true picture, since it adds overhead just by trying to profile such a trivial "method". Without profiling, Math.abs compiles down to a handful of machine instructions, and you cannot do much better than that.

I suggest you run a microbenchmark to confirm this. I expect loading the data to be an order of magnitude more expensive.

 long a = 10, b = 6, c = -2, d = 3;
 int runs = 1000 * 1000 * 1000;
 long start = System.nanoTime();
 for (int i = 0; i < runs; i += 2) {
     long r = Math.abs(i - a) - Math.abs(c - i);
     long r2 = Math.abs(i - b) - Math.abs(d - i);
     if (r + r2 < Integer.MIN_VALUE)
         throw new AssertionError();
 }
 long time = System.nanoTime() - start;
 System.out.printf("Took an average of %.1f ns per abs-abs.%n", (double) time / runs);

prints

 Took an average of 0.9 ns per abs-abs. 
+4

You can always try inlining the function and hand-optimizing; as long as you do not incur more cache misses, it may be faster.

If I got the branches right, it could look something like this:

  if (a < b) {
      if (c < d) { r = b - a - d + c; } else { r = b - a + d - c; }
  } else {
      if (c < d) { r = a - b - d + c; } else { r = a - b + d - c; }
  }
+1

Are you sure the method itself is the problem? Perhaps it is the huge number of calls to this method, and you are just seeing the aggregated result (i.e. TIME_OF_METHOD_EXECUTION × NUMBER_OF_INVOCATIONS) in your profiler?

0

Source: https://habr.com/ru/post/1391813/
