You are not measuring only the computational cost of the try/catch block; you are really measuring the cost of handling the exceptions as well. A fair test would do b = 2; in ExceptionCase too, so that both methods perform the same work. Otherwise you risk drawing badly wrong conclusions while believing you are testing only the try/catch. Frankly, that worries me.
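As an illustration, here is a minimal sketch of what a fair "try/catch only" measurement could look like. I am guessing at the shape of your original methods (a times parameter and System.nanoTime() based timing), and the name TryCatchOnlyCase is mine:

public static long TryCatchOnlyCase(int times) {
    long firstTime = System.nanoTime();
    for (int i = 0; i < times; i++) {
        int a = i + 1;
        int b = 2;                 // same divisor as NormalCase, so no exception is ever thrown
        try {
            a = a / b;             // only the try/catch machinery differs from NormalCase
        } catch (ArithmeticException e) {
            a = 0;                 // never reached while b = 2
        }
    }
    return System.nanoTime() - firstTime;
}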
The reason the timings change so much is that you execute the methods so many times that the JVM decides to JIT-compile and optimize them. Wrap your loop in an outer one:

for (int e = 0; e < 17; e++) {
    for (int i = 0; i < arr.length; i++) {
        System.out.println(arr[i] + "," + NormalCase(arr[i]) + "," + ExceptionCase(arr[i]));
    }
}

and you will see much more stable results toward the end of the run.
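Alternatively, a common pattern is to run a dedicated warm-up pass before the measured pass, so that JIT compilation happens outside the timed region. A rough sketch, assuming the same arr, NormalCase and ExceptionCase as above:

// Warm-up: run everything enough times for the JIT to compile the hot methods.
for (int w = 0; w < 1_000; w++) {
    for (int i = 0; i < arr.length; i++) {
        NormalCase(arr[i]);
        ExceptionCase(arr[i]);
    }
}
// Measured pass: only now do the printed timings mean much.
for (int i = 0; i < arr.length; i++) {
    System.out.println(arr[i] + "," + NormalCase(arr[i]) + "," + ExceptionCase(arr[i]));
}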
I also suspect that in NormalCase the optimizer "understands" that the for loop does not actually produce anything and simply skips it (hence the runtime of 0). For some reason (probably a side effect of the exceptions), it does not do the same with ExceptionCase. To remove this bias, compute something inside the loop and return it.
I did not want to change the code too much, so I use the trick of returning the second value through an array parameter:

public static long NormalCase(int times, int[] result) {
    long firstTime = System.nanoTime();
    int computation = 0;
    for (int i = 0; i < times; i++) {
        int a = i + 1;
        int b = 2;
        a = a / b;
        computation += a;        // keep a live result so the JIT cannot remove the loop
    }
    result[0] = computation;     // hand the computed value back to the caller
    return System.nanoTime() - firstTime;
}
You can call this as NormalCase(arr[i], result), preceded by the declaration int[] result = new int[1];. Change ExceptionCase in the same way and print result[0] as well, to prevent any other optimization. You will probably need a separate result variable for each method.
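For completeness, here is a rough sketch of what the adjusted ExceptionCase and the calling loop could look like. I am assuming your ExceptionCase forces an ArithmeticException by dividing by zero; adjust it to match your actual code:

public static long ExceptionCase(int times, int[] result) {
    long firstTime = System.nanoTime();
    int computation = 0;
    for (int i = 0; i < times; i++) {
        int a = i + 1;
        try {
            a = a / 0;                      // assumed: the original forces an exception here
        } catch (ArithmeticException e) {
            a = 0;
        }
        computation += a;                   // keep the result live, as in NormalCase
    }
    result[0] = computation;
    return System.nanoTime() - firstTime;
}

// Caller, with one result array per method:
int[] normalResult = new int[1];
int[] exceptionResult = new int[1];
for (int e = 0; e < 17; e++) {
    for (int i = 0; i < arr.length; i++) {
        System.out.println(arr[i] + ","
                + NormalCase(arr[i], normalResult) + ","
                + ExceptionCase(arr[i], exceptionResult) + ","
                + normalResult[0] + "," + exceptionResult[0]);
    }
}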