Why doesn't the Java compiler optimize a trivial method?

I have a simple class for illustrative purposes:

    public class Test {

        public int test1() {
            int result = 100;
            result = 200;
            return result;
        }

        public int test2() {
            return 200;
        }
    }

The bytecode generated by the compiler (checked with javap -c Test.class) is as follows:

    public int test1();
      Code:
         0: bipush        100
         2: istore_1
         3: sipush        200
         6: istore_1
         7: iload_1
         8: ireturn

    public int test2();
      Code:
         0: sipush        200
         3: ireturn

Why doesn't the compiler optimize the test1 method to produce the same bytecode as test2? At the very least I would expect it to drop the redundant initialization of the result variable, since it is easy to see that the value 100 is never used.

I have observed this with both the Eclipse compiler and javac.

javac version 1.8.0_72, installed as part of the JDK whose Java runtime reports:

    Java(TM) SE Runtime Environment (build 1.8.0_72-b15)
    Java HotSpot(TM) 64-Bit Server VM (build 25.72-b15, mixed mode)
2 answers

A typical Java virtual machine optimizes your program at runtime rather than at compile time. At that point the JVM knows far more about your application: both the actual behavior of your program and the actual hardware it runs on.

Bytecode is simply a description of how your program should behave; the runtime is free to apply any optimization to it, as long as the observable behavior stays the same.

Of course, one could argue that such trivial optimizations could also be applied at compile time, but in general it makes sense not to spread optimizations across several stages. Every optimization discards some information about the original program, and that loss can make other optimizations impossible later; on top of that, the "best" optimization is not always obvious up front. A simple way out is to perform (almost) no optimizations at compile time and leave them all to the runtime.
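If you want to see this in practice, you can force HotSpot to JIT-compile both methods and inspect what it produces. The harness below is a minimal sketch (JitDemo is a made-up name, and it assumes the Test class from the question is on the classpath); run it with -XX:+PrintCompilation to watch test1 and test2 get compiled, or with -XX:+UnlockDiagnosticVMOptions -XX:+PrintAssembly (requires the hsdis disassembler library) to compare the machine code the JIT actually emits for the two methods.

    // Minimal sketch: JitDemo is a hypothetical class, and Test is the class
    // from the question, assumed to be on the classpath.
    // Suggested run: java -XX:+PrintCompilation JitDemo
    public class JitDemo {
        public static void main(String[] args) {
            Test t = new Test();
            long sum = 0;
            // Call both methods often enough that HotSpot compiles them.
            for (int i = 0; i < 1_000_000; i++) {
                sum += t.test1();
                sum += t.test2();
            }
            // Print the sum so the calls cannot be skipped as unused.
            System.out.println(sum);
        }
    }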


The JVM optimizes bytecode at runtime and stores the resulting native code in what is called a code cache. Unlike a C++ compiler, the JVM can collect a lot of runtime data about your program, such as "How hot is this loop?" or "Is this block of code even worth optimizing?". Optimizing at that point is therefore very effective and often gives better results.

If you optimized while translating from Java to bytecode (i.e. when javac runs), the resulting code might be optimal for your machine but not for any other platform. So it makes little sense to optimize at that stage.

As an example, suppose your program uses AES encryption. Modern processors have dedicated AES instructions (such as AES-NI on x86) with special hardware that makes encryption much faster.

If javac tries to optimize at compile time, it will either

  • keep the encryption at the software level, in which case you will not take advantage of modern processors, or
  • replace your AES code with the equivalent CPU AES instructions, which are supported only on newer processors, reducing your program's compatibility.

If javac instead leaves the code as it is in the bytecode, a JVM running on a newer processor can recognize it as AES and use that CPU feature, while a JVM running on an older processor can still optimize it at the software level at runtime (in the code cache), giving you both performance and compatibility.
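For reference, AES in Java normally goes through javax.crypto.Cipher, and the JIT decides at runtime whether the underlying cipher gets hardware support. The sketch below is ordinary JDK API usage (the AesDemo class name and the ECB/NoPadding transformation were chosen purely for illustration); on a HotSpot JVM you can run java -XX:+PrintFlagsFinal -version and look for the UseAES and UseAESIntrinsics flags to see whether the runtime will back code like this with the CPU's AES instructions.

    import javax.crypto.Cipher;
    import javax.crypto.spec.SecretKeySpec;
    import java.security.SecureRandom;

    // Hedged sketch of standard javax.crypto usage; AesDemo is an illustrative name.
    // The bytecode is the same everywhere: on CPUs with AES instructions the JVM
    // can accelerate the cipher at runtime, on older CPUs it runs in software.
    public class AesDemo {
        public static void main(String[] args) throws Exception {
            byte[] key = new byte[16];                    // 128-bit AES key
            new SecureRandom().nextBytes(key);

            Cipher cipher = Cipher.getInstance("AES/ECB/NoPadding");
            cipher.init(Cipher.ENCRYPT_MODE, new SecretKeySpec(key, "AES"));

            byte[] block = new byte[16];                  // one 16-byte AES block
            byte[] encrypted = cipher.doFinal(block);
            System.out.println(encrypted.length + " bytes encrypted");
        }
    }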


Source: https://habr.com/ru/post/1242961/

