Why is Java faster with a JIT than when compiled to machine code?

I've heard that Java has to use a JIT to be fast. That makes sense compared to interpretation, but why can't someone build an ahead-of-time (AOT) compiler that generates fast native code from Java? I know about gcj, but I don't think its output is usually faster than HotSpot, for example.

Is there something about the language that makes this hard? I can think only of:

  • Reflection
  • Class loading

What am I missing? If I avoid these features, would it be possible to compile Java code once into native machine code and be done with it?

+45
java jit
Dec 10 '09 at 4:50
9 answers

A true killer for any AOT compiler:

 Class.forName(...) 

This means that you cannot write an AOT compiler that covers ALL Java programs, since some information about the program's behavior is only available at run time. You can, however, do it for a subset of Java, which I believe is what gcj does. As a minimal, hypothetical sketch (the property name and fallback class below are made up for illustration), this is the kind of thing an AOT compiler cannot see through, because the class to load is only decided at run time:
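    // DynamicLoadDemo.java - illustrative only; the property name "handler.impl"
    // and the fallback class are arbitrary choices for this sketch.
    public class DynamicLoadDemo {
        public static void main(String[] args) throws Exception {
            // The implementation class is chosen at run time, e.g. from a system
            // property or a config file, so no static analysis can pin it down.
            String impl = System.getProperty("handler.impl", "java.util.ArrayList");

            Class<?> cls = Class.forName(impl);                  // resolved only at run time
            Object instance = cls.getDeclaredConstructor().newInstance();
            System.out.println("Loaded " + instance.getClass().getName());
        }
    }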

Another typical example is the JIT's ability to inline methods, such as getX(), directly into the calling methods if it finds that this is safe, and to undo it (deoptimize) later if necessary, even though the programmer never explicitly marked the method final. The JIT can see that in the running program the method is not overridden, and can therefore treat it as if it were final. On the next run, this may be different. A rough sketch of what that means in code (the class names here are illustrative, not from the question): as long as no loaded class overrides getX(), the JIT may inline it into its callers, and it will discard that compiled code if such a subclass shows up later.
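    // Illustrative sketch: getX() is not declared final, yet the JIT can still
    // inline it while no loaded subclass overrides it.
    class Point {
        private final int x;
        Point(int x) { this.x = x; }
        int getX() { return x; }              // candidate for inlining
    }

    class Totals {
        static long sum(Point[] points) {
            long total = 0;
            for (Point p : points) {
                total += p.getX();            // virtual call the JIT can inline...
            }
            return total;
        }
        // ...and later deoptimize if a subclass overriding getX() is loaded,
        // e.g. via Class.forName - an assumption an AOT compiler cannot make.
    }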

+21
Dec 10 '09 at 7:20

The JIT compiler can be faster because the machine code is generated on the exact machine on which it will also be executed. This means the JIT has the best possible information available to emit optimized code.

If you pre-compile the bytecode into machine code, the compiler cannot optimize for the target machine(s), only for the machine it was built on.

+32
Dec 10 '09 at 4:52

I will paste in an interesting answer James Gosling gave in the book Masterminds of Programming.

Well, I've heard it said that effectively you have two compilers in the Java world. You have the compiler to Java bytecode, and then you have your JIT, which basically recompiles everything again. All of your scary optimizations are in the JIT.

James: Right. These days we're beating the really good C and C++ compilers pretty much always. When you go to the dynamic compiler, you get two advantages when the compiler is running right at the last moment. One is you know exactly what chipset you're running on. So many times, when people compile a piece of C code, they have to compile it to run on the generic x86 architecture. Almost none of the binaries you get are particularly well tuned for any of them. You download the latest copy of Mozilla, and it'll run on pretty much any Intel architecture CPU. There's pretty much a single Linux binary. It's pretty generic, and it's compiled with GCC, which is not a very good C compiler.

When HotSpot runs, it knows exactly what chipset you're running on. It knows exactly how the cache works. It knows exactly how the memory hierarchy works. It knows exactly how all the interlocks work in the CPU. It knows what instruction set extensions this chip has. It optimizes for whatever machine you're on. Then the other half of it is that it actually sees the application as it's running. It's able to gather statistics about what actually matters. It's able to inline things that a C compiler could never do. The kind of stuff that gets inlined in the Java world is pretty amazing. Then you tack onto that the way storage management works with the modern garbage collectors. With a modern garbage collector, storage allocation is extremely fast.

Masterminds of Programming

+23
Apr 10 '18

The Java JIT compiler is also lazy and adaptive.

Lazy

Being lazy, it only compiles methods when they are first called, instead of compiling the entire program (very useful if you don't use part of the program). Class loading actually helps speed up the JIT by letting it ignore classes it has not yet encountered.

Adaptive

Being adaptive, it first generates a quick and dirty version of the machine code, and then only goes back and does a thorough job if that method is used a lot. A small sketch of how you might observe both behaviours (the hot/cold split below is made up for illustration; -XX:+PrintCompilation is a standard HotSpot diagnostic flag, though its output format varies between JVM versions): the hot method typically shows up in the compilation log at a fast tier first and an optimizing tier later, while the cold method never appears because it is never called.
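    // Run with: java -XX:+PrintCompilation HotColdDemo
    public class HotColdDemo {
        // Called millions of times: compiled quickly at first, then recompiled
        // by the optimizing compiler once the JVM sees it is hot.
        static int hot(int x) { return x * 31 + 7; }

        // Never called: the lazy JIT never spends any time on it.
        static int cold(int x) { return x / 3 - 1; }

        public static void main(String[] args) {
            long total = 0;
            for (int i = 0; i < 10_000_000; i++) {
                total += hot(i);
            }
            System.out.println(total);
        }
    }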

+21
Dec 10 '09 at 4:59

In the end, it comes down to the fact that having more information leads to better optimization. In this case, the JIT has more information about the actual machine the code is running on (as Andrew mentioned), and it also has a lot of runtime information that is simply not available at compile time.

+11
Dec 10 '09 at 4:55

Java's ability to inline across virtual method boundaries and perform efficient interface dispatch requires runtime analysis before compiling; in other words, it requires a JIT. Since all methods are virtual and interfaces are used "everywhere", this makes a big difference.

+6
Dec 10 '09 at 4:54

Theoretically, a JIT compiler has an advantage over an AOT compiler if it has enough time and computing resources available. For example, if you have an enterprise application running for days and months on a multiprocessor server with plenty of RAM, the JIT compiler can produce better code than any AOT compiler.

If, instead, you have a desktop application, things like quick start-up and initial response time (where AOT shines) become more important, and there may also not be enough spare resources for the most advanced optimizations.

And if you have an embedded system with limited resources, JIT has no chance against AOT.

However, all of the above is theory. In practice, building such an advanced JIT compiler is much more complicated than building a decent AOT compiler. What about some practical evidence?

+5
Dec 10 '09 at 6:02

JITs can identify and exploit conditions that are only known at run time. A prime example is the virtual call elimination used by modern VMs: when the JVM hits an invokevirtual or invokeinterface, if only one class overriding the invoked method has been loaded, the VM can actually make that virtual call static and can therefore inline it. To a C program, on the other hand, a function pointer is always just a function pointer, and a call through it cannot be inlined (in the general case, anyway).

Here's a situation where the JVM can inline a virtual call:

    interface I {
        I INSTANCE = Boolean.getBoolean("someCondition") ? new A() : new B();
        void doIt();
    }

    class A implements I {
        public void doIt() { /* ... */ }
    }

    class B implements I {
        public void doIt() { /* ... */ }
    }

    // later...
    I.INSTANCE.doIt();

Assuming we don't create instances of A or B anywhere else, and that someCondition is set to true, the JVM knows that a call to doIt() always means A.doIt, so it can skip the method-table lookup and inline the call. A comparable construct in a non-JIT environment could not be inlined.

+5
Dec 10 '09 at 7:45

I think a significant part of this is the fact that the standard Java toolchain relies on a JIT compiler. How much time has been spent optimizing the JVM, compared to a Java-to-machine-code compiler?

+2
Dec 10 '09 at 7:27


