Does Java retain runtime optimizations between runs?

My professor ran an informal benchmark on a small program, and the Java timings were 1.7 seconds for the first run and 0.8 seconds for subsequent runs.

  • Is this related to loading the runtime environment into memory?

    OR

  • Or is Java optimizing the code and saving the results of those optimizations (sorry, I don't know the technical term for this)?

+4
6 answers

I agree that the performance difference observed by the poster is most likely caused by the disk latency involved in loading the JRE into memory. The Just-In-Time (JIT) compiler will not have a noticeable effect on such a small application.

Java 1.6u10 ( http://download.java.net/jdk6/ ) touches the runtime JARs in a background process (even when Java is not running) in order to keep the data in the disk cache. This significantly reduces startup times (which is a huge benefit for desktop applications, but probably matters little for server-side applications).

In large, long-running applications the JIT matters a great deal over time, but the time the JIT needs to accumulate enough statistics to kick in and optimize (5-10 seconds) is very, very short compared to the overall life of the application (most run for months on end). Saving and restoring JIT results is an interesting academic exercise, but the practical improvement is not very large (which is why the JIT team has focused more on things like GC strategies that minimize cache misses, etc.).

Precompiling the runtime classes rarely does much for desktop applications beyond what the 6u10 disk-cache preloading described above already provides.

+1

Ok, I found where I read about this. It is all from Learning Java (O'Reilly, 2005):

The problem with a traditional JIT compiler is that optimizing code takes time. So a JIT compiler can produce decent results but may suffer significant latency when the application starts up. This is generally not a problem for long-running server-side applications, but it is a serious problem for client-side software and applications that run on smaller devices with limited capabilities. To address this, Sun's compiler technology, called HotSpot, uses a trick called adaptive compilation. If you look at what programs actually spend their time doing, it turns out that they spend almost all of their time executing a relatively small part of the code again and again. The chunk of code that is executed repeatedly may be only a small fraction of the total program, but its behavior determines the program's overall performance. Adaptive compilation also allows the Java runtime to take advantage of new kinds of optimizations that simply cannot be done in a statically compiled language, which is why it is sometimes argued that Java code can run faster than C/C++ in some cases.

To take advantage of this fact, HotSpot starts out as a normal Java bytecode interpreter, but with a difference: it measures (profiles) the code as it is executing to see which parts are being executed repeatedly. Once it knows which parts of the code are crucial to performance, HotSpot compiles those sections into optimal native machine code. Since it compiles only a small portion of the program into machine code, it can afford to take the time necessary to optimize those portions. The rest of the program may not need to be compiled at all, just interpreted, saving memory and time. In fact, Sun's default Java VM can run in one of two modes, client and server, which tell it whether to emphasize fast startup time and memory conservation or flat-out performance.

The natural question to ask at this point is: why throw away all this good profiling information each time the application shuts down? Well, Sun has partially broached this topic with the release of Java 5.0, through the use of shared, read-only classes that are stored persistently in an optimized form. This significantly reduces both the startup time and the overhead of running many Java applications on a given machine. The technology for doing this is complex, but the idea is simple: optimize the parts of the program that need to go fast, and don't worry about the rest.

I'm a little curious how far Sun has gotten with this since Java 5.0.
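The adaptive behavior described in the excerpt can be observed with a tiny, unscientific sketch: time the same method repeatedly inside one JVM and watch later rounds get faster once HotSpot has compiled the hot loop. The class and method names below are made up for illustration; the absolute timings will vary by machine.

```java
// Minimal warm-up sketch: the same workload is timed several times in one
// JVM run. Early rounds run interpreted; once HotSpot decides the method is
// hot, it compiles it to native code and later rounds typically get faster.
public class WarmupDemo {
    // A simple hot method that HotSpot is likely to compile.
    static long sumOfSquares(int n) {
        long sum = 0;
        for (int i = 0; i < n; i++) {
            sum += (long) i * i;
        }
        return sum;
    }

    public static void main(String[] args) {
        for (int round = 0; round < 5; round++) {
            long start = System.nanoTime();
            long result = sumOfSquares(1_000_000);
            long micros = (System.nanoTime() - start) / 1_000;
            System.out.println("round " + round + ": " + micros
                    + " us (result=" + result + ")");
        }
    }
}
```

Note that this warm-up happens again on every JVM start, which is exactly why the question of persisting the results across runs comes up.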

+5

I don't know of any mainstream virtual machine that saves usage statistics between program invocations — but it is certainly an interesting possibility for future research.

What you see is almost certainly related to disk caching.

+4

I agree that this is most likely the result of disk caching.

By the way, the IBM Java 6 VM does contain an ahead-of-time (AOT) compiler. The code is not as highly optimized as JIT-compiled code, but it is stored across virtual machines, I believe in some kind of read-only shared memory. Its main benefit is improved startup performance. The IBM VM, by default, JIT-compiles a method after it has been called 1000 times. If it knows a method is going to be called 1000 times just during VM startup (think of a commonly used method such as java.lang.String.equals(...) ), then it is useful to store that method in the AOT cache so that it never has to waste time compiling it at runtime.

+2

You should describe how your benchmark was performed — especially at which point you start measuring the time.

If you are including the JVM startup time (which is useful when benchmarking the user experience, but not so useful for optimizing Java code), then this may be an effect of file-system caching, or it may be caused by a feature called "Java Class Data Sharing":

For Sun:

http://java.sun.com/j2se/1.5.0/docs/guide/vm/class-data-sharing.html

This is an option where the JVM saves a prepared image of the runtime classes to a file in order to allow faster loading (and sharing) of those classes on the next start. You can control this with -Xshare:on or -Xshare:off with the Sun JVM. The default is -Xshare:auto, which loads the shared-classes image if it is present and, if it is not, writes it on first startup, provided the directory is writable.

With IBM Java 5, this is even more powerful:

http://www.ibm.com/developerworks/java/library/j-ibmjava4/

I do not know of any mainstream JVM that saves JIT statistics.
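As for where measurement should start: one common way to exclude both JVM startup and JIT warm-up from an in-process benchmark is to run a number of untimed warm-up iterations first and only then time steady-state calls. A minimal sketch — the class name, workload, and iteration counts are illustrative, not a rigorous harness:

```java
// Sketch of separating JVM-startup and JIT-warm-up effects from the
// measurement: untimed warm-up iterations first, then timed iterations.
public class Bench {
    // The workload under test; trivially cheap, just for demonstration.
    static long work() {
        long acc = 0;
        for (int i = 0; i < 100_000; i++) {
            acc += i % 7;
        }
        return acc;
    }

    public static void main(String[] args) {
        // Warm-up: give the JIT a chance to compile work() before measuring.
        for (int i = 0; i < 1_000; i++) {
            work();
        }
        // Measurement: time only steady-state calls.
        int iterations = 1_000;
        long sink = 0; // consume results so the JIT cannot discard the loop
        long start = System.nanoTime();
        for (int i = 0; i < iterations; i++) {
            sink += work();
        }
        long nanosPerCall = (System.nanoTime() - start) / iterations;
        System.out.println("avg " + nanosPerCall + " ns/call (sink=" + sink + ")");
    }
}
```

If instead you time the whole process from the command line (including JVM startup), you are measuring a different thing — and that is where the disk-cache and class-data-sharing effects discussed above dominate.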

+1

The Java VM (actually, this may differ between JVM implementations) initially interprets the bytecode. Once it detects that a piece of code is going to run a fair number of times, it JIT-compiles it to native machine code so that it runs faster.

0

Source: https://habr.com/ru/post/1276721/

