Primer: Difference between javac and JIT
“A compiler or translator translates one language into another. Usually a high-level language (such as Java or C++ or whatever) gets translated into a lower-level language such as machine code or bytecode. The latter is actually machine code for a non-existent machine. After translation (or compilation) is done, you still have to execute or interpret the resulting language, or instruction set as it’s called when we’re talking low-level languages, to get your results. Now, how does all this apply to this mysterious Java bizniz? Suppose you write something like this:
int a = 42;
int b = 1;
int c = a + b;
The Javac compiler might translate this code into something like this:
move a, 42
move b, 1
add c, a, b
The stuff above represents the byte code generated by the Javac compiler. When your JVM starts, it interprets these simple instructions and gets the job done, i.e. it acts as if it were a simple CPU executing (which here is synonymous with interpreting) these simple instructions.
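The "move" instructions above are only an illustration; the byte code javac actually emits is stack-based. A minimal sketch (the class name `BytecodeDemo` is made up for this example) that you can compile and inspect yourself:

```java
// Hypothetical demo class; compile with `javac BytecodeDemo.java`
// and inspect the real bytecode with `javap -c BytecodeDemo`.
public class BytecodeDemo {

    // javac compiles this body into stack-based bytecode, roughly:
    //   bipush 42, istore_0, iconst_1, istore_1,
    //   iload_0, iload_1, iadd, istore_2, iload_2, ireturn
    static int add() {
        int a = 42;
        int b = 1;
        int c = a + b;
        return c;
    }

    public static void main(String[] args) {
        System.out.println(add()); // prints 43
    }
}
```

Instead of moving values into named variables, the JVM pushes them onto an operand stack, adds the top two entries, and stores the result back into a local variable slot.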
These simple instructions can be further translated to real machine code, say SPARC machine code or Pentium machine code. Once real machine code has been generated, there is no longer any need for the JVM interpreter to interpret those byte codes; the real CPU can handle the job, causing a major speed increase. Translating those byte codes to real machine code is the responsibility of the JIT compiler: it translates Java byte code to real machine code. Of course, this real machine code differs per processor, i.e. Pentium machine code cannot run on a SPARC processor or vice versa. That’s why those JIT compilers differ per platform.
Now, this is the choice every JVM faces: should it further translate those byte codes (using that JIT compiler), or should it interpret (or execute) them itself? JIT compilation is just what it says: just-in-time compilation. The JVM has to weigh interpreting those byte codes itself (which is slow) against compiling them into real machine code first (by activating the JIT compiler) and then letting the real CPU execute that machine code, and that is quite a choice to make.
Suppose a particular piece of byte code must be executed or interpreted. What to do? Interpret it, a bit slowly, or compile it to real machine code and let the CPU do the dirty job? What if this piece of code gets executed only once? Compiling it into real machine code would be a waste. But what if this piece of code happens to be the body of a loop that gets executed a zillion times? Compiling it into real machine code once wouldn’t be much overhead compared to the increase in speed. This is where the hotspot compilers come in, i.e. they try to make an educated guess about whether or not they should compile the byte code when the job has got to be done.
The behaviour of those hotspot compilers is quite fascinating: loops start off quite slowly, and after a couple of iterations the hotspot compiler sees that it’s dealing with a hot spot (hence the name) of code, so it translates the byte codes into machine code and the JVM can relinquish control and let the real CPU do the job.”