So, not sure if this is necessary, but I just wanted to add some background on the problem and why bytecode-based (interpreted/JIT-compiled) languages were introduced.

Back in the day (1996/97) we used to program directly for a specific hardware platform, e.g. an Alpha or an Intel processor. Each of these processors had a different instruction set, and your code had to be compiled for the exact platform it was going to run on. So if, for some reason, you compiled for an Intel processor and then ran the program on an Alpha server, it simply would not work.

Platforms like Java, and later .NET and others, eliminated this problem: you compile your source once to platform-neutral bytecode, and that bytecode runs on any hardware platform where the Java runtime (the JVM) or the .NET Framework is installed. The translation to native machine code happens on the target machine itself, typically just in time when the program is first run, and an update to the program's bytecode simply triggers a fresh compilation there. This made it much easier for software vendors to deliver software and software updates to customers on any type of hardware platform, as long as the runtime itself was available for that platform.
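
As a minimal sketch of that "compile once, run anywhere" idea (the file and class names here are just for illustration):

    // Hello.java -- compiled once into platform-neutral bytecode (Hello.class)
    public class Hello {
        public static void main(String[] args) {
            // The same Hello.class file runs unchanged on any OS/CPU that has
            // a JVM; the JVM's JIT compiler translates the bytecode into
            // native machine code for that particular processor at run time.
            System.out.println("Hello from " + System.getProperty("os.name")
                    + "/" + System.getProperty("os.arch"));
        }
    }

You would compile it once, on any machine:

    javac Hello.java    # produces Hello.class (bytecode)
    java Hello          # runs wherever a JVM exists, regardless of the CPU

The key point is that javac never emits Intel or Alpha instructions; it only ever emits JVM bytecode, and each platform's JVM takes care of the last step to real machine code.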