Hi Clément,

On Sat, Dec 14, 2013 at 12:04 AM, Clément Bera <bera.clement@gmail.com> wrote:

2013/12/13 Eliot Miranda <eliot.miranda@gmail.com>

On Fri, Dec 13, 2013 at 1:07 PM, askoh <askoh@askoh.com> wrote:

I am not a VM guy. But, is the Smalltalk in a C World article compiling
Smalltalk to machine code to run without the VM?

He talks about the VM being a relic of the past. Is that true?

Is the Java VM a relic of the past?  Given portable devices, is compiled code a relic of the past?  Is a safe development environment with fast compile times a thing of the past?  Their own conclusions imply that the answer is "not yet":

What do you mean by "not yet"? Do you think that in 10 or 20 years VMs will be obsolete?

Personally no.  In fact I think the trend is towards ever more virtualization, ever more language-to-language translation, ever more modelling of computation (for example reversible debuggers).  The cloud actually implies that computation is better managed virtually, because hardware improves, crashes, moves, etc., and if there are long-running computations, tying them down to one piece of hardware seems limiting.  Azul Systems already builds Java-specific servers that allow their processors to evolve their ISA over time.

There's one detail I am not sure about. By "JIT" in this article (http://m.pocketnow.com/2013/11/13/dalvik-vs-art), does he mean the bytecode-to-native-code generator only, or the native code generator plus inline cache management plus the adaptive recompiler?

I don't know, and I don't think the author knows enough to differentiate.  It's not a well-written article.  He claims AOT is a new concept; it's not: Microsoft's .NET has done this for a while.  He claims "This code is mostly uncompiled. That means it’s slower than compiled code would be, but your device gets the “insulation” advantages that VMs provide", which is comparing apples to oranges (one can produce safe native code, and a VM is an example of something that does; there's no fundamental reason why jitted code is slower than ahead-of-time-compiled code, and in some cases it can be faster).  So I don't think it's worth worrying about.  I'd rather read a well-written paper on KitKat than spend any more effort on this article.
I'm wondering: even if they store their code as native code instead of bytecode, do they have some kind of native code generator for adaptive recompilation to reach such performance?

And how do they manage their inline caches? As all methods are native, a monomorphic inline cache could be promoted to a PIC due to one very rare case, and then, since they always keep the same native code, that send site would be slower forever. Does this mean they would need to empty inline caches from time to time?
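To make the concern concrete, here is a minimal sketch (my own illustration, not any real VM's implementation) of a send site that starts out monomorphic, gets promoted to a polymorphic inline cache (PIC) by a single rare receiver class, and can be relinked after a reset. The class names, the PIC size limit, and the `reset` operation are all assumptions for illustration:

```python
class SendSite:
    """A send site with an inline cache, sketched in Python for illustration."""
    PIC_LIMIT = 6  # small fixed PIC size, as is typical (assumed value)

    def __init__(self, selector):
        self.selector = selector
        self.entries = []          # cached (receiver_class, method) pairs
        self.state = "unlinked"    # unlinked -> monomorphic -> polymorphic

    def send(self, receiver, *args):
        cls = type(receiver)
        # Fast path: scan the cached classes in order. Once the site is a
        # PIC, every send pays for this scan, even if the extra class was
        # seen only once -- the "slower forever" problem for frozen n-code.
        for cached_cls, method in self.entries:
            if cached_cls is cls:
                return method(receiver, *args)
        # Miss: full method lookup, then extend the cache.
        method = getattr(cls, self.selector)
        if len(self.entries) < self.PIC_LIMIT:
            self.entries.append((cls, method))
            self.state = "monomorphic" if len(self.entries) == 1 else "polymorphic"
        else:
            self.state = "megamorphic"  # give up caching, look up every time
        return method(receiver, *args)

    def reset(self):
        # Emptying the cache lets a site that went polymorphic because of
        # a rare receiver relink as monomorphic for the common case.
        self.entries = []
        self.state = "unlinked"

class A:
    def value(self): return "A"

class B:
    def value(self): return "B"

site = SendSite("value")
site.send(A())   # links monomorphic on A
site.send(B())   # one rare B promotes the site to a PIC
print(site.state)  # polymorphic
site.reset()
site.send(A())   # relinks monomorphic on the common case
print(site.state)  # monomorphic
```

Without some mechanism like `reset` (or regenerating the native code), the one rare B receiver taxes every subsequent send through that site, which is exactly the question above.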

"Our current approach lacks some of the advantages of Smalltalk. The most obvious of these is debugging. Our current implementation emits very sparse DWARF debugging information and so is fairly limited in terms of debugging support even in comparison to C, and therefore a long way behind the state of the art for Smalltalk circa 1980. This is currently the focus of ongoing work. Once this is done, then implementing things like thisContext making use of debug metadata become possible. In our current implementation, run-time introspection is only available for objects and variables bound to blocks, not for activation records.

Closely related is the rest of the IDE. In traditional Smalltalk implementations, the IDE is closely integrated with the execution environment. GNU Smalltalk is the major exception, and provides a model close to ours. Building a good IDE and debugger is beyond the scope of the LanguageKit project, but building these tools on top of LanguageKit is a goal of Étoilé."


All the best,
Aik-Siong Koh

Sent from the Squeak VM mailing list archive at Nabble.com.