On 24 Oct 2014, at 16:21, Thierry Goubier <thierry.goubier@gmail.com> wrote:

2014-10-24 15:50 GMT+02:00 Clément Bera <bera.clement@gmail.com>:

The current x2 speed boost is due only to Spur, not to Sista. Sista will provide additional performance, but we still have things to do before it is production-ready. The performance gain reported is due to (from most important to least important):

- the new GC has less overhead; 30% of the execution time used to be spent in the GC.
- the new object format speeds up some VM-internal caches (especially inline caches for message sends, because object classes are now reached through an indirection into a class table).
- the new object format allows some C code to be converted into machine-code routines, including block creation, context creation, and the #at:put: primitive; this is faster because switching from jitted code to C and back to jitted code generates a little overhead.
- characters are now immediate objects, which speeds up String access.
- the new object format has a larger identity hash, which speeds up big hashed collections such as large Sets and Dictionaries.
- become: is faster.

All this is really cool :) And if I remember correctly, there is 64-bit support coming as well.

Will Spur also cover ARM?

Spur is an object format; it does not have anything to do with the underlying architecture (well, at least in theory… Eliot should be able to say more on this). Cog, on the other hand, is a JIT compiler, and it has everything to do with the architecture, so it is difficult to have it running on ARM (but there is work in that direction, so we hope it will be there eventually).

It looks like there is a misunderstanding (probably not you, Thierry, but since I've seen it from time to time, I take the chance to clarify): Spur is not a replacement for Cog; the two are orthogonal (in fact, Spur runs in the Stack VM too). The real new VM is not the "Spur" VM, it is the "Cog+Spur" VM.
cheers,
Esteban

Thierry

2014-10-24 15:20 GMT+02:00 kilon alios <kilon.alios@gmail.com>:

thanks max, i completely forgot about the esug videos, looks like i found what to watch during the weekend :D

On Fri, Oct 24, 2014 at 4:12 PM, Max Leske <maxleske@gmail.com> wrote:

On 24.10.2014, at 15:06, kilon alios <kilon.alios@gmail.com> wrote:

very nice

so any more information on this, how exactly does this optimization work and which kinds of data will benefit from it?

Clément's byte code set talk at ESUG: http://www.youtube.com/watch?v=e9J362QHwSA&index=64&list=PLJ5nSnWzQXi_6yyRLsMMBqG8YlwfhvB0X

Clément's Sista talk at ESUG (2 parts):

Eliot's Spur talk at ESUG (3 parts):

On Fri, Oct 24, 2014 at 3:47 PM, Sebastian Sastre <sebastian@flowingconcept.com> wrote:

remarkable!!!
congratulations for the impressive results
thanks for sharing!
sebastian
o/
> On 23/10/2014, at 17:40, Max Leske <maxleske@gmail.com> wrote:
>
> For those of you who missed this on IRC:
>
> henriksp: estebanlm: Care to run a small bench Cog vs Spur for me?
> [3:32pm] henriksp: int := ZnUTF8Encoder new.
> [3:32pm] henriksp: [int decodeBytes:#[67 97 115 104 44 32 108 105 107 101 32 226 130 172 44 32 105 115 32 107 105 110 103 0]] bench.
> [3:32pm] henriksp: had a 16x speedup with assembly implementation vs Cog, if it's 8x vs Spur, that's just really impressive
> [4:20pm] estebanlm: checking
> [4:21pm] estebanlm: Cog: 167,000 per second.
> [4:22pm] estebanlm: Cog[Spur]: 289,000 per second.
> [4:23pm] estebanlm: henriksp: ping
> [4:34pm] henriksp: 70% more work done, nice!
>
>
> Yay! :)
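
[Editorial note: the micro-benchmark quoted in the IRC log above can be rerun in a Pharo image; here is a minimal sketch assuming stock Pharo, where Zinc's ZnUTF8Encoder and BlockClosure>>bench are available. The throughput figures in the trailing comment are the ones reported in the log, not guaranteed on other machines.]

```smalltalk
"Re-run the bench from the log. The byte array is the UTF-8 encoding of
'Cash, like €, is king' followed by a trailing zero byte."
| decoder bytes |
decoder := ZnUTF8Encoder new.
bytes := #[67 97 115 104 44 32 108 105 107 101 32 226 130 172 44 32 105 115 32 107 105 110 103 0].
[ decoder decodeBytes: bytes ] bench.
"In the log this reported about 167,000 runs per second on plain Cog and
about 289,000 per second on Cog+Spur, i.e. roughly 70% more work done."
```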