On 29/10/2007, Andreas Raab andreas.raab@gmx.de wrote:
Igor Stasenko wrote:
Indeed. However, this is where E really helps you. Everything you can express in E is by definition deadlock-free (in the classic sense), so you stop worrying about these things and focus more on the solution to the problem at hand. Not so different from manual memory management, where before the advent of GC you always had that nagging feeling in the back of your head saying "and who's gonna know when and where to free that reference?". The point is that just as GC issues become tractable by not causing a segfault, concurrency issues become tractable by not having your system come to a screeching halt every time something goes wrong.
I'm not really sure that the GC example is a good parallel for 'automatic deadlock-free'. What prevents me (or anyone) from writing a deadlock like the following: [ self grabFork ] whileFalse: [ "do nothing" ]
I know this can be hard to grok, but the above is sort of the equivalent of asking "how do I free an object in Smalltalk". It doesn't make sense in the way you are asking the question, because in the above you assume that #grabFork is a remote message returning a value. In E (and Croquet) remote message invocations *do not return values* they return promises (futures) which get resolved only *after* the current message is completed. In other words, writing a loop like this:
p := self future doSomething. [p isResolved] whileFalse.
will *never* get past that line. Not once, not ever. So in order to do what you are trying to do, you need to write it like this:
p := self future doSomething. p whenComplete:[:value| ...].
Which allows the message invoking the above to complete, and once the promise is resolved, the associated resolver block is executed as a new message. BTW, the above *does* allow you to write a recursion similar to what you were trying to do, e.g.,
getMyFork table future grabFork whenComplete:[:haveIt| haveIt ifFalse:[self getMyFork]. "retry" ].
but it is still deadlock-free since there is no wait involved (neither "classic" nor "busy-wait"). In fact, we use recursions like the above in various places in Croquet.
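This whenComplete:-style retry can be modeled loosely in Python's asyncio (an analogy only, not Croquet's actual machinery; the Table class and its fail-twice-then-succeed schedule are invented for illustration). Here add_done_callback plays the role of whenComplete:, and each retry runs as a fresh message, so nothing ever blocks:

```python
import asyncio

class Table:
    """Invented stand-in for the table: grabFork fails twice, then succeeds."""
    def __init__(self):
        self.attempts = 0

    def grab_fork(self, loop):
        self.attempts += 1
        promise = loop.create_future()
        # The answer arrives as a separate, later message -- never inline.
        loop.call_soon(promise.set_result, self.attempts >= 3)
        return promise

def get_my_fork(table, loop, done):
    # Rough analogue of: table future grabFork whenComplete:[:haveIt| ...]
    promise = table.grab_fork(loop)

    def when_complete(p):
        if p.result():
            done.set_result(table.attempts)   # got the fork
        else:
            get_my_fork(table, loop, done)    # retry as a fresh message

    promise.add_done_callback(when_complete)

async def main():
    loop = asyncio.get_running_loop()
    table, done = Table(), loop.create_future()
    get_my_fork(table, loop, done)
    return await done    # the event loop stays free between retries

result = asyncio.run(main())
print(result)  # 3: succeeded on the third attempt, without ever waiting
```

Each failed attempt simply registers another callback and returns, which is why there is no deadlock and no busy-wait in the pattern.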
See, unless you make all message sends in the language futures, you can't guarantee that some code won't end up with locking semantics. This is exactly what GC does - it revokes all manual memory management control from the developer. But can we do the same with all message sends?
Let's see what happens if we have only future sends. Then the code: a print. b print.
will not guarantee that a is printed before b. Now you must ensure that you preserve imperative semantics, which may be done like the following: futureA := a print. futureA whenComplete: [ b print ].
Yes, we can make the 'futureA whenComplete:' check implicit (by modifying the VM), and then we can preserve old code. But do we really need futures everywhere?
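The ordering problem, and the whenComplete: chaining that restores imperative order, can be sketched in Python's asyncio (an analogy only; print_obj and the order list are invented for illustration, with appending to a list standing in for printing):

```python
import asyncio

async def main():
    loop = asyncio.get_running_loop()
    order = []

    def print_obj(name):
        """A 'future send' of #print: the work runs as a later message."""
        promise = loop.create_future()
        def deliver():
            order.append(name)          # the actual "printing"
            promise.set_result(None)
        loop.call_soon(deliver)
        return promise

    # futureA := a print.  futureA whenComplete: [ b print ].
    future_a = print_obj('a')
    future_a.add_done_callback(lambda _: print_obj('b'))

    await asyncio.sleep(0.01)           # let the queued messages run
    return order

result = asyncio.run(main())
print(result)  # ['a', 'b']: 'b' is sent only after 'a' has completed
```

Without the chaining, the two sends would merely both be queued, and nothing in the futures model by itself would force 'a' to finish before 'b' begins.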
As I personally see it, an imperative coding style stands on the opposite side from future message sends. If we use the first, it's difficult to effectively support the second, and vice versa.
Either we give up the imperative style and use something different that fits better with futures, or we give up futures. Or we use both of them, mixed (which I think is most appropriate). But then, stating that such a system can be really lock-free is wrong, because it depends on the decisions of the concrete developer and his code.
There is no language (except ones which don't have loops/branches) that prevents you from writing this. And the example above is 'busy waiting', which is even worse than waiting on semaphores, because it wastes CPU resources evaluating the same expression until some state is changed by an external entity.
You can *write* it, but it makes about as much sense as writing "Object new free". Because the promise will not resolve, not ever, while you are "busy-waiting" (just as the object will not get freed when you send the free message or assign nil to a slot) so you'll very quickly abstain from that practice ;-)
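Why the promise can never resolve during a busy-wait can be sketched in Python's asyncio (an analogy; the spin is bounded here only so the example terminates, where the real [p isResolved] whileFalse would spin forever): the resolution is itself a queued message, so it cannot run until the current message completes.

```python
import asyncio

async def main():
    loop = asyncio.get_running_loop()
    promise = loop.create_future()
    # The resolution is queued as a *later* message ...
    loop.call_soon(promise.set_result, 'fork')

    # ... so spinning inside the current message can never observe it.
    spins = 0
    while not promise.done() and spins < 100_000:
        spins += 1
    busy_wait_saw_it = promise.done()    # still False after all that spinning

    # Registering a callback and letting the current message end does work:
    seen = []
    promise.add_done_callback(lambda p: seen.append(p.result()))
    await asyncio.sleep(0)   # current message completes; resolution runs
    await asyncio.sleep(0)   # then the callback runs
    return busy_wait_saw_it, seen

result = asyncio.run(main())
print(result)  # (False, ['fork'])
```

The busy-wait and the resolution live in the same single-threaded message queue, so the wait starves the very event it is waiting for; handing control back is the only way forward.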
So, unless a developer writes better code, we will have the same issues.
Absolutely not. See above. There is no deadlock in it.
As for GC - you have automatic memory management instead of manual. But there's no automatic algorithm management, and never will be, in any language :)
And what's that supposed to mean?
I pointed out that futures, as an 'automatic lock-free' approach, are not quite parallel to 'automatic memory management by GC'.
Cheers,
- Andreas