I just uploaded 4 packages to the inbox:
- Compiler-FUTURESjcg.106
- Kernel-FUTURESjcg.330
- Network-FUTURESjcg.44
- System-FUTURESjcg.198
It seems to be OK to load in that order for testing, but if we decide to include this in the trunk, I'll devise a bullet-proof load order (not to mention writing some unit tests). But hey, at least there are good class comments!
What is this about? It's about convenient syntax for sending and obtaining results from asynchronous messages. The initial use-case in the trunk image is to have shorter code where we currently use #addDeferredUIMessage:; you can see a few such transformations in the Network and System packages. It probably doesn't pay its way with this use-case alone, but the exciting future use-cases are to support more advanced concurrency constructs.
This code originated in Croquet as a convenient syntax for sending messages to replicated objects over the internet. We continue to use (an evolved version of) it at Qwaq/Teleplace, and have also layered other concurrency constructs on top of it. Many more uses are possible. For example, it would be very useful in a Hydra system to send messages to objects residing in other object-memories.
How does it work? This is an extension to the Compiler, plus a small amount of support code in Object and Project. Eliot left hooks for FutureNode in Compiler, so all we need to do is add FutureNode to the system (good, because I'm no compiler expert!).
How do you use it? It's getting late, so I'll just paste the class comment from FutureNode.
<class comment>
Compile-time transformation of #future and #future: messages. Use is best described through examples:
	receiver future doSomething: arg1 withArgs: arg2.
	(receiver future: 2000) doSomethingElse
The first means to immediately schedule #doSomething:withArgs: for asynchronous evaluation. The second means to wait 2000 milliseconds before scheduling #doSomethingElse for asynchronous evaluation.
These are transformed into either #futureDo:at:args: or #futureSend:at:args:, depending on whether the result is used. Let's look at a few examples.
	[receiver future foo. 2+2] value.
	true ifTrue: [^receiver future foo].
	arraySize := receiver future getArray wait size.

In the first case, the result is never used, so the message #futureDo:at:args: is generated. In the second case, the result is answered from the current method. Since we don't do any cross-method analysis, we have to assume that the result is needed for a later computation. The result is provided in the form of a Promise, which will resolve to a value when the asynchronous evaluation has completed. Creating and resolving this Promise is the responsibility of #futureSend:at:args:, which is generated instead of #futureDo:at:args: when code-analysis indicates that the result of the message might be used. The third example is another one where #futureSend:at:args: is generated.
See the default implementations of #futureDo:at:args: and #futureSend:at:args: in Object. Subclasses are free to override the default implementations to achieve specific effects. For example, this functionality originated in the Croquet class TFarRef. If you have a TFarRef to a replicated object, then sending 'aTFarRef future foo' results in a message being sent over the network to each replica of the object referenced by aTFarRef. We might also use far-refs, for example, to send a message to an object in another Hydra object-memory.
</class comment>
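To make the transformation concrete, here is a rough sketch of the desugaring. The selectors come from the class comment above, but the exact encoding of the untimed case's delay (I use 0) and of the argument array is my guess, not necessarily what FutureNode actually emits:

	"A send whose result might be used, e.g."
	promise := (receiver future: 2000) getArray.
	"compiles roughly as if one had written:"
	promise := receiver futureSend: #getArray at: 2000 args: #().

	"A send whose result is provably unused, e.g."
	receiver future doSomething: arg1 withArgs: arg2.
	"becomes something like:"
	receiver futureDo: #doSomething:withArgs: at: 0 args: {arg1. arg2}.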
I should probably quit now, because I need my beauty-sleep. Hopefully this is enough to get a discussion going; I'll respond tomorrow.
Cheers, Josh
Hi Josh -
The code looks good but I'm noticing that you omitted the future proxy and the Object future/future: implementation. Any particular reason for it? I would prefer having those because otherwise future becomes a keyword instead of an (inlined) message.
Cheers, - Andreas
On Dec 17, 2009, at 10:30 AM, Andreas Raab wrote:
Hi Josh -
The code looks good but I'm noticing that you omitted the future proxy and the Object future/future: implementation. Any particular reason for it? I would prefer having those because otherwise future becomes a keyword instead of an (inlined) message.
No particular reason, just an oversight. I'll add it tonight.
Comments from others?
Cheers, Josh
Josh Gargus wrote:
Comments from others?
I think people are probably not completely clear what kinds of simplifications future messages allow. I just found a great example to illustrate the difference: Dynamic scroll bar delays.
The current situation
=====================
Currently, if you look at ScrollBar you'll find that continuous scrolling is handled in a fairly complex way via:

doScrollDown
	"Scroll automatically while mouse is down"
	(self waitForDelay1: 200 delay2: 40) ifFalse: [^ self].
	self setValue: (value + scrollDelta + 0.000001 min: 1.0)
and then
waitForDelay1: delay1 delay2: delay2
	"Return true if an appropriate delay has passed since the last scroll operation. The delay decreases exponentially from delay1 to delay2."
	| now scrollDelay |
	timeOfLastScroll isNil ifTrue: [self resetTimer]. "Only needed for old instances"
	now := Time millisecondClockValue.
	(scrollDelay := currentScrollDelay) isNil ifTrue: [scrollDelay := delay1 "initial delay"].
	currentScrollDelay := scrollDelay * 9 // 10 max: delay2. "decrease the delay"
	timeOfLastScroll := now.
	^true
#doScrollDown itself is called repeatedly via #step (see also #scrollDownInit, #step, #wantsSteps, #stepTime etc). And of course there are variants on this theme for scrolling up, down, page up and down and more.
Using timed future: messages
============================
Now let's look at the implementation when using a timed future instead:

ScrollBar>>scrollDownInit
	downButton borderInset.
	keepScrolling := true.
	self doScrollDown: 200. "i.e., delay1"

ScrollBar>>doScrollDown: delay
	"keep scrolling as long as the mouse is down"
	self setValue: (value + scrollDelta + 0.000001 min: 1.0).
	keepScrolling ifTrue: [
		(self future: delay) doScrollDown: (delay * 9 // 10 max: 40)]. "delay2"
That's it. No #step, no #stepTime, no #wantsSteps, no #waitForDelay etc. Just a message shot into the future by a few milliseconds. We call the above pattern "recursion in time" since it sends messages recursively to itself at some later point in time.
Using non-timed future messages
===============================
And if you don't like that pattern, there's an interesting alternative using non-timed futures and an explicitly forked block:

ScrollBar>>scrollDownInit
	downButton borderInset.
	keepScrolling := true.
	[self doScrollDown] fork.

ScrollBar>>doScrollDown
	"keep scrolling as long as the mouse is down"
	delay := 200.
	[keepScrolling] whileTrue: [
		self future setValue: (value + scrollDelta + 0.000001 min: 1.0).
		(Delay forMilliseconds: delay) wait.
		delay := delay * 9 // 10 max: 40]. "delay2"
The example has a small bug which I'll ignore for educational reasons - if you can spot it, you're good. ;-) In this example we use an untimed future message to synchronize the scrolling loop with the foreground morphic process. It's not quite as elegant as the first example but it illustrates one of the primary uses for future messages - lock-free interprocess communication. Future messages allow you to use concurrency in cases where it would be very hard to synchronize with locks. In the above, I would have to guard all modifications of the scrollbar's value with some lock; using the future message instead lets us introduce a level of concurrency that would be difficult to achieve otherwise.
Cheers, - Andreas
On Thu, Dec 17, 2009 at 4:28 PM, Andreas Raab andreas.raab@gmx.de wrote:
Future messages allow you to use concurrency in cases where it would be very hard to synchronize with locks - in the above I would have to guard all the modifications of the scrollbar's value by some lock but using the future message allows us to introduce a level of concurrency that would be difficult to achieve otherwise.
What is the scope of this safety with respect to shared state and mutation? Are you essentially treating the entire image as a vat (in the E sense)? Is it per morphic project? Or something else?
- Stephen
A combined response:
Stephen Pair wrote:
What is the scope of this safety with respect to shared state and mutation? Are you essentially treating the entire image as a vat (in the E sense)? Is it per morphic project? Or something else?
It depends on the implementation. Josh's version is deliberately simple to get people accustomed to the idea and uses Project>>addDeferredUIMessage: for delivery. We'll definitely want to improve on that and introduce separate event loops and possibly even VATs but I think going slowly is advantageous because it helps people to learn.
Colin Putney wrote:
[regarding the bug] I'll take that as a challenge. :-) I assume that if the mouse button is released, keepScrolling will be set to false. If the button is then pressed again before the delay wakes up, you'll have two processes doing the scrolling. It would seem that they'd cause scrolling to happen twice as fast.
Good one! But this could conceivably be fixed by implementing #finishedScrolling properly (for a certain meaning of "properly"). The bug I'm referring to is that in
self future setValue: (value + scrollDelta + 0.000001 min: 1.0).
we're reading both value and scrollDelta without synchronization. Properly written this should look more like:
[self setValue: (value + scrollDelta + 0.000001 min: 1.0)] future value.
to defer binding value and scrollDelta, but I'm sure you can see why I prefer the alternative for illustrating the concept. It does point out an interesting property of future messages, namely that arguments are bound early by default. This is a question of choice, but I find it advantageous in practice because the cases where you'd want to late-bind them are relatively rare.
Bert Freudenberg wrote:
Do you have experience with using alternate main loops? Like, in a
server where you might want to block the UI thread, we still might want to drain the future queue.
Yes. We use EventLoop instances to encapsulate processes and their message queues. Those run in separate processes and communicate via future messages. There are a *lot* of those in our servers (thus the point about lock-free communication; regardless of where you are, you can always just fire a foo future bar without worrying about locking etc).
Igor Stasenko wrote:
Hmm.. I can't see how futures help to deal with concurrency. Unless there are some details which I don't see. The semantics of 'future' guarantee that the message will eventually be sent, but there is no guarantee that message order will be preserved, e.g.:
Messages delivered from the same unit of concurrency (A) to the same unit of concurrency (B) are ordered. As a consequence, a series of

	self future foo.
	self future bar.
	self future baz.

is always well-ordered (foo first, then bar, then baz). For the example (all messages inside a single concurrency unit) the ordering is even more strict than that: bar and baz *will* get executed before any future messages sent from executing foo.
And if some other code is poking at your data and gets interrupted to handle a future message send, you might still need synchronization, if both access the same state.
Yes, without stronger encapsulation (which we use for example in Croquet islands) there is still the chance of introducing "accidental sharing" (just as illustrated in the bug above). However, the main advantage is that in practical situations it's *always* safe to just use "self future foo" if you don't know whether the code you're in is executed from the same concurrency unit or not. Classic example: if you don't know whether the logging code can be executed from some other thread, just use "Transcript future show: 'Hello World'". This is safe no matter whether you run it from a background process or from the Morphic UI process.
Cheers, - Andreas
2009/12/18 Andreas Raab andreas.raab@gmx.de:
It depends on the implementation. Josh's version is deliberately simple to get people accustomed to the idea and uses Project>>addDeferredUIMessage: for delivery. We'll definitely want to improve on that and introduce separate event loops and possibly even VATs but I think going slowly is advantageous because it helps people to learn.
I don't know much about the implementation details, but binding every future send to a single process - the Morphic UI process (which, of course, relaxes the concurrency problems) - creates a potential bottleneck, since all future sends use a single thread for evaluation. That means:

	foo future fooz.
	bar future baz.

will be unable to run concurrently, despite the potential to run safely without interfering with each other. So the given scheme ends up using a single global synchronization semaphore to handle all future sends, and that's not very good.
But I agree, it's simple; yet it is far from being useful for employing highly scalable concurrent schemes.
On Dec 18, 2009, at 2:35 AM, Igor Stasenko wrote:
I don't know much about the implementation details, but binding every future send to a single process - the Morphic UI process (which, of course, relaxes the concurrency problems) - creates a potential bottleneck, since all future sends use a single thread for evaluation. That means:

	foo future fooz.
	bar future baz.

will be unable to run concurrently,
This assumes that the two messages are scheduled for execution on the same event-loop. In practice, we have many dozens of distinct event loops, each running in its own Process. The trick is to be able to determine on which event-loop a future message should be scheduled. We commonly use two approaches, but there is lots of room for exploration. One approach is to use a per-Process property to look up the event-loop; this isn't that different from asking "Project current". Another is for an object to directly know where to schedule a message. For example,
(aHydraProxy future doSomeWork: input) whenResolved: [:result | self future workWasFinished: input withResult: result]
could cause the #doSomeWork: message to be sent across a Hydra channel, where it would be scheduled for execution on the appropriate event-loop. When the work is finished, the remote Hydra image sends the result back on another channel, causing the promise to be resolved. We have real concurrency even if #workWasFinished:withResult: is scheduled on the Morphic loop (again, it needn't be).
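Josh's example leans on Promise>>whenResolved:. For readers following along, here is a minimal sketch of how such a Promise might work; the instance variables, the mutex, and the #resolveWith: selector are my assumptions for illustration, not the actual trunk code:

	Promise >> whenResolved: aBlock
		"Run aBlock with the result once the asynchronous send completes;
		if the promise is already resolved, run it immediately."
		mutex critical: [
			isResolved
				ifTrue: [aBlock value: value]
				ifFalse: [resolvers add: aBlock]]

	Promise >> resolveWith: anObject
		"Called by the scheduling machinery when the result arrives."
		mutex critical: [
			value := anObject.
			isResolved := true.
			resolvers do: [:each | each value: anObject].
			resolvers := OrderedCollection new]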
Cheers, Josh
On 2009-12-17, at 1:28 PM, Andreas Raab wrote:
Using timed future: messages
Now let's look at the implementation when using a timed future instead:
ScrollBar>>scrollDownInit
	downButton borderInset.
	keepScrolling := true.
	self doScrollDown: 200. "i.e., delay1"

ScrollBar>>doScrollDown: delay
	"keep scrolling as long as the mouse is down"
	self setValue: (value + scrollDelta + 0.000001 min: 1.0).
	keepScrolling ifTrue: [
		(self future: delay) doScrollDown: (delay * 9 // 10 max: 40)]. "delay2"
That's it. No #step, no #stepTime, no #wantsSteps, no #waitForDelay etc. Just a message shot into the future by a few milliseconds. We call the above pattern "recursion in time" since it sends messages recursively to itself at some later point in time.
We want these future sends to happen in the Morphic UI process, so the Morphic implementation of #future (via #futureDo:at:args:) implements this via #addDeferredUIMessage: ? Is there some process that waits on a delay and then queues the deferred UI message?
Using non-timed future messages
And if you don't like that pattern, there's an interesting alternative using non-timed futures and rather an explicitly forked block:
ScrollBar>>scrollDownInit
	downButton borderInset.
	keepScrolling := true.
	[self doScrollDown] fork.

ScrollBar>>doScrollDown
	"keep scrolling as long as the mouse is down"
	delay := 200.
	[keepScrolling] whileTrue: [
		self future setValue: (value + scrollDelta + 0.000001 min: 1.0).
		(Delay forMilliseconds: delay) wait.
		delay := delay * 9 // 10 max: 40]. "delay2"
The example has a small bug which I'll ignore for educational reasons - if you can spot it, you're good. ;-)
I'll take that as a challenge. :-) I assume that if the mouse button is released, keepScrolling will be set to false. If the button is then pressed again before the delay wakes up, you'll have two processes doing the scrolling. It would seem that they'd cause scrolling to happen twice as fast.
In this example we use an untimed future message to synchronize the scrolling loop with the foreground morphic process. It's not quite as elegant as the first example but illustrates one of the primary uses for future messages - lock-free interprocess communication. Future messages allow you to use concurrency in cases where it would be very hard to synchronize with locks - in the above I would have to guard all the modifications of the scrollbar's value by some lock but using the future message allows us to introduce a level of concurrency that would be difficult to achieve otherwise.
Pretty interesting. I'm a bit ambivalent about a magic compiler change like this, but you make a pretty strong case for its usefulness.
Colin
On Dec 17, 2009, at 2:13 PM, Colin Putney wrote:
We want these future sends to happen in the Morphic UI process, so the Morphic implementation of #future (via #futureDo:at:args:) implements this via #addDeferredUIMessage: ?
That's right.
Is there some process that waits on a delay and then queues the deferred UI message?
Yes. You can see this in #future:do:at:args: and #future:send:at:args:
The current implementation forks a separate process for each #future: message (for unary #future messages, no fork occurs; the message is scheduled immediately). This means that the strict ordering properties that Andreas describes in another message only hold for unary #future messages. I consider this to be a bug in the current implementation, although it doesn't matter for typical uses of #future: (such as the ScrollBar example).
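The per-message fork described above can be pictured with a hypothetical helper; the name, signature, and body here are mine for illustration (see the actual #future:do:at:args: in the image for the real thing):

	Object >> futureDoAfter: msecs perform: aSelector withArguments: args
		"Fork a process that waits out the delay, then defers the send
		to the Morphic UI process. Because each timed send gets its own
		process, strict ordering between timed sends is not guaranteed."
		[(Delay forMilliseconds: msecs) wait.
		 Project current addDeferredUIMessage:
			[self perform: aSelector withArguments: args]] fork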
However, even if we were to use a single scheduler process, there is still some ambiguity about what should be the execution order of #future: messages. For example:
	(self future: 2000) foo.
	(self future: 1999) bar.
Which should execute first? If a high-priority process interrupts, then the second line might not run for another 4ms. Croquet ensures a deterministic order by not advancing the clock while executing queued messages in an island, and by making it illegal to send #future: messages from outside of the island (only #future messages are allowed).
Despite this ambiguity, we can still do better than the current implementation.
Pretty interesting. I'm a bit ambivalent about a magic compiler change like this, but you make a pretty strong case for its usefulness.
That's why Andreas asked me to add the FutureMaker class and the Object future/future: implementations. The same semantics are implemented without any compiler magic; the addition of FutureNode to the compiler is then merely an optimization.
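The proxy-based (non-compiler) version can be sketched in a few lines; the class shape and variable names are my assumptions, and the real FutureMaker may differ:

	Object >> future
		"Answer a proxy that turns the next message send into a future send."
		^FutureMaker new setReceiver: self delta: 0

	FutureMaker >> doesNotUnderstand: aMessage
		"Reify the message and reschedule it asynchronously, answering a Promise."
		^receiver futureSend: aMessage selector at: delta args: aMessage arguments

(FutureMaker would typically be a subclass of ProtoObject, so that almost every send falls through to #doesNotUnderstand:.)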
Cheers, Josh
BTW, I just noticed Andreas's terminology of "timed" #future: messages vs "untimed" #future messages... I should have used it in my response to Colin. For years I've been saying "future" vs "future colon", so I have a bit of a habit to break, but this terminology is much better.
(I'm pretty sure I've heard Andreas say "future colon" too. Haven't I? :-) )
Cheers, Josh
On Dec 18, 2009, at 1:01 AM, Josh Gargus wrote:
On Dec 17, 2009, at 2:13 PM, Colin Putney wrote:
On 2009-12-17, at 1:28 PM, Andreas Raab wrote:
Using timed future: messages
Now let's look at the implementation when using a timed future instead:
ScrollBar>>scrollDownInit
    downButton borderInset.
    keepScrolling := true.
    self doScrollDown: 200. "i.e., delay1"
ScrollBar>>doScrollDown: delay
    "keep scrolling as long as the mouse is down"
    self setValue: (value + scrollDelta + 0.000001 min: 1.0).
    keepScrolling ifTrue: [
        (self future: delay) doScrollDown: (delay * 9 // 10 max: 40). "delay2" ].
That's it. No #step, no #stepTime, no #wantsSteps, no #waitForDelay etc. Just a message shot into the future by a few milliseconds. We call the above pattern "recursion in time" since it sends messages recursively to itself at some later point in time.
We want these future sends to happen in the Morphic UI process, so the Morphic implementation of #future (via #futureDo:at:args:) implements this via #addDeferredUIMessage:?
That's right.
Is there some process that waits on a delay and then queues the deferred UI message?
Yes. You can see this in #future:do:at:args: and #future:send:at:args:
The current implementation forks a separate process for each #future: message (for unary #future messages, no fork occurs; the message is scheduled immediately). This means that the strict ordering properties that Andreas describes in another message only hold for unary #future messages. I consider this to be a bug in the current implementation, although it doesn't matter for typical uses of #future: (such as the ScrollBar example).
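The single scheduler process Josh alludes to can be made concrete. The sketch below is Python, not Squeak (the class and method names are invented for illustration): one scheduler thread drains a heap keyed by (due time, sequence number), so sends with equal delays run in the order they were issued — the ordering property that a fork-per-send implementation cannot guarantee.

```python
import heapq
import itertools
import threading
import time

class FutureScheduler:
    """Single scheduler thread draining a time-ordered heap.

    Entries are keyed by (due_time, sequence_number): two sends with the
    same delay always execute in the order they were scheduled.
    """
    def __init__(self):
        self._heap = []
        self._seq = itertools.count()
        self._cv = threading.Condition()
        threading.Thread(target=self._run, daemon=True).start()

    def future_send(self, delay_ms, fn, *args):
        with self._cv:
            heapq.heappush(self._heap,
                           (time.monotonic() + delay_ms / 1000.0,
                            next(self._seq), fn, args))
            self._cv.notify()

    def _run(self):
        while True:
            with self._cv:
                while not self._heap:
                    self._cv.wait()
                due, _, fn, args = self._heap[0]
                now = time.monotonic()
                if due > now:
                    # Sleep until the earliest entry is due (or a new send arrives).
                    self._cv.wait(due - now)
                    continue
                heapq.heappop(self._heap)
            fn(*args)  # run outside the lock

sched = FutureScheduler()
order = []
for i in range(5):
    sched.future_send(50, order.append, i)  # equal delays: FIFO is preserved
```

The sequence number is the essential detail: a heap alone does not preserve insertion order for equal keys.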
However, even if we were to use a single scheduler process, there is still some ambiguity about what should be the execution order of #future: messages. For example:
(self future: 2000) foo.
(self future: 1999) bar.
Which should execute first? If a high-priority process interrupts, then the second line might not run for another 4ms. Croquet ensures a deterministic order by not advancing the clock while executing queued messages in an island, and by making it illegal to send #future: messages from outside of the island (only #future messages are allowed).
Despite this ambiguity, we can still do better than the current implementation.
In this example we use an untimed future message to synchronize the scrolling loop with the foreground morphic process. It's not quite as elegant as the first example, but it illustrates one of the primary uses for future messages: lock-free interprocess communication. Future messages let you use concurrency in cases where it would be very hard to synchronize with locks. In the above, I would have to guard every modification of the scrollbar's value with a lock; using a future message instead introduces a level of concurrency that would be difficult to achieve otherwise.
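The lock-free pattern described here can be illustrated outside Squeak. In this Python sketch (all names are invented; `future` and `run_ui_loop` stand in for #future and the Morphic deferred-UI queue), a worker thread never mutates the scrollbar directly — it only enqueues messages that a single UI loop executes, so no lock around `value` is needed.

```python
import queue
import threading

class ScrollBar:
    """All mutation happens on one 'UI' thread draining a message queue,
    so `value` needs no lock (cf. #addDeferredUIMessage: in Morphic)."""
    def __init__(self):
        self.value = 0.0
        self._inbox = queue.Queue()

    def future(self, selector, *args):
        # Analogous to an untimed #future send: enqueue, don't execute.
        self._inbox.put((selector, args))

    def run_ui_loop(self, n_messages):
        # The single consumer: executes queued sends in FIFO order.
        for _ in range(n_messages):
            selector, args = self._inbox.get()
            getattr(self, selector)(*args)

    def set_value(self, v):
        self.value = min(v, 1.0)

bar = ScrollBar()
# The worker posts updates but never touches `value` itself.
worker = threading.Thread(
    target=lambda: [bar.future("set_value", i / 10.0) for i in range(15)])
worker.start()
worker.join()
bar.run_ui_loop(15)
```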
Pretty interesting. I'm a bit ambivalent about a magic compiler change like this, but you make a pretty strong case for its usefulness.
That's why Andreas asked me to add the FutureMaker class and the Object future/future: implementations. The same semantics are implemented without any compiler magic; the addition of FutureNode to the compiler is then merely an optimization.
Cheers, Josh
Colin
On Fri, Dec 18, 2009 at 4:01 AM, Josh Gargus josh@schwa.ca wrote:
However, even if we were to use a single scheduler process, there is still some ambiguity about what should be the execution order of #future: messages. For example:
(self future: 2000) foo.
(self future: 1999) bar.
Which should execute first? If a high-priority process interrupts, then the second line might not run for another 4ms. Croquet ensures a deterministic order by not advancing the clock while executing queued messages in an island, and by making it illegal to send #future: messages from outside of the island (only #future messages are allowed).
Here are my thoughts on this.
I think the important thing here is to clearly define the semantics before lots of code gets written that uses this...so, I would say that #future messages should be the equivalent of "future: 0" messages and that any #future: messages sent with the same delay should be queued in the order they are sent. Any message sent with a smaller delay should be queued before messages sent with a larger delay. These semantics would apply for messages sent from within the same process (which is what I believe we consider the unit in squeak that maps to Miller's VAT). Messages being queued from different originating processes would have no guaranteed ordering (but, if we were to introduce Croquet style replication of processes (within or between squeak images), once an ordering is established, it would be maintained for all replicas...also I think that would necessitate introducing a pseudo-time like TeaTime). As I mentioned, I think a squeak Process should be the unit where these queues are maintained and be the squeak equivalent of a VAT (though, not necessarily all Processes should be made into such a "light VAT"). And, large subsystems (like a Morphic project) where things should logically be contained within a single VAT should be refactored as necessary and over time to fit in that model (i.e. anything running outside the Morphic project's VAT should be updated to communicate with morphic using only future sends). Of course, these "light-VATs" won't do anything to help with the shared state concurrency issues that can (and will) arise...the burden of dealing with that will have to be on the person writing the code (for now).
So, what about deadlines? Is there a way to set a deadline for a future send to be executed (after which your promise would yield some kind of timeout exception)? And, related to that, what about some provisions for rejecting messages (i.e. if a message queue has reached some maximum capacity or is falling behind)? How does Croquet handle such things (especially when it would have to be handled in such a way that guarantees identical results across replicated islands running on hardware with potentially very different characteristics)?
- Stephen
P.S. I think Mark Miller's thesis is something everyone should read if they haven't already: http://www.erights.org/talks/thesis/ ...I believe it is essential reading for anyone trying to think about how this stuff should work in squeak.
Stephen Pair wrote:
Here are my thoughts on this.
I think the important thing here is to clearly define the semantics before lots of code gets written that uses this...so, I would say that #future messages should be the equivalent of "future: 0" messages and that any #future: messages sent with the same delay should be queued in the order they are sent. Any message sent with a smaller delay should be queued before messages sent with a larger delay. These semantics would apply for messages sent from within the same process (which is what I believe we consider the unit in squeak that maps to Miller's VAT). Messages being queued from different originating processes would have no guaranteed ordering (but, if we were to introduce Croquet style replication of processes (within or between squeak images), once an ordering is established, it would be maintained for all replicas...also I think that would necessitate introducing a pseudo-time like TeaTime).
All of the above is exactly as intended. I'll expand on one issue: The argument-less #future is logically considered a message sent "ASAP", i.e., in the next available time slot. This has little meaning for in-process communication and is exactly equivalent to "self future: 0" but in a remote messaging environment (like Croquet) the distinction becomes meaningful because the message takes a round trip for obtaining the time stamp and consequently "ASAP" is never equal to zero in real-time. Because of this property we defined early on that timed futures between islands are ill-defined (depending on round-trip latency they may never arrive "in time") and don't allow them.
As I mentioned, I think a squeak Process should be the unit where these queues are maintained and be the squeak equivalent of a VAT (though, not necessarily all Processes should be made into such a "light VAT"). And, large subsystems (like a Morphic project) where things should logically be contained within a single VAT should be refactored as necessary and over time to fit in that model (i.e. anything running outside the Morphic project's VAT should be updated to communicate with morphic using only future sends). Of course, these "light-VATs" won't do anything to help with the shared state concurrency issues that can (and will) arise...the burden of dealing with that will have to be on the person writing the code (for now).
Agree on all of these points.
So, what about deadlines? Is there a way to set a deadline for a future send to be executed (after which your promise would yield some kind of timeout exception)? And, related to that, what about some provisions for rejecting messages (i.e. if a message queue has reached some maximum capacity or is falling behind)? How does Croquet handle such things (especially when it would have to be handled in such a way that guarantees identical results across replicated islands running on hardware with potentially very different characteristics)?
Croquet handles these issues as failures on the transport level, i.e., just like losing the network connection. Also, for timeouts keep in mind the comment above (no timed inter-island futures) and that otherwise all Croquet time is simulation time anyway, that is, the advance of time is determined by the flow of messages from the router. As a consequence, 'timeouts' that happen in island time are always replicated, since they depend only on the sequence of messages. In practice we use this, for example, for detecting lack of presence indications from participants in Teleplace (you get a red exclamation mark next to the user if we've timed out the replicated heartbeat from that participant).
P.S. I think Mark Miller's thesis is something everyone should read if they haven't already: http://www.erights.org/talks/thesis/ ...I believe it is essential reading for anyone trying to think about how this stuff should work in squeak.
Yes, definitely. Mark has been greatly influential on all of this.
Cheers, - Andreas
On Fri, Dec 18, 2009 at 7:51 AM, Andreas Raab andreas.raab@gmx.de wrote:
So, what about deadlines? Is there a way to set a deadline for a future
send to be executed (after which your promise would yield some kind of timeout exception)? And, related to that, what about some provisions for rejecting messages (i.e. if a message queue has reached some maximum capacity or is falling behind)? How does Croquet handle such things (especially when it would have to be handled in such a way that guarantees identical results across replicated islands running on hardware with potentially very different characteristics)?
Croquet handles these issues as failures on the transport level, i.e., just like losing the network connection. Also, for timeouts keep in mind the comment above (no timed inter-island futures) and that otherwise all Croquet time is simulation time anyway, that is, the advance of time is determined by the flow of messages from the router. As a consequence, 'timeouts' that happen in island time are always replicated, since they depend only on the sequence of messages. In practice we use this, for example, for detecting lack of presence indications from participants in Teleplace (you get a red exclamation mark next to the user if we've timed out the replicated heartbeat from that participant).
Ok, I'm not sure I get all this...if there were three participants in a replicated island, are you saying that if for some reason a message cannot be delivered successfully to one participant (and I'm thinking specifically of the case where that participant is overloaded), that effectively that replica goes into some disconnected state? Presumably it would later re-sync state with one or more of the other islands once it recovers. However, I thought Croquet had some kind of a graceful degradation that wouldn't result in a less powerful machine being ejected outright. (btw, sorry that I'm asking these questions and haven't studied Croquet more closely, I've read the papers and played around with it a little, but haven't dug into the code much)
So...thinking about how Croquet might allow more graceful degradation: does Croquet enforce these strict requirements around successful queuing of messages to all replicas, but the replicated state effectively consists of just the model of the world and not any (or not a lot) of the objects involved in rendering the world? And, since most of the computational load would be in rendering, Croquet would, in fact, allow for a lot of degradation of the rendering and a participant would not get disconnected just because the rendering can't keep up with 30 fps? As long as the replicated island (consisting of just an abstract model of the world and not requiring a lot of computational load) is able to keep up with the message flow, the participant would remain connected?
Also, how does Croquet deal with really large worlds? Do you subdivide a large world into hectares or something (or the 3D equivalent of hectares...is there a word for that?) and do you just replicate the hectare the avatar currently resides in and the immediately adjacent hectares...and make the hectares sufficiently large that you would always have enough to render sufficiently far into the distance? What about objects that might sit on the boundary of multiple hectares?
- Stephen
Stephen Pair wrote:
Ok, I'm not sure I get all this...if there were three participants in a replicated island, are you saying that if for some reason a message cannot be delivered successfully to one participant (and I'm thinking specifically of the case where that participant is overloaded), that effectively that replica goes into some disconnected state? Presumably it would later re-sync state with one or more of the other islands once it recovers. However, I thought Croquet had some kind of a graceful degradation that wouldn't result in a less powerful machine being ejected outright. (btw, sorry that I'm asking these questions and haven't studied Croquet more closely, I've read the papers and played around with it a little, but haven't dug into the code much)
Not only that, there's also the issue that what we have as application policy and what is required by Croquet messaging isn't necessarily the same thing ;-)
In this case, the application policy is that we consider any participant (avatar) who hasn't been sending his heartbeat within a specified period of time to be "in trouble". What the reason for this trouble is doesn't matter - whether that's a disconnect or whether it's a computation taking too much real time. We indicate this by said exclamation mark, but that has no bearing on the connection aspect. So a participant in this state can indeed catch up and recover, and we see this regularly. So what you're saying above is basically correct except for the "effectively disconnected" bit, because whether the client is disconnected or not is unknown, and only if it's not disconnected will it be able to recover "gracefully" (which here means without having to resynchronize its state, which is always the alternative).
So...thinking about how Croquet might allow more graceful degradation: does Croquet enforce these strict requirements around successful queuing of messages to all replicas, but the replicated state effectively consists of just the model of the world and not any (or not a lot) of the objects involved in rendering the world? And, since most of the computational load would be in rendering, Croquet would, in fact, allow for a lot of degradation of the rendering and a participant would not get disconnected just because the rendering can't keep up with 30 fps?
Yes, that's exactly the case. Rendering is decoupled from message processing and basically we're rendering "stable states" in between messages. Since message processing executes at higher priority, your rendering rate will degrade slowly if more time is needed for the replicated computation.
As long as the replicated island (consisting of just an abstract model of the world and not requiring a lot of computational load) is able to keep up with the message flow, the participant would remain connected?
Yes. Although, in the case of our application policy, if there are issues in the non-replicated bits that affect the sending of your regular client heartbeat we would still treat this as being "in trouble" from the application perspective.
Also, how does Croquet deal with really large worlds? Do you subdivide a large world into hectares or something (or the 3D equivalent of hectares...is there a word for that?) and do you just replicate the hectare the avatar currently resides in and the immediately adjacent hectares...and make the hectares sufficiently large that you would always have enough to render sufficiently far into the distance? What about objects that might sit on the boundary of multiple hectares?
Portals. Spaces can be seamlessly linked to each other by portals which we usually represent as doors etc. Each portal is also a potentially separate authentication domain meaning that you can protect your own spaces and only people with the proper permissions will be able to enter (replicate and join) them. It does get a bit weird when you see people vanishing into doorways that you can't open due to lack of permission but such is the strangeness of virtual spaces :-)
Cheers, - Andreas
On Fri, Dec 18, 2009 at 9:21 AM, Andreas Raab andreas.raab@gmx.de wrote:
Stephen Pair wrote:
Also, how does Croquet deal with really large worlds? Do you subdivide a large world into hectares or something (or the 3D equivalent of hectares...is there a word for that?) and do you just replicate the hectare the avatar currently resides in and the immediately adjacent hectares...and make the hectares sufficiently large that you would always have enough to render sufficiently far into the distance? What about objects that might sit on the boundary of multiple hectares?
Portals. Spaces can be seamlessly linked to each other by portals which we usually represent as doors etc. Each portal is also a potentially separate authentication domain meaning that you can protect your own spaces and only people with the proper permissions will be able to enter (replicate and join) them. It does get a bit weird when you see people vanishing into doorways that you can't open due to lack of permission but such is the strangeness of virtual spaces :-)
Wonderfully strange. ;) Sorry for taking this thread down a Croquet rathole...but I do have some additional curiosities. Yes, I knew portals were a solution to having to replicate vast spaces. But, I certainly can imagine wanting to be able to create vast plains and mountain ranges in a virtual world where having to jump through portals every so often as one roamed around would ruin the motif. I gather this is just a problem that Croquet has yet to address. I can imagine subdividing a world, but I can also imagine a lot of devilish details with that (i.e. how do you handle objects straddling the boundaries, what about cases where someone zooms far out so that many spaces are well within the field of view, etc). Maybe you could sub-divide into overlapping spaces where objects are replicated between adjacent spaces...or even have spaces covering large areas, overlapping many smaller spaces, but modeled in less detail in order to support the "zoom out" scenario.
- Stephen
On Dec 18, 2009, at 4:32 AM, Stephen Pair wrote:
On Fri, Dec 18, 2009 at 4:01 AM, Josh Gargus josh@schwa.ca wrote:

However, even if we were to use a single scheduler process, there is still some ambiguity about what should be the execution order of #future: messages. For example:
(self future: 2000) foo.
(self future: 1999) bar.
Which should execute first? If a high-priority process interrupts, then the second line might not run for another 4ms. Croquet ensures a deterministic order by not advancing the clock while executing queued messages in an island, and by making it illegal to send #future: messages from outside of the island (only #future messages are allowed).
Here are my thoughts on this.
I think the important thing here is to clearly define the semantics before lots of code gets written that uses this...so, I would say that #future messages should be the equivalent of "future: 0"
They are.
messages and that any #future: messages sent with the same delay should be queued in the order they are sent.
Without question. This is a limitation of the current implementation.
Any message sent with a smaller delay should be queued before messages sent with a larger delay.
That's my point above. Should #bar be scheduled before #foo? What about if there is a statement inserted between the two lines? What if there are 10 statements? What if one of the statements is waiting on a semaphore?
Within a Croquet island, it is guaranteed that clock-time would not advance while processing the statements, so it is guaranteed that #bar would be scheduled for execution before #foo. If we're using "Time millisecondClockValue", then we have no such guarantee.
These semantics would apply for messages sent from within the same process (which is what I believe we consider the unit in squeak that maps to Miller's VAT). Messages being queued from different originating processes would have no guaranteed ordering (but, if we were to introduce Croquet style replication of processes (within or between squeak images), once an ordering is established, it would be maintained for all replicas...also I think that would necessitate introducing a pseudo-time like TeaTime). As I mentioned, I think a squeak Process should be the unit where these queues are maintained and be the squeak equivalent of a VAT
Agreed...
(though, not necessarily all Processes should be made into such a "light VAT").
Agreed...
And, large subsystems (like a Morphic project) where things should logically be contained within a single VAT should be refactored as necessary and over time to fit in that model (i.e. anything running outside the Morphic project's VAT should be updated to communicate with morphic using only future sends).
And agreed!
Of course, these "light-VATs" won't do anything to help with the shared state concurrency issues that can (and will) arise...the burden of dealing with that will have to be on the person writing the code (for now).
So, what about deadlines? Is there a way to set a deadline for a future send to be executed (after which your promise would yield some kind of timeout exception)?
Not currently. Instead, you can use #waitTimeoutMSecs: to wait on a promise until it has resolved, or until the time-limit is exceeded.
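For readers unfamiliar with the promise side, here is a minimal Python sketch of the idea behind #waitTimeoutMSecs: (the Promise class and method names below are illustrative toys, not Squeak's implementation): waiting blocks until the promise resolves or the time limit passes, and reports which of the two happened.

```python
import threading

class Promise:
    """Toy promise: wait_timeout_msecs mirrors the idea of #waitTimeoutMSecs: --
    block until resolved or until the time limit passes, returning whether
    the promise resolved in time."""
    def __init__(self):
        self._done = threading.Event()
        self._value = None

    def resolve(self, value):
        self._value = value
        self._done.set()

    def wait_timeout_msecs(self, msecs):
        # True iff the promise resolved within the time limit.
        return self._done.wait(msecs / 1000.0)

    @property
    def value(self):
        return self._value

p = Promise()
# Resolve the promise from another thread after 50 ms.
threading.Timer(0.05, p.resolve, args=[42]).start()
resolved = p.wait_timeout_msecs(1000)  # generous limit: resolves in time
```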
And, related to that, what about some provisions for rejecting messages (i.e. if a message queue has reached some maximum capacity or is falling behind)? How does Croquet handle such things (especially when it would have to be handled in such a way that guarantees identical results across replicated islands running on hardware with potentially very different characteristics)?
- Stephen
P.S. I think Mark Miller's thesis is something everyone should read if they haven't already: http://www.erights.org/talks/thesis/ ...I believe it is essential reading for anyone trying to think about how this stuff should work in squeak.
+1 Mark's thesis is a great read.
Cheers, Josh
On 18.12.2009, at 19:58, Josh Gargus wrote:
messages and that any #future: messages sent with the same delay should be queued in the order they are sent.
Without question. This is a limitation of the current implementation.
Thinking about this, I'd suspect using #addAlarm:withArguments:for:at: instead of #addDeferredUIMessage: might be both more efficient and possibly order-preserving. It uses a Heap to sort future message sends.
- Bert -
On Jan 6, 2010, at 8:21 AM, Bert Freudenberg wrote:
On 18.12.2009, at 19:58, Josh Gargus wrote:
messages and that any #future: messages sent with the same delay should be queued in the order they are sent.
Without question. This is a limitation of the current implementation.
Thinking about this, I'd suspect using #addAlarm:withArguments:for:at: instead of #addDeferredUIMessage: might be both more efficient and possibly order-preserving. It uses a Heap to sort future message sends.
That's a great idea. I'll look into that this weekend.
The alarm heap isn't order-preserving as-is. However, if we add a sequence number to MorphicAlarm and include it in the sort-block (as is done for replicated Croquet messages), then it will be. I like this a lot... we'll be improving existing code in Morphic instead of adding another queue.
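The proposed fix — a per-alarm sequence number included in the sort — can be sketched in Python (illustrative code, not the MorphicAlarm change itself). Comparing (scheduled time, sequence) makes the heap stable: alarms scheduled for the same millisecond pop in the order they were added.

```python
import heapq
import itertools

class AlarmQueue:
    """Alarm heap made stable by a per-alarm sequence number, mirroring the
    proposed MorphicAlarm sort-block change: compare (scheduled_time, seq)."""
    def __init__(self):
        self._heap = []
        self._seq = itertools.count()

    def add_alarm(self, scheduled_time, message):
        heapq.heappush(self._heap, (scheduled_time, next(self._seq), message))

    def pop_due(self, now):
        # Pop every alarm whose scheduled time has arrived, in stable order.
        due = []
        while self._heap and self._heap[0][0] <= now:
            _, _, message = heapq.heappop(self._heap)
            due.append(message)
        return due

q = AlarmQueue()
q.add_alarm(100, "first")
q.add_alarm(100, "second")   # same scheduled time: must stay after "first"
q.add_alarm(50, "earlier")
```

Without the sequence number, the two alarms at time 100 could pop in either order; with it, insertion order is guaranteed.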
Implementation detail: I'll probably have to use an #addDeferredUIMessage: to send the #addAlarm:withArguments:for:at:, since access to the alarm heap isn't synchronized.
Thanks, Josh
- Bert -
On 2009-12-18, at 1:01 AM, Josh Gargus wrote:
Pretty interesting. I'm a bit ambivalent about a magic compiler change like this, but you make a pretty strong case for its usefulness.
That's why Andreas asked me to add the FutureMaker class and the Object future/future: implementations. The same semantics are implemented without any compiler magic; the addition of FutureNode to the compiler is then merely an optimization.
In that case I'd say let's hold off on the compiler changes, until we find that we're using #future so much that it should be inlined.
BTW, what's the larger plan here? Do you want to introduce Islands as well? Bring Squeak and Croquet closer together so that (eventually) Croquet can be based on a Squeak base image? I do see a lot of value in this stuff, and I think it would be a great way to formalize the implicit event loop that we have with Morphic. On the other hand, this is the tip of a very large iceberg, and navigating might be easier if we had a destination in mind.
Colin
Colin Putney wrote:
BTW, what's the larger plan here? Do you want to introduce Islands as well? Bring Squeak and Croquet closer together so that (eventually) Croquet can be based on a Squeak base image? I do see a lot of value in this stuff, and I think it would be a great way to formalize the implicit event loop that we have with Morphic. On the other hand, this is the tip of a very large iceberg, and navigating might be easier if we had a destination in mind.
Bringing Squeak and Croquet closer together is one of the less important goals for me personally. But future messaging is valuable outside of Croquet; and the real goal for me is to improve the support for concurrency and distributed computation in general (think Erlang). Lock-free communication is a really interesting starting point because it enables many other things. Croquet is a natural part of this but it's not the end goal.
Cheers, - Andreas
On 2009-12-18, at 8:29 AM, Andreas Raab wrote:
Colin Putney wrote:
BTW, what's the larger plan here? Do you want to introduce Islands as well? Bring Squeak and Croquet closer together so that (eventually) Croquet can be based on a Squeak base image? I do see a lot of value in this stuff, and I think it would be a great way to formalize the implicit event loop that we have with Morphic. On the other hand, this is the tip of a very large iceberg, and navigating might be easier if we had a destination in mind.
Bringing Squeak and Croquet closer together is one of the less important goals for me personally. But future messaging is valuable outside of Croquet; and the real goal for me is to improve the support for concurrency and distributed computation in general (think Erlang). Lock-free communication is a really interesting starting point because it enables many other things. Croquet is a natural part of this but it's not the end goal.
Ok, good to hear. Interest in alternative models of concurrency has been gaining steam in the community for the last year or so, but this is the first time it's appeared in the trunk. I think we should collectively think and talk about where we want to end up with this stuff. It's got enormous potential, but if we're not careful we could end up with a mishmash of paradigms, and leave things less stable and more difficult to understand than we have today.
On 2009-12-18, at 9:13 AM, Andreas Raab wrote:
Colin Putney wrote:
On 2009-12-18, at 1:01 AM, Josh Gargus wrote:
Pretty interesting. I'm a bit ambivalent about a magic compiler change like this, but you make a pretty strong case for its usefulness.
That's why Andreas asked me to add the FutureMaker class and the Object future/future: implementations. The same semantics are implemented without any compiler magic; the addition of FutureNode to the compiler is then merely an optimization.
In that case I'd say let's hold off on the compiler changes, until we find that we're using #future so much that it should be inlined.
Oops, sorry, I forgot to comment on this one. Although strictly speaking the inlining is optional, it is very, very desirable because it eliminates the need to generate return promises for every single future message being sent. Since the compiler knows whether the result of a message is being used or not, we can avoid creating promises for messages where the result isn't used. In practice, this turns out to be a significant improvement; for example, in Teleplace the ratio is 4:1 for futures without result promises compared to those with (886:196 to be precise; this also gives you a feel for how much we use future messages - they are used more often than #collect:, which has 976 senders in the same image). I'd like to retain that optimization if at all possible.
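The optimization Andreas describes amounts to two dispatch paths. Here is a Python sketch (`future_do` and `future_send` are stand-ins for #futureDo:at:args: and #futureSend:at:args:; this is not the compiler's actual output): when the result is discarded, no promise object is ever allocated.

```python
import threading

class Promise:
    """Toy promise used only by the result-wanted path."""
    def __init__(self):
        self._done = threading.Event()
        self.value = None

    def resolve(self, v):
        self.value = v
        self._done.set()

    def wait(self):
        self._done.wait()
        return self.value

def future_do(fn, *args):
    """Result discarded: fire and forget, no promise allocated
    (the common case -- 4:1 in the Teleplace figures above)."""
    threading.Thread(target=fn, args=args).start()
    return None

def future_send(fn, *args):
    """Result used: allocate a promise and resolve it with fn's return value."""
    p = Promise()
    threading.Thread(target=lambda: p.resolve(fn(*args))).start()
    return p

p = future_send(lambda a, b: a + b, 2, 3)  # caller wants the result
discarded = future_do(lambda: None)        # caller ignores the result
```

The compiler's job is simply to pick between these two at the call site, based on whether the send's value is consumed.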
Ok, sounds good.
Colin
Colin Putney wrote:
On 2009-12-18, at 1:01 AM, Josh Gargus wrote:
Pretty interesting. I'm a bit ambivalent about a magic compiler change like this, but you make a pretty strong case for its usefulness.
That's why Andreas asked me to add the FutureMaker class and the Object future/future: implementations. The same semantics are implemented without any compiler magic; the addition of FutureNode to the compiler is then merely an optimization.
In that case I'd say let's hold off on the compiler changes, until we find that we're using #future so much that it should be inlined.
Oops, sorry, I forgot to comment on this one. Although strictly speaking the inlining is optional, it is very, very desirable because it eliminates the need to generate return promises for every single future message being sent. Since the compiler knows whether the result of a message is being used or not, we can avoid creating promises for messages where the result isn't used. In practice, this turns out to be a significant improvement; for example, in Teleplace the ratio is 4:1 for futures without result promises compared to those with (886:196 to be precise; this also gives you a feel for how much we use future messages - they are used more often than #collect:, which has 976 senders in the same image). I'd like to retain that optimization if at all possible.
Cheers, - Andreas
On 17.12.2009, at 22:28, Andreas Raab wrote:
Josh Gargus wrote:
Comments from others?
I think people are probably not completely clear what kinds of simplifications future messages allow. I just found a great example to illustrate the difference: Dynamic scroll bar delays.
Nice. I want that :)
Do you have experience with using alternate main loops? For example, in a server where the UI thread might be blocked, we might still want to drain the future queue.
- Bert -
2009/12/17 Andreas Raab andreas.raab@gmx.de:
Josh Gargus wrote:
Comments from others?
I think people are probably not completely clear what kinds of simplifications future messages allow. I just found a great example to illustrate the difference: Dynamic scroll bar delays.
The current situation
Currently, if you look at Scrollbar you'll find that continuous scrolling is handled in a fairly complex way via:
doScrollDown
    "Scroll automatically while mouse is down"
    (self waitForDelay1: 200 delay2: 40) ifFalse: [^self].
    self setValue: (value + scrollDelta + 0.000001 min: 1.0)
and then
waitForDelay1: delay1 delay2: delay2
    "Return true if an appropriate delay has passed since the last scroll
    operation. The delay decreases exponentially from delay1 to delay2."
    | now scrollDelay |
    timeOfLastScroll isNil ifTrue: [self resetTimer]. "Only needed for old instances"
    now := Time millisecondClockValue.
    (scrollDelay := currentScrollDelay) isNil
        ifTrue: [scrollDelay := delay1 "initial delay"].
    currentScrollDelay := scrollDelay * 9 // 10 max: delay2. "decrease the delay"
    timeOfLastScroll := now.
    ^true
#doScrollDown itself is called repeatedly via #step (see also #scrollDownInit, #step, #wantsSteps, #stepTime etc). And of course there are variants on this theme for scrolling up, down, page up and down and more.
Using timed future: messages
Now let's look at the implementation when using a timed future instead:
ScrollBar>>scrollDownInit
    downButton borderInset.
    keepScrolling := true.
    self doScrollDown: 200. "i.e., delay1"
ScrollBar>>doScrollDown: delay
    "keep scrolling as long as the mouse is down"
    self setValue: (value + scrollDelta + 0.000001 min: 1.0).
    keepScrolling ifTrue: [
        (self future: delay) doScrollDown: (delay * 9 // 10 max: 40). "delay2" ].
That's it. No #step, no #stepTime, no #wantsSteps, no #waitForDelay etc. Just a message shot into the future by a few milliseconds. We call the above pattern "recursion in time" since it sends messages recursively to itself at some later point in time.
Using non-timed future messages
And if you don't like that pattern, there's an interesting alternative using non-timed futures and an explicitly forked block:
ScrollBar>>scrollDownInit
    downButton borderInset.
    keepScrolling := true.
    [self doScrollDown] fork.
ScrollBar>>doScrollDown
    "keep scrolling as long as the mouse is down"
    delay := 200.
    [keepScrolling] whileTrue: [
        self future setValue: (value + scrollDelta + 0.000001 min: 1.0).
        (Delay forMilliseconds: delay) wait.
        delay := delay * 9 // 10 max: 40. "delay2" ].
(the example has a small bug which I'll ignore for educational reasons - if you can spot it you're good ;-)
IMO, the bug is that the value is set via a future, not immediately.
In this example we use an untimed future message to synchronize the scrolling loop with the foreground morphic process. It's not quite as elegant as the first example, but it illustrates one of the primary uses for future messages: lock-free interprocess communication. Future messages allow you to use concurrency in cases where it would be very hard to synchronize with locks. In the above, I would have to guard all modifications of the scrollbar's value with some lock, but the future message lets us introduce a level of concurrency that would be difficult to achieve otherwise.
Hmm... I can't see how futures help to deal with concurrency, unless there are details I'm not seeing. The semantics of 'future' guarantee that the message will be sent eventually, but there is no guarantee that message order will be preserved, e.g.:
    self future foo.
    self future bar.

You have the same chance of receiving #foo then #bar as of receiving #bar then #foo.
And if some other code is poking at your data and gets interrupted to handle a future message send, you might still need synchronization if both access the same state.
Cheers, - Andreas
On Dec 17, 2009, at 10:30 AM, Andreas Raab wrote:
Hi Josh -
The code looks good but I'm noticing that you omitted the future proxy and the Object future/future: implementation. Any particular reason for it? I would prefer having those because otherwise future becomes a keyword instead of an (inlined) message.
I just checked the core changes into trunk (Object protocol, Project protocol, FutureNode, FutureMaker, and Promise), along with a Monticello configuration (-jcg.82). No code currently sends any #future messages, but now that it's in, we can start.
BTW, I made a bit of a mess of the inbox while I was at it. All 6 package versions from me can be thrown away, sorry about that.
Cheers, Josh
Cheers,
- Andreas
Josh Gargus wrote:
I just uploaded 4 packages to the inbox:
- Compiler-FUTURESjcg.106
- Kernel-FUTURESjcg.330
- Network-FUTURESjcg.44
- System-FUTURESjcg.198
It seems to be OK to load in that order for testing, but if we decide to include this in the trunk, I'll devise a bullet-proof load order (not to mention writing some unit tests). But hey, at least there are good class comments!

What is this about? It's about convenient syntax for sending and obtaining results from asynchronous messages. The initial use-case in the trunk image is to have shorter code where we currently use #addDeferredUIMessage:; you can see a few such transformations in the Network and System packages. It probably doesn't pay its way with this use-case alone, but the exciting future use-cases are to support more advanced concurrency constructs.

This code originated in Croquet as a convenient syntax for sending messages to replicated objects over the internet. We continue to use (an evolved version of) it at Qwaq/Teleplace, and have also layered other concurrency constructs on top of it. Many more uses are possible. For example, it would be very useful in a Hydra system to send messages to objects residing in other object-memories.

How does it work? This is an extension to the Compiler, plus a small amount of support code in Object and Project. Eliot left hooks for FutureNode in the Compiler, so all we need to do is add FutureNode to the system (good, because I'm no compiler expert!).

How do you use it? It's getting late, so I'll just paste the class comment from FutureNode.
<class comment>
Compile-time transformation of #future and #future: messages. Use is best described through examples:

    receiver future doSomething: arg1 withArgs: arg2.
    (receiver future: 2000) doSomethingElse

The first means to immediately schedule #doSomething:withArgs: for asynchronous evaluation. The second means to wait 2000 milliseconds before scheduling #doSomethingElse for asynchronous evaluation.

These are transformed into either #futureDo:at:args: or #futureSend:at:args:, depending on whether the result is used. Let's look at a few examples:

    [receiver future foo. 2+2] value.
    true ifTrue: [^receiver future foo].
    arraySize := receiver future getArray wait size.
In the first case, the result is never used, so the message #futureDo:at:args: is generated. In the second case, the result is answered from the current method. Since we don't do any cross-method analysis, we have to assume that the result is needed for a later computation. The result is provided in the form of a Promise, which will resolve to a value when the asynchronous evaluation has completed. Creating and resolving this Promise is the responsibility of #futureSend:at:args:, which is generated instead of #futureDo:at:args: when code analysis indicates that the result of the message might be used. The third example is another one where #futureSend:at:args: is generated.

See the default implementations of #futureDo:at:args: and #futureSend:at:args: in Object. Subclasses are free to override the default implementations to achieve specific effects. For example, this functionality originated in the Croquet class TFarRef. If you have a TFarRef to a replicated object, then sending 'aTFarRef future foo' results in a message being sent over the network to each replica of the object referenced by aTFarRef. We might also use far-refs, for example, to send a message to an object in another Hydra object-memory.
</class comment>

I should probably quit now, because I need my beauty-sleep. Hopefully this is enough to get a discussion going; I'll respond tomorrow.

Cheers, Josh
squeak-dev@lists.squeakfoundation.org