On 2009-11-10, at 12:00 AM, Andreas Raab wrote:
Now, obviously this is inefficient; we'd want to replace the ioProcessEvents() call with something more elaborate that reacts to the incoming OS events, takes the next scheduled delay into account, checks for socket handles, etc. But I'm sure you're getting the idea. Instead of wasting our time in the idleProcess, we just return when there's no more work to do, and it's up to the caller to run interpret() as often or as rarely as desired.
Well I'm not sure what exactly you are trying to fix or optimize, but let me review what happens for the rest of the platforms.
When ioRelinquishProcessorForMicroseconds() is called with the bogus magic wait value, on OS X we calculate the wait time from getNextWakeupTick() and the current time, taking into account whether getNextWakeupTick() is zero or already in the past.
getNextWakeupTick() is the time in ms at which the Delay logic expects the earliest of its time-sorted delays to expire. I note that this value could be zero on non-Morphic systems, but on Morphic it's always about 16-20 ms out, because the Morphic stepper logic wakes up every 1/50th of a second.
I believe on Unix systems it just uses the bogus wait value, which is 1000 microseconds.
We then schedule a wait using either a somewhat correct wait value or, in the Unix/Linux case, 1000 microseconds.
How long the wait actually takes is determined by the wait logic and the flavor of Unix: it could be 1 ms, or 10, or even 100 ms. Apple attempts to ensure that the time you ask to wait is the time you get. Typically Unix/Linux users see high CPU usage for idle Squeak images, since the wait time could be as short as 1 ms, but might be 10 ms depending on the kernel's timer granularity.
Now, the wait can be terminated early when an interrupt event (I/O, sockets, UI) happens. If so, any async file I/O or socket interrupts are serviced immediately, setting flags/semaphores for the Squeak VM to service within the next few ms.
Issues: if an external pthread signals a Squeak semaphore, this does not wake the sleeping Squeak VM pthread.
Also see http://isqueak.org/ioRelinquishProcessorForMicroseconds
Now, as for the proposed changes: we would just call the ioRelinquishProcessorForMicroseconds() logic after interpret() ends. On register-rich machines (ancient PowerPC machines) this change would technically be more expensive, since interpret() loads up twenty-some registers on entry, but given a call rate of 50 times a second no one will notice.
--
========================================================================
John M. McIntosh <johnmci@smalltalkconsulting.com>   Twitter: squeaker68882
Corporate Smalltalk Consulting Ltd.   http://www.smalltalkconsulting.com
========================================================================