Hello all.
I have noticed some discrepancies in the Linux VM as opposed to the Win32 VM.
Here is a summary (was on squeak-dev):
----------------------------
Hi all. Does anybody know why, on the Linux VM (3.9.8), short Delays (<500ms) wait for significantly longer than under Windows (3.10.4)?
As an example, here is the average time spent waiting when requesting a 10ms delay:
((1 to: 100) collect: [:i | Time millisecondsToRun: [(Delay forMilliseconds: 10) wait]]) sum / 100.0
This yields (on my machine) 10.35ms for Windows and 43.48ms for Linux. For a 100ms wait: 100.5ms (Windows) and 131.39ms (Linux). For a 1000ms wait: 1000.36ms (Windows) and 1013.16ms (Linux).
There seems to be a fixed overhead per call that is significantly larger under Linux.
Any thoughts would be appreciated.
------------------------------
Even with a load average (via uptime) of around 0.01, it was still around 44ms (for a 10ms delay) for me. I found the culprit, though...
In sqUnixMain.c
sqInt ioRelinquishProcessorForMicroseconds(sqInt us)
{
  int now;
  dpy->ioRelinquishProcessorForMicroseconds(us);
  now= ioLowResMSecs();
  if (now - lastInterruptCheck > (1000/25)) /* avoid thrashing intr checks from 1ms loop in idle proc */
    {
      setInterruptCheckCounter(-1000);      /* ensure timely poll for semaphore activity */
      lastInterruptCheck= now;
    }
  return 0;
}
Note the (1000/25), which equals 40ms, around the same as the observed latency. This function is called from #relinquishProcessorForMicroseconds: in ProcessorScheduler, which the idle process invokes with a 1000-microsecond argument. It relinquishes the processor for those 1000 microseconds but then doesn't check for interrupts (including pending delay semaphores) until at least 40ms have passed since the last check. So the semaphore of a short delay can sit signalled for up to ~40ms before being noticed, which matches the ~44ms I measured for a 10ms delay.
Perhaps this could be made into a VM parameter?
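For illustration, here is a minimal sketch of what that might look like. The variable name interruptCheckIntervalMSecs and the idea of initialising it from a VM parameter are my assumptions, not existing VM code:

  /* Hypothetical: interval between forced interrupt checks, in ms.
     Could be initialised from a (hypothetical) VM parameter or a
     command-line option instead of being hard-coded. */
  static int interruptCheckIntervalMSecs= 1000/25;  /* current default: 40ms */

  sqInt ioRelinquishProcessorForMicroseconds(sqInt us)
  {
    int now;
    dpy->ioRelinquishProcessorForMicroseconds(us);
    now= ioLowResMSecs();
    if (now - lastInterruptCheck > interruptCheckIntervalMSecs)
      {
        setInterruptCheckCounter(-1000);  /* ensure timely poll for semaphore activity */
        lastInterruptCheck= now;
      }
    return 0;
  }

Lowering the interval (or setting it to 0) would trade some extra interrupt-check overhead for much better short-delay accuracy.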