On Fri, Mar 25, 2016 at 09:22:09AM -0700, Eliot Miranda wrote:
On Fri, Mar 25, 2016 at 8:51 AM, David T. Lewis lewis@mail.msen.com wrote:
On Thu, Mar 24, 2016 at 10:05:17PM -0700, Eliot Miranda wrote:
On Mar 23, 2016, at 6:55 PM, David T. Lewis lewis@mail.msen.com
wrote:
On Wed, Mar 23, 2016 at 05:50:19PM -0700, Eliot Miranda wrote:
On Wed, Mar 23, 2016 at 4:51 PM, David T. Lewis lewis@mail.msen.com
wrote:
On Wed, Mar 23, 2016 at 04:22:21PM -0700, Eliot Miranda wrote:
Turns out this isn't needed for Cog. I have ioLocalSecondsOffset, which answers a value determined at start-up and only changed via ioUpdateVMTimezone, which itself is controlled by primitiveUpdateTimezone, #243. So ioUTCMicroseconds is all that's needed to get at the clock and timezone atomically.
If it is updated at start-up, then it's wrong. Think of daylight savings time transitions.
So update it automatically once a second or some such?
Are you joking, or is that a serious question?
Yes. I see two or three system calls in the code below: gettimeofday, one inside localtime, and one inside gmtime. That's expensive.
It's gettimeofday() and localtime(). The #else is a fallback for older Unix platforms.
As I said, two or three.
In any case, caching the value and updating it periodically does not sound like a good idea to me.
Why not? Why does the time zone need to be determined on every time call even though it only has a resolution of seconds? If including the timezone in every time call slows down accessing the time by, say, 33%, is it a good idea, when the VM can easily eliminate this overhead?
| c | c := LargePositiveInteger.
[1 to: 10000000 do: [:i|
    c basicNew: 8. c basicNew: 8. c basicNew: 8. c basicNew: 8. c basicNew: 8.
    c basicNew: 8. c basicNew: 8. c basicNew: 8. c basicNew: 8. c basicNew: 8]] timeToRun
884
[1 to: 10000000 do: [:i|
    Time utcMicrosecondClock. Time utcMicrosecondClock. Time utcMicrosecondClock.
    Time utcMicrosecondClock. Time utcMicrosecondClock. Time utcMicrosecondClock.
    Time utcMicrosecondClock. Time utcMicrosecondClock. Time utcMicrosecondClock.
    Time utcMicrosecondClock]] timeToRun
6412
6412 / 884.0 7.253393665158371
So the overhead of the system calls involved in accessing the time is much greater than the cost of allocating and garbage collecting the 64-bit large integer results.
I tested with the trunk ioUtcWithOffset() on the interpreter VM, calling the primitive with a pre-allocated array to avoid garbage collection. I compared the primitive against a hacked version with the offset value hard-coded (equivalent to your suggestion of caching it).
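A rough micro-benchmark sketch (not Dave's actual test harness) that mirrors this comparison in C: the raw clock call alone versus the clock call plus a per-call localtime_r for the offset. The loop count and names are arbitrary:

```c
/* A sketch micro-benchmark, assuming gettimeofday() for timing and an
 * arbitrary iteration count; it contrasts the bare clock call with the
 * clock call plus a per-call localtime_r. */
#include <assert.h>
#include <sys/time.h>
#include <time.h>

enum { ITERATIONS = 200000 };

/* Wall-clock seconds taken to run the given body once. */
static double secondsFor(void (*body)(void))
{
    struct timeval start, end;
    gettimeofday(&start, 0);
    body();
    gettimeofday(&end, 0);
    return (end.tv_sec - start.tv_sec)
         + (end.tv_usec - start.tv_usec) / 1e6;
}

static void clockOnly(void)
{
    struct timeval tv;
    for (int i = 0; i < ITERATIONS; i++)
        gettimeofday(&tv, 0);
}

static void clockPlusOffset(void)
{
    struct timeval tv;
    struct tm local;
    for (int i = 0; i < ITERATIONS; i++) {
        gettimeofday(&tv, 0);
        localtime_r(&tv.tv_sec, &local); /* the extra per-call cost */
    }
}
```

Comparing secondsFor(clockOnly) with secondsFor(clockPlusOffset) gives a rough feel for the relative cost of the per-call offset lookup; exact ratios will of course vary by platform and libc.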
You're right, the "cached" version is almost 4 times faster.
I still think that it is needless complexity. But yes, it's faster.
Dave