On Fri, Mar 25, 2016 at 8:51 AM, David T. Lewis lewis@mail.msen.com wrote:
On Thu, Mar 24, 2016 at 10:05:17PM -0700, Eliot Miranda wrote:
On Mar 23, 2016, at 6:55 PM, David T. Lewis lewis@mail.msen.com wrote:
On Wed, Mar 23, 2016 at 05:50:19PM -0700, Eliot Miranda wrote:
On Wed, Mar 23, 2016 at 4:51 PM, David T. Lewis lewis@mail.msen.com wrote:
On Wed, Mar 23, 2016 at 04:22:21PM -0700, Eliot Miranda wrote:
Turns out this isn't needed for Cog. I have ioLocalSecondsOffset, which answers a value determined at start-up and only changed via ioUpdateVMTimezone, which itself is controlled by primitiveUpdateTimezone, #243. So ioUTCMicroseconds is all that's needed to get at the clock and timezone atomically.
If it is updated at start-up, then it's wrong. Think of daylight saving time transitions.
So update it automatically once a second or some such?
Are you joking, or is that a serious question?
Yes. I see two or three system calls in the code below: gettimeofday, one inside localtime, and one inside gmtime. That's expensive.
It's gettimeofday() and localtime(). The #else is a fallback for older Unix platforms.
As I said, two or three.
In any case, caching the value and updating it periodically does not sound like a good idea to me.
Why not? Why does the time zone need to be determined on every time call even though it only has a resolution of seconds? If including the timezone in every time call slows down accessing the time by, say, 33%, is it a good idea, when the VM can easily eliminate this overhead?
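To make the "update it once a second" idea concrete, here is a minimal sketch of that caching scheme. This is hypothetical illustration, not the actual Cog code: the names utcOffsetCached, cachedSecond, and cachedOffset are made up, and it assumes a platform with tm_gmtoff. The offset is recomputed at most once per wall-clock second, so a DST transition is picked up within a second while the common case avoids the relatively expensive localtime_r() call.

```c
#define _DEFAULT_SOURCE        /* for tm_gmtoff on glibc */
#include <assert.h>
#include <stddef.h>
#include <sys/time.h>
#include <time.h>

/* Hypothetical sketch of the caching idea, not the actual Cog code:
 * recompute the UTC offset at most once per wall-clock second, so a DST
 * transition is observed within a second while most calls skip the
 * localtime_r() lookup entirely. */
static time_t cachedSecond = (time_t)-1;  /* second cachedOffset is valid for */
static long   cachedOffset = 0;           /* seconds east of UTC              */

long
utcOffsetCached(void)
{
    struct timeval now;
    if (gettimeofday(&now, NULL) == -1)
        return 0;
    if (now.tv_sec != cachedSecond) {     /* refresh at most once per second */
        struct tm local;
        localtime_r(&now.tv_sec, &local);
        cachedOffset = local.tm_gmtoff;   /* assumes HAVE_TM_GMTOFF */
        cachedSecond = now.tv_sec;
    }
    return cachedOffset;
}
```

Within the same second every call after the first is just a gettimeofday() plus a comparison, which is the overhead reduction being argued for.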
| c | c := LargePositiveInteger.
[1 to: 10000000 do: [:i|
    c basicNew: 8. c basicNew: 8. c basicNew: 8. c basicNew: 8. c basicNew: 8.
    c basicNew: 8. c basicNew: 8. c basicNew: 8. c basicNew: 8. c basicNew: 8]]
        timeToRun  884

[1 to: 10000000 do: [:i|
    Time utcMicrosecondClock. Time utcMicrosecondClock. Time utcMicrosecondClock.
    Time utcMicrosecondClock. Time utcMicrosecondClock. Time utcMicrosecondClock.
    Time utcMicrosecondClock. Time utcMicrosecondClock. Time utcMicrosecondClock.
    Time utcMicrosecondClock]] timeToRun  6412

6412 / 884.0  7.253393665158371
So the overhead of the system calls involved in accessing the time is much greater than the cost of allocating and garbage collecting the 64-bit large integer results; much larger.
Dave
Confused, Dave
/* Implementation of ioUtcWithOffset(), defined in config.h to
   override the default definition in src/vm/interp.h. */
sqInt
sqUnixUtcWithOffset(sqLong *microSeconds, int *offset)
{
  struct timeval timeval;
  if (gettimeofday(&timeval, NULL) == -1)
    return -1;
  time_t seconds = timeval.tv_sec;
  suseconds_t usec = timeval.tv_usec;
  /* Widen before multiplying so a 32-bit time_t cannot overflow. */
  *microSeconds = (sqLong)seconds * 1000000 + usec;
#if defined(HAVE_TM_GMTOFF)
  *offset = localtime(&seconds)->tm_gmtoff;
#else
  {
    struct tm *local = localtime(&seconds);
    struct tm *gmt = gmtime(&seconds);
    int d = local->tm_yday - gmt->tm_yday;
    int h = (d < -1 ? 24 : 1 < d ? -24 : d * 24)
            + local->tm_hour - gmt->tm_hour;
    int m = h * 60 + local->tm_min - gmt->tm_min;
    *offset = m * 60;
  }
#endif
  return 0;
}
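On platforms that do have tm_gmtoff, the #else arithmetic can be cross-checked against it. A standalone sketch of just that fallback computation (the function name offsetFromTmDiff is made up for illustration; it is not part of the VM):

```c
#define _DEFAULT_SOURCE        /* for localtime_r/gmtime_r/tm_gmtoff on glibc */
#include <assert.h>
#include <time.h>

/* Same arithmetic as the #else branch above: derive the UTC offset in
 * seconds from the difference between local and UTC broken-down time.
 * The yday term corrects for the local date being a day ahead of or
 * behind UTC, including the year-end wrap, where tm_yday jumps by
 * roughly +/-364 rather than +/-1. */
long
offsetFromTmDiff(time_t seconds)
{
    struct tm localBuf, gmtBuf;
    struct tm *local = localtime_r(&seconds, &localBuf);
    struct tm *gmt   = gmtime_r(&seconds, &gmtBuf);
    int d = local->tm_yday - gmt->tm_yday;
    int h = (d < -1 ? 24 : 1 < d ? -24 : d * 24)
            + local->tm_hour - gmt->tm_hour;
    int m = h * 60 + local->tm_min - gmt->tm_min;
    return m * 60L;
}
```

The reentrant _r variants are used here instead of localtime()/gmtime() because the originals share static buffers, so the second call would clobber the first's result if the libc returned the same buffer for both.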