On Sun, Mar 20, 2016 at 12:39:38AM +0100, Levente Uzonyi wrote:
The loss of the nanosecond part has another side effect. utcMicroseconds will be a Fraction when the resolution of the parsed input is too high. This is somewhat compatible, but it makes things slower. E.g.:
'2002-05-16T17:20:45.000000009+01:01' asDateAndTime utcMicroseconds "==> (1021565985000000009/1000)"
This is an interesting problem.
The fractional representation of utcMicroseconds seems reasonable to me in this case, at least now that you have fixed the bugs that I introduced :-)
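A quick workspace check of where the fraction comes from, using the literal values from the example above (1021565985 whole seconds plus 9 nanoseconds):

```smalltalk
"9 nanoseconds is 9/1000 of a microsecond, so the total count of
microseconds cannot be an Integer."
| seconds nanoseconds |
seconds := 1021565985.
nanoseconds := 9.
(seconds * 1000000) + (nanoseconds / 1000)
	"==> 1021565985000000009/1000"
```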
It might, however, cause issues for Magma and similar applications. I would be interested to hear from Chris Muller if that is a problem.
My expectation would be that microseconds should be the unit of measure for time magnitude, but that there should be no limit on precision. Or to say it another way, clocks may have ticks, but time itself should be thought of as continuous. Thus I expect utcMicroseconds to be a Number, but not necessarily an Integer.
Microseconds is a reasonable unit of measure because it is the highest level of precision available from the clocks on the Linux, OS X, and Windows systems that we use, and because it implies clock accuracy that is well beyond the real time measurement capabilities of those platforms.
As far as I know, the practical use of nanosecond precision in DateAndTime would be in the case of adding incremental nanoseconds to the time value in order to create the illusion that repeated calls to the system clock will always result in monotonically increasing values. If so, then I suspect that the same practical result could be achieved by artificially incrementing the value in units of milliseconds rather than nanoseconds, which would ensure unique integer values.
After all, it does not seem likely that any application involving a database would be querying the system clock at anywhere close to a million times per second. But I have not actually tried this, so I may be misunderstanding the requirement.
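A minimal workspace sketch of the incrementing idea (not the actual Chronology implementation; the block argument stands in for whatever primitive answers the raw clock value):

```smalltalk
"Answer a value that is strictly greater than the last one answered,
even when the underlying clock reports the same reading twice."
| lastUtc nextUtc |
lastUtc := 0.
nextUtc := [:clockUtc |
	lastUtc := clockUtc > lastUtc
		ifTrue: [clockUtc]
		ifFalse: [lastUtc + 1].
	lastUtc].
nextUtc value: 100.	"==> 100"
nextUtc value: 100.	"==> 101  (clock repeated, answer still advances)"
nextUtc value: 105.	"==> 105"
```

Incrementing by a whole microsecond (or millisecond) keeps the result an Integer, whereas a nanosecond increment would immediately introduce a Fraction.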
Dave