This is due to a macOS bug in the 64-bit libm! (Hard to believe, isn't it?)

I've opened a ticket https://bugreport.apple.com/web/?problemID=48021471

Area:
Something not on this list

Summary: ldexp incorrectly rounds gradual underflow for some specific values.

Steps to Reproduce:
#include <math.h>
#include <stdio.h>
int main() {
  int exp = -54;
  double y = ldexp(11.0, exp);   /* y is binary 1.011 * 2^-51 */
  double u = ldexp(1.0, -1074);  /* u is the minimal denormalized IEEE 754 double */
  double v = ldexp(y, -1023);    /* v is binary 1.011 * 2^-1074 and should round to u */
  printf("u=%g v=%g\n", u, v);
  return 0;
}

Expected Results:
v should be rounded to 1.0*2^-1074 under the default round-to-nearest (ties-to-even) rule: the discarded fraction 0.011 in binary equals 0.375, which is below one half, so no tie arises and the value rounds down.
Thus we should have u == v.
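
As a cross-check (my own sketch, not part of the original report): the constant 0x1p-1023 is exactly representable as a subnormal double, so a single IEEE 754 multiplication by it must be correctly rounded and gives the reference value that ldexp is expected to match here.

#include <math.h>
#include <stdio.h>
int main() {
  double y   = ldexp(11.0, -54);  /* exactly 1.011 * 2^-51 */
  double u   = ldexp(1.0, -1074); /* smallest subnormal double */
  double ref = y * 0x1p-1023;     /* one correctly rounded hardware multiply */
  printf("ref=%g u=%g equal=%d\n", ref, u, ref == u);
  return 0;
}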

Actual Results:
v is rounded upward to binary 10.0 * 2^-1074 = 1.0 * 2^-1073
output is u=4.94066e-324 v=9.88131e-324
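
For completeness (also my addition), printing the raw bit patterns makes the one-ulp discrepancy visible directly: u encodes as 0x0000000000000001, a correct v should encode identically, and the buggy result encodes as 0x0000000000000002.

#include <math.h>
#include <stdio.h>
#include <stdint.h>
#include <string.h>
static void print_bits(const char *name, double x) {
  uint64_t b;
  memcpy(&b, &x, sizeof b);  /* read the raw IEEE 754 encoding */
  printf("%s=%g bits=0x%016llx\n", name, x, (unsigned long long)b);
}
int main() {
  double y = ldexp(11.0, -54);
  double u = ldexp(1.0, -1074);  /* encoding 0x...0001 */
  double v = ldexp(y, -1023);    /* 0x...0001 expected, 0x...0002 with the bug */
  print_bits("u", u);
  print_bits("v", v);
  return 0;
}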

Note 1: this fails for 4 different values of exp: -54, -53, +968, +969
(replace the exponent -1023 by -1074-3-exp, that is -1023, -1024, -2045, -2046); a loop exercising all four cases is sketched after these notes.
Note 2: this did not happen with the previous 32-bit libm version.
Note 3: this sounds a bit like the issue discussed at https://stackoverflow.com/questions/32150888/should-ldexp-round-correctly
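
Here is a small loop (my own sketch) exercising all four cases from Note 1, using the -1074-3-exp formula quoted there; on a correct libm every line should print "ok":

#include <math.h>
#include <stdio.h>
int main() {
  const int exps[] = { -54, -53, 968, 969 };  /* the four failing values of exp */
  const double u = ldexp(1.0, -1074);         /* correctly rounded target value */
  for (int i = 0; i < 4; i++) {
    double y = ldexp(11.0, exps[i]);           /* exactly 11 * 2^exp */
    double v = ldexp(y, -1074 - 3 - exps[i]);  /* value is 1.011 * 2^-1074 */
    printf("exp=%4d v=%g %s\n", exps[i], v, v == u ? "ok" : "WRONG");
  }
  return 0;
}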

Version/Build:
macOS High Sierra 10.13.6

Configuration:
compiled with
clang --version
Apple LLVM version 10.0.0 (clang-1000.11.45.5)
Target: x86_64-apple-darwin17.7.0
Thread model: posix
InstalledDir: /Applications/Xcode.app/Contents/Developer/Toolchains/XcodeDefault.xctoolchain/usr/bin
Xcode Version 10.1 (10B61)

