Channel: Why do I need 17 significant digits (and not 16) to represent a double? - Stack Overflow

Answer by Nemo for Why do I need 17 significant digits (and not 16) to represent a double?

My other answer was dead wrong.

#include <stdio.h>

int main(int argc, char *argv[])
{
    unsigned long long n = 1ULL << 53;
    unsigned long long a = 2*(n-1);
    unsigned long long b = 2*(n-2);
    printf("%llu\n%llu\n%d\n", a, b, (double)a == (double)b);
    return 0;
}

Compile and run to see:

18014398509481982
18014398509481980
0

a and b are just 2*(2^53-1) and 2*(2^53-2).

Those are 17-digit base-10 numbers. When rounded to 16 significant digits, they are identical. Yet a and b each need only 53 bits of precision, so both are exactly representable as doubles. Cast a and b to double and you get your counter-example: two distinct doubles whose 16-digit decimal representations coincide.

