My other answer was dead wrong.
```c
#include <stdio.h>

int main(int argc, char *argv[])
{
    unsigned long long n = 1ULL << 53;   /* 2^53 */
    unsigned long long a = 2 * (n - 1);  /* 2^54 - 2 */
    unsigned long long b = 2 * (n - 2);  /* 2^54 - 4 */
    /* Print both values, then whether they compare equal as doubles. */
    printf("%llu\n%llu\n%d\n", a, b, (double)a == (double)b);
    return 0;
}
```
Compile and run to see:
```
18014398509481982
18014398509481980
0
```
a and b are just 2*(2^53 - 1) and 2*(2^53 - 2), i.e. 2^54 - 2 = 18014398509481982 and 2^54 - 4 = 18014398509481980.
Those are 17-digit base-10 numbers. Rounded to 16 significant digits, both come out as 1.801439850948198e16, so at 16 digits they are indistinguishable. Yet a and b each need only 53 significant bits in base-2, so each converts to double exactly, and the two doubles stay distinct (hence the final 0 from the equality test). So if you take a and b and cast them to double, you get your counter-example: two different doubles whose 16-digit decimal forms are identical.
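If you want to watch the decimal rounding happen directly, here is a small sketch along the same lines (not part of the program above, just standard printf): it prints both values at 16 and then 17 significant digits with %g.

```c
#include <stdio.h>

int main(void)
{
    /* The same two values as above; both are exactly representable
       as doubles, since each needs at most 53 significant bits. */
    double a = 18014398509481982.0;  /* 2*(2^53 - 1) */
    double b = 18014398509481980.0;  /* 2*(2^53 - 2) */

    /* 16 significant digits: both print as 1.801439850948198e+16 */
    printf("%.16g\n%.16g\n", a, b);

    /* 17 significant digits are enough to tell them apart */
    printf("%.17g\n%.17g\n", a, b);

    return 0;
}
```

This is the same fact that C11's DBL_DECIMAL_DIG in <float.h> records: 17 decimal digits are needed to round-trip every double, even though any 15-digit decimal survives a trip through double.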