# Why is float outputting a different result than double?


I have a program in which, if I use `float`, it outputs `29.325001`, but if I use `double` it outputs `29.325000` (the correct result).

I thought that `float` was accurate to 14 decimal places, but it's producing a different result than I expected. Why?

Code below:

```c
#include <stdio.h>

int main(void)
{
    int elements, i;

    scanf("%d", &elements);

    if (elements > 10) {
        return 0;
    }

    /* single precision throughout; using double here (and "%lf" in scanf)
       prints the expected 29.325000 */
    float ppp[10], takeHowMuch[10], total = 0.0f;

    for (i = 0; i < elements; i++) {
        scanf("%f", &ppp[i]);
    }

    for (i = 0; i < elements; i++) {
        scanf("%f", &takeHowMuch[i]);
    }

    /* sum the element-wise products */
    for (i = 0; i < elements; i++) {
        total += takeHowMuch[i] * ppp[i];
    }

    printf("%.6f", total);

    return 0;
}
```

Why would you expect 14 decimal places of precision from a 32-bit `float`, which has only a 24-bit significand (23 stored bits), i.e. roughly 7 significant decimal digits?

"I thought that float rounds up to 14 decimal places," --> Typical `float` shown as decimal encodes as expected to at least 6 leading significant digits. `29.325001` is 8. What is the source of your 14?

Thanks chux, that does explain it.


"I thought that float rounds up to 14 decimal places,"

Code precision expectations are too high.

Typical `float` shown as decimal encodes as expected to at least 6 leading significant digits. `29.325001` is 8.
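As a minimal sketch of the effect (not your exact program; assuming IEEE 754 single and double precision): the constant `29.325` is not exactly representable in binary, and the nearest `float` lies slightly above it, which `%.6f` rounds up to `29.325001`.

```c
#include <stdio.h>

int main(void)
{
    float  f = 29.325f;  /* nearest float is slightly ABOVE 29.325 */
    double d = 29.325;   /* nearest double is far closer to 29.325 */

    printf("%.6f\n", f);   /* prints 29.325001 */
    printf("%.6f\n", d);   /* prints 29.325000 */

    /* the exact value stored in the float */
    printf("%.20f\n", f);  /* prints 29.32500076293945312500 */

    return 0;
}
```

The float's stored value is off from 29.325 in the seventh significant digit, so the error is already visible at six decimal places; the double's error is many digits further down, so `%.6f` shows it as exact.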
