Why is float outputting a different result than double?

I have a program that outputs 29.325001 when I use float, but 29.325000 (which is the correct result) when I use double.

I thought that float rounds up to 14 decimal places, so why is it producing a different result than I expected?

Code below:

    #include <stdio.h>

    int main(void)
    {
        int elements, i;

        scanf("%d", &elements);

        if (elements > 10) {
            return 0;
        }

        float ppp[10], takeHowMuch[10], total;

        /* read elements values into ppp */
        for (i = 0; i < elements; i++) {
            scanf("%f", &ppp[i]);
        }

        /* read elements values into takeHowMuch */
        for (i = 0; i < elements; i++) {
            scanf("%f", &takeHowMuch[i]);
        }

        /* sum the products of the first four pairs */
        total = (takeHowMuch[0] * ppp[0]) + (takeHowMuch[1] * ppp[1])
              + (takeHowMuch[2] * ppp[2]) + (takeHowMuch[3] * ppp[3]);

        printf("%.6f", total);

        return 0;
    }

Why would you expect 14 decimal places of precision from a 32-bit float (which has at best 23 bits of significand)?

"I thought that float rounds up to 14 decimal places," --> Typical float shown as decimal encodes as expected to at least 6 leading significant digits. 29.325001 is 8. What is the source of your 14?

Thanks chux, that does explain it
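
A quick way to check the limits mentioned in these comments is to print the constants from <float.h>; a minimal sketch (assuming a typical IEEE-754 implementation):

    #include <stdio.h>
    #include <float.h>

    int main(void)
    {
        /* significand digits in base 2, including the implicit leading bit */
        printf("FLT_MANT_DIG = %d\n", FLT_MANT_DIG);  /* typically 24 */
        printf("DBL_MANT_DIG = %d\n", DBL_MANT_DIG);  /* typically 53 */

        /* decimal digits guaranteed to survive a round trip through the type */
        printf("FLT_DIG = %d\n", FLT_DIG);            /* typically 6  */
        printf("DBL_DIG = %d\n", DBL_DIG);            /* typically 15 */

        return 0;
    }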

1 Answer

"I thought that float rounds up to 14 decimal places,"

Your code's precision expectations are too high.

A typical float, printed as decimal, matches the value you expect to at least 6 leading significant digits; 29.325001 has 8. A float carries roughly 24 bits of significand, which is only about 6 to 7 decimal digits of precision, so a difference in the 8th significant digit is normal.
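
Here is a minimal sketch of the effect (assuming the usual IEEE-754 binary32/binary64 types): simply storing 29.325 in each type and printing it with %.6f reproduces the difference:

    #include <stdio.h>

    int main(void)
    {
        float  f = 29.325f;   /* nearest float:  29.325000762939453125    */
        double d = 29.325;    /* nearest double: 29.324999999999999289... */

        printf("%.6f\n", f);  /* prints 29.325001 */
        printf("%.6f\n", d);  /* prints 29.325000 */

        return 0;
    }

The stray 1 in the 8th significant digit appears simply because 29.325 cannot be represented exactly in a 24-bit significand, and the nearest float sits about 0.00000076 above it.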
