# Why is float outputting a different result than double?

I have a program that outputs `29.325001` when I use `float`, but `29.325000` (**the right result**) when I use `double`. I thought that float rounds up to 14 decimal places, so why is it producing a different result than expected?

Code below:

```c
#include <stdio.h>

int main(void) {
    int elements, i;
    float ppp[10], takeHowMuch[10], total;

    scanf("%d", &elements);
    if (elements > 10) {
        return 0;
    }
    for (i = 0; i < elements; i++) {
        scanf("%f", &ppp[i]);
    }
    for (i = 0; i < elements; i++) {
        scanf("%f", &takeHowMuch[i]);
    }
    total = (takeHowMuch[0] * ppp[0]) + (takeHowMuch[1] * ppp[1])
          + (takeHowMuch[2] * ppp[2]) + (takeHowMuch[3] * ppp[3]);
    printf("%.6f", total);
    return 0;
}
```

chux commented: "I thought that float rounds up to 14 decimal places" --> A typical `float`, shown as decimal, encodes as expected to at least 6 leading significant digits; `29.325001` has 8. What is the source of your 14?

yano commented: This is worth a look: stackoverflow.com/questions/588004/…

### 1 Answer

"I thought that float rounds up to 14 decimal places,"

Code precision expectations are too high.

Typical `float`

shown as decimal encodes as expected to at least 6 leading significant digits. `29.325001`

is 8.

— answered by chux


JD commented: Why would you expect 14 decimal places of precision from a 32-bit `float`, whose significand has only 24 bits (23 of them stored)?