Why doesn't pointer typecasting to float work?


Hi everyone, I am learning C and trying to print the values of different data types through a single char pointer. I can print almost every data type, but I am having trouble printing a float. Can anyone explain why this is an issue, and how I can print the exact float value using a single pointer? Sorry if this turns out to be a noob question. I understand the alignment problem, but I really need a solution to this.

UPDATE: Sorry for the sloppy typecasting. Please consider the updated code below.

    #include <stdio.h>

    int main(void)
    {
        long int a = 10;
        char ch = 'a';
        float b = 16.89;
        unsigned int c = 20;
        short int d = 30;
        char *p;
        p = (long int*)&a;
        printf("Value of a: %d\n", *p);
        p = &ch;
        printf("Value of ch: %c\n", *p);
        p = (float*)&b;
        printf("Value of b(float): %f\n", *p);
        printf("Value of b(decimal): %d\n", (int) *((float*)p));
        p = &c;
        printf("Value of c: %u\n", *p);
        p = &d;
        printf("Value of d: %i\n", *p);
        return 0;
    }

The output of the above program is:

    Value of a: 10
    Value of ch: a
    Value of b(float): 0.000000
    Value of b(decimal): 16
    Value of c: 20
    Value of d: 30


@david, I do understand alignment issues. But there must be a workaround for this, right? That is what I want to understand!


You're not typecasting anything for the print of 0.000000!

@DavidC.Rankin I think you're confusing something here... there is no strict aliasing violation

3 Answers


The type casts you do in this sample program do not make any sense.

E.g. p = (long int*)&a;: the type of a is long int, which makes the type of &a long int *.
The type cast in front of &a is a no-op.

The type of p is char *, and this makes the type of *p char. A char value is stored in 1 byte. Use values larger than 255 in your program for a, c and d, and you'll discover that it prints incorrect values for them.

The value you pass to printf() must match the format specifier you use, otherwise you get unexpected results. Passing an integer (*p) and formatting it with %f does not magically perform any type conversion; it forces printf() to interpret the bits of *p the wrong way, and the result is unpredictable because it is undefined behaviour.

posted this

The typecast you are doing here

p = (float*)&b;

is redundant: &b is already of type float *, so the cast adds nothing (and the value is stored into a char * anyway).

You actually need the cast in the printf statement, like this:

printf("Value of b(float): %f\n", *(float*)p);

You need similar casts in all other printf statements too.

What you are doing here is of not much use: if you want to print an int, use an int pointer, not a char pointer.

posted this

This kind of works by accident. p is a char pointer; so *p is always a char.

  • You take the address of a, converting it into a long int *. It already was a long int *, because you took the address of a long int - so this doesn't really do anything. Then you stuff it into p, which is a char *. It cannot remain a long int * while in a char * variable; it's now a char *. Now, you are apparently on a machine that uses little-endian architecture, like the majority of us, so the least significant byte comes first: your long int a is represented as [10, 0, 0, 0], and *p reads the first byte. Then another idiosyncrasy of C comes into play: all integral arguments smaller than int are passed as int; so the byte 10 gets converted into an int and pushed to the stack, which is then correctly read off by printf using the %d format, expecting an int.

  • For char ch, *p doesn't read the first byte, it reads the only byte; and the rest is all the same. Ditto for the rest of integral types, where you were kind enough never to go over 255 (or you'd see an unexpected result).

  • However, float b is a different story. Floats are represented in memory in a very different way than integers; when *p reads the first byte, it gets something quite unrelated to 16.89 - kind of like trying to guess a painting while looking through a kaleidoscope. The first byte gets expanded to int on the stack, then printf reads it as if it were a double, which is what %f expects (where you're fortunate you don't get a Segmentation fault - but this is a happy accident). Remember, this is mostly because your pointer ends up as a char *, because of the way you store it.

  • In the b(decimal) line, you do something very different: you cast the pointer where it matters. So here, the value of p gets to be a float * for a bit; then you dereference it, and correctly get 16.89; then you convert that to an int, and end up with 16 (which you print with %d, so all is fine).

tl;dr: The type of a C variable is set in stone. Any cast only matters while the value is in flight - as soon as you store it in a variable, it takes the variable's type.

posted this
