The %d specifier tells printf to expect an integer. So the first four (or two, depending on the platform) bytes of the float are interpreted as an integer. If they happen to be zero, a zero is printed.
The binary representation of 1234.5 is something like

1.00110100101 * 2^10 (the exponent is in decimal)

With a C compiler that passes the float to printf as an IEEE 754 double value, the bytes would be (if I made no mistake)

40 93 4A 00 00 00 00 00
On an Intel (x86) system, which is little-endian (i.e. the least significant byte comes first), this byte sequence is reversed in memory, so the first four bytes are zero. That is what printf prints out.
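To check this, here is a minimal sketch (not part of the original answer) that pulls the sign, exponent and mantissa fields out of 1234.5 with memcpy and bit masking, assuming a 64-bit IEEE 754 double:

#include <stdio.h>
#include <string.h>
#include <stdint.h>

int main(void)
{
    double d = 1234.5;   /* the value the float 1234.5f becomes after promotion */
    uint64_t bits;

    /* Copy the object representation into an integer (assumes 64-bit IEEE 754 doubles). */
    memcpy(&bits, &d, sizeof bits);

    unsigned sign     = (unsigned)(bits >> 63);
    unsigned exponent = (unsigned)((bits >> 52) & 0x7FF);        /* biased by 1023 */
    unsigned long long mantissa = bits & 0xFFFFFFFFFFFFFULL;     /* 52 fraction bits */

    printf("raw bits : 0x%016llx\n", (unsigned long long)bits);
    printf("sign     : %u\n", sign);
    printf("exponent : %u (unbiased: %d)\n", exponent, (int)exponent - 1023);
    printf("mantissa : 0x%013llx\n", mantissa);
    return 0;
}

Under those assumptions it should report an unbiased exponent of 10 and the raw bits 0x40934a0000000000, i.e. the byte sequence 40 93 4A 00 00 00 00 00 shown above.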
The reason is that printf() is a pretty dumb function. It does not check types at all. If you say the first argument is an int (and that is what you are saying with %d), it believes you and takes just the bytes needed for an int. In this case, assuming your machine uses a four-byte int and an eight-byte double (the float is converted to a double before it ever reaches printf()), the first four bytes of a are just zeroes, and that is what gets printed.
Because you invoked undefined behaviour: you violated the contract of the printf() function by lying to it about its parameter types, so the compiler is free to do whatever it pleases. It could make the program output "dksjalk is a ninnyhead!!!" and technically it would still be right.
That's because %d expects an int but you've provided a float.
Use %e/%f/%g to print the float.
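For instance, a minimal sketch of the corrected calls (keeping the variable name a from the question):

#include <stdio.h>

int main(void)
{
    float a = 1234.5f;

    /* %f, %e and %g all expect a double; the float argument is promoted to double. */
    printf("%f\n", a);   /* 1234.500000 */
    printf("%e\n", a);   /* 1.234500e+03 */
    printf("%g\n", a);   /* 1234.5 */
    return 0;
}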
On why 0 is printed: the floating-point number is converted to double before being passed to printf. The number 1234.5 in double representation, in little-endian byte order, is
00 00 00 00 00 4A 93 40
A %d consumes a 32-bit int, and the first four bytes here are all zero, so a zero is printed. (As a test, you could try printf("%d, %d\n", 1234.5f); you might get 0, 1083394560 as output.)
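Those two numbers can be reproduced without the undefined behaviour by copying the promoted double into two 32-bit halves; a sketch, assuming 64-bit IEEE 754 doubles, a 32-bit int and a little-endian machine such as x86:

#include <stdio.h>
#include <string.h>
#include <stdint.h>
#include <inttypes.h>

int main(void)
{
    double d = 1234.5f;   /* the double that printf actually receives for 1234.5f */
    uint32_t lo, hi;

    /* On a little-endian machine the least significant half comes first in memory. */
    memcpy(&lo, (unsigned char *)&d, sizeof lo);
    memcpy(&hi, (unsigned char *)&d + sizeof lo, sizeof hi);

    printf("%" PRIu32 ", %" PRIu32 "\n", lo, hi);   /* 0, 1083394560 */
    return 0;
}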
As for why the float is converted to double: the prototype of printf is int printf(const char *, ...), so from 6.5.2.2/7,
The ellipsis notation in a function prototype declarator causes argument type conversion to stop after the last declared parameter. The default argument promotions are performed on trailing arguments.
and from 6.5.2.2/6,
If the expression that denotes the called function has a type that does not include a prototype, the integer promotions are performed on each argument, and arguments that have type float are promoted to double. These are called the default argument promotions.
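The same promotion applies to any variadic function, not just printf. Here is a small sketch (the function name take_double is made up for illustration); inside it, va_arg has to ask for double, because no float argument can ever reach it:

#include <stdio.h>
#include <stdarg.h>

/* A toy variadic function: reads its one trailing argument as a double. */
static double take_double(int dummy, ...)
{
    va_list ap;
    va_start(ap, dummy);
    /* va_arg(ap, float) would be wrong: a float argument has already been
       promoted to double by the default argument promotions. */
    double d = va_arg(ap, double);
    va_end(ap);
    return d;
}

int main(void)
{
    float f = 1234.5f;
    printf("%f\n", take_double(0, f));   /* prints 1234.500000 */
    return 0;
}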
Technically speaking, there is no "the printf": each library implements its own, so studying printf's behavior the way you are doing is not going to be of much general use. What you can study is the behavior of printf on your system, and for that you should read the documentation and look at the source code of printf if it is available for your library.
For example, on my MacBook, I get the output 1606416304 with your program.
Having said that, when you pass a float to a variadic function, the float is passed as a double. So, your program is equivalent to having declared a as a double.
To examine the bytes of a double, you can see this answer to a recent question here on SO.
Let's do that:
#include <stdio.h>

int main(void)
{
    double a = 1234.5f;
    unsigned char *p = (unsigned char *)&a;
    size_t i;

    printf("size of double: %zu, int: %zu\n", sizeof(double), sizeof(int));
    for (i = 0; i < sizeof a; ++i)
        printf("%02x ", p[i]);
    putchar('\n');
    return 0;
}
When I run the above program, I get:
size of double: 8, int: 4
00 00 00 00 00 4a 93 40
So, the first four bytes of the double turned out to be 0, which may be why you got 0 as the output of your printf call.
For more interesting results, we can change the program a bit:
#include <stdio.h>

int main(void)
{
    double a = 1234.5f;
    int b = 42;

    printf("%d %d\n", a, b);
    return 0;
}
When I run the above program on my MacBook, I get: