It is well known that in C, floating point literals (e.g. `1.23`) have type `double`. As a consequence, any calculation that involves them is promoted to `double`.
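For illustration, here is a minimal snippet (the variable name and values are only for the example) that demonstrates the literal's type and the resulting promotion:

    #include <stdio.h>

    int main(void)
    {
        float x = 1.0f;

        /* 1.23 is a double literal, so x is converted to double and the
           comparison is carried out in double precision. */
        if (x < 1.23)
            printf("compared in double precision\n");

        /* sizeof reveals the literal's type: typically 8 vs. 4 bytes. */
        printf("sizeof(1.23)  = %zu\n", sizeof(1.23));
        printf("sizeof(1.23f) = %zu\n", sizeof(1.23f));
        return 0;
    }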
I'm working on an embedded real-time system whose floating point unit supports only single precision (`float`). All my variables are `float`, and that precision is sufficient; I don't need (nor can I afford) `double` at all. But every time something like

    if (x < 2.5) ...

is written, disaster strikes: the slowdown can be up to two orders of magnitude. Of course, the direct fix is to write

    if (x < 2.5f) ...

but this is easy to miss (and hard to detect until it is too late), especially when a 'configuration' value is `#define`d in a separate file by a less disciplined (or simply new) developer.
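As a concrete example of that failure mode (the macro and function names below are hypothetical), nothing at the use site hints that double arithmetic is being forced:

    /* Imagine this line lives in a shared configuration header
       maintained by another developer: */
    #define TEMP_LIMIT  2.5   /* no 'f' suffix: this is a double literal */

    extern float read_temperature(void);

    void check_temperature(void)
    {
        float t = read_temperature();

        /* t is silently promoted to double and the comparison runs in
           (software-emulated) double precision, even though every
           variable in the program is float. */
        if (t > TEMP_LIMIT) {
            /* ... handle over-temperature ... */
        }
    }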
So, is there a way to force the compiler to treat all (floating point) literals as `float`, as if they had the `f` suffix? Even if it goes against the standard, I don't care. Or is there any other solution? The compiler is gcc, by the way.