The F after numbers

What does the f after the numbers indicate? Is this from C or Objective-C? Is there any difference if I don't add it to a constant?

CGRect frame = CGRectMake(0.0f, 0.0f, 320.0f, 50.0f);

Can you explain why I wouldn't just write:

CGRect frame = CGRectMake(0, 0, 320, 50);

It's a C thing - floating point literals are double precision (double) by default. Adding an f suffix makes them single precision (float).

You can use ints to specify the values here, and in this case it will make no difference. But using the correct type is a good habit to get into: consistency is a good thing in general, and if you need to change these values later you'll know at first glance what type they are.
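You can check this for yourself with sizeof (a minimal C sketch; the exact sizes are platform-dependent, but 8 and 4 bytes are typical):

#include <stdio.h>

int main(void) {
    /* An unsuffixed floating point literal is a double... */
    printf("%zu\n", sizeof(0.0));   /* typically 8 (double) */
    /* ...while the f suffix makes it a float. */
    printf("%zu\n", sizeof(0.0f));  /* typically 4 (float) */
    return 0;
}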

It is almost certainly from C and reflects the desire to use a float rather than a double. It is similar to suffixes such as L on integer literals to indicate they are long. You can just use integers here and the compiler will convert them automatically (for this specific scenario).
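For illustration, a few common literal suffixes in C (a minimal sketch; the variable names are made up):

long   n = 10L;   /* L marks a long integer literal   */
float  x = 1.5f;  /* f marks a single-precision float */
double y = 1.5;   /* no suffix: double by default     */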

CGRect frame = CGRectMake(0.0f, 0.0f, 320.0f, 50.0f);

uses float constants. (An unsuffixed constant like 0.0 is a double in Objective-C; putting an f on the end - 0.0f - declares the constant as a (32-bit) float.)

CGRect frame = CGRectMake(0, 0, 320, 50);

uses ints which will be automatically converted to floats.

In this case, there's no (practical) difference between the two.
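To see the implicit conversion in isolation, here is a minimal C sketch; take_floats is a made-up helper standing in for a function with float parameters, like CGRectMake:

#include <stdio.h>

/* Hypothetical stand-in for a function taking float parameters. */
static void take_floats(float x, float y, float w, float h) {
    printf("%.1f %.1f %.1f %.1f\n", x, y, w, h);
}

int main(void) {
    take_floats(0, 0, 320, 50);             /* ints converted implicitly */
    take_floats(0.0f, 0.0f, 320.0f, 50.0f); /* identical result */
    return 0;
}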

From C. It means a float literal constant. You can omit both the "f" and the ".0" and use ints in your example, because ints are implicitly converted to floats.

Sometimes there is a difference.

#include <assert.h>

int main(void) {
    float f = 0.3;     /* OK: the double 0.3 is narrowed to float, throwing away bits */
    assert(f == 0.3);  /* not OK: f is widened back to double, and the value of 0.3
                          depends on how many bits you use to represent it */
    assert(f == 0.3f); /* OK: comparing two floats, although == is finicky */
}

A floating point literal in your source code is parsed as a double. Assigning it to a variable of type float loses precision - a lot of precision: a double holds roughly 15-16 significant decimal digits, a float only about 7. The "f" suffix lets you tell the compiler: "I know what I'm doing, this is intentional. Don't bug me about it."

The odds of producing a bug aren't that small, by the way. Many a program has keeled over on an ill-conceived floating point comparison or the assumption that 0.1 is exactly representable.
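A minimal C sketch that makes the 0.1 problem visible (the printed digits vary slightly by platform, but 0.1 is never exact in binary floating point):

#include <stdio.h>

int main(void) {
    double d = 0.1;
    float  f = 0.1f;
    /* Neither is exactly 0.1; the float is further off than the double. */
    printf("%.20f\n", d);  /* e.g. 0.10000000000000000555 */
    printf("%.20f\n", f);  /* e.g. 0.10000000149011611938 */
    return 0;
}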

When in doubt, check the assembler output. For instance, write a small, minimal snippet like this:

#import <Cocoa/Cocoa.h>

void test() {
    CGRect r = CGRectMake(0.0f, 0.0f, 320.0f, 50.0f);
    NSLog(@"%f", r.size.width);
}

Then compile it to assembler with the -S option.

gcc -S test.m

Save the assembler output in a test.s file, remove the .0f from the constants, and repeat the compile command. Then diff the new test.s against the previous one; that should show whether there are any real differences. Too many people have a mental model of what they think the compiler does, but at the end of the day one should know how to verify any theory.
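Concretely, the comparison might look like this (the output file names are just examples):

gcc -S test.m -o with_f.s      # version with the f suffixes
# edit test.m to remove the .0f suffixes, then:
gcc -S test.m -o without_f.s
diff with_f.s without_f.s      # no output means the generated code is identical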

It usually tells the compiler that the value is a float, i.e. a floating point number. This means it can store integer, decimal and exponential values, e.g. 1, 0.4 or 1.2e+22.

It tells the compiler that this is a floating point number (I assume you are talking about C/C++ here). If there is no f after the number, it is considered a double or an integer (depending on whether there is a decimal point).

3.0f -> float
3.0 -> double
3 -> integer

The f that you are talking about tells the compiler that it is working with a float. When you omit the f, the literal is usually treated as a double.

Both are floating point types, but a float uses fewer bits (and is thus smaller and less precise) than a double.
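A minimal C sketch showing the size and precision difference (the exact values come from your platform's <float.h>, but 4/8 bytes and 6/15 digits are typical):

#include <stdio.h>
#include <float.h>

int main(void) {
    printf("float:  %zu bytes, %d significant digits\n", sizeof(float),  FLT_DIG);
    printf("double: %zu bytes, %d significant digits\n", sizeof(double), DBL_DIG);
    return 0;
}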