Narrowing conversions in C++0x. Is it just me, or does this sound like a breaking change?

C++0x will make the following code, and similar code, ill-formed, because it requires what is called a narrowing conversion of a double to an int.

int a[] = { 1.0 };

I'm wondering whether this kind of initialization is used much in real-world code. How much code will be broken by this change? If your code is affected at all, is it a lot of effort to fix it?


For reference, see 8.5.4/6 of n3225 (each case is illustrated in the sketch after the list):

A narrowing conversion is an implicit conversion

  • from a floating-point type to an integer type, or
  • from long double to double or float, or from double to float, except where the source is a constant expression and the actual value after conversion is within the range of values that can be represented (even if it cannot be represented exactly), or
  • from an integer type or unscoped enumeration type to a floating-point type, except where the source is a constant expression and the actual value after conversion will fit into the target type and will produce the original value when converted back to the original type, or
  • from an integer type or unscoped enumeration type to an integer type that cannot represent all the values of the original type, except where the source is a constant expression and the actual value after conversion will fit into the target type and will produce the original value when converted back to the original type.
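To make the four bullets concrete, here is a minimal sketch of my own (assuming a 32-bit int and a 16-bit short); the lines marked "narrowing" are the ones C++0x rejects inside braces:

void narrowing_cases(double d, int i) {
    int   a1[] = { d };        // (1) floating -> integral: always narrowing
    float a2[] = { d };        // (2) double -> float, non-constant source: narrowing
    float a3[] = { 1.0 };      // (2) constant within float's range: OK
    float a4[] = { i };        // (3) integral -> floating, non-constant source: narrowing
    float a5[] = { 42 };       // (3) constant exactly representable as float: OK
    short a6[] = { i };        // (4) int -> short, non-constant source: narrowing
    short a7[] = { 100 };      // (4) constant that fits in short: OK
    short a8[] = { 1 << 20 };  // (4) constant too big for a 16-bit short: narrowing
}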

I would be surprised and disappointed in myself to learn that any of the C++ code I wrote in the last 12 years had this sort of problem. But most compilers would have spewed warnings about any compile-time "narrowings" all along, unless I'm missing something.

Are these also narrowing conversions?

unsigned short b[] = { -1, INT_MAX };

If so, I think they might come up a bit more often than your floating-type to integral-type example.
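For what it's worth, by my reading of the quoted bullets they are: -1 and INT_MAX are constant expressions, but neither value fits in unsigned short (assuming the usual 16-bit unsigned short and 32-bit int), so the exception doesn't apply and the initializer would presumably need explicit casts, something like:

#include <climits>

unsigned short b[] = { static_cast<unsigned short>(-1),
                       static_cast<unsigned short>(INT_MAX) };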

I wouldn't be all that surprised if somebody gets caught out by something like:

float ra[] = {0, CHAR_MAX, SHRT_MAX, INT_MAX, LONG_MAX};

(on my implementation, the last two don't produce the same result when converted back to int/long, hence are narrowing)

I don't remember ever writing this, though. It's only useful if an approximation to the limits is useful for something.

This seems at least vaguely plausible too:

void some_function(int val1, int val2) {
    float asfloat[] = {val1, val2};    // not in C++0x
    double asdouble[] = {val1, val2};  // not in C++0x
    int asint[] = {val1, val2};        // OK
    // now do something with the arrays
}

but it isn't entirely convincing, because if I know I have exactly two values, why put them in arrays rather than just float floatval1 = val1, floatval2 = val2;? What's the motivation, though, for why that should compile (and work, provided the loss of precision is within acceptable accuracy for the program), while float asfloat[] = {val1, val2}; shouldn't? Either way I'm initializing two floats from two ints; it's just that in one case the two floats happen to be members of an aggregate.

That seems particularly harsh in cases where a non-constant expression results in a narrowing conversion even though (on a particular implementation), all values of the source type are representable in the destination type and convertible back to their original values:

char i = something();
static_assert(CHAR_BIT == 8, "assumes an 8-bit char");
double ra[] = {i}; // how is this worse than using a constant value?

Assuming there's no bug, presumably the fix is always to make the conversion explicit. Unless you're doing something odd with macros, I think an array initializer only appears close to the type of the array, or at least to something representing the type, which could be dependent on a template parameter. So a cast should be easy, if verbose.
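For the char-to-double example above, the cast-based fix would presumably look like this (a sketch; something() just stands in for whatever produces the value, as above):

char something();  // declared elsewhere, as in the snippet above

void fixed() {
    char i = something();
    double ra[] = { static_cast<double>(i) };  // explicit cast, so nothing narrows inside the braces
}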

I ran into this breaking change when I used GCC. The compiler printed an error for code like this:

void foo(const unsigned long long &i)
{
    unsigned int a[2] = {i & 0xFFFFFFFF, i >> 32};
}

In function void foo(const long long unsigned int&):

error: narrowing conversion of (((long long unsigned int)i) & 4294967295ull) from long long unsigned int to unsigned int inside { }

error: narrowing conversion of (((long long unsigned int)i) >> 32) from long long unsigned int to unsigned int inside { }

Fortunately, the error messages were straightforward and the fix was simple:

void foo(const unsigned long long &i)
{
    unsigned int a[2] = {static_cast<unsigned int>(i & 0xFFFFFFFF),
                         static_cast<unsigned int>(i >> 32)};
}

The code was in an external library, with only two occurrences in one file. I don't think the breaking change will affect much code. Novices might get confused, though.

Narrowing conversion errors interact badly with implicit integer promotion rules.

I had an error with code which looked like

struct char_t {
    char a;
};

void function(char c, char d) {
    char_t a = { c+d };
}

which produces a narrowing conversion error (which is correct according to the standard). The reason is that c and d are implicitly promoted to int, and the resulting int isn't allowed to be narrowed back to char in an initializer list.

OTOH

void function(char c, char d) {
    char a = c+d;
}

is of course still fine (otherwise all hell would break loose). But surprisingly, even

template<char c, char d>
void function() {
    char_t a = { c+d };
}

is OK and compiles without a warning if the sum of c and d is less than CHAR_MAX. I still think this is a defect in C++11, but the committee thinks otherwise - possibly because it isn't easy to fix without getting rid of either implicit integer promotion (which is a relic from the past, when people wrote code like char a = b*c/d; and expected it to work even if (b*c) > CHAR_MAX) or narrowing conversion errors (which are arguably a good thing).
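For the non-template case the workaround is, again, to spell the conversion out (a sketch reusing char_t from above; the truncation is then explicit instead of being hidden by the promotion):

void function(char c, char d) {
    char_t a = { static_cast<char>(c + d) };  // cast the promoted int back to char explicitly
}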

It looks like GCC-4.7 no longer gives errors for narrowing conversions, but warnings instead.

A practical instance that I have encountered:

float x = 4.2; // an input argument
float a[2] = {x-0.5, x+0.5};

The numeric literal 0.5 is implicitly double, which promotes the whole expression to double, so the results have to be narrowed back to float inside the braces.
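Keeping the arithmetic in float avoids the narrowing; a sketch using float literals (an explicit cast would work too):

float x = 4.2f;                       // an input argument
float a[2] = { x - 0.5f, x + 0.5f };  // float literals: no double intermediate, nothing to narrow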

Try adding -Wno-narrowing to your CFLAGS, for example:

CFLAGS += -std=c++0x -Wno-narrowing

It was indeed a breaking change, as real-life experience with this feature has shown: gcc turned narrowing from an error into a warning for many cases because of the real-life pain of porting C++03 code bases to C++11. See this comment in a gcc bug report:

The standard only requires that "a conforming implementation shall issue at least one diagnostic message" so compiling the program with a warning is allowed. As Andrew said, -Werror=narrowing allows you to make it an error if you want.

G++ 4.6 gave an error but it was changed to a warning intentionally for 4.7 because many people (myself included) found that narrowing conversions were one of the most commonly encountered problems when trying to compile large C++03 codebases as C++11. Previously well-formed code such as char c[] = { i, 0 }; (where i will only ever be within the range of char) caused errors and had to be changed to char c[] = { (char)i, 0 };