Consider the following function:
void func(bool& flag)
{
    if(!flag) flag=true;
}
It seems to me that if flag holds a valid boolean value, this is equivalent to unconditionally setting it to true, like this:
void func(bool& flag)
{
    flag=true;
}
However, neither gcc nor clang optimizes it this way; both produce the following at the -O3 optimization level:
_Z4funcRb:
.LFB0:
        .cfi_startproc
        cmp     BYTE PTR [rdi], 0
        jne     .L1
        mov     BYTE PTR [rdi], 1
.L1:
        rep ret
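
For comparison, the unconditional version reduces to a single store; on the same target both compilers emit roughly:

_Z4funcRb:
        mov     BYTE PTR [rdi], 1
        ret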
My question is: is it just that this code is too special-case to be worth optimizing, or are there good reasons why such an optimization would be undesirable, given that flag is not a reference to volatile? The only reason I can see is that flag could somehow hold a value other than true or false without undefined behavior at the point of reading it, but I'm not sure whether that is possible.
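
To make that last concern concrete, here is a minimal sketch of the kind of scenario I have in mind (purely illustrative; whether reading the resulting bool avoids undefined behavior is exactly the open point):

#include <cstring>

void func(bool& flag)            // the function from above, repeated here
{
    if(!flag) flag=true;
}

int main()
{
    unsigned char byte = 2;      // a bit pattern that is neither 0 nor 1
    bool b;
    std::memcpy(&b, &byte, 1);   // place that raw byte into the bool's storage
    func(b);                     // the conditional code above sees a nonzero byte,
                                 // skips the store and leaves 2 behind; the
                                 // unconditional store would normalize it to 1
}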