Lambda implicit capture fails with variable declared from structured binding



Core issue 2313 changed the standard so that structured bindings are never names of variables, making them never capturable.

P0588R1's reformulation of lambda capture wording makes this prohibition explicit:


If a lambda-expression [...] captures a structured binding (explicitly or implicitly), the program is ill-formed.

Note that this wording is supposedly a placeholder while the committee figures out exactly how such captures should work.


Previous answer kept for historical reasons:



This technically should compile, but there's a bug in the standard here.


The standard says that lambdas can only capture variables. And it says that a non-tuple-like structured binding declaration doesn't introduce variables. It introduces names, but those names aren't names of variables.
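A minimal sketch of the non-tuple-like case (the `Pair` struct and function name here are illustrative, not from the original post). Under the C++17 wording the commented-out direct capture is ill-formed, though some compilers accept it and C++20 later made such captures legal; an init-capture that copies the bindings works everywhere:

```cpp
#include <cassert>

// Pair is non-tuple-like: decomposing it introduces the names a and b,
// but under the C++17 wording those names are not names of variables.
struct Pair { int x; int y; };

int sum_via_init_capture() {
    auto [a, b] = Pair{1, 2};
    // auto bad = [a, b] { return a + b; };       // ill-formed per P0588R1
    auto good = [a = a, b = b] { return a + b; }; // init-capture copies: OK
    return good();
}
```

The init-capture works because `a` and `b` appear only as expressions in the capture initializers, which is ordinary use of the names rather than a capture of the bindings themselves.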


A tuple-like structured binding declaration, on the other hand, does introduce variables. a and b in auto [a, b] = std::make_tuple(1, 2); are actual reference-typed variables. So they can be captured by a lambda.
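That the tuple-like names really are reference-typed variables can be observed directly (a small sketch; the function name is mine). Binding by reference makes the effect visible on the original tuple:

```cpp
#include <tuple>

// For a tuple-like type, the structured binding names are variables of
// reference type, initialized from std::get on the bound object.
int write_through_binding() {
    auto t = std::make_tuple(1, 2);
    auto& [a, b] = t; // a and b refer into t itself
    a = 10;           // writes through the reference...
    return std::get<0>(t); // ...so t observes the change
}
```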

Obviously this is not a sane state of affairs, and the committee knows this, so a fix should be forthcoming (though there appears to be some disagreement over exactly how capturing a structured binding should work).

A possible workaround is to use a lambda capture with an initializer (init-capture). The following code compiles fine in Visual Studio 2017 15.5.

[] {
    auto [a, b] = [] { return std::make_tuple(1, 2); }();
    auto r = [a = a] { return a; }();
}();

As far as I know, it works on all compilers starting from C++17 and above, including clang.

You can use init-capture like this, as suggested in https://burnicki.pl/en/2021/04/19/capture-structured-bindings.html. The variable is captured by reference, so there is no overhead, and no need to deal with pointers.

auto [a, b] = [] { return std::make_tuple(1, 2); }();
auto r = [&a = a] { return a; }();

Using the same name for both the structured binding and its reference can be misleading, but it is effectively equivalent to

auto r = [&a_ref = a] { return a_ref; }();
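This can be exercised end-to-end (a small sketch; the function name is illustrative): the lambda captures a reference to the binding, so mutations inside the lambda are visible outside it.

```cpp
#include <tuple>

// Reference init-capture of a structured binding, reusing its own name.
// Valid in C++17: the initializer is just an expression use of `a`.
int increment_through_capture() {
    auto [a, b] = std::make_tuple(1, 2);
    auto inc = [&a = a] { ++a; }; // captures a reference, no copy
    inc();
    return a + b; // the lambda modified the binding itself
}
```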

As per the TensorFlow documentation, the prefetch and map methods of the tf.contrib.data.Dataset class both have a parameter called buffer_size.


For the prefetch method, the parameter is known as buffer_size, and according to the documentation:

buffer_size: A tf.int64 scalar tf.Tensor, representing the maximum number of elements that will be buffered when prefetching.

For the map method, the parameter is known as output_buffer_size, and according to the documentation:

output_buffer_size: (Optional.) A tf.int64 scalar tf.Tensor, representing the maximum number of processed elements that will be buffered.

Similarly for the shuffle method, the same quantity appears, and according to the documentation:

buffer_size: A tf.int64 scalar tf.Tensor, representing the number of elements from this dataset from which the new dataset will sample.

What is the relation between these parameters?

Suppose I create a Dataset object as follows:

tr_data = TFRecordDataset(trainfilenames)
tr_data = tr_data.map(providefortraining, output_buffer_size=10 * trainbatchsize,
                      num_parallel_calls=5)
tr_data = tr_data.shuffle(buffer_size=100 * trainbatchsize)
tr_data = tr_data.prefetch(buffer_size=10 * trainbatchsize)
tr_data = tr_data.batch(trainbatchsize)


What role is being played by the buffer parameters in the above snippet?
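For intuition about the shuffle buffer specifically, here is a plain-Python model of the sampling behavior the shuffle documentation describes (this is illustrative only, not TensorFlow code, and the function name is mine): a buffer of buffer_size elements is kept filled from the input stream, and each output element is drawn at random from that buffer.

```python
import random

def buffered_shuffle(stream, buffer_size, rng=None):
    """Model of Dataset.shuffle's documented semantics: sample uniformly
    from a sliding buffer of buffer_size elements (illustrative only)."""
    rng = rng or random.Random(0)
    buffer = []
    for item in stream:
        buffer.append(item)
        if len(buffer) > buffer_size:
            # Emit a random element once the buffer is full.
            yield buffer.pop(rng.randrange(len(buffer)))
    # Drain the remaining buffered elements in random order.
    while buffer:
        yield buffer.pop(rng.randrange(len(buffer)))

out = list(buffered_shuffle(range(10), buffer_size=3))
print(out)
```

Note how a small buffer_size only shuffles locally: the i-th output can only come from the first i + buffer_size + 1 input elements. prefetch's buffer_size, by contrast, only controls how many ready elements are held ahead of the consumer and does not change ordering.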