What is the cost of an L1 cache miss?

Edit: For reference purposes (in case anyone stumbles across this question), Igor Ostrovsky wrote a great post about cache misses. It discusses several different issues and shows example numbers. End Edit

I did some testing of <long story goes here> and wondered whether the performance difference was due to memory cache misses. The code below demonstrates the issue and boils it down to the critical timing portion. It has two timing loops: one accesses the memory in ascending address order, the other in a shuffled (random) order.

I ran it on an XP machine (compiled with VS2005: cl /O2) and on a Linux box (gcc -Os). Both produced similar times. These times are in milliseconds. I believe all the loops are actually running and not optimized away (otherwise it would run "instantly").

*** Testing 20000 nodes
Total Ordered Time: 888.822899
Total Random Time: 2155.846268

Do these numbers make sense? Is the difference mostly due to L1 cache misses, or is something else going on? There are 20,000^2 memory accesses, and if every one of them were a cache miss, that works out to about 3.2 ns per miss. The XP (P4) machine I tested on runs at 3.2 GHz, and I suspect (but don't know) that it has a 32 KB L1 cache and 512 KB of L2. With 20,000 entries (80 KB), I assume there isn't a significant number of L2 misses. So that would be (3.2*10^9 cycles/second) * (3.2*10^-9 seconds/miss) = 10.1 cycles/miss. That seems high to me. Maybe it isn't, or maybe my math is bad. I tried measuring the cache misses with VTune, but I got a BSOD. And now I can't get it to connect to the license server (grrrr).

#include <stdio.h>
#include <stdlib.h>
#include <time.h>
#include <assert.h>

#if defined( WIN32 )
#include <windows.h>
#endif

typedef struct stItem
{
    long     lData;
    //char     acPad[20];
} LIST_NODE;


#if defined( WIN32 )
void StartTimer( LONGLONG *pt1 )
{
    QueryPerformanceCounter( (LARGE_INTEGER*)pt1 );
}

void StopTimer( LONGLONG t1, double *pdMS )
{
    LONGLONG t2, llFreq;

    QueryPerformanceCounter( (LARGE_INTEGER*)&t2 );
    QueryPerformanceFrequency( (LARGE_INTEGER*)&llFreq );
    *pdMS = ((double)( t2 - t1 ) / (double)llFreq) * 1000.0;
}
#else
// doesn't need a 64-bit integer in this case
typedef long long LONGLONG;   // not provided outside of windows.h

void StartTimer( LONGLONG *pt1 )
{
    // Just use clock(), this test doesn't need higher resolution
    *pt1 = clock();
}

void StopTimer( LONGLONG t1, double *pdMS )
{
    LONGLONG t2 = clock();
    *pdMS = (double)( t2 - t1 ) * 1000.0 / (double)CLOCKS_PER_SEC;
}
#endif


long longrand()
{
#if defined( WIN32 )
    // Stupid cheesy way to make sure it is not just a 16-bit rand value
    return ( rand() << 16 ) | rand();
#else
    return rand();
#endif
}

// get random value in the given range [m, n]
int randint( int m, int n )
{
    int ret = longrand() % ( n - m + 1 );
    return ret + m;
}

// I think I got this out of Programming Pearls (Bentley).
void ShuffleArray
(
    long *plShuffle,  // (O) return array of "randomly" ordered integers
    long lNumItems    // (I) length of array
)
{
    long i;
    long j;
    long t;

    for ( i = 0; i < lNumItems; i++ )
        plShuffle[i] = i;

    for ( i = 0; i < lNumItems; i++ )
    {
        j = randint( i, lNumItems - 1 );

        t = plShuffle[i];
        plShuffle[i] = plShuffle[j];
        plShuffle[j] = t;
    }
}


int main( int argc, char* argv[] )
{
    long          *plDataValues;
    LIST_NODE     *pstNodes;
    long          lNumItems = 20000;
    long          i, j;
    LONGLONG      t1;  // for timing
    double        dms;

    if ( argc > 1 && atoi(argv[1]) > 0 )
        lNumItems = atoi( argv[1] );

    printf( "\n\n*** Testing %ld nodes\n", lNumItems );

    srand( (unsigned int)time( 0 ));

    // allocate the nodes as one single chunk of memory
    pstNodes = (LIST_NODE*)malloc( lNumItems * sizeof( LIST_NODE ));
    assert( pstNodes != NULL );

    // Create an array that gives the access order for the nodes
    plDataValues = (long*)malloc( lNumItems * sizeof( long ));
    assert( plDataValues != NULL );

    // Access the data in order
    for ( i = 0; i < lNumItems; i++ )
        plDataValues[i] = i;

    StartTimer( &t1 );

    // Loop through and access the memory a bunch of times
    for ( j = 0; j < lNumItems; j++ )
    {
        for ( i = 0; i < lNumItems; i++ )
        {
            pstNodes[plDataValues[i]].lData = i * j;
        }
    }

    StopTimer( t1, &dms );
    printf( "Total Ordered Time: %f\n", dms );

    // now access the array positions in a "random" order
    ShuffleArray( plDataValues, lNumItems );

    StartTimer( &t1 );

    for ( j = 0; j < lNumItems; j++ )
    {
        for ( i = 0; i < lNumItems; i++ )
        {
            pstNodes[plDataValues[i]].lData = i * j;
        }
    }

    StopTimer( t1, &dms );
    printf( "Total Random Time: %f\n", dms );

    free( plDataValues );
    free( pstNodes );
    return 0;
}

Well yeah that does look like it will mainly be L1 cache misses.

10 cycles for an L1 cache miss does sound about reasonable, probably a little on the low side.

A read from RAM is going to take on the order of 100s or maybe even 1000s (am too tired to attempt to do the maths right now ;)) of cycles, so it's still a huge win over that.

It's difficult to say anything for sure without a lot more testing, but in my experience that scale of difference definitely can be attributed to the CPU L1 and/or L2 cache, especially in a scenario with randomized access. You could probably make it even worse by ensuring that each access is at least some minimum distance from the last.

The easiest thing to do is to take a scaled photograph of the target CPU and physically measure the distance between the core and the level-1 cache. Divide that distance by the speed at which electrical signals travel in copper to get the round-trip time, then figure out how many clock cycles fit in that same time. That's the minimum number of CPU cycles you'll waste on an L1 cache miss.

You can also work out the minimum cost of fetching data from RAM in terms of the number of CPU cycles wasted in the same way. You might be amazed.

Notice that what you're seeing here definitely has something to do with cache misses (be it L1, or both L1 and L2), because normally the cache pulls in a whole cache line once you access anything on it, requiring fewer trips to RAM.

However, what you're probably also seeing is the fact that RAM (even though it's called Random Access Memory) still prefers linear memory access.

3.2ns for an L1 cache miss is entirely plausible. For comparison, on one particular modern multicore PowerPC CPU, an L1 miss is about 40 cycles -- a little longer for some cores than others, depending on how far they are from the L2 cache (yes really). An L2 miss is at least 600 cycles.

Cache is everything in performance; CPUs are so much faster than memory now that you're really almost optimizing for the memory bus instead of the core.

While I can't offer an answer to whether or not the numbers make sense (I'm not well versed in the cache latencies, but for the record ~10 cycle L1 cache misses sounds about right), I can offer you Cachegrind as a tool to help you actually see the differences in cache performance between your 2 tests.

Cachegrind is a Valgrind tool (the framework that powers the always-lovely memcheck) which profiles cache and branch hits/misses. It will give you an idea of how many cache hits/misses you are actually getting in your program.
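For what it's worth, a typical Cachegrind run against the question's program might look like this (the binary name is just an assumption):

```shell
# Build with debug info so cachegrind can annotate source lines
gcc -O2 -g cachetest.c -o cachetest

# Run under cachegrind; on exit it prints totals for the I1/D1 (L1)
# and LL (last-level) caches: reads, writes, and misses for each
valgrind --tool=cachegrind ./cachetest

# Annotate per-line miss counts from the cachegrind.out.<pid> file
cg_annotate cachegrind.out.*
```

Comparing the D1 miss count between the ordered and shuffled loops should show directly whether the timing gap matches the miss gap.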

Some numbers for a 3.4GHz P4 from a Lavalys Everest run:

  • the L1 dcache is 8K (cacheline 64 bytes)
  • L2 is 512K
  • L1 fetch latency is 2 cycles
  • L2 fetch latency is about double what you are seeing: 20 cycles

More here: http://www.freeweb.hu/instlatx64/GenuineIntel0000F25_P4_Gallatin_MemLatX86.txt

(for the latencies look at the bottom of the page)

If you plan on using Cachegrind, please note that it is a cache hit/miss simulator only, and won't always be accurate. For example: if you access some memory location, say 0x1234, in a loop 1000 times, Cachegrind will always show you that there was only one cache miss (the first access), even if you have something like clflush 0x1234 in your loop.

On x86, that clflush will cause all 1000 accesses to miss in the real hardware.

Here is an attempt to provide insight into the relative cost of cache misses by analogy with baking chocolate chip cookies ...

Your hands are your registers. It takes you 1 second to drop chocolate chips into the dough.

The kitchen counter is your L1 cache, twelve times slower than registers. It takes 12 x 1 = 12 seconds to step to the counter, pick up the bag of walnuts, and empty some into your hand.

The fridge is your L2 cache, four times slower than L1. It takes 4 x 12 = 48 seconds to walk to the fridge, open it, move last night's leftovers out of the way, take out a carton of eggs, open the carton, put 3 eggs on the counter, and put the carton back in the fridge.

The cupboard is your L3 cache, three times slower than L2. It takes 3 x 48 = 2 minutes and 24 seconds to take three steps to the cupboard, bend down, open the door, root around to find the baking supply tin, extract it from the cupboard, open it, dig to find the baking powder, put it on the counter and sweep up the mess you spilled on the floor.

And main memory? That's the corner store, 5 times slower than L3. It takes 5 x 2:24 = 12 minutes to find your wallet, put on your shoes and jacket, dash down the street, grab a litre of milk, dash home, take off your shoes and jacket, and get back to the kitchen.

Note that all these accesses are constant complexity -- O(1) -- but the differences between them can have a huge impact on performance. Optimizing purely for big-O complexity is like deciding whether to add chocolate chips to the batter 1 at a time or 10 at a time, but forgetting to put them on your grocery list.

Moral of the story: Organize your memory accesses so the CPU has to go for groceries as rarely as possible.

Numbers were taken from the CPU Cache Flushing Fallacy blog post, which indicates that for a particular 2012-era Intel processor, the following is true:

  • register access = 4 instructions per cycle
  • L1 latency = 3 cycles (12 x register)
  • L2 latency = 12 cycles (4 x L1, 48 x register)
  • L3 latency = 38 cycles (3 x L2, 12 x L1, 144 x register)
  • DRAM latency = 65 ns = 195 cycles on a 3 GHz CPU (5 x L3, 15 x L2, 60 x L1, 720 x register)

The Gallery of Processor Cache Effects also makes good reading on this topic.

Mmmm, cookies ...