Using multiple instances of MemoryCache

I'd like to add caching capabilities to my application using the System.Runtime.Caching namespace, and would probably want to use that cache in several places and in different contexts. To do so, I want to use several MemoryCache instances.

However, I see here that using more than one instance of MemoryCache is discouraged:

MemoryCache is not a singleton, but you should create only a few or potentially only one MemoryCache instance, and code that caches items should use those instances.

How would multiple MemoryCache instances affect my application? I find this odd, because it seems to me that using multiple caches in an application is a pretty common scenario.

Edit: To be more specific, I have a class that should keep a cache for each of its instances. Should I avoid using MemoryCache and look for a different caching solution instead? Is using MemoryCache in this situation considered bad, and if so, why?
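To illustrate, here is a simplified sketch of the per-instance pattern I have in mind (class and member names are placeholders, not my actual code):

```csharp
using System;
using System.Runtime.Caching;

// Hypothetical example: each instance of this class owns its own cache,
// which is the pattern the documentation seems to discourage.
public class ResourceProvider : IDisposable
{
    private readonly MemoryCache _cache;

    public ResourceProvider(string name)
    {
        // One MemoryCache per ResourceProvider instance.
        _cache = new MemoryCache(name);
    }

    public object GetOrLoad(string key, Func<object> load)
    {
        var value = _cache.Get(key);
        if (value == null)
        {
            value = load();
            _cache.Add(key, value, DateTimeOffset.Now.AddMinutes(10));
        }
        return value;
    }

    // MemoryCache holds unmanaged-ish resources (timers, counters),
    // so each instance should be disposed.
    public void Dispose() => _cache.Dispose();
}
```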


I use several too. Generally one per type.

Looking at the MemoryCache source, I see that it hooks into AppDomain events and maintains performance counters. I suspect there is some resource overhead (CPU, counters, and memory) for each additional instance, and that's why it's discouraged.

I recently went through this myself as well. Considering that an in-memory cache is process-specific (not shared across multiple instances of a website, a native business app, or multiple servers), there is really no benefit to having multiple MemoryCache instances except for code-organization reasons, which can be achieved in other ways.

MemoryCache is intended to be used on its own mostly because of its memory-management capabilities. In addition to the performance counters (which do have some overhead), MemoryCache is also able to expire items when it runs out of allocated memory.

If the current instance of the cache exceeds the limit on memory set by the CacheMemoryLimit property, the cache implementation removes cache entries. Each cache instance in the application can use the amount of memory that is specified by the CacheMemoryLimit property.

from MemoryCache.CacheMemoryLimit Property

By using only one instance of MemoryCache, it can apply this memory management efficiently across the entire application instance, expiring the least important items application-wide. This ensures maximum memory use without exceeding your hardware's capabilities. By limiting the scope of any one MemoryCache (say, to one instance of a class), it can no longer effectively manage memory for your application, because it can't "see" everything. If all of these caches were "busy", you would have a harder time managing memory, and it would never be nearly as efficient.

This is particularly sensitive in applications which don't have the luxury of a dedicated server. Imagine you are running your app on a shared server where you've only been allocated 150 MB of RAM (common on cheap $10/month hosting): you need to count on your cache to use that to the max without exceeding it. If you exceed this memory usage, your app pool will be recycled and your app loses all its in-memory caches (a common cheap-hosting practice). The same could apply to a non-web app hosted in-house on some shared corporate server: you're told not to hog all the memory on that machine and to peacefully co-exist with other line-of-business apps.

That memory-limit/app-pool-recycle/lost-cache combination is a common Achilles heel for web apps. When the apps are at their busiest, they reset most often due to exceeding memory allocations, losing all cache entries and therefore doing the most work re-fetching things that should have been cached in the first place. Meaning the app actually loses performance at max load instead of gaining it.

I know MemoryCache is the non-web-specific version of the System.Web.Caching.Cache implementation, but it illustrates the logic behind the cache's design. The same logic applies in a non-web project if you don't have exclusive use of the hardware. Remember, if your cache forces the machine to start swapping to the pagefile, then your cache is no longer any faster than caching on disk. You'll always want a limit somewhere, even if that limit is 2 GB or so.

In my case, after reading up on this, I switched to using one public static MemoryCache in my app and simply segregated cached items by their cache keys. For example, if you want to cache per instance, you could use a cache key like "instance-{instanceId}-resourceName-{resourceId}". Think of it as namespacing your cache entries.
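A rough sketch of that approach (the class, the key format, and the expiration are placeholders for your own conventions):

```csharp
using System;
using System.Runtime.Caching;

public static class AppCache
{
    // One shared cache for the whole application, so MemoryCache can
    // manage memory and expire entries across everything at once.
    public static readonly MemoryCache Instance = MemoryCache.Default;

    // Namespace entries by instance and resource instead of
    // creating one MemoryCache per class instance.
    public static string Key(int instanceId, int resourceId) =>
        $"instance-{instanceId}-resourceName-{resourceId}";
}

// Usage (hypothetical ids and value):
// AppCache.Instance.Add(AppCache.Key(1, 42), someValue,
//                       DateTimeOffset.Now.AddMinutes(5));
// var cached = AppCache.Instance.Get(AppCache.Key(1, 42));
```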

Hope that helps!