Best practices for reducing garbage-collector activity in JavaScript

I have a fairly complex JavaScript application with a main loop that runs 60 times per second. There seems to be a lot of garbage collection going on (based on the "sawtooth" output of the Memory timeline in the Chrome dev tools), and it often affects the application's performance.

So I'm looking into best practices for reducing the amount of work the garbage collector has to do. (Most of the information I've found online is about avoiding memory leaks, which is a slightly different problem: my memory is being freed, there's just too much garbage collection going on.) I assume it mostly comes down to reusing objects as much as possible, but of course the details are key.

The application is built with "classes" along the lines of John Resig's Simple JavaScript Inheritance.

I think one problem is that some functions are called thousands of times per second (because they are used hundreds of times in each iteration of the main loop), and perhaps the local working variables in those functions (strings, arrays, etc.) are part of the problem.

I'm aware of object pooling for large or heavyweight objects (and we use it to some extent), but I'm looking for techniques that can be applied across the board, especially techniques relating to functions that are called many times in tight loops.

What techniques can I use to reduce the amount of work the garbage collector has to do?

And perhaps also: what techniques can be used to determine which objects are being garbage collected the most? (It's a very large codebase, so comparing heap snapshots hasn't been very productive.)

42557 views

As a general principle you'd want to cache as much as possible and do as little creating and destroying as possible on each run of your loop.

The first thing that pops into my head is to reduce the use of anonymous functions (if you have any) inside your main loop. It's also easy to fall into the trap of creating and destroying objects that are passed into other functions. I'm by no means a javascript expert, but I would imagine that this:

var options = { var1: value1, var2: value2, ChangingVariable: value3 };

function loopfunc() {
    // do something
}

while (true) {
    $.each(listofthings, loopfunc);

    options.ChangingVariable = newvalue;
    someOtherFunction(options);
}

would run much faster than this:

while (true) {
    $.each(listofthings, function () {
        // do something on the list
    });

    someOtherFunction({
        var1: value1,
        var2: value2,
        ChangingVariable: newvalue
    });
}

Is there ever any downtime for your program? Maybe you need it to run smoothly for a second or two (e.g. for an animation) and then it has more time to process? If this is the case, I could see taking objects that would normally be garbage collected throughout the animation and keeping a reference to them in some global object. Then when the animation ends you can clear all the references and let the garbage collector do its work.
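A rough sketch of that idea, with hypothetical `makeParticle`/`endAnimation` names:

```javascript
// Hold references to temporaries for the duration of the animation,
// then drop them all at once so collection happens during downtime.
var retained = [];

function makeParticle(x, y) {
    var p = { x: x, y: y };
    retained.push(p); // keep a reference until the animation ends
    return p;
}

function endAnimation() {
    retained.length = 0; // release everything; the GC can now reclaim it
}
```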

Sorry if this is all a bit trivial compared to what you've already tried and thought of.

I'd create one or a few objects in the global scope (where I'm sure the garbage collector is not allowed to touch them), then I'd try to refactor my solution to use those objects to get the job done, instead of using local variables.

Of course this can't be done everywhere in the code, but generally that's my way of avoiding the garbage collector.

P.S. It might make that specific part of the code a little less maintainable.
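As a sketch of that approach, assuming a hypothetical vector routine: the result is written into a module-level scratch object instead of allocating a fresh one per call.

```javascript
// One long-lived scratch object, reused on every call instead of
// allocating a new { x, y } result each time.
var scratchVec = { x: 0, y: 0 };

function addInto(a, b, out) {
    out = out || scratchVec; // the caller may supply its own target
    out.x = a.x + b.x;
    out.y = a.y + b.y;
    return out;
}
```

The trade-off, as noted above, is that callers must copy the result out of `scratchVec` before the next call overwrites it.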

A lot of the things you need to do to minimize GC churn go against what is considered idiomatic JS in most other scenarios, so please keep in mind the context when judging the advice I give.

Allocation happens in modern interpreters in several places:

  1. When you create an object via new or via literal syntax [...], or {}.
  2. When you concatenate strings.
  3. When you enter a scope that contains function declarations.
  4. When you perform an action that triggers an exception.
  5. When you evaluate a function expression: (function (...) { ... }).
  6. When you perform an operation that coerces to Object, like Object(myNumber) or Number.prototype.toString.call(42).
  7. When you call a builtin that does any of these under the hood, like Array.prototype.slice.
  8. When you use arguments to reflect over the parameter list.
  9. When you split a string or match with a regular expression.

Avoid doing those, and pool and reuse objects where possible.
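For example, rewriting a hot computation to sidestep the literal, closure, and builtin allocations from the list above (the function names here are made up):

```javascript
// Allocating version: each call creates two closures plus the
// intermediate array produced by map.
function sumSlowly(xs) {
    return xs.map(function (x) { return x * 2; })
             .reduce(function (a, b) { return a + b; }, 0);
}

// Allocation-free version: a plain loop with no temporaries.
function sumQuickly(xs) {
    var total = 0;
    for (var i = 0; i < xs.length; i++) {
        total += xs[i] * 2;
    }
    return total;
}
```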

Specifically, look out for opportunities to:

  1. Pull inner functions that have no or few dependencies on closed-over state out into a higher, longer-lived scope. (Some code minifiers like Closure compiler can inline inner functions and might improve your GC performance.)
  2. Avoid using strings to represent structured data or for dynamic addressing. Especially avoid repeatedly parsing using split or regular expression matches since each requires multiple object allocations. This frequently happens with keys into lookup tables and dynamic DOM node IDs. For example, lookupTable['foo-' + x] and document.getElementById('foo-' + x) both involve an allocation since there is a string concatenation. Often you can attach keys to long-lived objects instead of re-concatenating. Depending on the browsers you need to support, you might be able to use Map to use objects as keys directly.
  3. Avoid catching exceptions on normal code-paths. Instead of try { op(x) } catch (e) { ... }, do if (!opCouldFailOn(x)) { op(x); } else { ... }.
  4. When you can't avoid creating strings, e.g. to pass a message to a server, use a builtin like JSON.stringify which uses an internal native buffer to accumulate content instead of allocating multiple objects.
  5. Avoid using callbacks for high-frequency events, and where you can, pass as a callback a long-lived function (see 1) that recreates state from the message content.
  6. Avoid using arguments since functions that use that have to create an array-like object when called.
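Point 1 above can be illustrated with a before/after sketch (the `update` method on each item is an assumed shape):

```javascript
// Before: a fresh closure is allocated on every tick.
function tickBad(items) {
    items.forEach(function (item) { item.update(); });
}

// After: the callback is hoisted to module scope and reused,
// so no per-tick allocation occurs.
function updateItem(item) {
    item.update();
}

function tickGood(items) {
    items.forEach(updateItem);
}
```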

I suggested using JSON.stringify to create outgoing network messages. Parsing input messages using JSON.parse obviously involves allocation, and lots of it for large messages. If you can represent your incoming messages as arrays of primitives, then you can save a lot of allocations. The only other builtin around which you can build a parser that does not allocate is String.prototype.charCodeAt. A parser for a complex format that only uses that is going to be hellish to read though.
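As a toy illustration of a charCodeAt-only reader (a hypothetical `readUint`, nowhere near a full message parser):

```javascript
// Reads the unsigned decimal integer starting at position `start` in `s`,
// using only charCodeAt so no substrings or match arrays are allocated.
function readUint(s, start) {
    var value = 0;
    for (var i = start; i < s.length; i++) {
        var c = s.charCodeAt(i);
        if (c < 48 || c > 57) break; // outside '0'..'9'
        value = value * 10 + (c - 48);
    }
    return value;
}
```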

The Chrome developer tools have a very nice feature for tracing memory allocation. It's called the Memory Timeline. This article describes some details. I suppose this is what you're talking about re the "sawtooth"? This is normal behavior for most GC'ed runtimes. Allocation proceeds until a usage threshold is reached triggering a collection. Normally there are different kinds of collections at different thresholds.

Memory Timeline in Chrome

Garbage collections are included in the event list associated with the trace along with their duration. On my rather old notebook, ephemeral collections are occurring at about 4Mb and take 30ms. This is 2 of your 60Hz loop iterations. If this is an animation, 30ms collections are probably causing stutter. You should start here to see what's going on in your environment: where the collection threshold is and how long your collections are taking. This gives you a reference point to assess optimizations. But you probably won't do better than to decrease the frequency of the stutter by slowing the allocation rate, lengthening the interval between collections.

The next step is to use the Profiles | Record Heap Allocations feature to generate a catalog of allocations by record type. This will quickly show which object types are consuming the most memory during the trace period, which is equivalent to allocation rate. Focus on these in descending order of rate.

The techniques are not rocket science. Avoid boxed objects when you can do with an unboxed one. Use global variables to hold and reuse single boxed objects rather than allocating fresh ones in each iteration. Pool common object types in free lists rather than abandoning them. Cache string concatenation results that are likely reusable in future iterations. Avoid allocation just to return function results by setting variables in an enclosing scope instead. You will have to consider each object type in its own context to find the best strategy. If you need help with specifics, post an edit describing details of the challenge you're looking at.
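The free-list idea mentioned above might look like this minimal sketch (the `{ x, y }` object shape is just an example):

```javascript
// A minimal free-list pool: acquire() reuses a previously released
// object when one is available, allocating only when the list is empty.
var freeList = [];

function acquire() {
    return freeList.length > 0 ? freeList.pop() : { x: 0, y: 0 };
}

function release(obj) {
    obj.x = 0; // reset so stale state cannot leak into the next user
    obj.y = 0;
    freeList.push(obj);
}
```

Releasing objects back promptly is what keeps the steady-state allocation rate near zero; forgetting to call `release` simply degrades to normal allocation behavior.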

I advise against perverting your normal coding style throughout an application in a shotgun attempt to produce less garbage. This is for the same reason you should not optimize for speed prematurely. Most of your effort plus much of the added complexity and obscurity of code will be meaningless.