Performance of ThreadLocal variables

How much slower is reading a ThreadLocal variable than reading a regular field?

More concretely: is simple object creation faster or slower than accessing a ThreadLocal variable?

I assume it is fast enough that keeping a ThreadLocal<MessageDigest> instance is significantly faster than creating a MessageDigest instance each time. But does that also hold for byte[10] or byte[1000]?

Edit: the real question is what exactly happens when ThreadLocal's get is called? If it is just a field, like any other field, then the answer would be "it is always fastest", right?
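For reference, the caching pattern the question describes can be sketched like this (a minimal sketch; the helper class name and the choice of SHA-256 are illustrative, and ThreadLocal.withInitial requires Java 8+):

```java
import java.security.MessageDigest;
import java.security.NoSuchAlgorithmException;

public class DigestHolder {
    // One MessageDigest per thread, created lazily on the first get()
    private static final ThreadLocal<MessageDigest> DIGEST =
            ThreadLocal.withInitial(() -> {
                try {
                    return MessageDigest.getInstance("SHA-256");
                } catch (NoSuchAlgorithmException e) {
                    throw new IllegalStateException(e);
                }
            });

    public static byte[] hash(byte[] input) {
        MessageDigest md = DIGEST.get(); // cheap lookup after the first call
        md.reset();                      // clear any state left by a previous use
        return md.digest(input);
    }
}
```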

Build it and measure it.

Also, you only need one ThreadLocal if you encapsulate your message-digesting behaviour into an object. If you need a local MessageDigest and a local byte[1000] for some purpose, create an object with a MessageDigest field and a byte[] field and put that object into the ThreadLocal, rather than storing both individually.
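That suggestion might be sketched as follows (class name, algorithm, and buffer size are illustrative):

```java
import java.security.MessageDigest;
import java.security.NoSuchAlgorithmException;

public class DigestContext {
    final MessageDigest digest;
    final byte[] buffer = new byte[1000]; // per-thread scratch buffer

    DigestContext() {
        try {
            digest = MessageDigest.getInstance("SHA-256");
        } catch (NoSuchAlgorithmException e) {
            throw new IllegalStateException(e);
        }
    }

    // A single ThreadLocal holds both resources together, so there is
    // only one per-thread map lookup per access instead of two.
    static final ThreadLocal<DigestContext> CONTEXT =
            ThreadLocal.withInitial(DigestContext::new);
}
```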

@Pete is correct: test before you optimise.

I would be very surprised if constructing a MessageDigest has any serious overhead when compared to actually using it.

Misusing ThreadLocal can be a source of leaks and dangling references that don't have a clear life cycle. Generally, I don't ever use ThreadLocal without a very clear plan for when a particular resource will be removed.

In 2009, some JVMs implemented ThreadLocal using an unsynchronised HashMap in the Thread.currentThread() object. This made it extremely fast (though not nearly as fast as using a regular field access, of course), as well as ensuring that the ThreadLocal object got tidied up when the Thread died. Updating this answer in 2016, it seems most (all?) newer JVMs use a ThreadLocalMap with linear probing. I am uncertain about the performance of those – but I cannot imagine it is significantly worse than the earlier implementation.

Of course, new Object() is also very fast these days, and the garbage collectors are also very good at reclaiming short-lived objects.

Unless you are certain that object creation is going to be expensive, or you need to persist some state on a thread by thread basis, you are better off going for the simpler allocate when needed solution, and only switching over to a ThreadLocal implementation when a profiler tells you that you need to.

Running unpublished benchmarks, ThreadLocal.get takes around 35 cycles per iteration on my machine. Not a great deal. In Sun's implementation, a custom linear-probing hash map in Thread maps ThreadLocals to values. Because it is only ever accessed by a single thread, it can be very fast.
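That mechanism can be approximated conceptually like this (a deliberately naive sketch: the real ThreadLocalMap is stored inside each Thread object and uses linear probing over weak references, so get() touches no shared state; a shared map keyed by thread mimics the semantics, not the performance):

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Naive model of ThreadLocal semantics: each value is keyed by the
// calling thread. The real implementation instead stores a small
// linear-probing map directly in the Thread object, avoiding any
// shared state and any synchronization.
class NaiveThreadLocal<T> {
    private final Map<Thread, T> values = new ConcurrentHashMap<>();

    public void set(T value) {
        values.put(Thread.currentThread(), value);
    }

    public T get() {
        return values.get(Thread.currentThread());
    }
}
```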

Allocation of small objects takes a similar number of cycles, although because of cache exhaustion you may get somewhat lower figures in a tight loop.

Construction of MessageDigest is likely to be relatively expensive. It has a fair amount of state and construction goes through the Provider SPI mechanism. You may be able to optimise by, for instance, cloning or providing the Provider.
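The cloning idea might look like this (a sketch; MessageDigest.clone() is supported by most, but not all, provider implementations, hence the fallback):

```java
import java.security.MessageDigest;
import java.security.NoSuchAlgorithmException;

public class DigestPrototype {
    private static final MessageDigest PROTOTYPE;

    static {
        try {
            PROTOTYPE = MessageDigest.getInstance("SHA-256");
        } catch (NoSuchAlgorithmException e) {
            throw new IllegalStateException(e);
        }
    }

    // clone() skips the Provider SPI lookup done by getInstance();
    // fall back to getInstance() for implementations without clone support.
    public static MessageDigest newDigest() {
        try {
            return (MessageDigest) PROTOTYPE.clone();
        } catch (CloneNotSupportedException e) {
            try {
                return MessageDigest.getInstance("SHA-256");
            } catch (NoSuchAlgorithmException ex) {
                throw new IllegalStateException(ex);
            }
        }
    }
}
```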

Just because it may be faster to cache in a ThreadLocal rather than create does not necessarily mean that the system performance will increase. You will have additional overheads related to GC which slows everything down.

Unless your application uses MessageDigest very heavily, you might want to consider using a conventional thread-safe cache instead.
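One hedged sketch of such a cache, using a lock-free queue as a simple pool (names and the choice of algorithm are illustrative):

```java
import java.security.MessageDigest;
import java.security.NoSuchAlgorithmException;
import java.util.concurrent.ConcurrentLinkedQueue;

public class DigestPool {
    private final ConcurrentLinkedQueue<MessageDigest> pool =
            new ConcurrentLinkedQueue<>();

    public byte[] hash(byte[] input) {
        MessageDigest md = pool.poll(); // reuse an instance if one is idle
        if (md == null) {
            try {
                md = MessageDigest.getInstance("SHA-256");
            } catch (NoSuchAlgorithmException e) {
                throw new IllegalStateException(e);
            }
        }
        try {
            return md.digest(input);    // digest() resets the instance afterwards
        } finally {
            pool.offer(md);             // make it available to any thread
        }
    }
}
```

Unlike a ThreadLocal, instances returned to the pool can be reused by any thread, and the pool never grows beyond the peak number of concurrent callers.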

Good question; I've been asking myself that recently. To give you definite numbers, the benchmarks below (in Scala, compiled to virtually the same bytecode as the equivalent Java code):

var cnt: String = ""
val tlocal = new java.lang.ThreadLocal[String] {
  override def initialValue = ""
}

def loop_heap_write = {
  var i = 0
  val until = totalwork / threadnum
  while (i < until) {
    if (cnt ne "") cnt = "!"
    i += 1
  }
  cnt
}

def threadlocal = {
  var i = 0
  val until = totalwork / threadnum
  while (i < until) {
    if (tlocal.get eq null) i = until + i + 1
    i += 1
  }
  if (i > until) println("thread local value was null " + i)
}

available here, were performed on a 4x dual-core AMD at 2.8 GHz and on a 2x quad-core i7 with hyperthreading (2.67 GHz).

These are the numbers:

i7

Specs: Intel i7 2x quad-core @ 2.67 GHz Test: scala.threads.ParallelTests

Test name: loop_heap_read

Thread num.: 1 Total tests: 200

Run times: (showing last 5) 9.0069 9.0036 9.0017 9.0084 9.0074 (avg = 9.1034 min = 8.9986 max = 21.0306 )

Thread num.: 2 Total tests: 200

Run times: (showing last 5) 4.5563 4.7128 4.5663 4.5617 4.5724 (avg = 4.6337 min = 4.5509 max = 13.9476 )

Thread num.: 4 Total tests: 200

Run times: (showing last 5) 2.3946 2.3979 2.3934 2.3937 2.3964 (avg = 2.5113 min = 2.3884 max = 13.5496 )

Thread num.: 8 Total tests: 200

Run times: (showing last 5) 2.4479 2.4362 2.4323 2.4472 2.4383 (avg = 2.5562 min = 2.4166 max = 10.3726 )

Test name: threadlocal

Thread num.: 1 Total tests: 200

Run times: (showing last 5) 91.1741 90.8978 90.6181 90.6200 90.6113 (avg = 91.0291 min = 90.6000 max = 129.7501 )

Thread num.: 2 Total tests: 200

Run times: (showing last 5) 45.3838 45.3858 45.6676 45.3772 45.3839 (avg = 46.0555 min = 45.3726 max = 90.7108 )

Thread num.: 4 Total tests: 200

Run times: (showing last 5) 22.8118 22.8135 59.1753 22.8229 22.8172 (avg = 23.9752 min = 22.7951 max = 59.1753 )

Thread num.: 8 Total tests: 200

Run times: (showing last 5) 22.2965 22.2415 22.3438 22.3109 22.4460 (avg = 23.2676 min = 22.2346 max = 50.3583 )

AMD

Specs: AMD 8220 4x dual-core @ 2.8 GHz Test: scala.threads.ParallelTests

Test name: loop_heap_read

Total work: 20000000 Thread num.: 1 Total tests: 200

Run times: (showing last 5) 12.625 12.631 12.634 12.632 12.628 (avg = 12.7333 min = 12.619 max = 26.698 )

Thread num.: 2 Total tests: 200

Run times: (showing last 5) 6.412 6.424 6.408 6.397 6.43 (avg = 6.5367 min = 6.393 max = 19.716 )

Thread num.: 4 Total tests: 200

Run times: (showing last 5) 3.385 4.298 9.7 6.535 3.385 (avg = 5.6079 min = 3.354 max = 21.603 )

Thread num.: 8 Total tests: 200

Run times: (showing last 5) 5.389 5.795 10.818 3.823 3.824 (avg = 5.5810 min = 2.405 max = 19.755 )

Test name: threadlocal

Thread num.: 1 Total tests: 200

Run times: (showing last 5) 200.217 207.335 200.241 207.342 200.23 (avg = 202.2424 min = 200.184 max = 245.369 )

Thread num.: 2 Total tests: 200

Run times: (showing last 5) 100.208 100.199 100.211 103.781 100.215 (avg = 102.2238 min = 100.192 max = 129.505 )

Thread num.: 4 Total tests: 200

Run times: (showing last 5) 62.101 67.629 62.087 52.021 55.766 (avg = 65.6361 min = 50.282 max = 167.433 )

Thread num.: 8 Total tests: 200

Run times: (showing last 5) 40.672 74.301 34.434 41.549 28.119 (avg = 54.7701 min = 28.119 max = 94.424 )

Summary

A ThreadLocal read is around 10-20x slower than a heap read. It also seems to scale well with the number of processors on this JVM implementation and these architectures.

Here is another test. The results show that ThreadLocal is a bit slower than a regular field, but in the same order of magnitude: approximately 12% slower.

import java.util.HashMap;
import java.util.Map;

public class Test {
    private static final int N = 100000000;
    private static int fieldExecTime = 0;
    private static int threadLocalExecTime = 0;

    public static void main(String[] args) throws InterruptedException {
        int execs = 10;
        for (int i = 0; i < execs; i++) {
            new FieldExample().run(i);
            new ThreadLocalExample().run(i);
        }
        System.out.println("Field avg:" + (fieldExecTime / execs));
        System.out.println("ThreadLocal avg:" + (threadLocalExecTime / execs));
    }

    private static class FieldExample {
        private Map<String, String> map = new HashMap<String, String>();

        public void run(int z) {
            System.out.println(z + "-Running field sample");
            long start = System.currentTimeMillis();
            for (int i = 0; i < N; i++) {
                String s = Integer.toString(i);
                map.put(s, "a");
                map.remove(s);
            }
            long end = System.currentTimeMillis();
            long t = (end - start);
            fieldExecTime += t;
            System.out.println(z + "-End field sample:" + t);
        }
    }

    private static class ThreadLocalExample {
        private ThreadLocal<Map<String, String>> myThreadLocal = new ThreadLocal<Map<String, String>>() {
            @Override protected Map<String, String> initialValue() {
                return new HashMap<String, String>();
            }
        };

        public void run(int z) {
            System.out.println(z + "-Running thread local sample");
            long start = System.currentTimeMillis();
            for (int i = 0; i < N; i++) {
                String s = Integer.toString(i);
                myThreadLocal.get().put(s, "a");
                myThreadLocal.get().remove(s);
            }
            long end = System.currentTimeMillis();
            long t = (end - start);
            threadLocalExecTime += t;
            System.out.println(z + "-End thread local sample:" + t);
        }
    }
}

Output:

0-Running field sample

0-End field sample:6044

0-Running thread local sample

0-End thread local sample:6015

1-Running field sample

1-End field sample:5095

1-Running thread local sample

1-End thread local sample:5720

2-Running field sample

2-End field sample:4842

2-Running thread local sample

2-End thread local sample:5835

3-Running field sample

3-End field sample:4674

3-Running thread local sample

3-End thread local sample:5287

4-Running field sample

4-End field sample:4849

4-Running thread local sample

4-End thread local sample:5309

5-Running field sample

5-End field sample:4781

5-Running thread local sample

5-End thread local sample:5330

6-Running field sample

6-End field sample:5294

6-Running thread local sample

6-End thread local sample:5511

7-Running field sample

7-End field sample:5119

7-Running thread local sample

7-End thread local sample:5793

8-Running field sample

8-End field sample:4977

8-Running thread local sample

8-End thread local sample:6374

9-Running field sample

9-End field sample:4841

9-Running thread local sample

9-End thread local sample:5471

Field avg:5051

ThreadLocal avg:5664

Env:

openjdk version "1.8.0_131"

Intel® Core™ i7-7500U CPU @ 2.70GHz × 4

Ubuntu 16.04 LTS