Do any JVM JIT compilers generate code that uses vectorized floating-point instructions?

Let's say the bottleneck of my Java program really is some tight loops computing a bunch of vector dot products. Yes, I've profiled; yes, it's the bottleneck; yes, it's significant; yes, that's just how the algorithm is; yes, I've run ProGuard to optimize the bytecode, etc.

The work is, essentially, dot products. As in, I have two float[50] and I need to compute the sum of pairwise products. I know processor instruction sets exist to perform this kind of operation quickly and in bulk, like SSE or MMX.
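The hot loop is roughly this (a sketch, with the arrays passed in for illustration; the real code gets them from elsewhere):

static float dot(float[] a, float[] b) {
    float sum = 0;
    for (int i = 0; i < 50; i++) {
        sum += a[i] * b[i]; // pairwise products of two float[50], summed
    }
    return sum;
}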

Yes, I could access these by writing some native code with JNI. But the JNI call turns out to be pretty expensive.

I know you can't guarantee what a JIT will or won't compile. Has anyone ever heard of a JIT generating code that uses these instructions? And if so, is there anything about the Java code that helps make it compilable this way?

Probably a "no"; still worth asking.


So, basically, you want your code to run faster. JNI is the answer. I know you said it didn't work for you, but let me show you that you are wrong.

Here's Dot.java:

import java.nio.FloatBuffer;
import org.bytedeco.javacpp.*;
import org.bytedeco.javacpp.annotation.*;

@Platform(include = "Dot.h", compiler = "fastfpu")
public class Dot {
    static { Loader.load(); }

    static float[] a = new float[50], b = new float[50];

    // Pure Java version: compiled by HotSpot.
    static float dot() {
        float sum = 0;
        for (int i = 0; i < 50; i++) {
            sum += a[i] * b[i];
        }
        return sum;
    }

    // Native version: GCC compiles (and vectorizes) dotc() from Dot.h.
    static native @MemberGetter FloatPointer ac();
    static native @MemberGetter FloatPointer bc();
    static native @NoException float dotc();

    public static void main(String[] args) {
        FloatBuffer ab = ac().capacity(50).asBuffer();
        FloatBuffer bb = bc().capacity(50).asBuffer();

        // Warm up both versions so the JIT has compiled them before timing.
        for (int i = 0; i < 10000000; i++) {
            a[i % 50] = b[i % 50] = dot();
            float sum = dotc();
            ab.put(i % 50, sum);
            bb.put(i % 50, sum);
        }
        long t1 = System.nanoTime();
        for (int i = 0; i < 10000000; i++) {
            a[i % 50] = b[i % 50] = dot();
        }
        long t2 = System.nanoTime();
        for (int i = 0; i < 10000000; i++) {
            float sum = dotc();
            ab.put(i % 50, sum);
            bb.put(i % 50, sum);
        }
        long t3 = System.nanoTime();
        System.out.println("dot(): " + (t2 - t1)/10000000 + " ns");
        System.out.println("dotc(): " + (t3 - t2)/10000000 + " ns");
    }
}

and Dot.h:

float ac[50], bc[50];

inline float dotc() {
    float sum = 0;
    for (int i = 0; i < 50; i++) {
        sum += ac[i] * bc[i];
    }
    return sum;
}

We can compile and run that with JavaCPP using this command:

$ java -jar javacpp.jar Dot.java -exec

With an Intel(R) Core(TM) i7-7700HQ CPU @ 2.80GHz, Fedora 30, GCC 9.1.1, and OpenJDK 8 or 11, I get this kind of output:

dot(): 39 ns
dotc(): 16 ns

Or roughly 2.4 times faster. We need to use direct NIO buffers instead of arrays, but HotSpot can access direct NIO buffers as fast as arrays. On the other hand, manually unrolling the loop does not provide a measurable performance boost in this case.
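For reference, here is a minimal sketch of allocating and using such a direct buffer (plain java.nio; the 50-float capacity matches the arrays above):

import java.nio.ByteBuffer;
import java.nio.ByteOrder;
import java.nio.FloatBuffer;

public class DirectBufferSketch {
    public static void main(String[] args) {
        // A direct buffer lives outside the GC-managed heap, so native code
        // can read and write it without copying, while HotSpot intrinsifies
        // the indexed get/put calls so Java-side access stays array-fast.
        FloatBuffer fb = ByteBuffer.allocateDirect(50 * Float.BYTES)
                                   .order(ByteOrder.nativeOrder())
                                   .asFloatBuffer();
        fb.put(0, 1.0f);
        System.out.println(fb.get(0)); // prints 1.0
    }
}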

I don't believe most VMs, if any, are ever smart enough for this sort of optimisation. To be fair, most optimisations are much simpler, such as shifting instead of multiplying when by a power of two. The Mono project introduced its own vector and other methods with native backings to help performance.

You could write an OpenCL kernel to do the computing and run it from Java: http://www.jocl.org/.

Code can be run on the CPU and/or GPU, and the OpenCL language also supports vector types, so you should be able to explicitly take advantage of e.g. SSE3/4 instructions.
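For example, here is a sketch of what such a kernel could look like, written as a Java string the way JOCL expects kernel sources (the kernel and variable names are mine; float4 and the built-in dot() map directly onto SIMD lanes):

// Hypothetical kernel source, to be compiled via clCreateProgramWithSource.
// Each work-item computes the dot product of one float4 chunk of the inputs.
static final String DOT_KERNEL =
    "__kernel void dot_chunks(__global const float4* a,\n" +
    "                         __global const float4* b,\n" +
    "                         __global float* partial) {\n" +
    "    int gid = get_global_id(0);\n" +
    "    partial[gid] = dot(a[gid], b[gid]);\n" +
    "}\n";

The partial results still need a final reduction on the host (or in a second kernel), and moving data to and from the device has its own overhead, so this mainly pays off for inputs much larger than a float[50].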

To address some of the scepticism expressed by others here, I suggest anyone who wants to prove it to themselves or others use the following method:

  • Create a JMH project
  • Write a small snippet of vectorizable math
  • Run the benchmark flipping between -XX:-UseSuperWord and -XX:+UseSuperWord (the default)
  • If no difference in performance is observed, the code probably didn't get vectorized
  • To make sure, run the benchmark such that it prints out the assembly. On Linux you can enjoy the perfasm profiler ('-prof perfasm'); have a look and see if the instructions you expect get generated.

Example:

@Benchmark
@CompilerControl(CompilerControl.Mode.DONT_INLINE) // makes looking at the assembly easier
public void inc() {
    for (int i = 0; i < a.length; i++)
        a[i]++; // a is an int[]; I benchmarked with size 32K
}
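(For anyone reproducing this: a minimal sketch of the JMH harness the snippet assumes. The class name matches the one visible in the assembly below; the field and setup details are my guesses, apart from the 32K size from the comment.)

import org.openjdk.jmh.annotations.*;

@State(Scope.Thread)
public class VectorMath {
    int[] a;

    @Setup
    public void setup() {
        a = new int[32 * 1024]; // 32K elements, as noted in the comment above
    }
}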

The result with and without the flag (on a recent Haswell laptop, Oracle JDK 8u60):

-XX:+UseSuperWord : 475.073 ± 44.579 ns/op (nanoseconds per op)
-XX:-UseSuperWord : 3376.364 ± 233.211 ns/op

The assembly for the hot loop is a bit much to format and stick in here, but here's a snippet (hsdis.so is failing to format some of the AVX2 vector instructions, so I ran with -XX:UseAVX=1). With -XX:+UseSuperWord and '-prof perfasm:intelSyntax=true':

  9.15%   10.90%  │││ │↗    0x00007fc09d1ece60: vmovdqu xmm1,XMMWORD PTR [r10+r9*4+0x18]
 10.63%    9.78%  │││ ││    0x00007fc09d1ece67: vpaddd xmm1,xmm1,xmm0
 12.47%   12.67%  │││ ││    0x00007fc09d1ece6b: movsxd r11,r9d
  8.54%    7.82%  │││ ││    0x00007fc09d1ece6e: vmovdqu xmm2,XMMWORD PTR [r10+r11*4+0x28]
                  │││ ││                        ;*iaload
                  │││ ││                        ; - psy.lob.saw.VectorMath::inc@17 (line 45)
 10.68%   10.36%  │││ ││    0x00007fc09d1ece75: vmovdqu XMMWORD PTR [r10+r9*4+0x18],xmm1
 10.65%   10.44%  │││ ││    0x00007fc09d1ece7c: vpaddd xmm1,xmm2,xmm0
 10.11%   11.94%  │││ ││    0x00007fc09d1ece80: vmovdqu XMMWORD PTR [r10+r11*4+0x28],xmm1
                  │││ ││                        ;*iastore
                  │││ ││                        ; - psy.lob.saw.VectorMath::inc@20 (line 45)
 11.19%   12.65%  │││ ││    0x00007fc09d1ece87: add    r9d,0x8            ;*iinc
                  │││ ││                        ; - psy.lob.saw.VectorMath::inc@21 (line 44)
  8.38%    9.50%  │││ ││    0x00007fc09d1ece8b: cmp    r9d,ecx
                  │││ │╰    0x00007fc09d1ece8e: jl     0x00007fc09d1ece60  ;*if_icmpge

Have fun storming the castle!

In HotSpot versions beginning with Java 7u40, the server compiler provides support for auto-vectorisation, per JDK-6340864.

However, this seems to be true only for "simple loops", at least for the moment. For example, accumulating an array cannot be vectorised yet (JDK-7192383).
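To make the distinction concrete, here is a sketch (the method names are mine): the first loop is the kind of "simple loop" the superword pass handles, the second is the accumulation pattern JDK-7192383 tracked:

// Element-wise: iterations are independent, so the superword pass can
// pack several of them into one SIMD instruction.
void scale(float[] a, float[] b) {
    for (int i = 0; i < a.length; i++) {
        b[i] = a[i] * 2.0f;
    }
}

// Reduction: every iteration depends on the previous sum, which is the
// accumulation case that could not be vectorised yet.
float sum(float[] a) {
    float s = 0;
    for (int i = 0; i < a.length; i++) {
        s += a[i];
    }
    return s;
}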

I'm guessing you wrote this question before you found out about netlib-java ;-) It provides exactly the native API you require, with machine-optimised implementations, and does not have any cost at the native boundary, thanks to memory pinning.
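A sketch of what that looks like for the dot product in the question (assuming the com.github.fommil netlib-java artifact; sdot is the single-precision dot product in the standard BLAS interface):

import com.github.fommil.netlib.BLAS;

public class NetlibDot {
    public static void main(String[] args) {
        float[] a = new float[50], b = new float[50];
        java.util.Arrays.fill(a, 1.0f);
        java.util.Arrays.fill(b, 2.0f);
        // sdot(n, x, incx, y, incy): dispatched to a machine-optimised
        // native BLAS when one is available, with a pure-Java fallback.
        float sum = BLAS.getInstance().sdot(50, a, 1, b, 1);
        System.out.println(sum); // 100.0
    }
}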

Have a look at Performance comparison between Java and JNI for optimal implementation of computational micro-kernels. It shows that the Java HotSpot VM server compiler supports auto-vectorization using Super-word Level Parallelism, which is limited to simple cases of inside-the-loop parallelism. The article will also give you some guidance on whether your data size is large enough to justify going the JNI route.

Here is a nice article about experimenting with Java and SIMD instructions, written by my friend: http://prestodb.rocks/code/simd/

Its general outcome is that you can expect the JIT to use some SSE operations in 1.8 (and some more in 1.9), though you should not expect much, and you need to be careful.

Java 16 introduced the Vector API (JEP 417, JEP 414, JEP 338). It is currently "incubating" (i.e., beta), although anyone can use it. It will probably become GA in Java 19 or 20.

It's a little verbose, but is meant to be reliable and portable.

The following code can be rewritten:

void scalarComputation(float[] a, float[] b, float[] c) {
    assert a.length == b.length && b.length == c.length;
    for (int i = 0; i < a.length; i++) {
        c[i] = (a[i] * a[i] + b[i] * b[i]) * -1.0f;
    }
}

Using the Vector API:

// Requires the jdk.incubator.vector module:
// import jdk.incubator.vector.FloatVector;
// import jdk.incubator.vector.VectorSpecies;

static final VectorSpecies<Float> SPECIES = FloatVector.SPECIES_PREFERRED;

void vectorComputation(float[] a, float[] b, float[] c) {
    assert a.length == b.length && b.length == c.length;
    int i = 0;
    int upperBound = SPECIES.loopBound(a.length);
    for (; i < upperBound; i += SPECIES.length()) {
        // FloatVector va, vb, vc;
        var va = FloatVector.fromArray(SPECIES, a, i);
        var vb = FloatVector.fromArray(SPECIES, b, i);
        var vc = va.mul(va)
                   .add(vb.mul(vb))
                   .neg();
        vc.intoArray(c, i);
    }
    // Scalar tail for the elements that don't fill a whole vector.
    for (; i < a.length; i++) {
        c[i] = (a[i] * a[i] + b[i] * b[i]) * -1.0f;
    }
}

Newer builds (i.e., Java 18) are trying to get rid of that last scalar loop using predicated (masked) instructions, but support for that is still supposedly spotty.
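The masked overloads already in the incubator API can express that today; a sketch (whether the mask becomes actual predicated instructions depends on the JIT and the hardware):

void vectorComputationMasked(float[] a, float[] b, float[] c) {
    for (int i = 0; i < a.length; i += SPECIES.length()) {
        // The mask switches off the lanes that would run past a.length,
        // so no separate scalar tail loop is needed.
        var m  = SPECIES.indexInRange(i, a.length);
        var va = FloatVector.fromArray(SPECIES, a, i, m);
        var vb = FloatVector.fromArray(SPECIES, b, i, m);
        va.mul(va).add(vb.mul(vb)).neg().intoArray(c, i, m);
    }
}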