Interpreting benchmarks in C, Clojure, Python, Ruby, Scala and others

Disclaimer

I know that artificial benchmarks are evil. They show results only for a very specific, narrow situation. I don't assume that one language is better than another because of some silly bench. However, I wonder why the results are so different. Please see my questions at the bottom.

Math benchmark description

The benchmark is a simple math computation: find pairs of primes that differ by 6 (the so-called sexy primes). E.g. sexy primes below 100 are: (5 11) (7 13) (11 17) (13 19) (17 23) (23 29) (31 37) (37 43) (41 47) (47 53) (53 59) (61 67) (67 73) (73 79) (83 89) (97 103)

Results table

In the table:
  • computation time in seconds
  • Running: everything except Factor was run in VirtualBox (Debian unstable amd64 guest, Windows 7 x64 host)
  • CPU: AMD A4-3305M

Sexy primes up to:          10k      20k      30k     100k

Bash                      58.00   200.00     [*1]     [*1]
C                          0.20     0.65     1.42    15.00
Clojure 1.4                4.12     8.32    16.00   137.93
Clojure 1.4 (optimized)    0.95     1.82     2.30    16.00
Factor                      n/a      n/a    15.00   180.00
Python 2.7                 1.49     5.20    11.00   119.00
Ruby 1.8                   5.10    18.32    40.48   377.00
Ruby 1.9.3                 1.36     5.73    10.48   106.00
Scala 2.9.2                0.93     1.41     2.73    20.84
Scala 2.9.2 (optimized)    0.32     0.79     1.46    12.01

[*1] - I'm afraid to imagine how much time it would take.

Code listings

C:

#include <stdio.h>

int isprime(int x) {
    int i;
    for (i = 2; i < x; ++i)
        if (x % i == 0) return 0;
    return 1;
}


void findprimes(int m) {
    int i;
    for (i = 11; i < m; ++i)
        if (isprime(i) && isprime(i-6))
            printf("%d %d\n", i-6, i);
}


int main() {
    findprimes(10*1000);
    return 0;
}

Ruby:

def is_prime?(n)
  (2...n).all?{|m| n%m != 0 }
end


def sexy_primes(x)
  (9..x).map do |i|
    [i-6, i]
  end.select do |j|
    j.all?{|j| is_prime? j}
  end
end


a = Time.now
p sexy_primes(10*1000)
b = Time.now
puts "#{(b-a)*1000} mils"

Scala:

def isPrime(n: Int) =
  (2 until n) forall { n % _ != 0 }


def sexyPrimes(n: Int) =
  (11 to n) map { i => List(i-6, i) } filter { _ forall(isPrime(_)) }


val a = System.currentTimeMillis()
println(sexyPrimes(100*1000))
val b = System.currentTimeMillis()
println((b-a).toString + " mils")

Scala with an optimized isPrime (same idea as in the Clojure optimization):

import scala.annotation.tailrec


@tailrec // Not required, but will warn if optimization doesn't work
def isPrime(n: Int, i: Int = 2): Boolean =
  if (i == n) true
  else if (n % i != 0) isPrime(n, i + 1)
  else false

Clojure:

(defn is-prime? [n]
  (every? #(> (mod n %) 0)
          (range 2 n)))


(defn sexy-primes [m]
  (for [x (range 11 (inc m))
        :let [z (list (- x 6) x)]
        :when (every? #(is-prime? %) z)]
    z))


(let [a (System/currentTimeMillis)]
  (println (sexy-primes (* 10 1000)))
  (let [b (System/currentTimeMillis)]
    (println (- b a) "mils")))

Clojure with an optimized is-prime?:

(defn ^:static is-prime? [^long n]
  (loop [i (long 2)]
    (if (= (rem n i) 0)
      false
      (if (>= (inc i) n) true (recur (inc i))))))

Python:

import time as time_


def is_prime(n):
    return all((n%j > 0) for j in xrange(2, n))


def primes_below(x):
    return [[j-6, j] for j in xrange(9, x+1) if is_prime(j) and is_prime(j-6)]


a = int(round(time_.time() * 1000))
print(primes_below(10*1000))
b = int(round(time_.time() * 1000))
print(str((b-a)) + " mils")

Factor:

MEMO:: prime? ( n -- ? )
    n 1 - 2 [a,b] [ n swap mod 0 > ] all? ;


MEMO: sexyprimes ( n n -- r r )
    [a,b] [ prime? ] filter [ 6 + ] map [ prime? ] filter dup [ 6 - ] map ;


5 10 1000 * sexyprimes . .

Bash (zsh):

#!/usr/bin/zsh
function prime {
  for (( i = 2; i < $1; i++ )); do
    if [[ $[$1%i] == 0 ]]; then
      echo 1
      exit
    fi
  done
  echo 0
}


function sexy-primes {
  for (( i = 9; i <= $1; i++ )); do
    j=$[i-6]
    if [[ $(prime $i) == 0 && $(prime $j) == 0 ]]; then
      echo $j $i
    fi
  done
}


sexy-primes 10000

Questions

  1. Why is Scala so fast? Is it because of static typing? Or is it just using the JVM very efficiently?
  2. Why such a huge difference between Ruby and Python? I thought these two weren't fundamentally different. Maybe my code is wrong, please enlighten me! Thanks. (Yes, there was an error in my code; Python and Ruby 1.9 are pretty much on par.)
  3. The jump in performance between Ruby versions is really impressive.
  4. Can I optimize the Clojure code by adding type declarations?

I'll answer just #2, since it's the only one I've got anything remotely intelligent to say about: in your Python code you're creating an intermediate list in is_prime, whereas the all? in your Ruby version just iterates.

If you change your is_prime to:

def is_prime(n):
    return all((n%j > 0) for j in range(2, n))

they're on par.

I could optimize the Python further, but my Ruby isn't good enough to know when I've given more of an advantage (e.g., using xrange makes Python win on my machine, but I don't remember if the Ruby range you used creates an entire range in memory or not).

EDIT: Without being too silly, making the Python code look like:

import time


def is_prime(n):
    return all(n % j for j in xrange(2, n))


def primes_below(x):
    return [(j-6, j) for j in xrange(9, x + 1) if is_prime(j) and is_prime(j-6)]


a = int(round(time.time() * 1000))
print(primes_below(10*1000))
b = int(round(time.time() * 1000))
print(str((b-a)) + " mils")

which doesn't change much more, puts it at 1.5s for me, and, with being extra silly, running it with PyPy puts it at .3s for 10K, and 21s for 100K.

In Scala, try using Tuple2 instead of List; it should go faster. Just remove the word 'List', since (x, y) is a Tuple2.

Tuple2 is specialized for Int, Long and Double, meaning it won't have to box/unbox those raw datatypes (see the Tuple2 source). List isn't specialized (see the List source).
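For illustration, a minimal sketch of what that change could look like (my adaptation, not code from the answer; it assumes the question's isPrime is in scope, and the forall over the List is replaced by checking both tuple elements explicitly, since Tuple2 has no forall method):

// Sketch only: sexyPrimes rewritten with Tuple2 instead of List.
// (i - 6, i) is sugar for Tuple2(i - 6, i), so no List cells are allocated,
// and per the specialization point above the Int fields aren't boxed.
def sexyPrimesTuples(n: Int): Seq[(Int, Int)] =
  (11 to n) map { i => (i - 6, i) } filter { p => isPrime(p._1) && isPrime(p._2) }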

Rough answers:

  1. Scala's static typing is helping it quite a bit here - this means that it uses the JVM pretty efficiently without too much extra effort.
  2. I'm not exactly sure on the Ruby/Python difference, but I suspect that (2...n).all? in the function is-prime? is likely to be quite well optimised in Ruby (EDIT: sounds like this is indeed the case, see Julian's answer for more detail...)
  3. Ruby 1.9.3 is just much better optimised
  4. Clojure code can certainly be accelerated a lot! While Clojure is dynamic by default, you can use type hints, primitive maths etc. to get close to Scala / pure Java speed in many cases when you need to.

Most important optimisation in the Clojure code would be to use typed primitive maths within is-prime?, something like:

(set! *unchecked-math* true) ;; at top of file to avoid using BigIntegers


(defn ^:static is-prime? [^long n]
  (loop [i (long 2)]
    (if (zero? (mod n i))
      false
      (if (>= (inc i) n) true (recur (inc i))))))

With this improvement, I get Clojure completing 10k in 0.635 secs (i.e. the second fastest on your list, beating Scala)

P.S. note that you have printing code inside your benchmark in some cases - not a good idea as it will distort the results, especially if using a function like print for the first time causes initialisation of IO subsystems or something like that!
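As an illustration of that point, here is a hedged Scala sketch (my own, not from any of the answers) that keeps I/O out of the timed region by counting matches and printing only after the timer stops; it assumes the question's isPrime is in scope:

// Sketch: count the sexy-prime pairs inside the timed region, print afterwards,
// so console I/O and its one-time initialisation don't distort the measurement.
def countSexyPrimes(n: Int): Int =
  (11 to n).count(i => isPrime(i) && isPrime(i - 6))

val start = System.currentTimeMillis()
val found = countSexyPrimes(100 * 1000) // no println while the clock is running
val elapsed = System.currentTimeMillis() - start
println(found + " pairs, " + elapsed + " mils")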

You can make the Scala a lot faster by modifying your isPrime method to

def isPrime(n: Int, i: Int = 2): Boolean =
  if (i == n) true
  else if (n % i != 0) isPrime(n, i + 1)
  else false

Not quite as concise but the program runs in 40% of the time!

We cut out the superfluous Range and anonymous Function objects, the Scala compiler recognizes the tail-recursion and turns it into a while-loop, which the JVM can turn into more or less optimal machine code, so it shouldn't be too far off the C version.
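Roughly, the loop that the tail-call rewrite boils down to looks like this hand-written equivalent (illustration only, not actual compiler output):

// Equivalent imperative form of the tail-recursive isPrime above:
// the self-call becomes a jump, i.e. a plain while loop over i.
def isPrimeLoop(n: Int): Boolean = {
  var i = 2
  while (i != n) {
    if (n % i == 0) return false
    i += 1
  }
  true
}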

See also: How to optimize for-comprehensions and loops in Scala?

Never mind the benchmarks; the problem got me interested and I made some fast tweaks. This uses the lru_cache decorator, which memoizes a function; so when we call is_prime(i-6) we basically get that prime check for free. This change cuts the work roughly in half. Also, we can make the range() calls step through just the odd numbers, cutting the work roughly in half again.

http://en.wikipedia.org/wiki/Memoization

http://docs.python.org/dev/library/functools.html

This requires Python 3.2 or newer to get lru_cache, but could work with an older Python if you install a Python recipe that provides lru_cache. If you are using Python 2.x you should really use xrange() instead of range().

http://code.activestate.com/recipes/577479-simple-caching-decorator/

from functools import lru_cache
import time as time_


@lru_cache()
def is_prime(n):
    return n%2 and all(n%i for i in range(3, n, 2))


def primes_below(x):
    return [(i-6, i) for i in range(9, x+1, 2) if is_prime(i) and is_prime(i-6)]


correct100 = [(5, 11), (7, 13), (11, 17), (13, 19), (17, 23), (23, 29),
              (31, 37), (37, 43), (41, 47), (47, 53), (53, 59), (61, 67), (67, 73),
              (73, 79), (83, 89)]
assert(primes_below(100) == correct100)


a = time_.time()
print(primes_below(30*1000))
b = time_.time()


elapsed = b - a
print("{} msec".format(round(elapsed * 1000)))

The above took only a very short time to edit. I decided to take it one step further, and make the primes test only try prime divisors, and only up to the square root of the number being tested. The way I did it only works if you check numbers in order, so it can accumulate all the primes as it goes; but this problem was already checking the numbers in order so that was fine.

On my laptop (nothing special; processor is a 1.5 GHz AMD Turion II "K625") this version produced an answer for 100K in under 8 seconds.

from functools import lru_cache
import math
import time as time_


known_primes = set([2, 3, 5, 7])


@lru_cache(maxsize=128)
def is_prime(n):
    last = math.ceil(math.sqrt(n))
    flag = n%2 and all(n%x for x in known_primes if x <= last)
    if flag:
        known_primes.add(n)
    return flag


def primes_below(x):
    return [(i-6, i) for i in range(9, x+1, 2) if is_prime(i) and is_prime(i-6)]


correct100 = [(5, 11), (7, 13), (11, 17), (13, 19), (17, 23), (23, 29),
              (31, 37), (37, 43), (41, 47), (47, 53), (53, 59), (61, 67), (67, 73),
              (73, 79), (83, 89)]
assert(primes_below(100) == correct100)


a = time_.time()
print(primes_below(100*1000))
b = time_.time()


elapsed = b - a
print("{} msec".format(round(elapsed * 1000)))

The above code is pretty easy to write in Python, Ruby, etc. but would be more of a pain in C.

You can't compare the numbers on this version against the numbers from the other versions without rewriting the others to use similar tricks. I'm not trying to prove anything here; I just thought the problem was fun and I wanted to see what sort of easy performance improvements I could glean.

Here's the code for the Go (golang.org) version:

package main


import (
    "fmt"
)


func main() {
    findprimes(10 * 1000)
}


func isprime(x int) bool {
    for i := 2; i < x; i++ {
        if x%i == 0 {
            return false
        }
    }
    return true
}


func findprimes(m int) {
    for i := 11; i < m; i++ {
        if isprime(i) && isprime(i-6) {
            fmt.Printf("%d %d\n", i-6, i)
        }
    }
}

It ran just as fast as the C version.

Using an Asus U81A: Intel Core 2 Duo T6500 2.1 GHz, 2 MB L2 cache, 800 MHz FSB, 4 GB RAM.

The 100k version: C: 2.723s Go: 2.743s

With 1000000 (1M instead of 100K): C: 3m35.458s Go: 3m36.259s

But I think that it would be fair to use Go's built in multithreading capabilities and compare that version with the regular C version (without multithreading), just because it's almost too easy to do multithreading with Go.

Update: I did a parallel version using Goroutines in Go:

package main


import (
    "fmt"
    "runtime"
)


func main() {
    runtime.GOMAXPROCS(4)
    printer := make(chan string)
    printer2 := make(chan string)
    printer3 := make(chan string)
    printer4 := make(chan string)
    finished := make(chan int)

    var buffer, buffer2, buffer3 string

    running := 4
    go findprimes(11, 30000, printer, finished)
    go findprimes(30001, 60000, printer2, finished)
    go findprimes(60001, 85000, printer3, finished)
    go findprimes(85001, 100000, printer4, finished)

    for {
        select {
        case i := <-printer:
            // batch of sexy primes received from printer channel 1, print them
            fmt.Printf(i)
        case i := <-printer2:
            // sexy prime list received from channel, store it
            buffer = i
        case i := <-printer3:
            // sexy prime list received from channel, store it
            buffer2 = i
        case i := <-printer4:
            // sexy prime list received from channel, store it
            buffer3 = i
        case <-finished:
            running--
            if running == 0 {
                // all goroutines ended
                // dump buffers to stdout
                fmt.Printf(buffer)
                fmt.Printf(buffer2)
                fmt.Printf(buffer3)
                return
            }
        }
    }
}


func isprime(x int) bool {
    for i := 2; i < x; i++ {
        if x%i == 0 {
            return false
        }
    }
    return true
}


func findprimes(from int, to int, printer chan string, finished chan int) {
    str := ""
    for i := from; i <= to; i++ {
        if isprime(i) && isprime(i-6) {
            str = str + fmt.Sprintf("%d %d\n", i-6, i)
        }
    }
    printer <- str
    //fmt.Printf("Finished %d to %d\n", from, to)
    finished <- 1
}

The parallelized version initially used 2.743 seconds on average, the exact same time as the regular version; after the GOMAXPROCS fix described below, it completed in 1.706 seconds. It used less than 1.5 MB of RAM.

One odd thing: my dual-core Kubuntu 64-bit box never peaked on both cores. It looked like Go was using just one core. Fixed with a call to runtime.GOMAXPROCS(4).

Update: I ran the parallelized version up to 1M numbers. One of my CPU cores was at 100% all the time, while the other wasn't used at all (odd). It took a whole minute more than the C and the regular Go versions. :(

With 1000000 (1M instead of 100K):

C: 3m35.458s, Go: 3m36.259s, Go using goroutines: 3m27.137s initially, 2m16.125s after the GOMAXPROCS fix

The 100k version:

C: 2.723s, Go: 2.743s, Go using goroutines: 1.706s

Here's a fast Clojure version, using the same basic algorithms:

(set! *unchecked-math* true)


(defn is-prime? [^long n]
  (loop [i 2]
    (if (zero? (unchecked-remainder-int n i))
      false
      (if (>= (inc i) n)
        true
        (recur (inc i))))))


(defn sexy-primes [m]
  (for [x (range 11 (inc m))
        :when (and (is-prime? x) (is-prime? (- x 6)))]
    [(- x 6) x]))

It runs about 20x faster than your original on my machine. And here's a version that leverages the new reducers library in 1.5 (requires Java 7 or JSR 166):

(require '[clojure.core.reducers :as r])


(defn sexy-primes [m]
  (->> (vec (range 11 (inc m)))
       (r/filter #(and (is-prime? %) (is-prime? (- % 6))))
       (r/map #(list (- % 6) %))
       (r/fold (fn ([] []) ([a b] (into a b))) conj)))

This runs about 40x faster than your original. On my machine, that's 100k in 1.5 seconds.

Here is my Scala version, both parallel and non-parallel, just for fun. (On my dual-core machine, the parallel version takes 335 ms while the non-parallel version takes 655 ms.)

object SexyPrimes {
  def isPrime(n: Int): Boolean =
    (2 to math.sqrt(n).toInt).forall{ n%_ != 0 }

  def isSexyPrime(n: Int): Boolean = isPrime(n) && isPrime(n-6)

  def findPrimesPar(n: Int) {
    for(k <- (11 to n).par)
      if(isSexyPrime(k)) printf("%d %d\n",k-6,k)
  }

  def findPrimes(n: Int) {
    for(k <- 11 to n)
      if(isSexyPrime(k)) printf("%d %d\n",k-6,k)
  }

  def timeOf(call : =>Unit) {
    val start = System.currentTimeMillis
    call
    val end = System.currentTimeMillis
    println((end-start)+" mils")
  }

  def main(args: Array[String]) {
    timeOf(findPrimes(100*1000))
    println("------------------------")
    timeOf(findPrimesPar(100*1000))
  }
}

EDIT: Following Emil H's suggestion, I have changed my code to avoid the effects of IO and JVM warmup:

The results on my machine:

List(3432, 1934, 3261, 1716, 3229, 1654, 3214, 1700)

object SexyPrimes {
  def isPrime(n: Int): Boolean =
    (2 to math.sqrt(n).toInt).forall{ n%_ != 0 }

  def isSexyPrime(n: Int): Boolean = isPrime(n) && isPrime(n-6)

  def findPrimesPar(n: Int) {
    for(k <- (11 to n).par)
      if(isSexyPrime(k)) ()//printf("%d %d\n",k-6,k)
  }

  def findPrimes(n: Int) {
    for(k <- 11 to n)
      if(isSexyPrime(k)) ()//printf("%d %d\n",k-6,k)
  }

  def timeOf(call : =>Unit): Long = {
    val start = System.currentTimeMillis
    call
    val end = System.currentTimeMillis
    end - start
  }

  def main(args: Array[String]) {
    val xs = timeOf(findPrimes(1000*1000))::timeOf(findPrimesPar(1000*1000))::
             timeOf(findPrimes(1000*1000))::timeOf(findPrimesPar(1000*1000))::
             timeOf(findPrimes(1000*1000))::timeOf(findPrimesPar(1000*1000))::
             timeOf(findPrimes(1000*1000))::timeOf(findPrimesPar(1000*1000))::Nil
    println(xs)
  }
}

Don't forget Fortran! (Mostly joking, but I would expect performance similar to C.) The statements with exclamation points are optional, but good style. (! is a comment character in Fortran 90.)

logical function isprime(n)
    IMPLICIT NONE !
    integer :: n, i
    isprime = .true.
    do i = 2, n-1
        if (mod(n,i) .eq. 0) then
            isprime = .false.
            return
        endif
    enddo
end


subroutine findprimes(m)
    IMPLICIT NONE !
    integer :: m, i
    logical, external :: isprime

    do i = 11, m
        if (isprime(i) .and. isprime(i-6)) then
            write(*,*) i-6, i
        endif
    enddo
end


program main
    call findprimes(10*1000)
end

Just for the fun of it, here is a parallel Ruby version.

require 'benchmark'


num = ARGV[0].to_i


def is_prime?(n)
  (2...n).all?{|m| n%m != 0 }
end


def sexy_primes_default(x)
  (9..x).map do |i|
    [i-6, i]
  end.select do |j|
    j.all?{|j| is_prime? j}
  end
end


def sexy_primes_threads(x)
  partition = (9..x).map do |i|
    [i-6, i]
  end.group_by do |x|
    x[0].to_s[-1]
  end
  threads = Array.new
  partition.each_key do |k|
    threads << Thread.new do
      partition[k].select do |j|
        j.all?{|j| is_prime? j}
      end
    end
  end
  threads.each {|t| t.join}
  threads.map{|t| t.value}.reject{|x| x.empty?}
end


puts "Running up to num #{num}"


Benchmark.bm(10) do |x|
  x.report("default") {a = sexy_primes_default(num)}
  x.report("threads") {a = sexy_primes_threads(num)}
end

On my 1.8GHz Core i5 MacBook Air, the performance results are:

# Ruby 1.9.3
$ ./sexyprimes.rb 100000
Running up to num 100000
user     system      total        real
default     68.840000   0.060000  68.900000 ( 68.922703)
threads     71.730000   0.090000  71.820000 ( 71.847346)


# JRuby 1.6.7.2 on JVM 1.7.0_05
$ jruby --1.9 --server sexyprimes.rb 100000
Running up to num 100000
user     system      total        real
default    56.709000   0.000000  56.709000 ( 56.708000)
threads    36.396000   0.000000  36.396000 ( 36.396000)


# JRuby 1.7.0.preview1 on JVM 1.7.0_05
$ jruby --server sexyprimes.rb 100000
Running up to num 100000
user     system      total        real
default     52.640000   0.270000  52.910000 ( 51.393000)
threads    105.700000   0.290000 105.990000 ( 30.298000)

It looks like the JVM's JIT is giving Ruby a nice performance boost in the default case, while true multithreading helps JRuby perform 50% faster in the threaded case. What's more interesting is that JRuby 1.7 improves the JRuby 1.6 score by a healthy 17%!

I couldn't resist doing a few of the most obvious optimizations for the C version, which makes the 100k test take 0.3 s on my machine (5 times faster than the C version in the question, both compiled with MSVC 2010 /Ox).

#include <stdio.h>

int isprime( int x )
{
    int i, n;
    for( i = 3, n = x >> 1; i <= n; i += 2 )
        if( x % i == 0 )
            return 0;
    return 1;
}


void findprimes( int m )
{
    int i, s = 3; // s is bitmask of primes in last 3 odd numbers
    for( i = 11; i < m; i += 2, s >>= 1 ) {
        if( isprime( i ) ) {
            if( s & 1 )
                printf( "%d %d\n", i - 6, i );
            s |= 1 << 3;
        }
    }
}


int main() {
    findprimes( 10 * 1000 );
    return 0;
}

Here is the identical implementation in Java:

public class prime
{
    private static boolean isprime( final int x )
    {
        for( int i = 3, n = x >> 1; i <= n; i += 2 )
            if( x % i == 0 )
                return false;
        return true;
    }


    private static void findprimes( final int m )
    {
        int s = 3; // s is bitmask of primes in last 3 odd numbers
        for( int i = 11; i < m; i += 2, s >>= 1 ) {
            if( isprime( i ) ) {
                if( ( s & 1 ) != 0 )
                    print( i );
                s |= 1 << 3;
            }
        }
    }


    private static void print( int i )
    {
        System.out.println( ( i - 6 ) + " " + i );
    }


    public static void main( String[] args )
    {
        // findprimes( 300 * 1000 ); // for some JIT training
        long time = System.nanoTime();
        findprimes( 10 * 1000 );
        time = System.nanoTime() - time;
        System.err.println( "time: " + ( time / 10000 ) / 100.0 + "ms" );
    }
}

With Java 1.7.0_04 this runs almost exactly as fast as the C version. Client or server VM doesn't show much difference, except that JIT training seems to help the server VM a bit (~3%) while it has almost no effect with the client VM. The output in Java seems to be slower than in C. If the output is replaced with a static counter in both versions, the Java version runs a little faster than the C version.

These are my times for the 100k run:

  • 319ms C compiled with /Ox and output to >NIL:
  • 312ms C compiled with /Ox and static counter
  • 324ms Java client VM with output to >NIL:
  • 299ms Java client VM with static counter

and the 1M run (16386 results):

  • 24.95s C compiled with /Ox and static counter
  • 25.08s Java client VM with static counter
  • 24.86s Java server VM with static counter

While this does not really answer your questions, it shows that small tweaks can have a noteworthy impact on performance. So to be able to really compare languages you should try to avoid all algorithmic differences as much as possible.

It also gives a hint why Scala seems rather fast. It runs on the Java VM and thus benefits from its impressive performance.

The answer to your question #1 is: yes, the JVM is incredibly fast, and yes, static typing helps.

The JVM should be faster than C in the long run, possibly even faster than "normal" assembly language. Of course you can always hand-optimize assembly to beat anything by doing manual runtime profiling and creating a separate version for each CPU; you just have to be amazingly good and knowledgeable.

The reasons for Java's speed are:

The JVM can analyze your code while it runs and hand-optimize it. For instance, if you had a method that could be statically analyzed at compile time to be a true function and the JVM noticed that you were often calling it with the same parameters, it COULD actually eliminate the call completely and just inject the results from the last call (I'm not sure if Java actually does this exactly, but it does a lot of stuff like this).

Due to static typing, the JVM can know a lot about your code at compile time, which lets it pre-optimize quite a bit of stuff. It also lets the compiler optimize each class individually without knowledge of how another class is planning to use it. Also, Java doesn't have arbitrary pointers to memory locations; it KNOWS which values in memory may and may not be changed and can optimize accordingly.

Heap allocation is MUCH more efficient than in C; Java's heap allocation is closer to C's stack allocation in speed, yet more versatile. A lot of time has gone into the different algorithms used here; it's an art. For instance, all the objects with a short lifespan (like C's stack variables) are allocated to a "known" free location (no searching for a free spot with enough space) and are all freed together in a single step (like a stack pop).

The JVM can know quirks about your CPU architecture and generate machine code specifically for a given CPU.

The JVM can speed up your code long after you shipped it. Much like moving a program to a new CPU can speed it up, moving it to a new version of the JVM can also give you huge speed improvements tailored to CPUs that didn't even exist when you initially compiled your code, something C physically cannot do without a recompile.

By the way, most of the bad rep for Java's speed comes from the long startup time to load the JVM (someday someone will build the JVM into the OS and this will go away!) and the fact that many developers are really bad at writing GUI code (especially threaded), which caused Java GUIs to often become unresponsive and glitchy. Simple-to-use languages like Java and VB have their faults amplified by the fact that the capabilities of the average programmer tend to be lower than with more complicated languages.

Based on x4u's answer, I wrote a Scala version using recursion, and I improved on it by only going to the sqrt instead of x/2 for the prime check function. I get ~250 ms for 100k and ~600 ms for 1M. I went ahead and ran 10M in ~6 s.

import scala.annotation.tailrec


var count = 0;
def print(i:Int) = {
  println((i - 6) + " " + i)
  count += 1
}


@tailrec def isPrime(n:Int, i:Int = 3):Boolean = {
  if(n % i == 0) return false;
  else if(i * i > n) return true;
  else isPrime(n = n, i = i + 2)
}


@tailrec def findPrimes(max:Int, bitMask:Int = 3, i:Int = 11):Unit = {
  if (isPrime(i)) {
    if((bitMask & 1) != 0) print(i)
    if(i + 2 < max) findPrimes(max = max, bitMask = (bitMask | (1 << 3)) >> 1, i = i + 2)
  } else if(i + 2 < max) {
    findPrimes(max = max, bitMask = bitMask >> 1, i = i + 2)
  }
}


val a = System.currentTimeMillis()
findPrimes(max=10000000)
println(count)
val b = System.currentTimeMillis()
println((b - a).toString + " mils")

I also went back and wrote a CoffeeScript (V8 JavaScript) version, which gets ~15ms for 100k, 250ms for 1M, and 6s for 10M, by using a counter (ignoring I/O). If I turn on the output it takes ~150ms for 100k, 1s for 1M, and 12s for 10M. Couldn't use tail recursion here, unfortunately, so I had to convert it back into loops.

count = 0
print = (i) ->
  console.log("#{i - 6} #{i}")
  count += 1
  return


isPrime = (n) ->
  i = 3
  while i * i <= n
    if n % i == 0
      return false
    i += 2
  return true


findPrimes = (max) ->
  bitMask = 3
  for i in [11..max] by 2
    prime = isPrime(i)
    if prime
      if (bitMask & 1) != 0
        print(i)
      bitMask |= (1 << 3)
    bitMask >>= 1
  return


a = new Date()
findPrimes(1000000)
console.log(count)
b = new Date()
console.log((b - a) + " ms")