String.Join vs. StringBuilder: which is faster?

In a previous question about formatting a double[][] into CSV format, someone suggested that using StringBuilder would be faster than using String.Join. Is that true?

Yes. If you do more than a couple of joins, it will be a lot faster.

When you do a String.Join, the runtime has to:

  1. Allocate memory for the resulting string
  2. Copy the contents of the first string to the beginning of the output string
  3. Copy the contents of the second string to the end of the output string

If you do two joins, it has to copy the data twice, and so on.
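In code, that repeated-copy pattern looks something like this (a sketch of exactly what to avoid; parts is assumed to already be in hand):

string[] parts = { "alpha", "beta", "gamma", "delta" };
string result = parts[0];
for (int i = 1; i < parts.Length; i++)
{
    // each iteration allocates a brand-new string and copies everything
    // accumulated so far plus the next part: O(n^2) copying overall
    result = String.Join(",", new string[] { result, parts[i] });
}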

StringBuilder, on the other hand, allocates one buffer with space to spare, so data can be appended without copying the original string. As long as there is space left in the buffer, each appended string is written into it directly; the whole string only has to be copied once, at the end.
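You can watch the spare capacity at work with a minimal sketch like this (the exact capacity values are an implementation detail and vary between runtime versions):

StringBuilder sb = new StringBuilder();
for (int i = 0; i < 5; i++)
{
    sb.Append("some more text... ");
    // Capacity jumps occasionally; the appends in between write straight
    // into the spare space without copying what is already there
    Console.WriteLine("Length={0}, Capacity={1}", sb.Length, sb.Capacity);
}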

Short answer: it depends.

Long answer: if you already have an array of strings to concatenate together (with a delimiter), String.Join is the fastest way of doing it.

String.Join can look through all of the strings to work out the exact length it needs, then go through again and copy all the data. This means there will be no extra copying involved. The only downside is that it has to go through the strings twice, which means potentially blowing the memory cache more times than necessary.
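As a rough illustration of that two-pass idea (a simplified sketch only; JoinTwoPass is a made-up name, and the real implementation works on the raw character buffer rather than going through a StringBuilder):

static string JoinTwoPass(string separator, string[] values)
{
    if (values.Length == 0)
        return String.Empty;

    // pass 1: compute the exact output length up front
    int length = separator.Length * (values.Length - 1);
    foreach (string s in values)
        length += s.Length;

    // pass 2: copy each piece exactly once into a right-sized buffer
    StringBuilder sb = new StringBuilder(length);
    sb.Append(values[0]);
    for (int i = 1; i < values.Length; i++)
        sb.Append(separator).Append(values[i]);
    return sb.ToString();
}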

If you don't have the strings as an array beforehand, it's probably faster to use StringBuilder - but there will be situations where it isn't. If using a StringBuilder means doing lots and lots of copies, then building an array and then calling String.Join may well be faster.
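For example, when the values arrive one at a time, the two approaches look like this (GetValues() is a hypothetical source; which option wins depends on the data, as described above):

// Option 1: append as the values arrive
StringBuilder sb = new StringBuilder();
bool first = true;
foreach (double value in GetValues())
{
    if (!first) sb.Append(',');
    sb.Append(value);
    first = false;
}
string csv1 = sb.ToString();

// Option 2: collect everything first, then join once
List<string> parts = new List<string>();
foreach (double value in GetValues())
    parts.Add(value.ToString());
string csv2 = String.Join(",", parts.ToArray()); // older String.Join overloads need a string[]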

EDIT: This is in terms of a single call to String.Join vs a bunch of calls to StringBuilder.Append. In the original question, we had two different levels of String.Join calls, so each of the nested calls would have created an intermediate string. In other words, it's even more complex and harder to guess about. I would be surprised to see either way "win" significantly (in complexity terms) with typical data.

EDIT: When I'm at home, I'll write up a benchmark which is as painful as possible for StringBuilder. Basically if you have an array where each element is about twice the size of the previous one, and you get it just right, you should be able to force a copy for every append (of elements, not of the delimiter, although that needs to be taken into account too). At that point it's nearly as bad as simple string concatenation - but String.Join will have no problems.
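One way such an input could be constructed (MakePathologicalArray is a made-up helper; whether every append really forces a copy depends on the builder's initial capacity and on the delimiters):

static string[] MakePathologicalArray(int count) // keep count modest: memory grows as 2^count
{
    string[] result = new string[count];
    int size = 1;
    for (int i = 0; i < count; i++)
    {
        // each element is twice the length of the previous one, so a buffer
        // that doubles when it overflows can be made to reallocate and copy
        // on (almost) every append
        result[i] = new string('x', size);
        size *= 2;
    }
    return result;
}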

I don't think so. Looking through Reflector, the implementation of String.Join looks highly optimized. It also has the added benefit of knowing the total size of the string to be created in advance, so it doesn't need any reallocation.

I have created two test methods to compare them:

public static string TestStringJoin(double[][] array)
{
    return String.Join(Environment.NewLine,
        Array.ConvertAll(array,
            row => String.Join(",",
                Array.ConvertAll(row, x => x.ToString()))));
}

public static string TestStringBuilder(double[][] source)
{
    // based on Marc Gravell's code
    // (note: unlike TestStringJoin, this version does not insert a
    // newline between rows)
    StringBuilder sb = new StringBuilder();
    foreach (var row in source)
    {
        if (row.Length > 0)
        {
            sb.Append(row[0]);
            for (int i = 1; i < row.Length; i++)
            {
                sb.Append(',').Append(row[i]);
            }
        }
    }
    return sb.ToString();
}

I ran each method 50 times, passing in an array of size [2048][64]. I did this for two arrays; one filled with zeros and another filled with random values. I got the following results on my machine (P4 3.0 GHz, single-core, no HT, running in Release mode from CMD):

// with zeros:
TestStringJoin    took 00:00:02.2755280
TestStringBuilder took 00:00:02.3536041

// with random values:
TestStringJoin    took 00:00:05.6412147
TestStringBuilder took 00:00:05.8394650

Increasing the size of the array to [2048][512], while decreasing the number of iterations to 10 got me the following results:

// with zeros:
TestStringJoin    took 00:00:03.7146628
TestStringBuilder took 00:00:03.8886978

// with random values:
TestStringJoin    took 00:00:09.4991765
TestStringBuilder took 00:00:09.3033365

The results are repeatable (almost; there are small fluctuations caused by the different random values). Apparently String.Join is a little faster most of the time (although by a very small margin).

This is the code I used for testing:

const int Iterations = 50;
const int Rows = 2048;
const int Cols = 64; // 512

static void Main()
{
    OptimizeForTesting(); // set process priority to RealTime

    // test 1: zeros
    double[][] array = new double[Rows][];
    for (int i = 0; i < array.Length; ++i)
        array[i] = new double[Cols];

    CompareMethods(array);

    // test 2: random values
    Random random = new Random();
    double[] template = new double[Cols];
    for (int i = 0; i < template.Length; ++i)
        template[i] = random.NextDouble();

    for (int i = 0; i < array.Length; ++i)
        array[i] = template;

    CompareMethods(array);
}

static void CompareMethods(double[][] array)
{
    Stopwatch stopwatch = Stopwatch.StartNew();
    for (int i = 0; i < Iterations; ++i)
        TestStringJoin(array);
    stopwatch.Stop();
    Console.WriteLine("TestStringJoin    took " + stopwatch.Elapsed);

    stopwatch.Reset(); stopwatch.Start();
    for (int i = 0; i < Iterations; ++i)
        TestStringBuilder(array);
    stopwatch.Stop();
    Console.WriteLine("TestStringBuilder took " + stopwatch.Elapsed);
}

static void OptimizeForTesting()
{
    Thread.CurrentThread.Priority = ThreadPriority.Highest;
    Process currentProcess = Process.GetCurrentProcess();
    currentProcess.PriorityClass = ProcessPriorityClass.RealTime;
    if (Environment.ProcessorCount > 1)
    {
        // use last core only
        currentProcess.ProcessorAffinity
            = new IntPtr(1 << (Environment.ProcessorCount - 1));
    }
}

Here's my test rig, using int[][] for simplicity; results first:

Join: 9420ms (chk: 210710000)
OneBuilder: 9021ms (chk: 210710000)

(update for double results:)

Join: 11635ms (chk: 210710000)
OneBuilder: 11385ms (chk: 210710000)

(update re 2048 * 64 * 150)

Join: 11620ms (chk: 206409600)
OneBuilder: 11132ms (chk: 206409600)

and with OptimizeForTesting enabled:

Join: 11180ms (chk: 206409600)
OneBuilder: 10784ms (chk: 206409600)

So StringBuilder is faster, but not massively so; here's the rig (run at the console, in release mode, etc.):

using System;
using System.Collections.Generic;
using System.Diagnostics;
using System.Text;

namespace ConsoleApplication2
{
    class Program
    {
        static void Collect()
        {
            GC.Collect(GC.MaxGeneration, GCCollectionMode.Forced);
            GC.WaitForPendingFinalizers();
            GC.Collect(GC.MaxGeneration, GCCollectionMode.Forced);
            GC.WaitForPendingFinalizers();
        }
        static void Main(string[] args)
        {
            const int ROWS = 500, COLS = 20, LOOPS = 2000;
            int[][] data = new int[ROWS][];
            Random rand = new Random(123456);
            for (int row = 0; row < ROWS; row++)
            {
                int[] cells = new int[COLS];
                for (int col = 0; col < COLS; col++)
                {
                    cells[col] = rand.Next();
                }
                data[row] = cells;
            }
            Collect();
            int chksum = 0;
            Stopwatch watch = Stopwatch.StartNew();
            for (int i = 0; i < LOOPS; i++)
            {
                chksum += Join(data).Length;
            }
            watch.Stop();
            Console.WriteLine("Join: {0}ms (chk: {1})", watch.ElapsedMilliseconds, chksum);

            Collect();
            chksum = 0;
            watch = Stopwatch.StartNew();
            for (int i = 0; i < LOOPS; i++)
            {
                chksum += OneBuilder(data).Length;
            }
            watch.Stop();
            Console.WriteLine("OneBuilder: {0}ms (chk: {1})", watch.ElapsedMilliseconds, chksum);

            Console.WriteLine("done");
            Console.ReadLine();
        }
        public static string Join(int[][] array)
        {
            return String.Join(Environment.NewLine,
                Array.ConvertAll(array,
                    row => String.Join(",",
                        Array.ConvertAll(row, x => x.ToString()))));
        }
        public static string OneBuilder(IEnumerable<int[]> source)
        {
            StringBuilder sb = new StringBuilder();
            bool firstRow = true;
            foreach (var row in source)
            {
                if (firstRow)
                {
                    firstRow = false;
                }
                else
                {
                    sb.AppendLine();
                }
                if (row.Length > 0)
                {
                    sb.Append(row[0]);
                    for (int i = 1; i < row.Length; i++)
                    {
                        sb.Append(',').Append(row[i]);
                    }
                }
            }
            return sb.ToString();
        }
    }
}

Unless the 1% difference turns into something significant in terms of the time the entire program takes to run, this looks like micro-optimization. I'd write the code that's the most readable/understandable and not worry about the 1% performance difference.