Is it better to reuse a StringBuilder in a loop?

I have a performance-related question about using StringBuilder. In a very long loop, I manipulate a StringBuilder and pass it to another method, like this:

for (loop condition) {
    StringBuilder sb = new StringBuilder();
    sb.append("some string");
    . . .
    sb.append(anotherString);
    . . .
    passToMethod(sb.toString());
}

Is instantiating the StringBuilder in every loop iteration a good solution? Would it be better to call delete instead, like this?

StringBuilder sb = new StringBuilder();
for (loop condition) {
    sb.delete(0, sb.length());
    sb.append("some string");
    . . .
    sb.append(anotherString);
    . . .
    passToMethod(sb.toString());
}

The modern JVM is really smart about stuff like this. I would not second-guess it and do something hacky that is less maintainable/readable... unless you do proper benchmarks with production data that validate a non-trivial performance improvement (and document it ;)

In the philosophy of writing solid code, it's always better to put your StringBuilder inside your loop. This way it doesn't leak outside the code it's intended for.

Secondly, the biggest improvement with StringBuilder comes from giving it an initial size, so it doesn't have to grow while the loop runs:

for (loop condition) {
    StringBuilder sb = new StringBuilder(4096);
}

Based on my experience with developing software on Windows I would say clearing the StringBuilder out during your loop has better performance than instantiating a StringBuilder with each iteration. Clearing it frees that memory to be overwritten immediately with no additional allocation required. I'm not familiar enough with the Java garbage collector, but I would think that freeing and no reallocation (unless your next string grows the StringBuilder) is more beneficial than instantiation.

(My opinion is contrary to what everyone else is suggesting. Hmm. Time to benchmark it.)

The second one is about 25% faster in my mini-benchmark.

public class ScratchPad {

    static String a;

    public static void main( String[] args ) throws Exception {
        long time = System.currentTimeMillis();
        for( int i = 0; i < 10000000; i++ ) {
            StringBuilder sb = new StringBuilder();
            sb.append( "someString" );
            sb.append( "someString2"+i );
            sb.append( "someStrin4g"+i );
            sb.append( "someStr5ing"+i );
            sb.append( "someSt7ring"+i );
            a = sb.toString();
        }
        System.out.println( System.currentTimeMillis()-time );

        time = System.currentTimeMillis();
        StringBuilder sb = new StringBuilder();
        for( int i = 0; i < 10000000; i++ ) {
            sb.delete( 0, sb.length() );
            sb.append( "someString" );
            sb.append( "someString2"+i );
            sb.append( "someStrin4g"+i );
            sb.append( "someStr5ing"+i );
            sb.append( "someSt7ring"+i );
            a = sb.toString();
        }
        System.out.println( System.currentTimeMillis()-time );
    }
}

Results:

25265
17969

Note that this is with JRE 1.6.0_07.


Based on Jon Skeet's ideas in the edit, here's version 2. Same results though.

public class ScratchPad {

    static String a;

    public static void main( String[] args ) throws Exception {
        long time = System.currentTimeMillis();
        StringBuilder sb = new StringBuilder();
        for( int i = 0; i < 10000000; i++ ) {
            sb.delete( 0, sb.length() );
            sb.append( "someString" );
            sb.append( "someString2" );
            sb.append( "someStrin4g" );
            sb.append( "someStr5ing" );
            sb.append( "someSt7ring" );
            a = sb.toString();
        }
        System.out.println( System.currentTimeMillis()-time );

        time = System.currentTimeMillis();
        for( int i = 0; i < 10000000; i++ ) {
            StringBuilder sb2 = new StringBuilder();
            sb2.append( "someString" );
            sb2.append( "someString2" );
            sb2.append( "someStrin4g" );
            sb2.append( "someStr5ing" );
            sb2.append( "someSt7ring" );
            a = sb2.toString();
        }
        System.out.println( System.currentTimeMillis()-time );
    }
}

Results:

5016
7516

Okay, I now understand what's going on, and it does make sense.

I was under the impression that toString just passed the underlying char[] into a String constructor which didn't take a copy. A copy would then be made on the next "write" operation (e.g. delete). I believe this was the case with StringBuffer in some previous version. (It isn't now.) But no - toString just passes the array (and index and length) to the public String constructor which takes a copy.

So in the "reuse the StringBuilder" case we genuinely create one copy of the data per string, using the same char array in the buffer the whole time. Obviously creating a new StringBuilder each time creates a new underlying buffer - and then that buffer is copied (somewhat pointlessly, in our particular case, but done for safety reasons) when creating a new string.
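That defensive copy is easy to observe directly; a minimal sketch (class and variable names are mine, not from the answer):

```java
public class ToStringCopyDemo {
    public static void main(String[] args) {
        StringBuilder sb = new StringBuilder("hello");

        // toString() copies the builder's characters into a new String.
        String snapshot = sb.toString();

        // Mutating the builder afterwards...
        sb.setLength(0);
        sb.append("goodbye");

        // ...leaves the earlier String untouched, proving a copy was taken.
        System.out.println(snapshot);   // hello
        System.out.println(sb);         // goodbye
    }
}
```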

All this leads to the second version definitely being more efficient - but at the same time I'd still say it's uglier code.

Declare once, and assign each time. It is a more pragmatic and reusable concept than an optimization.

The first is better for humans. If the second is a bit faster on some versions of some JVMs, so what?

If performance is that critical, bypass StringBuilder and write your own. If you're a good programmer, and take into account how your app is using this function, you should be able to make it even faster. Worthwhile? Probably not.

Why is this question starred as a "favorite question"? Because performance optimization is so much fun, whether or not it is practical.

Since I don't think it's been pointed out yet: because of optimizations built into the Sun Java compiler, which automatically creates a StringBuilder (StringBuffer pre-J2SE 5.0) when it sees String concatenation, the first example in the question is equivalent to:

for (loop condition) {
    String s = "some string";
    . . .
    s += anotherString;
    . . .
    passToMethod(s);
}

Which is, IMO, the more readable and therefore better approach. Your attempts to optimize may yield gains on some platforms, but potentially losses on others.

But if you really are running into issues with performance, then sure, optimize away. I'd start with explicitly specifying the buffer size of the StringBuilder though, per Jon Skeet.
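For reference, a rough sketch of that desugaring: each `+=` on a String compiles (with pre-Java-9 javac) to a fresh StringBuilder, append calls, and a toString(). The class and variable names here are mine:

```java
public class ConcatDesugarDemo {
    public static void main(String[] args) {
        String anotherString = "world";

        // Source form:
        String s = "some string";
        s += anotherString;

        // Roughly what the compiler emits for the += above --
        // a new StringBuilder per concatenation:
        String t = "some string";
        t = new StringBuilder().append(t).append(anotherString).toString();

        System.out.println(s.equals(t));   // true
    }
}
```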

Faster still:

public class ScratchPad {

    private static String a;

    public static void main( String[] args ) throws Exception {
        final long time = System.currentTimeMillis();

        // Pre-allocate enough space to store all appended strings.
        // StringBuilder, ultimately, uses an array of characters.
        final StringBuilder sb = new StringBuilder( 128 );

        for( int i = 0; i < 10000000; i++ ) {
            // Resetting the string is faster than creating a new object.
            // Since this is a critical loop, every instruction counts.
            sb.setLength( 0 );
            sb.append( "someString" );
            sb.append( "someString2" );
            sb.append( "someStrin4g" );
            sb.append( "someStr5ing" );
            sb.append( "someSt7ring" );
            setA( sb.toString() );
        }

        System.out.println( System.currentTimeMillis() - time );
    }

    private static void setA( final String aString ) {
        a = aString;
    }
}

In the philosophy of writing solid code, the inner workings of the method are hidden from the client objects. Thus it makes no difference from the system's perspective whether you re-declare the StringBuilder within the loop or outside of the loop. Since declaring it outside of the loop is faster, and it does not make the code significantly more complicated, reuse the object.

Even if it was much more complicated, and you knew for certain that object instantiation was the bottleneck, comment it.

Three runs with this answer:

$ java ScratchPad
1567
$ java ScratchPad
1569
$ java ScratchPad
1570

Three runs with the other answer:

$ java ScratchPad2
1663
2231
$ java ScratchPad2
1656
2233
$ java ScratchPad2
1658
2242

Although not significant here, setting the StringBuilder's initial buffer size to prevent memory re-allocations will give a small additional performance gain.

The reason why doing a setLength or delete improves the performance is mostly that the code "learns" the right size of the buffer, and less to do with the memory allocation itself. Generally, I recommend letting the compiler do the string optimizations. However, if performance is critical, I'll often pre-calculate the expected size of the buffer. The default StringBuilder size is 16 characters. If you grow beyond that, it has to resize, and resizing is where the performance is lost. Here's another mini-benchmark which illustrates this:

private void clear() throws Exception {
    long time = System.currentTimeMillis();
    int maxLength = 0;
    StringBuilder sb = new StringBuilder();

    for( int i = 0; i < 10000000; i++ ) {
        // Resetting the string is faster than creating a new object.
        // Since this is a critical loop, every instruction counts.
        sb.setLength( 0 );
        sb.append( "someString" );
        sb.append( "someString2" ).append( i );
        sb.append( "someStrin4g" ).append( i );
        sb.append( "someStr5ing" ).append( i );
        sb.append( "someSt7ring" ).append( i );
        maxLength = Math.max(maxLength, sb.toString().length());
    }

    System.out.println(maxLength);
    System.out.println("Clear buffer: " + (System.currentTimeMillis()-time) );
}

private void preAllocate() throws Exception {
    long time = System.currentTimeMillis();
    int maxLength = 0;

    for( int i = 0; i < 10000000; i++ ) {
        StringBuilder sb = new StringBuilder(82);
        sb.append( "someString" );
        sb.append( "someString2" ).append( i );
        sb.append( "someStrin4g" ).append( i );
        sb.append( "someStr5ing" ).append( i );
        sb.append( "someSt7ring" ).append( i );
        maxLength = Math.max(maxLength, sb.toString().length());
    }

    System.out.println(maxLength);
    System.out.println("Pre allocate: " + (System.currentTimeMillis()-time) );
}

public void testBoth() throws Exception {
    for(int i = 0; i < 5; i++) {
        clear();
        preAllocate();
    }
}

The results show reusing the object is about 10% faster than creating a buffer of the expected size.

LOL, this is the first time I've ever seen people compare performance by concatenating strings inside a StringBuilder. If that were the goal, using "+" could be even faster ;D. The point of using StringBuilder is to speed up retrieval of the whole string, via the concept of "locality".

In a scenario where you frequently retrieve a String value that does not need frequent changes, StringBuilder gives you faster string retrieval. That is the purpose of StringBuilder.. please do not mis-test its core purpose..

Some people said a plane flies faster. So I tested it against my bike, and found the plane moves slower. Do you see how I set up the experiment? ;D

Not significantly faster, but from my tests it is on average a couple of millis faster using 1.6.0_45 64-bit: use StringBuilder.setLength(0) instead of StringBuilder.delete():

time = System.currentTimeMillis();
StringBuilder sb2 = new StringBuilder();
for (int i = 0; i < 10000000; i++) {
    sb2.append( "someString" );
    sb2.append( "someString2"+i );
    sb2.append( "someStrin4g"+i );
    sb2.append( "someStr5ing"+i );
    sb2.append( "someSt7ring"+i );
    a = sb2.toString();
    sb2.setLength(0);
}
System.out.println( System.currentTimeMillis()-time );

The fastest way is to use setLength. It doesn't involve a copy operation. Creating a new StringBuilder each time should be ruled out completely. StringBuilder.delete(int start, int end) is slower because it copies the array again for the resizing part:

    System.arraycopy(value, start+len, value, start, count-end);

After that, StringBuilder.delete() updates StringBuilder.count to the new size, whereas StringBuilder.setLength() simply updates StringBuilder.count to the new size directly.
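From the caller's point of view the two reset calls are interchangeable; only the internal bookkeeping described above differs. A small sketch (class name is mine):

```java
public class ResetDemo {
    public static void main(String[] args) {
        StringBuilder sb = new StringBuilder(64);

        sb.append("first pass");
        sb.setLength(0);              // just sets the internal count back to 0
        sb.append("second pass");
        System.out.println(sb);       // second pass

        sb.delete(0, sb.length());    // same visible effect, via the arraycopy path
        sb.append("third pass");
        System.out.println(sb);       // third pass
    }
}
```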

I don't think it makes sense to try to optimize performance like that. Today (2019), both statements run in about 11 sec for 100,000,000 loops on my i5 laptop:

String a;
StringBuilder sb = new StringBuilder();
long time = 0;

System.gc();
time = System.currentTimeMillis();
for (int i = 0; i < 100000000; i++) {
    StringBuilder sb3 = new StringBuilder();
    sb3.append("someString");
    sb3.append("someString2");
    sb3.append("someStrin4g");
    sb3.append("someStr5ing");
    sb3.append("someSt7ring");
    a = sb3.toString();
}
System.out.println(System.currentTimeMillis() - time);


System.gc();
time = System.currentTimeMillis();
for (int i = 0; i < 100000000; i++) {
    sb.setLength(0);
    sb.delete(0, sb.length());
    sb.append("someString");
    sb.append("someString2");
    sb.append("someStrin4g");
    sb.append("someStr5ing");
    sb.append("someSt7ring");
    a = sb.toString();
}
System.out.println(System.currentTimeMillis() - time);

==> 11000 msec (declaration inside loop) and 8236 msec (declaration outside loop)

Even though I run programs for address deduplication with some billions of loops, a difference of 2 sec per 100 million loops makes no difference, because those programs run for hours. Also be aware that things are different if you only have one append statement:

System.gc();
time = System.currentTimeMillis();
for (int i = 0; i < 100000000; i++) {
    StringBuilder sb3 = new StringBuilder();
    sb3.append("someString");
    a = sb3.toString();
}
System.out.println(System.currentTimeMillis() - time);

System.gc();
time = System.currentTimeMillis();
for (int i = 0; i < 100000000; i++) {
    sb.setLength(0);
    sb.delete(0, sb.length());
    sb.append("someString");
    a = sb.toString();
}
System.out.println(System.currentTimeMillis() - time);

==> 3416 msec (inside loop), 3555 msec (outside loop). The first statement, which creates the StringBuilder within the loop, is faster in that case. And if you change the order of execution, it is even faster:

System.gc();
time = System.currentTimeMillis();
for (int i = 0; i < 100000000; i++) {
    sb.setLength(0);
    sb.delete(0, sb.length());
    sb.append("someString");
    a = sb.toString();
}
System.out.println(System.currentTimeMillis() - time);

System.gc();
time = System.currentTimeMillis();
for (int i = 0; i < 100000000; i++) {
    StringBuilder sb3 = new StringBuilder();
    sb3.append("someString");
    a = sb3.toString();
}
System.out.println(System.currentTimeMillis() - time);

==> 3638 msec (outside loop), 2908 msec (inside loop)

Regards, Ulrich

The practice of not recreating so many new objects in a tight loop, where easily avoidable, definitely has a clear and obvious benefit as shown by the performance benchmarks.

However it also has a more subtle benefit that no one has mentioned.

This secondary benefit is related to an application freeze I saw in a large app processing the persistent objects produced after parsing CSV files with millions of lines/records and each record having about 140 fields.

Creating a new object here and there doesn't normally affect the garbage collector's workload.

Creating two new objects in a tight loop that iterates through each of the 140 fields in each of the millions of records in the aforementioned app incurs more than just mere wasted CPU cycles. It places a massive burden on the GC.

For the objects created by parsing a CSV file with 10 million lines the GC was being asked to allocate then clean up 2 x 140 x 10,000,000 = 2.8 billion objects!!!

If at any stage the amount of free memory gets scarce, e.g. the app has been asked to process multiple large files simultaneously, you run the risk that the app ends up doing far more GC'ing than actual work. When the GC effort takes up more than 98% of the CPU time, then BANG! You get one of these dreaded exceptions:

GC Overhead Limit Exceeded

https://www.baeldung.com/java-gc-overhead-limit-exceeded

In that case, rewriting the code to reuse objects like the StringBuilder, instead of instantiating a new one at each iteration, can avoid a lot of GC activity (by not instantiating an extra 2.8 billion objects unnecessarily), reduce the chance of the app throwing a "GC Overhead Limit Exceeded" error, and drastically improve its general performance even when it is not yet tight on memory.

Clearly, "leave it to the JVM to optimize" cannot be a rule of thumb applicable to all scenarios.

With the sort of metrics associated with known large input files, nobody who writes code to avoid the unnecessary creation of 2.8 billion objects should ever be accused by the "puritanicals" of "pre-optimizing" ;)

Any dev with half a brain and the slightest amount of foresight could see that this type of optimization for the expected input file size was warranted from day one.