Most optimized way of string concatenation

Every day we run into many situations where we need to do a lot of long-winded, tedious string operations in our code. We all know that string operations are expensive. I would like to know which of the available variants is the cheapest.

The most common operation is concatenation (and it is something we have some control over). What is the best way to concatenate std::strings in C++, and what are the various solutions for speeding concatenation up?

I mean,

std::string l_czTempStr;


1) l_czTempStr = "Test data1" + "Test data2" + "Test data3";


2) l_czTempStr = "Test data1";
l_czTempStr += "Test data2";
l_czTempStr += "Test data3";


3) using the << operator


4) using append()

Also, is there any advantage to using CString over std::string?


Here is a small test suite:

#include <iostream>
#include <string>
#include <chrono>
#include <sstream>


int main()
{
    typedef std::chrono::high_resolution_clock clock;
    typedef std::chrono::duration<float, std::milli> mil;
    std::string l_czTempStr;
    std::string s1 = "Test data1";
    auto t0 = clock::now();
#if VER==1
    for (int i = 0; i < 100000; ++i)
    {
        l_czTempStr = s1 + "Test data2" + "Test data3";
    }
#elif VER==2
    for (int i = 0; i < 100000; ++i)
    {
        l_czTempStr = "Test data1";
        l_czTempStr += "Test data2";
        l_czTempStr += "Test data3";
    }
#elif VER==3
    for (int i = 0; i < 100000; ++i)
    {
        l_czTempStr = "Test data1";
        l_czTempStr.append("Test data2");
        l_czTempStr.append("Test data3");
    }
#elif VER==4
    for (int i = 0; i < 100000; ++i)
    {
        std::ostringstream oss;
        oss << "Test data1";
        oss << "Test data2";
        oss << "Test data3";
        l_czTempStr = oss.str();
    }
#endif
    auto t1 = clock::now();
    std::cout << l_czTempStr << '\n';
    std::cout << mil(t1 - t0).count() << "ms\n";
}

On coliru:

Compile with the following:

clang++ -std=c++11 -O3 -DVER=1 -Wall -pedantic -pthread main.cpp

21.6463ms

-DVER=2

6.61773ms

-DVER=3

6.7855ms

-DVER=4

102.015ms

It looks like 2) (+=) is the winner.

(Also, compiling with and without -pthread seems to affect the timings.)

The WORST possible scenario is using plain old strcat (or sprintf), since strcat takes a C string, and that has to be "counted" to find the end. For long strings, that's a real performance sufferer. C++ style strings are much better, and the performance problems are likely to be with the memory allocation, rather than counting lengths. But then again, the string grows geometrically (doubles each time it needs to grow), so it's not that terrible.
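To make the strcat point concrete, here is a minimal sketch (my own illustration, not part of the benchmark below): every strcat call has to walk the destination from the start to find the terminating NUL, so building a result piece by piece is quadratic in the total length, whereas tracking the end yourself - which is essentially what std::string does by storing its length - stays linear.

#include <cstring>
#include <string>

// Quadratic: every strcat() rescans all of dst to find its end.
// (Assumes dst is large enough to hold the result.)
void build_with_strcat(char* dst, const char* const* parts, int n)
{
    dst[0] = '\0';
    for (int i = 0; i < n; ++i)
        strcat(dst, parts[i]);              // O(current length) per call
}

// Linear: remember where the end is and copy straight there.
// (Again assumes dst is large enough.)
void build_with_end_pointer(char* dst, const char* const* parts, int n)
{
    char* end = dst;
    for (int i = 0; i < n; ++i)
    {
        size_t len = strlen(parts[i]);
        memcpy(end, parts[i], len);
        end += len;
    }
    *end = '\0';
}

// std::string keeps its own length, so += behaves like the second version,
// plus geometric reallocation when it runs out of capacity.
std::string build_with_string(const char* const* parts, int n)
{
    std::string out;
    for (int i = 0; i < n; ++i)
        out += parts[i];
    return out;
}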

I'd very much suspect that all of the above methods end up with the same, or at least very similar, performance. If anything, I'd expect that stringstream is slower, because of the overhead in supporting formatting - but I also suspect it's marginal.

As this sort of thing is "fun", I will get back with a benchmark...

Edit:

Note that these results apply to MY machine, running x86-64 Linux, compiled with g++ 4.6.3. Other OSes, compilers and C++ runtime library implementations may vary. If performance is important to your application, then benchmark on the system(s) that are critical for you, using the compiler(s) that you use.

Here's the code I wrote to test this. It may not be the perfect representation of a real scenario, but I think it's a representative scenario:

#include <iostream>
#include <iomanip>
#include <string>
#include <sstream>
#include <cstring>


using namespace std;


static __inline__ unsigned long long rdtsc(void)
{
unsigned hi, lo;
__asm__ __volatile__ ("rdtsc" : "=a"(lo), "=d"(hi));
return ( (unsigned long long)lo)|( ((unsigned long long)hi)<<32 );
}


string build_string_1(const string &a, const string &b, const string &c)
{
string out = a + b + c;
return out;
}


string build_string_1a(const string &a, const string &b, const string &c)
{
string out;
out.resize(a.length()*3);
out = a + b + c;
return out;
}


string build_string_2(const string &a, const string &b, const string &c)
{
string out = a;
out += b;
out += c;
return out;
}


string build_string_3(const string &a, const string &b, const string &c)
{
string out;
out = a;
out.append(b);
out.append(c);
return out;
}




string build_string_4(const string &a, const string &b, const string &c)
{
stringstream ss;


ss << a << b << c;
return ss.str();
}




char *build_string_5(const char *a, const char *b, const char *c)
{
char* out = new char[strlen(a) * 3+1];
strcpy(out, a);
strcat(out, b);
strcat(out, c);
return out;
}






template<typename T>
size_t len(T s)
{
return s.length();
}


template<>
size_t len(char *s)
{
return strlen(s);
}


template<>
size_t len(const char *s)
{
return strlen(s);
}






void result(const char *name, unsigned long long t, const string& out)
{
cout << left << setw(22) << name << " time:" << right << setw(10) <<  t;
cout << "   (per character: "
<< fixed << right << setw(8) << setprecision(2) << (double)t / len(out) << ")" << endl;
}


template<typename T>
void benchmark(const char name[], T (Func)(const T& a, const T& b, const T& c), const char *strings[])
{
unsigned long long t;


const T s1 = strings[0];
const T s2 = strings[1];
const T s3 = strings[2];
t = rdtsc();
T out = Func(s1, s2, s3);
t = rdtsc() - t;


if (len(out) != len(s1) + len(s2) + len(s3))
{
cout << "Error: out is different length from inputs" << endl;
cout << "Got `" << out << "` from `" << s1 << "` + `" << s2 << "` + `" << s3 << "`";
}
result(name, t, out);
}




void benchmark(const char name[], char* (Func)(const char* a, const char* b, const char* c),
const char *strings[])
{
unsigned long long t;


const char* s1 = strings[0];
const char* s2 = strings[1];
const char* s3 = strings[2];
t = rdtsc();
char *out = Func(s1, s2, s3);
t = rdtsc() - t;


if (len(out) != len(s1) + len(s2) + len(s3))
{
cout << "Error: out is different length from inputs" << endl;
cout << "Got `" << out << "` from `" << s1 << "` + `" << s2 << "` + `" << s3 << "`";
}
result(name, t, out);
delete [] out;
}




#define BM(func, size) benchmark(#func " " #size, func, strings ## _ ## size)




#define BM_LOT(size) BM(build_string_1, size); \
BM(build_string_1a, size); \
BM(build_string_2, size); \
BM(build_string_3, size); \
BM(build_string_4, size); \
BM(build_string_5, size);


int main()
{
const char *strings_small[]  = { "Abc", "Def", "Ghi" };
const char *strings_medium[] = { "abcdefghijklmnopqrstuvwxyz",
"defghijklmnopqrstuvwxyzabc",
"ghijklmnopqrstuvwxyzabcdef" };
const char *strings_large[]   =
{ "abcdefghijklmnopqrstuvwxyzabcdefghijklmnopqrstuvwxyzabcdefghijklmnopqrstuvwxyz"
"abcdefghijklmnopqrstuvwxyzabcdefghijklmnopqrstuvwxyzabcdefghijklmnopqrstuvwxyz"
"abcdefghijklmnopqrstuvwxyzabcdefghijklmnopqrstuvwxyzabcdefghijklmnopqrstuvwxyz"
"abcdefghijklmnopqrstuvwxyzabcdefghijklmnopqrstuvwxyzabcdefghijklmnopqrstuvwxyz"
"abcdefghijklmnopqrstuvwxyzabcdefghijklmnopqrstuvwxyzabcdefghijklmnopqrstuvwxyz"
"abcdefghijklmnopqrstuvwxyzabcdefghijklmnopqrstuvwxyzabcdefghijklmnopqrstuvwxyz"
"abcdefghijklmnopqrstuvwxyzabcdefghijklmnopqrstuvwxyzabcdefghijklmnopqrstuvwxyz"
"abcdefghijklmnopqrstuvwxyzabcdefghijklmnopqrstuvwxyzabcdefghijklmnopqrstuvwxyz"
"abcdefghijklmnopqrstuvwxyzabcdefghijklmnopqrstuvwxyzabcdefghijklmnopqrstuvwxyz"
"abcdefghijklmnopqrstuvwxyzabcdefghijklmnopqrstuvwxyzabcdefghijklmnopqrstuvwxyz",


"defghijklmnopqrstuvwxyzabcdefghijklmnopqrstuvwxyzabcdefghijklmnopqrstuvwxyzabc"
"defghijklmnopqrstuvwxyzabcdefghijklmnopqrstuvwxyzabcdefghijklmnopqrstuvwxyzabc"
"defghijklmnopqrstuvwxyzabcdefghijklmnopqrstuvwxyzabcdefghijklmnopqrstuvwxyzabc"
"defghijklmnopqrstuvwxyzabcdefghijklmnopqrstuvwxyzabcdefghijklmnopqrstuvwxyzabc"
"defghijklmnopqrstuvwxyzabcdefghijklmnopqrstuvwxyzabcdefghijklmnopqrstuvwxyzabc"


"defghijklmnopqrstuvwxyzabcdefghijklmnopqrstuvwxyzabcdefghijklmnopqrstuvwxyzabc"
"defghijklmnopqrstuvwxyzabcdefghijklmnopqrstuvwxyzabcdefghijklmnopqrstuvwxyzabc"
"defghijklmnopqrstuvwxyzabcdefghijklmnopqrstuvwxyzabcdefghijklmnopqrstuvwxyzabc"
"defghijklmnopqrstuvwxyzabcdefghijklmnopqrstuvwxyzabcdefghijklmnopqrstuvwxyzabc"
"defghijklmnopqrstuvwxyzabcdefghijklmnopqrstuvwxyzabcdefghijklmnopqrstuvwxyzabc",


"ghijklmnopqrstuvwxyzabcdefghijklmnopqrstuvwxyzabcdefghijklmnopqrstuvwxyzabcdef"
"ghijklmnopqrstuvwxyzabcdefghijklmnopqrstuvwxyzabcdefghijklmnopqrstuvwxyzabcdef"
"ghijklmnopqrstuvwxyzabcdefghijklmnopqrstuvwxyzabcdefghijklmnopqrstuvwxyzabcdef"
"ghijklmnopqrstuvwxyzabcdefghijklmnopqrstuvwxyzabcdefghijklmnopqrstuvwxyzabcdef"
"ghijklmnopqrstuvwxyzabcdefghijklmnopqrstuvwxyzabcdefghijklmnopqrstuvwxyzabcdef"
"ghijklmnopqrstuvwxyzabcdefghijklmnopqrstuvwxyzabcdefghijklmnopqrstuvwxyzabcdef"
"ghijklmnopqrstuvwxyzabcdefghijklmnopqrstuvwxyzabcdefghijklmnopqrstuvwxyzabcdef"
"ghijklmnopqrstuvwxyzabcdefghijklmnopqrstuvwxyzabcdefghijklmnopqrstuvwxyzabcdef"
"ghijklmnopqrstuvwxyzabcdefghijklmnopqrstuvwxyzabcdefghijklmnopqrstuvwxyzabcdef"
"ghijklmnopqrstuvwxyzabcdefghijklmnopqrstuvwxyzabcdefghijklmnopqrstuvwxyzabcdef"
};


for(int i = 0; i < 5; i++)
{
BM_LOT(small);
BM_LOT(medium);
BM_LOT(large);
cout << "---------------------------------------------" << endl;
}
}

Here are some representative results:

build_string_1 small   time:      4075   (per character:   452.78)
build_string_1a small  time:      5384   (per character:   598.22)
build_string_2 small   time:      2669   (per character:   296.56)
build_string_3 small   time:      2427   (per character:   269.67)
build_string_4 small   time:     19380   (per character:  2153.33)
build_string_5 small   time:      6299   (per character:   699.89)
build_string_1 medium  time:      3983   (per character:    51.06)
build_string_1a medium time:      6970   (per character:    89.36)
build_string_2 medium  time:      4072   (per character:    52.21)
build_string_3 medium  time:      4000   (per character:    51.28)
build_string_4 medium  time:     19614   (per character:   251.46)
build_string_5 medium  time:      6304   (per character:    80.82)
build_string_1 large   time:      8491   (per character:     3.63)
build_string_1a large  time:      9563   (per character:     4.09)
build_string_2 large   time:      6154   (per character:     2.63)
build_string_3 large   time:      5992   (per character:     2.56)
build_string_4 large   time:     32450   (per character:    13.87)
build_string_5 large   time:     15768   (per character:     6.74)

Same code, run as 32-bit:

build_string_1 small   time:      4289   (per character:   476.56)
build_string_1a small  time:      5967   (per character:   663.00)
build_string_2 small   time:      3329   (per character:   369.89)
build_string_3 small   time:      3047   (per character:   338.56)
build_string_4 small   time:     22018   (per character:  2446.44)
build_string_5 small   time:      3026   (per character:   336.22)
build_string_1 medium  time:      4089   (per character:    52.42)
build_string_1a medium time:      8075   (per character:   103.53)
build_string_2 medium  time:      4569   (per character:    58.58)
build_string_3 medium  time:      4326   (per character:    55.46)
build_string_4 medium  time:     22751   (per character:   291.68)
build_string_5 medium  time:      2252   (per character:    28.87)
build_string_1 large   time:      8695   (per character:     3.72)
build_string_1a large  time:     12818   (per character:     5.48)
build_string_2 large   time:      8202   (per character:     3.51)
build_string_3 large   time:      8351   (per character:     3.57)
build_string_4 large   time:     38250   (per character:    16.35)
build_string_5 large   time:      8143   (per character:     3.48)

From this, we can conclude:

  1. The best option is appending a bit at a time (out.append() or out +=), with the "chained" + approach reasonably close.

  2. Pre-allocating the string is not helpful.

  3. Using stringstream is a pretty poor idea (between 2x and 4x slower).

  4. The char * version uses new char[]. Using a local buffer in the calling function would make it the fastest, but comparing that would be slightly unfair.

  5. There is a fair bit of overhead in combining short strings - just copying the data should cost at most one cycle per byte [unless the data doesn't fit in the cache].

Edit 2:

Added, as per comments:

string build_string_1b(const string &a, const string &b, const string &c)
{
    return a + b + c;
}

and

string build_string_2a(const string &a, const string &b, const string &c)
{
    string out;
    out.reserve(a.length() * 3);
    out += a;
    out += b;
    out += c;
    return out;
}

Which gives these results:

build_string_1 small   time:      3845   (per character:   427.22)
build_string_1b small  time:      3165   (per character:   351.67)
build_string_2 small   time:      3176   (per character:   352.89)
build_string_2a small  time:      1904   (per character:   211.56)


build_string_1 large   time:      9056   (per character:     3.87)
build_string_1b large  time:      6414   (per character:     2.74)
build_string_2 large   time:      6417   (per character:     2.74)
build_string_2a large  time:      4179   (per character:     1.79)

(A 32-bit run, but the 64-bit shows very similar results on these).

As with most micro-optimisations, you will need to measure the effect of each option, having first established through measurement that this is indeed a bottleneck worth optimising. There is no definitive answer.

append and += should do exactly the same thing.

+ is conceptually less efficient, since you're creating and destroying temporaries. Your compiler may or may not be able to optimise this to be as fast as appending.

Calling reserve with the total size may reduce the number of memory allocations needed - they will probably be the biggest bottleneck.
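To make that advice concrete, here is a minimal sketch of the reserve-then-append pattern (my own illustration, not code from this answer; whether it actually wins still has to be measured):

#include <string>

// Concatenate three strings with (at most) a single allocation up front.
// The exact gain depends on the library's growth strategy and on the string
// sizes, so measure before committing to this pattern.
std::string concat3(const std::string& a, const std::string& b, const std::string& c)
{
    std::string out;
    out.reserve(a.size() + b.size() + c.size()); // one allocation instead of up to three
    out += a;   // equivalent to out.append(a)
    out += b;
    out += c;
    return out;
}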

<< (presumably on a stringstream) may or may not be faster; you'll need to measure that. It's useful if you need to format non-string types, but probably won't be particularly better or worse at dealing with strings.

CString has the disadvantage that it's not portable, and that a Unix hacker like me can't tell you what its advantages may or may not be.

In addition to other answers...

I ran extensive benchmarks on this problem some time ago, and came to the conclusion that the most efficient solution in all use cases (GCC 4.7 & 4.8 on Linux x86 / x64 / ARM) is first to reserve() the result string with enough space to hold all the concatenated strings, and then append() them (or use operator+=(); it makes no difference).

Unfortunately it seems I deleted that benchmark so you only have my word (but you can easily adapt Mats Petersson's benchmark to verify this by yourself, if my word isn't enough).

In a nutshell:

const string space = " ";
string result;
result.reserve(5 + space.size() + 5);
result += "hello";
result += space;
result += "world";

Depending on the exact use case (number, types and sizes of the concatenated strings), sometimes this method is by far the most efficient, and other times it is on par with other methods, but it is never worse.


The problem is that it is really painful to compute the total required size in advance, especially when mixing string literals and std::string (the example above is clear enough on that matter, I believe). The maintainability of such code becomes absolutely horrible as soon as you modify one of the literals or add another string to be concatenated.

One approach would be to use sizeof to compute the size of the literals, but IMHO it creates as much mess as it solves, and the maintainability is still terrible:

#define STR_HELLO "hello"
#define STR_WORLD "world"


const string space = " ";
string result;
result.reserve(sizeof(STR_HELLO)-1 + space.size() + sizeof(STR_WORLD)-1);
result += STR_HELLO;
result += space;
result += STR_WORLD;

A usable solution (C++11, variadic templates)

I finally settled on a set of variadic templates that efficiently take care of computing the string sizes (e.g. the size of a string literal is determined at compile time), call reserve() as needed, and then concatenate everything.

Here it is, hope this is useful:

#include <cstring>      // strlen
#include <string>
#include <type_traits>  // remove_reference, remove_cv, conditional, is_array
#include <utility>      // forward

namespace detail {

template<typename>
struct string_size_impl;

template<size_t N>
struct string_size_impl<const char[N]> {
    static constexpr size_t size(const char (&) [N]) { return N - 1; }
};

template<size_t N>
struct string_size_impl<char[N]> {
    static size_t size(char (&s) [N]) { return N ? strlen(s) : 0; }
};

template<>
struct string_size_impl<const char*> {
    static size_t size(const char* s) { return s ? strlen(s) : 0; }
};

template<>
struct string_size_impl<char*> {
    static size_t size(char* s) { return s ? strlen(s) : 0; }
};

template<>
struct string_size_impl<std::string> {
    static size_t size(const std::string& s) { return s.size(); }
};

template<typename String> size_t string_size(String&& s) {
    using noref_t  = typename std::remove_reference<String>::type;
    using string_t = typename std::conditional<std::is_array<noref_t>::value,
                                               noref_t,
                                               typename std::remove_cv<noref_t>::type
                                              >::type;
    return string_size_impl<string_t>::size(s);
}

template<typename...>
struct concatenate_impl;

template<typename String>
struct concatenate_impl<String> {
    static size_t size(String&& s) { return string_size(s); }
    static void concatenate(std::string& result, String&& s) { result += s; }
};

template<typename String, typename... Rest>
struct concatenate_impl<String, Rest...> {
    static size_t size(String&& s, Rest&&... rest) {
        return string_size(s)
             + concatenate_impl<Rest...>::size(std::forward<Rest>(rest)...);
    }
    static void concatenate(std::string& result, String&& s, Rest&&... rest) {
        result += s;
        concatenate_impl<Rest...>::concatenate(result, std::forward<Rest>(rest)...);
    }
};

} // namespace detail

template<typename... Strings>
std::string concatenate(Strings&&... strings) {
    std::string result;
    result.reserve(detail::concatenate_impl<Strings...>::size(std::forward<Strings>(strings)...));
    detail::concatenate_impl<Strings...>::concatenate(result, std::forward<Strings>(strings)...);
    return result;
}

The only interesting part, as far as the public interface is concerned, is the very last template<typename... Strings> std::string concatenate(Strings&&... strings) template. Usage is straightforward:

int main() {
    const string space = " ";
    std::string result = concatenate("hello", space, "world");
    std::cout << result << std::endl;
}

With optimizations turned on, any decent compiler should be able to expand the concatenate call into the same code as my first example, where I wrote everything manually. As far as GCC 4.7 & 4.8 are concerned, the generated code is pretty much identical, and so is the performance.

There are some significant parameters which have a potential impact on deciding the "most optimized way". Some of these are string/content size, the number of operations, and compiler optimizations.

In most cases string::operator+= seems to work best. However, at times and on some compilers, ostringstream::operator<< has been observed to work best [e.g. MinGW g++ 3.2.3 on a 1.8 GHz single-processor Dell PC]. As far as the compiler goes, it is mostly its optimizations that make the difference. It is also worth mentioning that stringstreams are complex objects compared to plain strings, and therefore add overhead.

For more info - discussion, article.

I decided to run a test with the code provided by user Jesse Good, slightly modified to take into account the observation by Rapptz, specifically the fact that the ostringstream was constructed in every single iteration of the loop. I therefore added some cases, a couple of them reusing a single ostringstream that is cleared with the sequence oss.str(""); oss.clear().

Here is the code:

#include <iostream>
#include <string>
#include <chrono>
#include <sstream>
#include <functional>




template <typename F> void time_measurement(F f, const std::string& comment)
{
typedef std::chrono::high_resolution_clock clock;
typedef std::chrono::duration<float, std::milli> mil;
std::string r;
auto t0 = clock::now();
f(r);
auto t1 = clock::now();
std::cout << "\n-------------------------" << comment << "-------------------\n" <<r << '\n';
std::cout << mil(t1-t0).count() << "ms\n";
std::cout << "---------------------------------------------------------------------------\n";


}


inline void clear(std::ostringstream& x)
{
x.str("");
x.clear();
}


void test()
{
std:: cout << std::endl << "----------------String Comparison---------------- " << std::endl;
const int n=100000;
{
auto f=[](std::string& l_czTempStr)
{
std::string s1="Test data1";
for (int i = 0; i < n; ++i)
{
l_czTempStr = s1 + "Test data2" + "Test data3";
}
};
time_measurement(f, "string, plain addition");
}


{
auto f=[](std::string& l_czTempStr)
{
for (int i = 0; i < n; ++i)
{
l_czTempStr =  "Test data1";
l_czTempStr += "Test data2";
l_czTempStr += "Test data3";
}
};
time_measurement(f, "string, incremental");
}


{
auto f=[](std::string& l_czTempStr)
{
for (int i = 0; i < n; ++i)
{
l_czTempStr =  "Test data1";
l_czTempStr.append("Test data2");
l_czTempStr.append("Test data3");
}
};
time_measurement(f, "string, append");
}


{
auto f=[](std::string& l_czTempStr)
{
for (int i = 0; i < n; ++i)
{
std::ostringstream oss;
oss << "Test data1";
oss << "Test data2";
oss << "Test data3";
l_czTempStr = oss.str();
}
};
time_measurement(f, "oss, creation in each loop, incremental");
}


{
auto f=[](std::string& l_czTempStr)
{
std::ostringstream oss;
for (int i = 0; i < n; ++i)
{
oss.str("");
oss.clear();
oss << "Test data1";
oss << "Test data2";
oss << "Test data3";
}
l_czTempStr = oss.str();
};
time_measurement(f, "oss, 1 creation, incremental");
}


{
auto f=[](std::string& l_czTempStr)
{
std::ostringstream oss;
for (int i = 0; i < n; ++i)
{
oss.str("");
oss.clear();
oss << "Test data1" << "Test data2" << "Test data3";
}
l_czTempStr = oss.str();
};
time_measurement(f, "oss, 1 creation, plain addition");
}


{
auto f=[](std::string& l_czTempStr)
{
std::ostringstream oss;
for (int i = 0; i < n; ++i)
{
clear(oss);
oss << "Test data1" << "Test data2" << "Test data3";
}
l_czTempStr = oss.str();
};
time_measurement(f, "oss, 1 creation, clearing calling inline function, plain addition");
}




{
auto f=[](std::string& l_czTempStr)
{
for (int i = 0; i < n; ++i)
{
std::string x;
x =  "Test data1";
x.append("Test data2");
x.append("Test data3");
l_czTempStr=x;
}
};
time_measurement(f, "string, creation in each loop");
}


}

Here are the results:

/*


g++ "qtcreator debug mode"
----------------String Comparison----------------


-------------------------string, plain addition-------------------
Test data1Test data2Test data3
11.8496ms
---------------------------------------------------------------------------


-------------------------string, incremental-------------------
Test data1Test data2Test data3
3.55597ms
---------------------------------------------------------------------------


-------------------------string, append-------------------
Test data1Test data2Test data3
3.53099ms
---------------------------------------------------------------------------


-------------------------oss, creation in each loop, incremental-------------------
Test data1Test data2Test data3
58.1577ms
---------------------------------------------------------------------------


-------------------------oss, 1 creation, incremental-------------------
Test data1Test data2Test data3
11.1069ms
---------------------------------------------------------------------------


-------------------------oss, 1 creation, plain addition-------------------
Test data1Test data2Test data3
10.9946ms
---------------------------------------------------------------------------


-------------------------oss, 1 creation, clearing calling inline function, plain addition-------------------
Test data1Test data2Test data3
10.9502ms
---------------------------------------------------------------------------


-------------------------string, creation in each loop-------------------
Test data1Test data2Test data3
9.97495ms
---------------------------------------------------------------------------




g++ "qtcreator release mode" (optimized)
----------------String Comparison----------------


-------------------------string, plain addition-------------------
Test data1Test data2Test data3
8.41622ms
---------------------------------------------------------------------------


-------------------------string, incremental-------------------
Test data1Test data2Test data3
2.55462ms
---------------------------------------------------------------------------


-------------------------string, append-------------------
Test data1Test data2Test data3
2.5154ms
---------------------------------------------------------------------------


-------------------------oss, creation in each loop, incremental-------------------
Test data1Test data2Test data3
54.3232ms
---------------------------------------------------------------------------


-------------------------oss, 1 creation, incremental-------------------
Test data1Test data2Test data3
8.71854ms
---------------------------------------------------------------------------


-------------------------oss, 1 creation, plain addition-------------------
Test data1Test data2Test data3
8.80526ms
---------------------------------------------------------------------------


-------------------------oss, 1 creation, clearing calling inline function, plain addition-------------------
Test data1Test data2Test data3
8.78186ms
---------------------------------------------------------------------------


-------------------------string, creation in each loop-------------------
Test data1Test data2Test data3
8.4034ms
---------------------------------------------------------------------------
*/

Now, using std::string is still faster, and append is still the fastest way of concatenating, but ostringstream is no longer as incredibly terrible as it was before.

As this question's accepted answer is quite old, I've decided to update its benchmarks with a modern compiler and to compare both the solutions by @jesse-good and the template version from @syam.

Here is the combined code:

#include <iostream>
#include <string>
#include <chrono>
#include <sstream>
#include <vector>
#include <cstring>




#if VER==TEMPLATE
namespace detail {


template<typename>
struct string_size_impl;


template<size_t N>
struct string_size_impl<const char[N]> {
static constexpr size_t size(const char (&) [N]) { return N - 1; }
};


template<size_t N>
struct string_size_impl<char[N]> {
static size_t size(char (&s) [N]) { return N ? strlen(s) : 0; }
};


template<>
struct string_size_impl<const char*> {
static size_t size(const char* s) { return s ? strlen(s) : 0; }
};


template<>
struct string_size_impl<char*> {
static size_t size(char* s) { return s ? strlen(s) : 0; }
};


template<>
struct string_size_impl<std::string> {
static size_t size(const std::string& s) { return s.size(); }
};


template<typename String> size_t string_size(String&& s) {
using noref_t = typename std::remove_reference<String>::type;
using string_t = typename std::conditional<std::is_array<noref_t>::value,
noref_t,
typename std::remove_cv<noref_t>::type
>::type;
return string_size_impl<string_t>::size(s);
}


template<typename...>
struct concatenate_impl;


template<typename String>
struct concatenate_impl<String> {
static size_t size(String&& s) { return string_size(s); }
static void concatenate(std::string& result, String&& s) { result += s; }
};


template<typename String, typename... Rest>
struct concatenate_impl<String, Rest...> {
static size_t size(String&& s, Rest&&... rest) {
return string_size(s)
+ concatenate_impl<Rest...>::size(std::forward<Rest>(rest)...);
}
static void concatenate(std::string& result, String&& s, Rest&&... rest) {
result += s;
concatenate_impl<Rest...>::concatenate(result, std::forward<Rest>(rest)...);
}
};


} // namespace detail


template<typename... Strings>
std::string concatenate(Strings&&... strings) {
std::string result;
result.reserve(detail::concatenate_impl<Strings...>::size(std::forward<Strings>(strings)...));
detail::concatenate_impl<Strings...>::concatenate(result, std::forward<Strings>(strings)...);
return result;
}


#endif


int main ()
{
typedef std::chrono::high_resolution_clock clock;
typedef std::chrono::duration<float, std::milli> ms;
std::string l_czTempStr;




std::string s1="Test data1";




auto t0 = clock::now();
#if VER==PLUS
for (int i = 0; i < 100000; ++i)
{
l_czTempStr = s1 + "Test data2" + "Test data3";
}
#elif VER==PLUS_EQ
for (int i = 0; i < 100000; ++i)
{
l_czTempStr =  "Test data1";
l_czTempStr += "Test data2";
l_czTempStr += "Test data3";
}
#elif VER==APPEND
for (int i = 0; i < 100000; ++i)
{
l_czTempStr =  "Test data1";
l_czTempStr.append("Test data2");
l_czTempStr.append("Test data3");
}
#elif VER==STRSTREAM
for (int i = 0; i < 100000; ++i)
{
std::ostringstream oss;
oss << "Test data1";
oss << "Test data2";
oss << "Test data3";
l_czTempStr = oss.str();
}
#elif VER==TEMPLATE
for (int i = 0; i < 100000; ++i)
{
l_czTempStr = concatenate(s1, "Test data2", "Test data3");
}
#endif


#define STR_(x) #x
#define STR(x) STR_(x)


auto t1 = clock::now();
//std::cout << l_czTempStr << '\n';
std::cout << STR(VER) ": " << ms(t1-t0).count() << "ms\n";
}

The test instruction:

for ARGTYPE in PLUS PLUS_EQ APPEND STRSTREAM TEMPLATE; do for i in `seq 4` ; do clang++ -std=c++11 -O3 -DVER=$ARGTYPE -Wall -pthread -pedantic main.cpp && ./a.out ; rm ./a.out ; done; done

And the results (processed through a spreadsheet to show the average times):

PLUS       23.5792
PLUS       23.3812
PLUS       35.1806
PLUS       15.9394   24.5201
PLUS_EQ    15.737
PLUS_EQ    15.3353
PLUS_EQ    10.7764
PLUS_EQ    25.245    16.773425
APPEND     22.954
APPEND     16.9031
APPEND     10.336
APPEND     19.1348   17.331975
STRSTREAM  10.2063
STRSTREAM  10.7765
STRSTREAM  13.262
STRSTREAM  22.3557   14.150125
TEMPLATE   16.6531
TEMPLATE   16.629
TEMPLATE   22.1885
TEMPLATE   16.9288   18.09985

The surprise is stringstream, which seems to have benefited a lot from C++11 and later improvements. Probably the removal of a necessary allocation, thanks to the introduction of move semantics, has some influence.
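One plausible contributor (my reading, not something this benchmark isolates): since C++11 the std::string returned by oss.str() is a temporary that can be move-assigned into l_czTempStr, and C++20 additionally provides an rvalue str() overload that can move the buffer out of the stream itself. A small sketch, assuming a standard library that implements the C++20 overload:

#include <sstream>
#include <string>
#include <utility>

int main()
{
    std::ostringstream oss;
    oss << "Test data1" << "Test data2" << "Test data3";

    std::string result;
    result = oss.str();             // str() returns by value; since C++11 the
                                    // assignment can move from that temporary,
                                    // so only the copy out of the stream's
                                    // internal buffer remains.

    result = std::move(oss).str();  // C++20: the rvalue overload of str() may
                                    // move the internal buffer out of the
                                    // stream, avoiding even that copy (and
                                    // leaving the stream's buffer empty).
}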

You can test it on your own on coliru.

Edit: I've updated test on coliru to use g++-4.8: http://coliru.stacked-crooked.com/a/593dcfe54e70e409. Results in graph here: g++-4.8 test results

(Explanation: "stat. average" means the average over all values except the two extreme ones - the minimum and the maximum.)

Using C++17, this simple solution should have very good performance, in most cases comparable to @syam's template-heavy solution. Under some conditions it will be even faster, since it avoids unnecessary strlen calls.

#include <string>
#include <string_view>


template <typename... T>
std::string concat(T... args) {
    std::string result;
    std::string_view views[] { args... };
    std::string::size_type full_size = 0;
    for (auto sub_view : views)
        full_size += sub_view.size();
    result.reserve(full_size);
    for (auto sub_view : views)
        result.append(sub_view);
    return result;
}

There is a little bit of redundancy here - we don't really need to store string_views, just the length of the arguments. However, the overhead is negligible, and it makes the code clean and clear.

std::string_views store the length of their arguments. Because of that, appending them to a std::string can be faster than appending char*s. Also, std::string_view uses std::char_traits for length calculation, which in some implementations can be evaluated at compile time for arguments known at compile time. This optimization usually can't be performed for C calls like strlen.
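A quick usage sketch of the concat template above (the variable names and values are my own, chosen only to show that literals, std::string and std::string_view can be mixed freely, since they all convert to the std::string_view array inside concat):

#include <iostream>
#include <string>
#include <string_view>

int main()
{
    const std::string name = "world";
    constexpr std::string_view greeting = "hello";

    // Assumes the concat() template defined above is in scope.
    std::string result = concat(greeting, ", ", name, "!");
    std::cout << result << '\n';    // prints: hello, world!
}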

#include <concepts>
#include <cstddef>
#include <string>
#include <type_traits>
#include <utility>


template<class T>
concept string_like_t = requires(const T& str)
{
    { std::size(str) } -> std::same_as<size_t>;
    { *std::data(str) } -> std::convertible_to<std::remove_cvref_t<decltype(str[0])>>;
};

template<string_like_t T>
using char_t = std::remove_cvref_t<decltype(std::declval<T>()[0])>;


template<class Alloc, string_like_t First, string_like_t... Rest>
    requires (!string_like_t<Alloc>)
auto concat(const Alloc& alloc, const First& first, const Rest&... rest)
{
    std::basic_string<char_t<First>, std::char_traits<char_t<First>>, Alloc> result{ alloc };
    // Binary fold so that the reserve also compiles when called with a single string.
    result.reserve((std::size(first) + ... + std::size(rest)));
    result.append(std::data(first), std::size(first));
    (result.append(std::data(rest), std::size(rest)), ...);
    return result;
}


template<string_like_t First, string_like_t... Rest>
auto concat(const First& first, const Rest&... rest)
{
    typename std::basic_string<char_t<First>>::allocator_type alloc{};
    return concat(alloc, first, rest...);
}


#include <string_view>
#include <iostream>
#include <memory_resource>

int main()
{
    std::pmr::monotonic_buffer_resource mr { 1000 };
    std::pmr::polymorphic_allocator<char> alloc {&mr};
    std::string xxx = "xxxxxx";
    std::string_view yyy = "TEST";
    std::pmr::string zzz {", zzz", &mr};
    std::cout << concat(yyy, "123: ", "test", xxx, zzz) << std::endl;
    std::cout << concat(alloc, yyy, "123: ", "test", xxx, zzz) << std::endl;

    return 0;
}

This seems to be the most optimized C++20 version. It also supports polymorphic allocators.