Never say "never", but I'd agree that their role is greatly diminished by true data structures from STL.
I'd also say that encapsulation inside objects should minimize the impact of choices like this. If the array is a private data member, you can swap it in or out without affecting clients of your class.
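For instance, a minimal sketch (names hypothetical) of a class that hides a fixed-size C array behind its interface; callers could not tell if the member were later replaced by a std::vector:

#include <cstddef>

class Samples
{
public:
    Samples() : count_(0) {}

    void add(double value)
    {
        if (count_ < capacity)
            data_[count_++] = value;
    }

    std::size_t size() const { return count_; }
    double operator[](std::size_t i) const { return data_[i]; }

private:
    static std::size_t const capacity = 16;
    double data_[capacity];   // private C array: an implementation detail
    std::size_t count_;
};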
In C++11, where std::array is available, the answer is "yes, arrays should be avoided". Prior to C++11, you may need to use C arrays to allocate arrays in automatic storage (i.e. on the stack).
I have worked on safety-critical systems where you are unable to use dynamic memory allocation. The memory always has to be on the stack. Therefore, in this case, you would use arrays, as the size is fixed at compile time.
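A minimal sketch of what that constraint looks like in practice (function and sizes are hypothetical); under C++11 the same thing could be written with std::array<double, 64>:

#include <cstddef>

void processFrame()
{
    // Fixed size known at compile time; storage is on the stack,
    // so no new/malloc ever runs -- acceptable where heap use is banned.
    double samples[64];

    for (std::size_t i = 0; i < 64; ++i)
        samples[i] = 0.0;

    // ... fill and process samples ...
}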
std::array in C++ gives you a fixed-size, fast alternative to the dynamically sized std::vector and std::list. std::array is one of the additions in C++11. It provides the benefits of the standard containers while still offering the aggregate-type semantics of C-style arrays.
So in C++11 I'd certainly use std::array, where it is required, over vector. But I'd avoid C-style arrays in C++03.
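For illustration, a short sketch of what that combination buys you: aggregate initialization like a C array, plus the container interface that works with the standard algorithms:

#include <algorithm>
#include <array>
#include <iostream>

int main()
{
    std::array<int, 5> a = { 3, 1, 4, 1, 5 };   // aggregate init, like a C array

    std::sort(a.begin(), a.end());              // container interface: iterators
    std::cout << "size: " << a.size() << '\n';  // size travels with the type

    for (int x : a)
        std::cout << x << ' ';
    std::cout << '\n';
}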
The only advantage of an array (wrapped, of course, in something that will automatically manage its deallocation when needed) over std::vector that I can think of is that a vector cannot pass ownership of its data, unless your compiler supports C++11 and move constructors.
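To illustrate the C++11 case (a sketch; before C++11 you would reach for swap() or a manually managed array instead):

#include <utility>
#include <vector>

std::vector<int> makeData()
{
    std::vector<int> v(1000000, 42);
    return v;                           // moved out (or elided); no element copy
}

int main()
{
    std::vector<int> a = makeData();
    std::vector<int> b = std::move(a);  // b takes over a's buffer;
                                        // a is left valid but unspecified
}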
Definitely, although with std::array in C++11, practically only for static data. C-style arrays have three important advantages over std::vector:
They don't require dynamic allocation. For this reason, C-style arrays are to be preferred where you're likely to have a lot of very small arrays. Say something like an n-dimensional point:
template <typename T, int dims>
class Point
{
T myData[dims];
// ...
};
Typically, one might imagine that dims will be very small (2 or 3), T a built-in type (double), and that you might end up with a std::vector<Point<double, 3> > with millions of elements. You definitely don't want millions of dynamic allocations of 3 doubles.
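A sketch of that usage (repeating the Point above so the snippet stands alone): the vector makes one contiguous allocation, and each point's coordinates live inline within it:

#include <vector>

template <typename T, int dims>
class Point
{
    T myData[dims];   // coordinates stored inline; no per-point heap allocation
    // ...
};

int main()
{
    // One allocation for the whole container; the three doubles of each
    // Point sit inside the Point itself.
    std::vector<Point<double, 3> > cloud(1000000);
}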
They support static initialization. This is only an issue for static data, where something like:
struct Data { int i; char const* s; };
Data const ourData[] =
{
{ 1, "one" },
{ 2, "two" },
// ...
};
This is often preferable to using a vector (and std::string), since it avoids all order-of-initialization issues; the data is pre-loaded, before any actual code can be executed.
Finally, related to the above, the compiler can calculate the actual
size of the array from the initializers. You don't have to count them.
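For example (a common idiom, not part of the original answer), the count the compiler deduced can be recovered with sizeof:

#include <cstddef>

struct Data { int i; char const* s; };

Data const ourData[] =
{
    { 1, "one" },
    { 2, "two" },
    { 3, "three" }
};

// The compiler dimensioned ourData from its initializers; recover the count:
std::size_t const count = sizeof(ourData) / sizeof(ourData[0]);   // == 3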
If you have access to C++11, std::array solves the first two issues, and should definitely be used in preference to C-style arrays in the first case. It doesn't address the third, however, and having the compiler dimension the array according to the number of initializers is still a valid reason to prefer C-style arrays.
C-style arrays are a fundamental data structure, so there will be cases when it is better to use them. For the general case, however, use the more advanced data structures that round off the corners of the underlying data. C++ allows you to do some very interesting and useful things with memory, many of which work with simple arrays.
I know a lot of people are pointing out std::array for allocating arrays on the stack, and std::vector for the heap. But neither seems to support non-native alignment out of the box. If you're doing any kind of numeric code that you want to use SSE or AVX instructions on (which need 16- or 32-byte alignment, matching their 128- and 256-bit registers), C arrays would still seem to be your best bet.
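A sketch of the C-array route on x86 (C++11's alignas shown; pre-C++11 compilers used extensions like __attribute__((aligned(16)))):

#include <xmmintrin.h>   // SSE intrinsics; x86/x86-64 only

int main()
{
    // Alignment of the C array is controlled directly; _mm_load_ps and
    // _mm_store_ps require 16-byte-aligned addresses.
    alignas(16) float data[8] = { 1, 2, 3, 4, 5, 6, 7, 8 };

    __m128 v = _mm_load_ps(data);       // load four floats
    __m128 doubled = _mm_add_ps(v, v);
    _mm_store_ps(data, doubled);        // data[0..3] are now 2, 4, 6, 8
}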
You should use STL containers internally, but you should not pass pointers to such containers between different modules, or you will end up in dependency hell. Example:
std::string foo;
// fill foo with stuff
myExternalOutputProc(foo.c_str());
is a very good solution, but this is not:
std::string foo;
// fill foo with stuff
myExternalOutputProc(&foo);
The reason is that std::string can be implemented in many different ways, but a C-style string is always a C-style string. If the two sides of the module boundary are built with different compilers or standard library versions, their std::string layouts may not match, whereas a char const* is a stable interface.