I have heard that some people prefer C for embedded work because it is simpler, and therefore it is easier to predict the actual code that will be generated.
I personally would think that writing C-style C++ (using templates for type safety) would give you a lot of advantages, though, and I can't see any real reason not to.
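For instance, a minimal sketch of what I mean by C-style C++ -- a function template standing in for a classic unsafe macro (the helper name `max_of` is mine):

```cpp
// The C macro version: no type checking, and each argument may be
// evaluated twice.
//   #define MAX(a, b) ((a) > (b) ? (a) : (b))

// C-style C++: typically the same machine code after inlining, but
// type-checked, and each argument is evaluated exactly once.
template <typename T>
static inline T max_of(T a, T b) {
    return a > b ? a : b;
}

int main() {
    int      i = max_of(3, 7);    // instantiates max_of<int>
    unsigned u = max_of(2u, 5u);  // instantiates max_of<unsigned>
    // max_of(3, 5u);             // mixed types: caught at compile time
    return i + static_cast<int>(u);
}
```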
No. Any of the C++ language features that could cause problems (runtime polymorphism, RTTI, etc.) can be avoided while doing embedded development. There is a community of embedded C++ developers (I remember reading columns by embedded developers using C++ in the old C/C++ Users' Journal), and I can't imagine they'd be very vocal if the choice was that bad.
I see no reason to use C instead of C++. Whatever you can do in C, you can also do in C++. If you want to avoid the overhead of the VMT (virtual method table), don't use virtual methods and polymorphism.
However, C++ can provide some very useful idioms with no overhead. One of my favourites is RAII. Classes are not necessarily expensive in terms of memory or performance...
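As a concrete illustration of RAII with essentially no overhead, here is a minimal sketch of a scoped interrupt lock; `disable_irq`/`enable_irq` stand in for whatever intrinsics your platform provides:

```cpp
// Hypothetical platform hooks -- substitute your MCU's intrinsics.
extern "C" void disable_irq(void);
extern "C" void enable_irq(void);

// RAII guard: interrupts are disabled for exactly the lifetime of the
// object and re-enabled on every exit path, including early returns.
class IrqLock {
public:
    IrqLock()  { disable_irq(); }
    ~IrqLock() { enable_irq(); }
private:
    IrqLock(const IrqLock&);            // non-copyable (declared, never defined)
    IrqLock& operator=(const IrqLock&);
};

void update_shared_counter(volatile unsigned& counter) {
    IrqLock lock;  // disable_irq() runs here
    ++counter;     // critical section
}                  // enable_irq() runs here, no matter how we leave
```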
A good reason, and sometimes the only reason, is that there is still no C++ compiler for the specific embedded system. This is the case, for example, for Microchip PIC microcontrollers. They are very easy to write for, and they have a free C compiler (actually, a slight variant of C), but there is no C++ compiler in sight.
I recommend using the C++ compiler but limiting your use of C++-specific features. You can write C-style code in C++ (the C runtime is included when building as C++, though in most embedded applications you don't make use of the standard library anyway).
You can go ahead and use C++ classes etc., just:

- Limit your use of virtual functions (as you've said)
- Limit your use of templates
- For an embedded platform, override operator new and/or use placement new for memory allocation (a sketch follows this list)
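A minimal sketch of that last point, assuming a simple bump allocator over a static pool (names and sizes are illustrative; real code needs alignment and overflow handling):

```cpp
#include <cstddef>
#include <new>

// Illustrative static pool: all "dynamic" memory comes from here, handed
// out bump-allocator style and never reclaimed.
static unsigned char pool[1024];
static std::size_t   pool_used = 0;

void* operator new(std::size_t size) {
    void* p = &pool[pool_used];
    pool_used += size;  // real code must check for overflow and fix alignment
    return p;
}

void operator delete(void*) { /* no-op: pool memory is never freed */ }

// Placement new needs no override: construct an object at a fixed address,
// e.g. a statically reserved, suitably aligned buffer.
struct Uart { /* ... */ };
static unsigned char uart_buf[sizeof(Uart)];
Uart* uart = new (uart_buf) Uart;  // no heap involved
```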
I've written some code for an ARM7 embedded platform in IAR Embedded Workbench. I highly recommend relying on templates to do compile-time optimization and path prediction. Avoid dynamic casting like the plague. Use traits/policies to your advantage, as prescribed in Andrei Alexandrescu's book, Modern C++ Design.
I know it can be hard to learn, but I'm also sure that your product will benefit from this approach.
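To make the policy idea concrete, here is a tiny sketch in the Modern C++ Design style; all the names are illustrative:

```cpp
// Two interchangeable port-access policies, resolved at compile time --
// no virtual dispatch, and no code generated for the policy you don't use.
struct MemoryMappedIo {
    static void write(volatile unsigned char* addr, unsigned char v) { *addr = v; }
};

struct SimulatedIo {  // host-side build: stub it out or record for tests
    static void write(volatile unsigned char*, unsigned char) {}
};

template <class IoPolicy>
class Led {
public:
    explicit Led(volatile unsigned char* port) : port_(port) {}
    void on()  { IoPolicy::write(port_, 1); }
    void off() { IoPolicy::write(port_, 0); }
private:
    volatile unsigned char* port_;
};

// Target build:     Led<MemoryMappedIo> led((volatile unsigned char*)0x40001000);
// Simulation build: Led<SimulatedIo>    led(0);
```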
For memory allocation issues, I can recommend using the Quantum Platform and its state-machine approach, as it allocates everything you need at initialization time. It also helps to alleviate contention problems.
Some say that C compilers can generate much more efficient code because they don't have to support the advanced C++ features and can therefore be more aggressive in their optimizations.
Of course, in this case you may want to put the two specific compilers to the test.
"Do you see any reason to stick with C89 when developing for very limited hardware (4 KB of RAM)?"
Personally, when it comes to embedded applications (and by embedded I don't mean WinCE, iPhone, and the other bloated embedded devices of today, but genuinely resource-limited devices), I prefer C, though I have worked with C++ quite a bit as well.
For example, the device you're talking about has 4 KB of RAM; just for that reason I wouldn't consider C++. Sure, you may be able to design something small using C++ and limit its use in your application as other posts have suggested, but C++ could potentially end up complicating/bloating your application under the covers.
Are you going to link statically? You may want to compare a statically linked dummy application built in C++ vs. C. That might lead you to consider C instead. On the other hand, if you are able to build a C++ application within your memory requirements, go for it.
IMHO,
In general, in embedded applications I like to know everything that is going on. Who's using memory/system resources, how much and why? When do they free them up?
When developing for a target with a given amount of resources (CPU, memory, etc.), I try to stay on the low side of using those resources, because you never know what future requirements will come along, forcing you to add more code to a project that was "supposed" to be a simple, small application but ends up a lot bigger.
C wins on portability: its language spec is less ambiguous, so it offers better portability and flexibility across different compilers, etc. (fewer headaches).
If you aren't going to leverage C++ features to meet a need then go with C.
For a system constrained to 4K of RAM, I would use C, not C++, just so that you can be sure to see everything that's going on. The thing with C++ is that it's very easy to use far more resources (both CPU and memory) than a glance at the code suggests. (Oh, I'll just create another BlerfObject to do that... whoops! Out of memory!)
You can do it in C++, as already mentioned (no RTTI, no vtables, etc, etc), but you'll spend as much time making sure your C++ usage doesn't get away from you as you would doing the equivalent in C.
You have inline in C99. Maybe you like ctors, but the business of getting dtors right can be messy. If the only remaining reason not to use C is namespaces, I would really stick with C89, because you might want to port the code to a slightly different embedded platform, and you may later start writing C++ against that same code. But beware the following cases, where C++ is NOT a superset of C. I know you said you have a C89 compiler, but this comparison of C++ is with C99 anyway, since the first item, for example, is true for any C since K&R:
- sizeof 'a' > 1 in C (a character constant has type int there), but sizeof 'a' == 1 in C++.
- In C you have VLAs, variable-length arrays. Example: void func(int i) { int a[i]; }
- In C you have flexible array members. Example: struct s { int b; int m[]; };
For a very resource constrained target such as 4KB of RAM, I'd test the waters with some samples before committing a lot of effort that can't be easily ported back into a pure ANSI C implementation.
The Embedded C++ working group did propose a standard subset of the language and a standard subset of the standard library to go with it. I lost track of that effort when the C/C++ Users Journal died, unfortunately. It looks like there is an article on Wikipedia, and the committee still exists.
In an embedded environment, you really have to be careful about memory allocation. To enforce that care, you may need to define the global operator new() and its friends as something that can't even be linked, so that you know they aren't used. Placement new, on the other hand, is likely to be your friend when used judiciously along with a stable, thread-safe allocation scheme with guaranteed latency.
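One way the "can't even be linked" trick is sometimes done -- a sketch only, and the details vary by toolchain; `heap_use_is_forbidden` is a made-up name:

```cpp
#include <cstddef>

// Deliberately declared but never defined anywhere in the project.
extern "C" void heap_use_is_forbidden(void);

// Place these definitions in their own object file inside a static library:
// the linker pulls that object in only if something actually calls
// ::operator new, and the undefined symbol above then becomes a hard
// link-time error instead of a surprise in the field.
void* operator new(std::size_t) {
    heap_use_is_forbidden();
    return 0;
}

void* operator new[](std::size_t) {
    heap_use_is_forbidden();
    return 0;
}
```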
Inlined functions won't cause much of a problem, unless they are big enough that they should have been true functions in the first place. Of course, the macros they're replacing had the same issue.
Templates, too, may not cause a problem unless their instantiation runs amok. For any template you do use, audit your generated code (the link map may have sufficient clues) to make certain that only the instantiations you intended actually happened.
One other issue that may arise is compatibility with your debugger. It isn't unusual for an otherwise usable hardware debugger to have very limited support for interaction with the original source code. If you effectively must debug in assembly, then the interesting name mangling of C++ can add extra confusion to the task.
RTTI, dynamic casts, multiple inheritance, heavy polymorphism, and exceptions all come with some amount of runtime cost for their use. Some of those features spread that cost over the whole program if they are used at all; others just increase the weight of the classes that need them. Know the difference, and choose advanced features wisely, with at least a cursory cost/benefit analysis in hand.
In a small embedded environment you will either be linking against a real-time kernel or running directly on the hardware. Either way, you need to make certain that your runtime startup code handles C++-specific startup chores correctly. This might be as simple as making sure to use the right linker options, but since it is common to have direct control over the source of the power-on reset entry point, you might need to audit that code to make certain it does everything. For example, on a ColdFire platform I worked on, the dev tools shipped with a CRT0.S module that had the C++ initializers present but commented out. If I had used it straight out of the box, I would have been mystified by global objects whose constructors never ran at all.
Also, in an embedded environment, it is often necessary to initialize hardware devices before they can be used, and if there is no OS and no boot loader, then it is your code that does that. You need to remember that constructors for global objects run before main() is called, so you will need to modify your local CRT0.S (or its equivalent) to get that hardware initialization done before the global constructors themselves are called. Obviously, the top of main() is way too late.
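For illustration, a reset handler sketched around GNU-toolchain conventions (`__init_array_start`/`__init_array_end` are the usual GCC/linker-script names for the global-constructor table; your CRT0.S will differ):

```cpp
// Sketch of a reset handler, GNU-toolchain flavour.
typedef void (*init_fn)(void);
extern init_fn __init_array_start[];
extern init_fn __init_array_end[];

extern "C" void low_level_hardware_init(void);  // clocks, RAM setup, watchdog...
extern "C" int  main(void);

extern "C" void reset_handler(void) {
    low_level_hardware_init();  // 1. make the hardware usable first
    // (copying .data from flash and zeroing .bss omitted here)
    for (init_fn* f = __init_array_start; f != __init_array_end; ++f)
        (*f)();                 // 2. now run the global C++ constructors
    main();                     // 3. only then enter main()
    for (;;) {}                 // hang (or reset) if main ever returns
}
```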
Personally, with 4 KB of memory I'd say you are not getting that much more mileage out of C++, so just pick whichever compiler/runtime combination seems best for the job, since the language probably isn't going to matter much.
Note that it is not all about the language anyway, since the library matters too. C libs often have a slightly smaller minimum footprint, but I could imagine that a C++ lib targeted at embedded development is cut down as well, so be sure to test.
The Technical Report on C++ Performance is a great guide for this sort of thing. Note that it has a section on embedded programming concerns!
Also, +1 on the mention of Embedded C++ in the answers. The standard is not 100% to my taste, but it is a good bit of reference when deciding which parts of C++ you might drop.
While programming for small platforms, we disabled exceptions and RTTI, avoided virtual inheritance, and paid close attention to the number of virtual functions we had lying around.
Your friend is the linker map, though: check it frequently, and you'll spot sources of code and static memory bloat quickly.
After that, the standard dynamic-memory-usage considerations apply: in an environment as restricted as the one you mention, you may want to avoid dynamic allocation entirely. Sometimes you can get away with memory pools for small dynamic allocs, or "frame-based" allocation where you preallocate a block and throw out the whole thing later.
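A minimal sketch of the memory-pool idea -- a fixed-block pool with O(1) alloc/free and no fragmentation; block counts and sizes are illustrative, and alignment handling is omitted:

```cpp
// Fixed-block pool: all storage reserved up front, never touches the heap.
class BlockPool {
public:
    BlockPool() : free_list_(0) {
        for (int i = 0; i < kBlocks; ++i) {  // thread every block onto the free list
            void* block = storage_ + i * kBlockSize;
            *static_cast<void**>(block) = free_list_;
            free_list_ = block;
        }
    }
    void* alloc() {                          // pop the head block, or 0 if exhausted
        if (!free_list_) return 0;
        void* p = free_list_;
        free_list_ = *static_cast<void**>(p);
        return p;
    }
    void release(void* p) {                  // push the block back onto the free list
        *static_cast<void**>(p) = free_list_;
        free_list_ = p;
    }
private:
    static const int kBlocks = 32;
    static const int kBlockSize = 32;        // must be >= sizeof(void*)
    unsigned char storage_[kBlocks * kBlockSize];
    void* free_list_;
};
```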
Not all embedded compilers implement all of C++, and even if they do, they might not be good at avoiding code bloat (which is always a risk with templates). Test it with a few smaller programs, see if you run into any problems.
But given a good compiler, no, there's no reason not to use C++.
1) For a lot of embedded processors, either there is no C++ compiler, or you have to pay extra for it.
2) My experience is that a significant proportion of embedded software engineers have little or no experience of C++ -- either because of (1), or because it tends not to be taught on electronic engineering degrees -- so it would be better to stick with what they know.
Also, the original question, and a number of comments, mention the 4 KB of RAM. For a typical embedded processor, the amount of RAM is (mostly) unrelated to the code size, because the code is stored in, and run from, flash.
Certainly, the amount of code storage space is something to bear in mind, but as new, more capacious, processors appear on the market, it's less of an issue than it used to be for all but the most cost-sensitive projects.
On the use of a subset of C++ for use with embedded systems: there is now a MISRA C++ standard, which may be worth a look.
EDIT: See also this question, which led to a debate about C vs C++ for embedded systems.
My choice is usually determined by the C library we decide to use, which is selected based on what the device needs to do. So, 9/10 times .. it ends up being uclibc or newlib and C. The kernel we use is a big influence on this too, or if we're writing our own kernel.
It's also a choice of common ground. Most good C programmers have no problem using C++ (even though many complain the entire time they use it), but I have not found the reverse to be true (in my experience).
On a project we're working on (that involves a ground up kernel), most things are done in C, however a small network stack was implemented in C++, because it was just easier and less problematic to implement networking using C++.
The end result is, the device will either work and pass acceptance tests or it won't. If you can implement foo within xx stack and yy heap constraints using language z, go for it; use whatever makes you more productive.
My personal preference is C because:

- I know what every line of code is doing (and costs)
- I don't know C++ well enough to know what every line of code is doing (and costs)
Yes, I am comfortable with C++, but I don't know it as well as I do standard C.
Now if you can say the reverse of that, well, use what you know :) If it works, passes tests, etc .. what's the problem?
4 KB of RAM can still mean there are hundreds of kilobytes of flash to store the actual code and static data. RAM of this size tends to be meant just for variables, and if you are careful with those, you can fit quite a large program, in terms of lines of code, into memory.
However, C++ tends to make putting code and data in flash more difficult, due to the run-time construction rules for objects. In C, a constant struct can easily be put into flash memory and accessed as a hardware-constant object. In C++, a constant object would require the compiler to evaluate the constructor at compile time, which I think is still beyond what a C++ compiler can do (theoretically you could do it, but it is very, very hard in practice).
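Roughly what that difference looks like in code (a sketch; where the constructed object actually ends up depends on your compiler):

```cpp
// A plain const struct with an aggregate initializer: the compiler can put
// this straight into flash (.rodata); no runtime initialization is needed.
struct PinConfig { unsigned char port, pin, mode; };
const PinConfig kLedPin = { 2, 13, 1 };  // read-only storage

// The same data behind a user-defined constructor: now the object has to be
// constructed at startup, costing RAM plus constructor code -- at least on
// compilers of this era, which have nothing like constexpr evaluation.
class PinConfigObj {
public:
    PinConfigObj(unsigned char port, unsigned char pin, unsigned char mode)
        : port_(port), pin_(pin), mode_(mode) {}
private:
    unsigned char port_, pin_, mode_;
};
const PinConfigObj kLedPinObj(2, 13, 1);  // runtime-initialized
```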
So in a "small RAM", "large flash" kind of environment I would go with C any day. Note that a good intermediate choice is C99, which has most of the nice C++ features for non-class-based code.
A different answer for a different aspect of the question:
"malloc"
Some previous replies talk quite a bit about this, but why do you even assume that call exists? On a truly small platform, malloc tends to be unavailable, or at best optional. Implementing dynamic memory allocation becomes meaningful once you have an RTOS at the bottom of your system -- but until then it is purely dangerous.
You can get very far without it. Just think about all the old FORTRAN programs which did not even have a proper stack for local variables...
On such a limited system, just go for assembler. It gives you total control over every aspect and adds no overhead.
It will probably be a lot faster too, since many embedded compilers are not the best optimizers (especially compared with state-of-the-art desktop compilers such as Intel's or Visual Studio's).
"yeah yeah...but c is re-usable and...". On such a limited system, chances are you won't re-use much of that code on a different system anyway. On the same system, assembler is just as re-usable.
As a firmware/embedded-systems engineer, I can tell you guys some of the reasons why C is still the #1 choice over C++ -- and yes, I'm fluent in both of them.
1) Some targets we develop on have 64 kB of RAM for both code and data, so you have to make sure every byte counts -- and yes, I've dealt with code optimization to save 4 bytes that cost me 2 hours, and that was in 2008.
2) Every C library function is reviewed before we let it into the final code, because of the size limitation, so we prefer people not to use division (there is no hardware divider, so a big library would be needed), malloc (we have no heap; all memory is allocated from a data buffer in 512-byte chunks and must be code reviewed), or other object-oriented practices that carry a large penalty. Remember, every library function that you use counts.
3) Ever heard of the term overlay? You have so little code space that sometimes you have to swap things out for another set of code. If you call a library function, then that function must be resident. If you use it only in an overlay function, you waste a lot of space by relying on too many object-oriented methods. So, don't assume any C library function, let alone a C++ one, will be accepted.
4) Casting and even packing (where an unaligned data structure crosses a word boundary) are needed due to limited hardware design (i.e. an ECC engine that is wired a certain way) or to cope with a hardware bug. You cannot assume too much implicitly, so why object-orient it too much?
5) Worst-case scenarios: eliminating some of the object-oriented methods forces developers to think before they use resources that can explode (i.e. allocating 512 bytes on a stack rather than from a data buffer), and prevents some of the potential worst-case scenarios that are never tested for, or eliminates the whole code path altogether.
6) We do use a lot of abstraction to keep hardware separate from software and to make the code as portable and simulation-friendly as possible. Hardware access must be wrapped in a macro or inline function that is conditionally compiled for different platforms; data types must be cast to explicit byte sizes rather than target-specific ones; direct pointer usage is not allowed (because some platforms assume memory-mapped I/O works like data memory); etc. A sketch of such a wrapper follows below.
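A hedged illustration of that kind of conditionally compiled wrapper; the register address and all names are invented:

```cpp
/* Illustrative register-write wrapper. On target it touches the
   memory-mapped register; in simulation builds it is rerouted to a test
   double, and no raw pointer leaks into application code. */
#ifdef SIMULATION
void sim_reg_write(unsigned long addr, unsigned char value);  /* test hook */
#define REG_WRITE(addr, value) sim_reg_write((addr), (value))
#else
#define REG_WRITE(addr, value) \
    (*(volatile unsigned char *)(addr) = (unsigned char)(value))
#endif

#define UART0_TX_REG 0x40001000UL  /* hypothetical register address */

static inline void uart_send_byte(unsigned char b) {
    REG_WRITE(UART0_TX_REG, b);
}
```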
I can think of more, but you get the idea. Us firmware guys do have object-oriented training, but the tasks of embedded systems can be so hardware-oriented and low-level that they are not high-level or abstractable by nature.
BTW, every firmware job I've been at uses source control, I don't know where you get that idea from.
The number of instructions produced depends massively on (see the sketch after this list):

- the sizes of a, b, and c;
- whether those pointers are stored on the stack or are global;
- whether i, j, and k are on the stack or are global.
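The snippet this answer discusses isn't shown here; for illustration, imagine a loop along these lines (purely hypothetical):

```cpp
// Hypothetical reconstruction -- the snippet this answer refers to is not
// shown here.  The instruction count for the loop body differs widely
// depending on whether a, b, c, and i live on the stack (or in registers)
// or are globals reached through absolute, possibly banked, addressing.
void sum_arrays(int* a, int* b, int* c, int n) {
    for (int i = 0; i < n; ++i)
        a[i] = b[i] + c[i];
}
```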
This is especially true in the tiny embedded world, where processors are just not set up to handle C. So my answer would be that C and C++ are just as bad as each other, unless you always examine the asm output, in which case they are just as good as each other.
The human mind deals with complexity by evaluating as much as possible, then deciding what is important to focus on and discarding or devaluing the rest. This is the entire underpinning behind branding in marketing and, largely, behind icons.
To combat this tendency I prefer C to C++, because it forces you to think about your code, and how it's interacting with the hardware more closely - relentlessly close.
From long experience it is my belief that C forces you to come up with better solutions to problems, in part, by getting out of your way and not forcing you to waste lots of time satisfying a constraint some compiler-writer thought was a good idea, or figuring out what's going on "under the covers".
In that vein, low level languages like C have you spending a lot of time focused on the hardware and building good data-structure/algorithm bundles, while high level languages have you spending a lot of time scratching your head wondering what's going on in there, and why you can't do something perfectly reasonable in your specific context and environment. Beating your compiler into submission (strong typing is the worst offender) is NOT a productive use of time.
I probably fit the programmer mold well -- I like control. In my view, that's not a personality flaw for a programmer. Control is what we get paid for. More specifically, flawless control. C gives you much more control than C++.
Just want to say that there is no system with "UNLIMITED" resources. Everything in this world is limited, and EVERY application should consider resource usage, no matter whether it's ASM, C, Java, or JavaScript. The dummies that allocate a few MBs "just to be sure" make the iPhone 7, Pixel, and other devices extremely laggy. It doesn't matter whether you have 4 KB or 40 GB.
But on the other side of opposing resource waste is the time it takes to save those resources. If it takes an extra week to write a simple thing in C to save a few ticks and a few bytes, instead of using a C++ facility that is already implemented, tested, and distributed, why bother? It's like buying a USB hub: yes, you can make one yourself, but is it going to be better? More reliable? Cheaper, if you count your time?
Just a side thought: even the power from your outlet is not unlimited. Try to research where it's coming from and you will see it is mostly from burning something. The law of conservation of energy and matter still holds: no material or energy appears or disappears; it only transforms.
There are a number of different controller manufacturers around the globe, and when you look at their designs and the instruction sets needed to program them, you can end up in a lot of trouble. The main disadvantage of assembly language is that it is machine/architecture dependent. It is a huge ask for a developer to learn by heart all the instruction sets out there in order to code for different controllers. That is why C became more popular in embedded development: C is high-level enough to abstract the algorithms and data structures from hardware-dependent details, making the source code portable across a wide variety of target hardware, architecture independent, and easy to convert and maintain. But we also see higher-level (object-oriented) languages like C++, Python, Java, etc. evolving enough to come onto the radar of embedded-system development.
Time to market and maintainability. C++ development is easier and faster. So if you are still in the design phase, choose a controller powerful enough to use C++. (Note that some high-volume markets require the lowest possible cost, where you cannot make this choice.)
Speed. C can be faster than C++, but be assured the speed gain is not big. So you can go with C++. Develop your algorithms, test them, and make them faster only if required(!). Use profilers to find the bottlenecks, and rewrite those parts in an extern "C" way to approach C speed. (If it is still slow, implement that part in ASM.)
Binary size. C++ code is bigger, but there is a great answer elsewhere that gives the details. The size of the compiled binary of a given piece of C code will be the same whether it is compiled with the C or the C++ compiler: "The executable size is hardly related to the language, but to the libraries you include in your project." Go with C++, but avoid advanced functionality like streams, string, new, virtual functions, etc. Review all library functions before letting them into the final code, because of the size limitation (as another answer above suggests).