Why does free in C not take the number of bytes to be freed?

To be clear up front: I do know that malloc and free are implemented in the C library, which usually allocates chunks of memory from the operating system, does its own management to parcel out smaller blocks to the application, and keeps track of the number of bytes allocated. This question is not "how does free know how much to free".

Rather, I want to know why free was made this way in the first place. Being a low-level language, I think it would be perfectly reasonable to ask a C programmer to keep track not only of what memory was allocated but also of how much (in fact, I commonly find that I end up keeping track of the number of bytes malloc'd anyway). It also occurs to me that explicitly giving the number of bytes to free might allow for some performance optimisations, e.g. an allocator that has separate pools for different allocation sizes would be able to determine which pool to free from just by looking at the input arguments, and there would be less space overhead overall.

So, in short, why were malloc and free created such that they are required to keep track of the number of bytes allocated internally? Is it just a historical accident?

A small edit: a few people have offered points along the lines of "what if you free a different amount from what you allocated?". My imagined API could simply require that you free exactly the number of bytes allocated; freeing more or less could just be UB or implementation-defined. I don't want to discourage discussion of other possibilities, though.


C may not be as "abstract" as C++, but it's still intended to be an abstraction over assembly. To that end, the lowest-level details are taken out of the equation. This prevents you from having to furtle about with alignment and padding, for the most part, which would make all your C programs non-portable.

In short, this is the entire point of writing an abstraction.

malloc and free go hand in hand, with each "malloc" being matched by one "free". Thus it makes total sense that the "free" matching a previous "malloc" should simply free up the amount of memory allocated by that malloc - this is the use case that makes sense in 99% of cases. Imagine all the memory errors if every use of malloc/free, by every programmer in the world, required the programmer to keep track of the amount allocated in malloc and then remember to free exactly the same amount. The scenario you describe should really be handled with multiple mallocs/frees inside some kind of memory management implementation.

I would suggest that it is because it is very convenient not to have to manually track size information in this way (in some cases) and also less prone to programmer error.

Additionally, realloc would need this bookkeeping information, which I expect contains more than just the allocation size. i.e. it allows the mechanism by which it works to be implementation defined.

You could, though, write your own allocator that works somewhat in the way you suggest. Something similar is often done in C++ with pool allocators for specific cases (with potentially massive performance gains), though these are generally implemented in terms of operator new for allocating the pool blocks.

I'm only posting this as an answer not because it's the one you're hoping for, but because I believe it's the only plausibly correct one:

It was probably deemed convenient originally, and it could not be improved thereafter.
There is likely no convincing reason for it. (But I'll happily delete this if shown it's incorrect.)

There would be benefits if it was possible: you could allocate a single large piece of memory whose size you knew beforehand, then free a little bit at a time -- as opposed to repeatedly allocating and freeing small chunks of memory. Currently tasks like this are not possible.


To the many (many1!) of you who think passing the size is so ridiculous:

May I refer you to C++'s design decision for the std::allocator<T>::deallocate method?

void deallocate(pointer p, size_type n);

All n T objects in the area pointed to by p shall be destroyed prior to this call.
n shall match the value passed to allocate to obtain this memory.

I think you'll have a rather "interesting" time analyzing this design decision.


As for operator delete, it turns out that the 2013 N3778 proposal ("C++ Sized Deallocation") is intended to fix that, too.


1Just look at the comments under the original question to see how many people made hasty assertions such as "the asked for size is completely useless for the free call" to justify the lack of the size parameter.

"Why does free in C not take the number of bytes to be freed?"

Because there's no need for it, and it wouldn't quite make sense anyway.

When you allocate something, you want to tell the system how many bytes to allocate (for obvious reasons).

However, when you have already allocated your object, the size of the memory region you get back is now determined. It's implicit. It's one contiguous block of memory. You can't deallocate part of it (let's forget realloc(), that's not what it's doing anyway), you can only deallocate the entire thing. You can't "deallocate X bytes" either -- you either free the memory block you got from malloc() or you don't.

And now, if you want to free it, you can just tell the memory manager system: "here's this pointer, free() the block it is pointing to." - and the memory manager will know how to do that, either because it implicitly knows the size, or because it might not even need the size.

For example, most typical implementations of malloc() maintain a linked list of pointers to free and allocated memory blocks. If you pass a pointer to free(), it will just search for that pointer in the "allocated" list, un-link the corresponding node and attach it to the "free" list. It didn't even need the region size. It will only need that information when it potentially attempts to re-use the block in question.
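
To make that concrete, here is a minimal, purely illustrative sketch of the idea (not how any particular C library actually does it; the structure and names are invented for the example). Moving a block from an "allocated" list to a "free" list needs only the pointer, because the size lives in a header the allocator wrote itself:

    #include <stddef.h>

    /* Hypothetical per-block bookkeeping kept by the allocator itself. */
    struct block {
        size_t        size;   /* recorded by the allocator at malloc() time */
        struct block *next;   /* link in the allocated or the free list     */
    };

    static struct block *allocated_list;  /* blocks handed out to the program */
    static struct block *free_list;       /* blocks available for reuse       */

    /* A toy free(): find the header that sits just before 'ptr', unlink it
       from the allocated list and push it onto the free list. The caller
       never has to say how big the block was - the header already knows. */
    void toy_free(void *ptr)
    {
        struct block *blk = (struct block *)ptr - 1;
        struct block **cur = &allocated_list;

        while (*cur && *cur != blk)
            cur = &(*cur)->next;
        if (*cur == NULL)
            return;                 /* not one of ours; a real free() might abort */

        *cur = blk->next;           /* unlink from the allocated list */
        blk->next = free_list;      /* push onto the free list        */
        free_list = blk;
    }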

Five reasons spring to mind:

  1. It's convenient. It removes a whole load of overhead from the programmer and avoids a class of extremely difficult to track errors.

  2. It opens up the possibility of releasing part of a block. But since memory managers usually want to keep tracking information, it isn't clear what this would mean.

  3. Lightness Races In Orbit is spot on about padding and alignment. The nature of memory management means that the actual size allocated is quite possibly different from the size you asked for. This means that if free were to require a size as well as a location, malloc would also have to be changed to return the actual size allocated.

  4. It's not clear that there is any actual benefit to passing in the size, anyway. A typical memory manager has 4-16 bytes of header for each chunk of memory, which includes the size. This chunk header can be common to allocated and unallocated memory, and when adjacent chunks come free they can be collapsed together. If you make the caller store the size, you can free up perhaps 4 bytes per chunk by not having a separate size field in allocated memory, but that saving probably isn't realised anyway, since the caller has to store the size somewhere. And now that information is scattered around memory rather than being predictably located in the chunk header, which is likely to be less efficient operationally anyway.

  5. Even if it were more efficient, it's highly unlikely that your program spends a large amount of time freeing memory anyway, so the benefit would be tiny.

Incidentally, your idea about separate allocators for different size items is easily implemented without this information (you can use the address to determine where the allocation occurred). This is routinely done in C++.
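
For illustration, here is a rough sketch of that address-based dispatch (all names and sizes are invented; the allocation side and pool setup are omitted). Each fixed-size pool owns a contiguous address range, so a free routine can pick the right pool just by asking which range the pointer falls in:

    #include <stddef.h>
    #include <stdlib.h>

    #define POOL_SLOTS 1024   /* invented capacity per pool */

    /* One pool of fixed-size slots; the slot size is a property of the pool,
       not something the caller has to remember. Slots must be at least
       sizeof(void *) so a freed slot can hold the free-list link. */
    struct pool {
        unsigned char *base;       /* start of this pool's memory          */
        size_t         slot_size;  /* every slot in this pool is this big  */
        void          *free_head;  /* free list threaded through the slots */
    };

    /* Does this pointer belong to this pool? A plain address-range check.
       (Strictly speaking, comparing unrelated pointers is not portable C,
       but this is how such pools are implemented in practice.) */
    static int pool_owns(const struct pool *p, const void *ptr)
    {
        const unsigned char *c = ptr;
        return c >= p->base && c < p->base + p->slot_size * POOL_SLOTS;
    }

    static void pool_free(struct pool *p, void *ptr)
    {
        *(void **)ptr = p->free_head;   /* reuse the slot itself as a link */
        p->free_head = ptr;
    }

    /* The dispatcher: no size argument needed, the address says it all. */
    void any_free(struct pool *pools, size_t npools, void *ptr)
    {
        for (size_t i = 0; i < npools; i++) {
            if (pool_owns(&pools[i], ptr)) {
                pool_free(&pools[i], ptr);
                return;
            }
        }
        free(ptr);   /* outside every pool: assume it came from malloc() */
    }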

Added later

Another answer, rather ridiculously, has brought up std::allocator as proof that free could work this way, but in fact it serves as a good example of why free doesn't work this way. There are two key differences between what malloc/free do and what std::allocator does. Firstly, malloc and free are user-facing - they're designed for general programmers to work with - whereas std::allocator is designed to provide specialist memory allocation for the standard library. This is a nice example of where the first of my points doesn't, or wouldn't, matter: since it's a library, the difficulties of tracking sizes are hidden from the user anyway.

Secondly, std::allocator always works with items of the same size, which means it can use the originally passed number of elements to determine how much to free. Why this differs from free itself is illustrative. In std::allocator the items to be allocated are always of the same, known size and always the same kind of item, so they always have the same alignment requirements. This means that the allocator could be specialised to simply allocate an array of these items at the start and dole them out as needed. You couldn't do this with free, because there is no way to guarantee that the best size to return is the size asked for; instead it is much more efficient to sometimes return larger blocks than the caller asks for*, and thus either the user or the manager needs to track the exact size actually granted. Passing these kinds of implementation details onto the user is a needless headache that gives no benefit to the caller.

* If anyone is still having difficulty understanding this point, consider this: a typical memory allocator adds a small amount of tracking information to the start of a memory block and then returns a pointer offset past it. The information stored here typically includes a pointer to the next free block, for example. Let's suppose the header is a mere 4 bytes long (which is actually smaller than in most real libraries) and doesn't include the size, and imagine we have a 20-byte free block when the user asks for a 16-byte block. A naive system would return a 16-byte block but then leave a 4-byte fragment that could never, ever be used, wasting time every time malloc gets called. If instead the manager simply returns the whole 20-byte block, it stops these messy fragments from building up and can allocate the available memory more cleanly. But if the system is to do this correctly without tracking the size itself, then we require the user to track - for every single allocation - the amount of memory actually granted, so that it can be passed back to free. The same argument applies to padding for types/allocations that don't match the desired boundaries. Thus, at most, requiring free to take a size is either (a) completely useless, since the memory allocator can't rely on the passed size matching the size actually allocated, or (b) a pointless requirement that the user do work tracking the real size which would be easily handled by any sensible memory manager.
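
To put a number on that footnote, here is a toy version of the split-or-don't-split decision (the constants are the made-up figures from the footnote, not those of any real allocator):

    #include <stddef.h>

    #define HEADER_SIZE 4   /* toy per-chunk overhead from the footnote */
    #define MIN_PAYLOAD 4   /* smallest leftover piece worth keeping    */

    /* Given a free chunk with 'avail' usable bytes and a request for 'want'
       bytes, return how many bytes the allocator actually grants. If
       splitting would leave a fragment too small to ever satisfy a request,
       the whole chunk is handed out - so the granted size can exceed 'want',
       and only the allocator knows the real figure. */
    size_t bytes_granted(size_t avail, size_t want)
    {
        if (avail >= want + HEADER_SIZE + MIN_PAYLOAD)
            return want;    /* worth splitting: the rest becomes a new free chunk */
        return avail;       /* e.g. avail == 20, want == 16: grant all 20 bytes   */
    }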

Why does free in C not take the number of bytes to be freed?

Because it doesn't need to. The information is already available in the internal management performed by malloc/free.

Here are two considerations (that may or may not have contributed to this decision):

  • Why would you expect a function to receive a parameter it doesn't need?

    (this would complicate virtually all client code relying on dynamic memory, and add completely unnecessary redundancy to your application). Keeping track of pointer allocations is already a difficult problem. Keeping track of memory allocations along with their associated sizes would increase the complexity of client code unnecessarily.

  • What would the altered free function do, in these cases?

    void * p = malloc(20);
    free(p, 25); // (1) wrong size provided by client code
    free(NULL, 10); // (2) generic argument mismatch
    

    Would it not free the block (causing a memory leak)? Ignore the second parameter? Stop the application by calling exit? Implementing this would add extra failure points to your application, for a feature you probably don't need (and if you do need it, see my last point below about implementing the solution at the application level).

Rather, I want to know why free was made this way in the first place.

Because this is the "proper" way to do it. An API should require the arguments it needs to perform its operation, and no more than that.

It also occurs to me that explicitly giving the number of bytes to free might allow for some performance optimisations, e.g. an allocator that has separate pools for different allocation sizes would be able to determine which pool to free from just by looking at the input arguments, and there would be less space overhead overall.

The proper ways to implement that, are:

  • (at the system level) within the implementation of malloc - there is nothing stopping the library implementer from writing malloc to use various strategies internally, based on received size.

  • (at application level) by wrapping malloc and free within your own APIs, and using those instead (everywhere in your application that you may need them); a sketch of one such wrapper follows below.
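
As a rough sketch of that application-level option (all names, sizes and the size-class scheme here are invented for the example, not any standard API): the caller promises to pass the same size to the free wrapper that it passed to the allocation wrapper, which lets the wrapper recycle small blocks on per-size free lists without keeping any per-block bookkeeping of its own.

    #include <stdlib.h>

    #define SMALL_LIMIT 64                  /* invented "small object" threshold   */
    #define NCLASSES    (SMALL_LIMIT / 8)   /* one free list per 8-byte size class */

    static void *recycled[NCLASSES];        /* heads of the per-class free lists */

    static size_t class_of(size_t size) { return (size - 1) / 8; }

    void *app_alloc(size_t size)
    {
        if (size > 0 && size <= SMALL_LIMIT) {
            size_t c = class_of(size);
            if (recycled[c]) {              /* reuse a recycled block of this class */
                void *p = recycled[c];
                recycled[c] = *(void **)p;
                return p;
            }
            return malloc((c + 1) * 8);     /* round up so any block of the class fits */
        }
        return malloc(size);                /* large requests go straight to malloc */
    }

    void app_free(void *ptr, size_t size)   /* 'size' must match the app_alloc call */
    {
        if (ptr && size > 0 && size <= SMALL_LIMIT) {
            size_t c = class_of(size);
            *(void **)ptr = recycled[c];    /* thread the block onto its class list */
            recycled[c] = ptr;
            return;
        }
        free(ptr);                          /* anything else goes back to the system */
    }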

I don't see how an allocator would work that does not track the size of its allocations. If it didn't do this, how would it know which memory is available to satisfy a future malloc request? It has to at least store some sort of data structure containing addresses and lengths, to indicate where the available memory blocks are. (And of course, storing a list of free spaces is equivalent to storing a list of allocated spaces).

Well, the only thing you need is the pointer that you'll use to free the memory you previously allocated. The number of bytes is managed by the C library (and ultimately the operating system), so you don't have to worry about it, and there would be no need for free() to return the number of bytes allocated either. I suggest a manual way to count the number of bytes/positions allocated by a running program:

If you work in Linux and you want to know the number of bytes/positions malloc has allocated, you can write a simple program that uses malloc once or n times and prints out the pointers you get. In addition, make the program sleep for a few seconds (enough for you to do the following). After that, run the program, look for its PID, run cd /proc/process_PID and then cat maps. The output will show you, in one specific line, both the beginning and the final memory addresses of the heap region (the one in which you are allocating memory dynamically). If you print out the pointers to the memory regions being allocated, you can guess how much memory you have allocated.
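
A minimal program along those lines might look like this (the exact pointer values and /proc layout will of course differ from system to system):

    #include <stdio.h>
    #include <stdlib.h>
    #include <unistd.h>

    int main(void)
    {
        /* Allocate a few blocks and print where they landed. */
        for (int i = 0; i < 4; i++) {
            void *p = malloc(1024 * (i + 1));
            printf("block %d at %p\n", i, p);
        }

        /* Leave time to run: cat /proc/<this PID>/maps in another terminal
           and compare the printed pointers with the heap region shown there. */
        printf("my pid is %d, sleeping...\n", (int)getpid());
        sleep(60);
        return 0;
    }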

Hope it helps!

Why should it? malloc() and free() are intentionally very simple memory management primitives, and higher-level memory management in C is largely up to the developer.

Moreover, realloc() does that already - if you reduce the allocation with realloc(), it will typically not move the data, and the pointer returned will be the same as the original.
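
A quick way to observe this behaviour (it is implementation-specific, so treat it as an experiment rather than a guarantee):

    #include <stdio.h>
    #include <stdlib.h>

    int main(void)
    {
        char *p = malloc(100);
        char *q = realloc(p, 50);   /* shrink the block */

        /* With typical allocators the shrunken block stays put, so p == q.
           (Strictly, p's old value is indeterminate after a successful
           realloc, so this comparison is for illustration only.) */
        printf("before: %p  after: %p  %s\n",
               (void *)p, (void *)q, p == q ? "(not moved)" : "(moved)");

        free(q);
        return 0;
    }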

It is generally true of the entire standard library that it is composed of simple primitives from which you can build more complex functions to suit your application needs. So the answer to any question of the form "why does the standard library not do X" is because it cannot do everything a programmer might think of (that's what programmers are for), so it chooses to do very little - build your own or use third-party libraries. If you want a more extensive standard library - including more flexible memory management, then C++ may be the answer.

You tagged the question C++ as well as C, and if C++ is what you are using, then you should hardly be using malloc/free in any case - apart from new/delete, STL container classes manage memory automatically, and in a manner likely to be specifically appropriate to the nature of the various containers.

Actually, in the ancient Unix kernel memory allocator, mfree() took a size argument. malloc() and mfree() kept two arrays (one for core memory, another one for swap) that contained information on free block addresses and sizes.

There was no userspace allocator until Unix V6 (programs would just use sbrk()). In Unix V6, iolib included an allocator with alloc(size) and a free() call which did not take a size argument. Each memory block was preceded by its size and a pointer to the next block. The pointer was only used on free blocks, when walking the free list, and was reused as block memory on in-use blocks.

In Unix 32V and in Unix V7, this was substituted by a new malloc() and free() implementation, where free() did not take a size argument. The implementation was a circular list, each chunk was preceded by a word that contained a pointer to the next chunk, and a "busy" (allocated) bit. So, malloc()/free() didn't even keep track of an explicit size.

One-argument free(void *) (introduced in Unix V7) has another major advantage over the earlier two-argument mfree(void *, size_t) which I haven't seen mentioned here: one-argument free dramatically simplifies every other API that works with heap memory. For example, if free needed the size of the memory block, then strdup would somehow have to return two values (pointer + size) instead of one (pointer), and C makes multiple-value returns much more cumbersome than single-value returns. Instead of char *strdup(char *) we'd have to write char *strdup(char *, size_t *) or else struct CharPWithSize { char *val; size_t size; }; CharPWithSize strdup(char *). (Nowadays that second option looks pretty tempting, because we know that NUL-terminated strings are a costly design mistake, but that's hindsight speaking. Back in the '70s, C's ability to handle strings as a simple char * was actually considered an advantage.) Plus, it isn't just strdup that suffers from this problem -- it affects every system- or user-defined function which allocates heap memory.
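
To see the awkwardness concretely, here is roughly what a size-returning strdup would have to look like (sized_free and sized_strdup are hypothetical names standing in for a V6-style mfree world; nothing in the standard library has this shape):

    #include <stdlib.h>
    #include <string.h>

    /* Hypothetical: the pointer-plus-size pair that every allocating
       function would have to hand back if deallocation required the size. */
    struct CharPWithSize {
        char  *val;
        size_t size;
    };

    /* Stand-in for a two-argument deallocator like V6's mfree(). */
    void sized_free(void *p, size_t size)
    {
        (void)size;   /* a real mfree() would use this; free() never needs it */
        free(p);
    }

    struct CharPWithSize sized_strdup(const char *s)
    {
        struct CharPWithSize r;
        r.size = strlen(s) + 1;       /* the caller must now carry this around */
        r.val  = malloc(r.size);
        if (r.val)
            memcpy(r.val, s, r.size);
        return r;
    }

    void example(void)
    {
        struct CharPWithSize copy = sized_strdup("hello");
        /* ... use copy.val ..., keeping copy.size alive just as long ... */
        sized_free(copy.val, copy.size);
    }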

The early Unix designers were very clever people, and there are many reasons why free is better than mfree so basically I think the answer to the question is that they noticed this and designed their system accordingly. I doubt you'll find any direct record of what was going on inside their heads at the moment they made that decision. But we can imagine.

Pretend that you're writing applications in C to run on V6 Unix, with its two-argument mfree. You've managed okay so far, but keeping track of these pointer sizes is becoming more and more of a hassle as your programs become more ambitious and require more and more use of heap allocated variables. But then you have a brilliant idea: instead of copying around these size_ts all the time, you can just write some utility functions, which stash the size directly inside the allocated memory:

void *my_alloc(size_t size) {
    /* Grab room for the payload plus a size_t, stash the size at the
       front, and hand the caller a pointer just past it. */
    void *block = malloc(sizeof(size) + size);
    *(size_t *)block = size;
    return (void *)((size_t *)block + 1);
}

void my_free(void *block) {
    /* Step back to the hidden size field and pass it to the
       two-argument deallocator on the caller's behalf. */
    block = (size_t *)block - 1;
    mfree(block, *(size_t *)block);
}

And the more code you write using these new functions, the more awesome they seem. Not only do they make your code easier to write, they also make your code faster -- two things which don't often go together! Before, you were passing these size_ts around all over the place, which added CPU overhead for the copying, and meant you had to spill registers more often (especially for the extra function arguments), and wasted memory (since nested function calls will often result in multiple copies of the size_t being stored in different stack frames). In your new system, you still have to spend the memory to store the size_t, but only once, and it never gets copied anywhere. These may seem like small efficiencies, but keep in mind that we're talking about high-end machines with 256 KiB of RAM.

This makes you happy! So you share your cool trick with the bearded men who are working on the next Unix release, but it doesn't make them happy, it makes them sad. You see, they were just in the process of adding a bunch of new utility functions like strdup, and they realize that people using your cool trick won't be able to use their new functions, because their new functions all use the cumbersome pointer+size API. And then that makes you sad too, because you realize you'll have to rewrite the good strdup(char *) function yourself in every program you write, instead of being able to use the system version.

But wait! This is 1977, and backwards compatibility won't be invented for another 5 years! And besides, no-one serious actually uses this obscure "Unix" thing with its off-color name. The first edition of K&R is on its way to the publisher now, but that's no problem -- it says right on the first page that "C provides no operations to deal directly with composite objects such as character strings... there is no heap...". At this point in history, string.h and malloc are vendor extensions (!). So, suggests Bearded Man #1, we can change them however we like; why don't we just declare your tricky allocator to be the official allocator?

A few days later, Bearded Man #2 sees the new API and says hey, wait, this is better than before, but it's still spending an entire word per allocation storing the size. He views this as the next thing to blasphemy. Everyone else looks at him like he's crazy, because what else can you do? That night he stays late and invents a new allocator that doesn't store the size at all, but instead infers it on the fly by performing black magic bitshifts on the pointer value, and swaps it in while keeping the new API in place. The new API means that no-one notices the switch, but they do notice that the next morning the compiler uses 10% less RAM.

And now everyone's happy: You get your easier-to-write and faster code, Bearded Man #1 gets to write a nice simple strdup that people will actually use, and Bearded Man #2 -- confident that he's earned his keep for a bit -- goes back to messing around with quines. Ship it!

Or at least, that's how it could have happened.