realloc primer
Steve Summit
scs at adam.mit.edu
Sun Feb 10 11:40:53 AEST 1991
Now that most of the shouting about the users command has died
down, I wanted to go back and touch on the fixed vs. variable-
sized array issue, and realloc usage in general.
I hope there aren't too many people who still believe that
dynamic allocation is too exotic for general use, and that
fixed-size arrays are therefore a viable alternative. Someone
said that all you have to do is pick the largest size you can
think of and then multiply it by 10. That fails for two reasons:
1. The largest size you can think of today, that no one
would ever use, will be in widespread use tomorrow.
2. Orders of magnitude are big, even with virtual memory.
Doggedly sticking to fixed-size arrays, which have tens
of thousands of elements in hopes that they will always
be big enough, can waste prodigious quantities of memory,
especially if most invocations require only a few
elements. (Remember that address spaces, though large,
are not infinite, particularly on the all-too-popular
80*86, and that swap space isn't infinite, either.
Start-up time matters, too.)
Most of the time, writing code which dynamically allocates the
array space it needs is not very difficult. Suppose we have a
simple array, and a subroutine for adding items to it:
	#define MAXELTS 100

	int array[MAXELTS];
	int nelts = 0;

	install(x)
	int x;
	{
		if(nelts >= MAXELTS)
		{
			fprintf(stderr, "too many elements (max %d)\n", MAXELTS);
			exit(1);
		}

		array[nelts++] = x;
	}
Let's see how easy it is to remove the arbitrary limitation in
this code, by dynamically allocating the array:
	#include <stdlib.h>

	int *array = NULL;
	int nalloc = 0;
	int nelts = 0;

	install(x)
	int x;
	{
		if(nelts >= nalloc)
		{
			nalloc += 10;
			array = (int *)realloc((char *)array, nalloc * sizeof(int));
			if(array == NULL)
			{
				fprintf(stderr, "out of memory with %d elements\n",
					nelts);
				exit(1);
			}
		}

		array[nelts++] = x;
	}
Be sure that realloc is declared correctly. If you don't have
<stdlib.h> yet, use extern char *realloc();. If you want to be
true-blue ANSI, use size_t for nalloc and nelts. (Note, too,
the format of the error messages: it's nice to mention what the
limit is when a fixed limit is exceeded, and how much fit before
memory ran out when dynamic allocation is used.)
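For reference, here is the same routine pieced together as one
complete, ANSI-style translation unit (the prototype, includes, and
comments are mine; the logic is exactly the fragment above):

```c
#include <stdio.h>
#include <stdlib.h>

int *array = NULL;
int nalloc = 0;		/* slots allocated */
int nelts = 0;		/* slots actually in use */

void install(int x)
{
	if(nelts >= nalloc)
	{
		/* grow by a fixed increment; self-starting because
		 * realloc(NULL, n) acts like malloc(n) */
		nalloc += 10;
		array = realloc(array, nalloc * sizeof(int));
		if(array == NULL)
		{
			fprintf(stderr, "out of memory with %d elements\n",
				nelts);
			exit(1);
		}
	}

	array[nelts++] = x;
}
```

(Direct assignment of realloc's return value is safe here only
because the routine exits on failure; a routine that wanted to
recover would have to use a temporary, as in the back-off code
later on.)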
The transformation from fixed-size to dynamically-allocated is
simple: just change the array to a pointer, and add another
variable keeping track of how many slots have been allocated.
(If you always allocate the array just as big as it needs to be
at any given time, without any +10 slop, you don't even need the
extra variable. I'll talk more about overallocation in a
minute.) Wherever elements are added to the array, replace the
check against the fixed array size with code which checks the
dynamic size and reallocates a larger array if necessary.
Notice, if you haven't already, the self-starting nature of the
realloc code, and the use it makes of an apparent wart in the
specification of realloc: when realloc's first argument is NULL,
it essentially acts like malloc. Therefore we don't need a
clumsy special case like
	if(array == NULL)
		array = (int *)malloc(nalloc * sizeof(int));
	else	array = (int *)realloc((char *)array, nalloc * sizeof(int));
to handle the initial allocation. (Note that this wrinkle in
realloc has only become official with the ANSI C standard.
Under some pre-ANSI compilers, the special case and initial
explicit malloc were typically required.)
With one exception, no appearances of the global array (which is
now actually a pointer) in the rest of the code have to be
changed, because pointers can be subscripted the same as arrays
(more properly, all arrays are subscripted as pointers). I'll
get to the exception in a minute.
Suppose you have some code with a fixed-size array, which you
wish to modify via the above transformation. Here are two little
difficulties you might encounter:
1. The code didn't check for array overflow, making it
harder to replace those checks with the realloc code.
That's too bad, but the code needed fixing, at least to add those
checks, anyway. Fixed-size arrays are acceptable only if
overflow is checked for while filling them. Otherwise, they're
time bombs.
2. Insertions into the array are scattered all through the
code, rather than being centralized in one "insert" or
"add" subroutine as in the example above. I don't feel
like duplicating the realloc code all over the place.
Again, that's too bad, but it shouldn't be too difficult to
define the subroutine, do the reallocation there, and replace the
scattered explicit insertions with calls to the new routine.
It's easy to find all of the places in the code which might need
adjusting. Just grep for the name of the array and the name of
the counter variable (nelts in this example).
One thing to beware of when using realloc is that the array can
(and usually will) move around in memory, invalidating any other
pointers to elements within it. (This is the "one exception" I
alluded to above.) For example, if you had something like
(continuing from the code above)
	int *special;

	install(12);
	install(34);
	install(56);
	special = &array[nelts - 1];
	install(78);
the "special" pointer is not necessarily pointing to the third
element any more, after the fourth call to install.
Whenever there are pointers into an array or buffer, those
pointers must be relocated if the array or buffer is ever
reallocated. The easiest way is with offsets:
	int offset_special = special - array;

	array = (int *)realloc((char *)array, nalloc * sizeof(int));
	if(array == NULL)
		...
	special = array + offset_special;
(Assuming you're using ANSI C, a better type for the
offset_special variable would be ptrdiff_t, the type that pointer
subtraction actually yields.)
If your code is nice and modular, you may find it uncomfortable
to adjust some pointers, which are dealt with and really belong
exclusively in some other part of the code, in the central
reallocation routine. In this case, why not keep them as indices
(or, if you prefer the term, offsets, i.e. just like the
temporary offset_special computed above) rather than pointers?
(The answer from the four winds will be "because pointers are
more efficient than indexed subscripting," but wait a minute.
For one thing, under some architectures, that's not true, and in
any case the special pointers which are kept into an array which
is still growing are typically not the scan-over-the-whole-array-
in-a-big-hurry pointers that need optimizing in the first place.)
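As a sketch of that index-based alternative (the names mark_special
and get_special are mine, for illustration; install is the routine
from earlier), an index needs no fixup when the array moves, and is
turned into an element reference only at the point of use:

```c
#include <stdio.h>
#include <stdlib.h>

int *array = NULL;
int nalloc = 0;
int nelts = 0;
int ispecial = -1;	/* index of the "special" element; -1 means none */

void install(int x)
{
	if(nelts >= nalloc)
	{
		nalloc += 10;
		array = realloc(array, nalloc * sizeof(int));
		if(array == NULL)
		{
			fprintf(stderr, "out of memory with %d elements\n",
				nelts);
			exit(1);
		}
	}
	array[nelts++] = x;
}

void mark_special(void)
{
	ispecial = nelts - 1;	/* an index survives any later realloc */
}

int get_special(void)
{
	return array[ispecial];	/* subscript only at point of use */
}
```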
Finally, we come to the topic of the incremental growth strategy,
which several people were flaming about earlier. My code
typically uses the kindergarten-style arithmetic iteration
illustrated in the example above:
	if(nelts >= nalloc)
	{
		nalloc += 10;
		...
If I am adding several elements at once, the code is slightly
different:
	if(nelts + nnew > nalloc)
	{
		nalloc = nelts + nnew + 10;
		...
or maybe
	if(nelts + nnew > nalloc)
	{
		nalloc += 10;
		if(nelts + nnew > nalloc)
			nalloc = nelts + nnew;
		...
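Folded into a routine for adding a whole batch at once (a sketch;
the name install_many is mine, and it uses the last variant above,
so that one jump covers the entire batch):

```c
#include <stdio.h>
#include <stdlib.h>

int *array = NULL;
int nalloc = 0;
int nelts = 0;

void install_many(const int *xs, int nnew)
{
	if(nelts + nnew > nalloc)
	{
		nalloc += 10;
		if(nelts + nnew > nalloc)
			nalloc = nelts + nnew;	/* one jump covers the batch */
		array = realloc(array, nalloc * sizeof(int));
		if(array == NULL)
		{
			fprintf(stderr, "out of memory with %d elements\n",
				nelts);
			exit(1);
		}
	}

	while(nnew-- > 0)
		array[nelts++] = *xs++;
}
```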
Many people will tell you that all of these are O(n**2) and
therefore too inefficient. They will say that you should use
nalloc *= 2 instead.
It's true that adding some number (rather than multiplying by
some number, i.e. an arithmetic rather than geometric
progression) is O(n**2). I can say, though, that I've never
found the arithmetic progression to be a problem, perhaps because
n is "always" small, or (more likely) because 65536/10 (or the
equivalent for a given memory size and slop factor) is never very
many iterations. (Often the I/O involved filling the array, or
other manipulations upon the data, swamp the allocation time
anyway.)
But why avoid geometric progression? One problem is that
nalloc *= 2;
won't work; it's not self-starting. You need something like
	if(nalloc == 0)
		nalloc = 10;
	else	nalloc *= 2;
(which is ugly, and we're back to a first-time-through special
case), or
nalloc = nalloc * 2 + 10;
which is probably a little better (although it's still weird-
enough looking that it may deserve a comment explaining it).
The bigger problem is that geometric progressions can seem to run
out of memory far too soon, especially if you don't have virtual
memory. If you've got, say, 40K available, and nalloc has grown
to the point where you're using 32K, and you need one element
more, doubling nalloc will cause you to ask for 64K, which you
won't get. So you have to put in a back-off strategy:
	int oldnalloc = nalloc;
	int *newarray;

	nalloc = nalloc * 2 + 10;
	newarray = (int *)realloc((char *)array, nalloc * sizeof(int));
	if(newarray == NULL)
	{
		nalloc = oldnalloc + 10;
		newarray = (int *)realloc((char *)array,
						nalloc * sizeof(int));
		if(newarray == NULL)
		{
			fprintf(stderr, "out of memory with %d elements\n",
				nelts);
			exit(1);
		}
	}
	array = newarray;
	...
Suddenly things have gotten pretty complicated, with messy-
looking extra tests and temporary variables, all of which have to
be thought about carefully, and documented. There are several
things one could do to "improve" this last algorithm, such as
trying to keep it from being strictly arithmetic after the
geometric progression overflows, but any such "improvements"
would be further complications, and I'll not digress.
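For the record, here is the back-off strategy folded back into a
complete install() (the same logic as the fragment above, in
ANSI style, with nothing new added):

```c
#include <stdio.h>
#include <stdlib.h>

int *array = NULL;
int nalloc = 0;
int nelts = 0;

void install(int x)
{
	if(nelts >= nalloc)
	{
		int oldnalloc = nalloc;
		int *newarray;

		nalloc = nalloc * 2 + 10;	/* geometric, self-starting */
		newarray = realloc(array, nalloc * sizeof(int));
		if(newarray == NULL)
		{
			/* back off to a small arithmetic step;
			 * requires that the failed realloc left the
			 * old block intact (guaranteed by ANSI) */
			nalloc = oldnalloc + 10;
			newarray = realloc(array, nalloc * sizeof(int));
			if(newarray == NULL)
			{
				fprintf(stderr,
					"out of memory with %d elements\n",
					nelts);
				exit(1);
			}
		}
		array = newarray;
	}

	array[nelts++] = x;
}
```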
(Note, too, that any back-off strategy requires that the first,
failing realloc not corrupt the old block. As has been mentioned
here already, ANSI X3.159 guarantees that realloc fails
gracefully, while many earlier systems did not. Indeed, I find
the line "When realloc returns 0, the block pointed to by ptr may
be destroyed" in the BUGS section of my old V7 manual, the
current 4.3 manual, and the Tenth Edition Research System manuals
I just found in a bookstore yesterday. This sounds to me like an
egregious, unacceptable failure mode. Presumably it's still
around because it's hard to fix. Can someone explain why it's
impossible for these old allocators to guarantee that a block
won't be corrupted during an unsuccessful enlargement attempt?)
To summarize: dynamic allocation of arrays is easy, except for
badly-written programs which need fixing anyway. Unless you're
writing really vigorous production code, where you know that n
will always be large, consider using a simple arithmetic
progression. (If a back-off strategy is too hard to get right,
I'd rather that the code perform slowly for huge n than that it
fail with half of its available memory still unused.)
Steve Summit
scs at adam.mit.edu