VM allocation question
Greg Ullman
gregu at pyrtech
Sat Jul 22 05:16:02 AEST 1989
In article <KARL.89Jul21083407 at cheops.cis.ohio-state.edu> karl at cheops.cis.ohio-state.edu (Karl Kleinpaste) writes:
>I have a user who has a program with these stats, as reported by
>size(1):
>
>   text  data      bss      dec     hex
>  28672  4096  1318008  1350776  149c78
>
>The machine on which he's trying to run it is a 9825 running 4.4c with
>a single standard 30Mb swap partition. pstat -s reports, e.g.,
>
>18312k used (4904k text, 0k shm), 11576k free, 4766k wasted, 0k missing
>avail: 4*512k 9*256k 27*128k 29*64k 27*32k 32*16k 536*2k
>
>but when he runs the program, he immediately gets a complaint, "a.out:
>Not enough memory." I have been looking for possible causes for this,
>since the amount of space available is really fairly substantial
>(considerably more than he will need). I am wondering if the problem
>is due to his BSS section being so large, in combination with pstat
>reporting that the largest available single chunk is only half that
>amount. Am I on the right track? Is the problem due to badly
>fragmented space? Or should I be looking in some other dark corner?
>
>--Karl
Check the value of dmmax and dmmin in /sys/conf/param.c. If dmmax is
set to 512, then the execv call to start the program failed because a
1024k chunk of swap was unavailable. Assuming dmmin=8, the order of
allocation of swap for a program this size would be:
Block   Current      Total
Number  Allocation   Allocated
------  ----------   ---------
   1        16k          16k
   2        32k          48k
   3        64k         112k
   4       128k        240k
   5       256k        496k
   6       512k       1008k
At this point, another block still needs to be allocated, since the total
size is still less than the ~1300k program size. If dmmax=512, then
the next block requested will be a block of size 1024k. From the pstat
output we see that no 1024k block is available, so the execv dies.
However, if dmmax=256, then the next block requested would be one of
size 512k. Since there are some of these available, I would be at a loss
to explain the problem.
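The doubling scheme above can be sketched in a few lines of Python. One
assumption here (not stated in the post, but consistent with pstat
reporting swap in 2k chunks): dmmin and dmmax count 2k swap units.

```python
# Sketch of the 4.xBSD swap-map doubling scheme described above.
# Assumption: dmmin/dmmax count 2k swap units, matching the 2k chunks
# that pstat -s reports on this machine.
UNIT_K = 2

def swap_blocks(size_k, dmmin, dmmax):
    """Return the block sizes (in kbytes) allocated for a size_k-kbyte segment."""
    blocks, total, cur = [], 0, dmmin
    while total < size_k:
        blocks.append(cur * UNIT_K)
        total += cur * UNIT_K
        if cur < dmmax:
            cur *= 2  # each successive block doubles, capped at dmmax units
    return blocks

print(swap_blocks(1300, 8, 512))  # [16, 32, 64, 128, 256, 512, 1024]
print(swap_blocks(1300, 8, 256))  # [16, 32, 64, 128, 256, 512, 512]
```

With dmmax=512 the seventh block is 1024k, which the pstat output cannot
supply; with dmmax=256 it is a second 512k block, which is available.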
If dmmax=512, then the problem is indeed fragmentation. Setting
dmmax=256 and rebuilding the kernel would probably fix the problem, but
would cut the maximum process size down to about 16MB.
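The roughly-16MB cap can be estimated the same way: each segment's swap
is described by a fixed-length disk map, so the maximum segment size is
the sum of all its entries. A sketch, where the map length (NDMAP=32)
and the 2k unit size are my assumptions, not figures from the post:

```python
# Rough cap on a segment's size under the doubling scheme, assuming a
# fixed-length per-segment disk map of NDMAP entries and 2k swap units.
# NDMAP=32 is an assumed value, not taken from the post.
def max_segment_k(dmmin, dmmax, ndmap=32, unit_k=2):
    total_units, cur = 0, dmmin
    for _ in range(ndmap):
        total_units += cur
        if cur < dmmax:
            cur *= 2  # entries double until they reach dmmax
    return total_units * unit_k

print(max_segment_k(8, 512))  # 27632 (kbytes, roughly 27MB)
print(max_segment_k(8, 256))  # 14320 (kbytes, near the ~16MB quoted above)
```

Halving dmmax roughly halves the total the map can describe, which is
why the fragmentation fix trades away maximum process size.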
-Greg