increasing 4.2 process memory > 6meg?
Mike Muuss
mike at brl
Tue Apr 30 11:24:32 AEST 1985
Here is all the information I have on this topic.
Most of this came from Utah.
-Mike
- - - - - - - - - - - -
INCREASING DATA/STACK MAXIMUM SIZES
The following sets of numbers illustrate ways to increase the virtual
memory limits for data and stack segments. There are two things
which must be changed in order to do this. The resource limit MAXDSIZ
must be raised to accommodate the largest data segment size desired
(although the soft limit need not be raised to the same value). Currently,
the hard stack limit is set to MAXDSIZ as well. Also, the swap map per
segment, as defined in dmap.h, must be made large enough to map the
maximum size data or stack segment (there is one map for each). This
is accomplished by changing the number or sizes of the sections of this
map, NDMAP and/or DMMAX.
max size,    MAXDSIZ              NDMAP   DMMIN   DMMAX   bytes
   Mb        (pages/blocks)                               added to u.

 6 (default) (12*1024-32-SLOP)     16      32      1024     -
10*          (20*1024-32-SLOP)     24      32      1024    64
11           (22*1024-32-SLOP)     16      32      2048     0
12           (24*1024-32-SLOP)     16      64      2048     0
16           (32*1024-32-SLOP)     36      32      1024   160
16*          (32*1024-32-SLOP)     21      32      2048    40
20           (40*1024-32-SLOP)     44      32      1024   224
20*          (40*1024-32-SLOP)     24      32      2048    40
20*          (40*1024-32-SLOP)     16      32      4096     0
* recommended configurations
parameter locations:
    MAXDSIZ         vax/vmparam.h
    NDMAP           h/dmap.h
    DMMAX, DMMIN    vax/autoconfig.c
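For example, the recommended 10 Mb configuration amounts to two one-line
changes (DMMIN and DMMAX keep their default values of 32 and 1024 for that
row). The fragments below are only a sketch of the values involved; the
exact wording of the existing definitions in your source tree may differ.

    /* vax/vmparam.h -- raise the hard data (and stack) size limit;
       units are 512-byte pages, as in the table above. */
    #define MAXDSIZ     (20*1024-32-SLOP)   /* was (12*1024-32-SLOP) */

    /* h/dmap.h -- enlarge the per-segment swap map to cover it. */
    #define NDMAP       24                  /* was 16 */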
Map size calculation:
The map contains blocks of size (in pages) DMMIN, DMMIN*2, DMMIN*4, ...
to size DMMAX, then the remaining blocks (up to NDMAP) are of size DMMAX.
For example, the current sizes give blocks of sizes 32 64 128 256 512 11*1024.
The size mapped by a dmap can be determined by counting, or calculated by
size = (DMMAX - DMMIN) + (NDMAP - (log2(DMMAX) - log2(DMMIN))) * DMMAX.
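As a sanity check, the formula can be evaluated for a few of the table rows.
The small C program below (an illustration, not part of the original notes)
prints the number of 512-byte pages each configuration's swap map covers;
in each case it comes out just below the nominal maximum size, as it should.

    #include <stdio.h>

    /* log2 of a power of two */
    static int
    lg(int x)
    {
        int n = 0;

        while (x > 1) {
            x >>= 1;
            n++;
        }
        return n;
    }

    /* The closed-form map size from above, in 512-byte pages. */
    static long
    mapsize(int ndmap, int dmmin, int dmmax)
    {
        return (long)(dmmax - dmmin) +
            (long)(ndmap - (lg(dmmax) - lg(dmmin))) * dmmax;
    }

    int
    main(void)
    {
        /* A few rows from the table above: NDMAP, DMMIN, DMMAX, nominal Mb. */
        static const struct { int ndmap, dmmin, dmmax, mb; } rows[] = {
            { 16, 32, 1024,  6 },   /* default      -> 12256 pages,  5.98 Mb */
            { 24, 32, 1024, 10 },   /* recommended  -> 20448 pages,  9.98 Mb */
            { 21, 32, 2048, 16 },   /* recommended  -> 32736 pages, 15.98 Mb */
            { 16, 32, 4096, 20 },   /* larger DMMAX -> 40928 pages, 19.98 Mb */
        };
        int i;

        for (i = 0; i < 4; i++) {
            long pages = mapsize(rows[i].ndmap, rows[i].dmmin, rows[i].dmmax);

            printf("%2d Mb row: map covers %5ld pages (%.2f Mb)\n",
                rows[i].mb, pages, (double)pages / 2048);
        }
        return 0;
    }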
Considerations:
1. Larger NDMAP increases size of user struct, possibly
leaving insufficient room for kernel stack. Could increase UPAGES.
Note: most 4.2 systems have a bug in locore.s (versions with sccsid below
6.4): Fastreclaim has UPAGES wired in as a constant. This must be fixed
before changing UPAGES:
about line 1160, change
        subl3   P_SSIZE(r5),$0x3ffff8,r0
to
        subl3   P_SSIZE(r5),$(0x400000-UPAGES),r0
and
        subl2   $(0x3ffff8+UPAGES),r4
to
        subl2   $0x400000,r4
2. Larger DMMAX means larger chunks of swap (interleaving is done
in DMMAX-sized sections), and thus more fragmentation loss. This requires
a more than proportional increase in the swap area (for 20Mb processes
and a DMMAX of 4096, i.e. 2Mb sections, at least three 32-Mb swap partitions
should be provided). Of course, any increase in process size makes
greater demands on swap space.
3. Changes to NDMAP and/or UPAGES require recompiling ps, pstat, adb,
dbx, etc.
4. I'm not sure that increasing DMMIN works, at least without increasing
SLOP.
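One more practical point: MAXDSIZ governs the hard limit, and as noted above
the soft limit need not be raised to the same value. If the soft data limit
at your site stays below the new hard limit, a process that wants the larger
data segment has to raise its own soft limit first, either with csh's limit
built-in (limit datasize ...) or with the 4.2 getrlimit/setrlimit calls,
roughly as in this sketch:

    #include <sys/time.h>       /* needed before <sys/resource.h> on 4.2 */
    #include <sys/resource.h>
    #include <stdio.h>

    int
    main(void)
    {
        struct rlimit rl;

        /* Fetch the current data-segment limits... */
        if (getrlimit(RLIMIT_DATA, &rl) < 0) {
            perror("getrlimit");
            return 1;
        }

        /* ...and raise the soft limit up to the hard limit (MAXDSIZ). */
        rl.rlim_cur = rl.rlim_max;
        if (setrlimit(RLIMIT_DATA, &rl) < 0) {
            perror("setrlimit");
            return 1;
        }

        /* From here on, sbrk()/malloc() may grow the data segment up to
           the new hard limit. */
        return 0;
    }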
From Mike.Accetta at cmu-cs-ius Sat Jan 14 11:54:55 1984
Date: Thursday, 12 January 1984 11:28:09 EST
From: Mike.Accetta at cmu-cs-ius
To: ihnp4!houxm!houxf!ho95b!wcs at ucb-vax
Cc: info-unix at brl-vgr
Subject: Re: How can I get bigger processes on 4.1BSD ?
Status: R
Bill,
What panic message are you getting? You probably also have to change
the definition of NDMAP in h/dmap.h. The setup document mentions this
file but neglects to describe what constants to change. We were able
to use 12Mb data segment sizes after doubling this constant from 16 to
32.
To explain, the paging system allocates chunks of paging space for the
data segment geometrically beginning with a size of DMMIN up to a
maximum of DMMAX. It stores the pointers to the beginning of each of
these chunks in the dm_map array. Paging area memory for a large
process would thus get allocated something like this:
Chunk   Size (sectors)
  0        32   \
  1        64   |
  2       128   |  < .5 Mb
  3       256   |
  4       512   /
  5      1024   \
  6      1024   |
  7      1024   |
  8      1024   |
  9      1024   |
 10      1024   |  5.5 Mb
 11      1024   |
 12      1024   |
 13      1024   |
 14      1024   |
 15      1024   /
As you can see, this causes the dm_map array to run out of room
slightly before the process size can reach 6Mb. By adding another 16
elements to the end of the array, you gain 16*.5Mb = 8Mb more
process address space, for a maximum of slightly under 14Mb (actually
the minimum amount by which you need to increase NDMAP is more like 12
or 13 for 12Mb data segments, depending on how you define MAXDSIZ).
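To see the same arithmetic mechanically, the following small C sketch (an
illustration, not code from this mail) enumerates the chunks the way the
paging system allocates them and prints how much paging space various NDMAP
values cover at the default DMMIN/DMMAX of 32/1024. Sixteen entries cover
just under 6Mb, 28 just under 12Mb (enough for a 12Mb data segment defined
in the default style, with the 32-page and SLOP deductions), 29 a bit more
(needed if MAXDSIZ is defined as a full 12Mb), and 32 just under 14Mb.

    #include <stdio.h>

    #define DMMIN   32      /* smallest chunk, in 512-byte sectors */
    #define DMMAX   1024    /* largest chunk */

    /* Sectors covered by an ndmap-entry dm_map: chunk sizes double from
       DMMIN up to DMMAX, after which every remaining entry maps DMMAX. */
    static long
    covered(int ndmap)
    {
        long total = 0;
        int size = DMMIN, i;

        for (i = 0; i < ndmap; i++) {
            total += size;
            if (size < DMMAX)
                size *= 2;
        }
        return total;
    }

    int
    main(void)
    {
        static const int sizes[] = { 16, 28, 29, 32 };
        int i;

        for (i = 0; i < 4; i++) {
            long s = covered(sizes[i]);

            printf("NDMAP %2d covers %5ld sectors = %5.2f Mb\n",
                sizes[i], s, (double)s * 512 / (1024 * 1024));
        }
        return 0;
    }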
When you change this constant it is also advisable to recompile the
various user programs (w, ps and pstat are the ones that come to mind
immediately) which include this file for examining the user area of a
process and for grabbing the command line arguments from the paging
area when a process is not resident.
- Mike Accetta
From thomas Sat Jan 14 21:54:44 1984
Received: by utah-cs.ARPA (4.19/3.33.3)
id AA03226; Sat, 14 Jan 84 21:54:40 mst
Date: Sat, 14 Jan 84 21:54:40 mst
From: thomas (Spencer Thomas)
Message-Id: <8401150454.AA03226 at utah-cs.ARPA>
To: lepreau, vax-sys
Subject: Increased process sizes
Status: R
I am increasing the maximum data space for a process from 6MB (approx) to
11MB (approx). This seems to be the easiest alternative size (it has the
fewest side effects). No programs should need to be recompiled for
this change; the only effect is to make fragmentation of swap space a
little worse for processes bigger than 1MB (it allocates 1MB chunks
instead of 512KB chunks for large processes).
If it weren't for the fact that CS is short of swap space, I would
increase it to 2MB chunks, giving a total process size of 20Mb, but the
recommended amount of swap in this case is 96Mb, about twice what CS has
(and, indeed, more than GR or UG have, too).
=S