large memory model support
MS W420
chen at MITRE-GATEWAY.ARPA
Wed Sep 4 08:04:52 AEST 1985
Hah, hah, har, har, snicker, tee hee, guffaw...
(Sorry, but I couldn't help myself. I've been working
with the 80188's younger brother, the 8088, myself, although bashing
my head against it might be more correct. Anyway...)
Personally, I know of one compiler that generates better
code for the 80188/6 than for the 8088/6 and supports
the large memory model. It's the CI-C86 C compiler
by Computer Innovations. They also sell something called
ROMPAK for ROMing your code. However, your friend left
out one crucial detail -- namely what operating system
the code will run under (if any). CI-C86 will run
under MS-DOS and CP/M. I don't know if they support
anything else.
If the CI compiler doesn't work out, I hear that Lattice
is also a good compiler. Myself, I prefer the CI compiler
as it's a 4-pass compiler (including an optimizer), includes
all sorts of hardware fp support; supports the large, small,
medium (just out), and compact memory models; AND because they give
you the source code to their libraries. The only gripes I have with the
CI compiler are that it doesn't know about structure assignments yet
and the way they handle "extern". They handle it like the
System V Release 1 compiler did. (I'm going to bitch about
both of these to CI.)
If neither compiler works out, you might want to look
into Whitesmiths' C compiler.
Now, a general note about memory models. You'll be able to
find a compiler that lets you have > 64K code and > 64K data.
However, there are going to be certain restrictions that are
possible to get around when programming in assembler, but
are very difficult for a compiler.
Namely, you're going to have 64K restrictions on a lot
of objects. Basically, if the compiler would have to know
when to change the value of a segment register when
accessing two pieces within the same object, forget it.
For example, chances are, you won't be able to have static
arrays > 64K. If you did, this would mean that the compiler
would have to break the array up across two or more segments,
make sure the linker loaded the segments contiguously (to
make array indexing and pointer arithmetic a reasonable
proposition) and check for overflows on indexing (for wrap-
around if the index variable is 16 bit (int))
or do boundary checking on every random access into the array
(if the index variable is 32 bit (long) in order to
keep track of when to change the segment register.
If you don't make the linker load segments contiguously,
then you'd have to keep some sort of segment map indicating
which segments held which parts of what array and index into
that when you had to change the segment register. Plus, values in
the table would probably have to be addresses relative to the
address where the program was loaded. And on top of all this,
remember, this isn't a 68K or a VAX where you've got a lot
of registers to play with. Ugh.
You run into the same problem with pieces
of dynamically allocated memory > 64K, code modules > 64K,
stack sizes of > 64K, etc. It's possible to write a compiler
that could handle this sort of stuff, but it'd be a real
pain and the resulting code would have all sorts of special-case
checks that would slow it down a *lot*.
Large model code is going to be slower anyway. First of all,
all your procedure calls are going to be far calls instead of
near calls. Second, all your pointers are going to be 32-bit
instead of 16-bit, since each pointer now needs both an offset
and a segment value. Also, watch out for pointer arithmetic.
If your pointers aren't pointing to things in the same segment,
the resulting value is liable to be implementation-dependent.
If you decide to go with CI's compiler, their phone number is
201-542-5920.
Note that I have no connection with Computer Innovations
or Intel except as a customer.
Good luck...
Ray Chen
chen at mitre-gw
"This message brought to you courtesy of Intel -- maker
of the world's finest 16 bit elevator controllers..."