Assembly or ....
Doug Gwyn
gwyn at smoke.BRL.MIL
Thu Dec 1 04:25:20 AEST 1988
In article <189 at ernie.NECAM.COM> koll at ernie.NECAM.COM (Michael Goldman) writes:
>When you are working on drivers or hardware specific items, C (or PASCAL)
>makes assumptions about hardware that aren't true, and to do the job in C
>requires using such convoluted, obscure code that maintainability and
>productivity are gone anyway. This is just as true for non-time-critical
>things such as keyboard handlers as for time-critical things
>such as communications handling routines.
I have to strongly disagree with this. UNIX's device drivers have been
written in C for many years, and I regularly use C to do low-level I/O
(direct device register access) on my Apple II at home. Speed is close
to that attainable via assembler, and maintainability is much higher.
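The code involved is neither convoluted nor obscure, either. Something
like the following is typical (a sketch only; the register addresses and
status bit are invented for illustration, not taken from any real machine):

    /* Hypothetical memory-mapped serial port.  The register
       addresses and the TX_READY bit are made up for this example. */
    #define SIO_DATA    ((volatile unsigned char *)0xC0F0)
    #define SIO_STATUS  ((volatile unsigned char *)0xC0F1)
    #define TX_READY    0x01

    /* Busy-wait until the transmitter is free, then send one byte. */
    void sio_putc(unsigned char c)
    {
        while ((*SIO_STATUS & TX_READY) == 0)
            ;               /* spin until the device can take data */
        *SIO_DATA = c;
    }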
The main reason for using assembler in a predominantly C application
is to implement low-level support for features needed at high level,
for example coroutining or parallel tasking, which C does not directly
support. The rest of the assembler support you need should already
exist, in the form of the C run-time library and compiler code generator.
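For coroutining, for example, the C side of the support typically amounts
to nothing more than a declaration of the assembly routine; everything
above that level stays in C. A sketch, with invented names:

    /* ctx_switch() would be a few lines of assembler: push the
       callee-saved registers, store the stack pointer through *from,
       load the stack pointer from *to, and pop the registers back. */
    extern void ctx_switch(char **from, char **to);

    char *main_sp;          /* saved stack context of the main task  */
    char *task_sp;          /* saved stack context of the other task */

    void yield_to_task(void)
    {
        ctx_switch(&main_sp, &task_sp);     /* resume the other task */
    }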
The only parts of the UNIX kernel normally coded in assembler (other than
C language run-time support) are the hooks into the interrupt system, a
bit of code to get the C environment properly initialized, and a few
support routines to diddle the memory management unit, perform block
moves, etc. (The block-move can be thought of as a specialized form of
the C library memcpy(), so it really should've already existed.) All the
disk buffer management, terminal queues, scheduler, network protocols,
etc. are coded in C.
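To show how little is machine-specific even there, the block move can be
written in portable C in a few lines (a sketch; the name is invented, and
a tuned version would move machine words where alignment permits):

    /* Byte-at-a-time block move, equivalent in effect to the C
       library memcpy() for non-overlapping regions. */
    void *blkmove(void *dst, const void *src, unsigned long n)
    {
        char *d = dst;
        const char *s = src;

        while (n-- > 0)
            *d++ = *s++;
        return dst;
    }

The only reason to recode such a routine in assembler is speed on a
particular machine; the interface, and the code that calls it, never change.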
One of the reasons UNIX has spread as widely as it has is that, having
been coded primarily in (fairly) portable C, it can be brought up on a new
architecture much more quickly than a similar assembly-coded operating
system. There are several computer manufacturers who owe whatever
competitive advantage they may have to the speed with which they were
able to provide a fully functioning and useful operating system for their
new hardware.
>There is also the question of what happens when a new machine (like the
>IBM PC or MAC, or whatever) comes out and the C compilers for it are
>late or terribly buggy, or soooooooooo slow, and there are few if
>any utility packages for it ? Users are used to (and should have !)
>crisp, snappy feedback for window manipulation and I/O and it takes
>years to get the utility packages (often written in ASM) that will
>do that for you.
It takes no time at all to get a C version of such packages for a new
system once you have one for an older system. Is that quick enough?
These days, C is usually the first language supported for a new processor
architecture. Since most development of C applications can be done on
existing systems, there is less need for an early C implementation than
there would be for an assembler for a new machine. Even in the
assembler's case, new architectures these days are normally simulated
during the design stage, and assemblers, compilers, etc. are developed
early using the simulator, sometimes before the hardware even exists.
>Only in the academic world can code be written to be 100% machine
>independent. The rest of us have to struggle with those little quirks
>of the chips.
I'm about as production-oriented as any software developer I know of,
and maximal portability is an important design and coding goal for the
applications I develop. I avoid "quirks of the chips" almost entirely.
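The usual technique, sketched below with invented names, is to confine
whatever machine dependences remain to one small header or module, so
that the bulk of the code never sees them:

    /* machdep.h -- a hypothetical example of fencing off machine
       dependences; the names here are invented for illustration. */
    #ifndef MACHDEP_H
    #define MACHDEP_H

    #ifdef pdp11                    /* 16-bit ints on this machine  */
    typedef long    int32;
    #else                           /* assume int holds 32 bits     */
    typedef int     int32;
    #endif

    /* May be implemented in C or in assembler; callers can't tell. */
    extern void *blkmove(void *dst, const void *src, unsigned long n);

    #endif /* MACHDEP_H */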