Unix/C program modularity

Doug Gwyn <gwyn> gwyn at brl-tgr.ARPA
Fri Oct 18 03:05:46 AEST 1985

> My observations are based on inspection of graphics applications (which
> Dicomed is in the business of producing) which tend to be predominantly
> user-interface stuff.  I am specifically NOT commenting on code used to
> support program development, operating systems, and tools, but rather
> applications programs that are used in a graphics production environment.

UNIX code comes in several flavors.  I am not familiar with Dicomed's
software design and coding practice.  Maybe it's just not very good?

> Why might this be the case ??  Further inspection of much code shows that
> applications designed for the Unix environment tend to follow the spirit
> of the Unix operating system: design your system as a series of small
> programs and "pipe" them together (revelationary!)

Exactly!  Re-usability is obtained at a higher level: the process.
Many new applications should be produced by combining existing tools
rather than by writing code in the traditional sense.  This works
especially well when one is trying to support a wide and growing
variety of graphic devices.

> As a result of this philosophy to design systems as a network of filters
> piped together:
> 	o Much of the bulk of the code is involved in argument parsing,
> 	  most of which is not re-usable.

The shell is eminently reusable.  Within each process, argument
processing should be done via getopt(), in the standard library.
Beyond that, obviously different processes are going to have
different specific requirements.

> 	o Error handling is minimal at best.  When your only link to the
> 	  outside world is a pipe, your only recourse when an error
> 	  occurs is to break the pipe.

If a subordinate module is not able to perform its assigned task,
it should so indicate to its controlling module.  Error recovery
is best performed at the higher strategic levels.  UNIX processes
indeed do have a simple means of returning error status to their
parents: the exit status.

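The mechanism can be sketched in a few lines of C: the subordinate
process exits nonzero, and the controlling process collects that
status with waitpid().  The function name is invented for
illustration.

```c
#include <sys/types.h>
#include <sys/wait.h>
#include <unistd.h>

/* Fork a child that fails, and return the status its parent sees. */
int collect_child_status(void)
{
    pid_t pid = fork();
    if (pid == 0)
        _exit(1);               /* child: nonzero status = "task failed" */
    int status;
    waitpid(pid, &status, 0);   /* parent: collect the report */
    return WIFEXITED(status) ? WEXITSTATUS(status) : -1;
}
```

The parent is free to retry, substitute, or abandon the task -- which
is exactly recovery at the higher strategic level.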
> 	o Programs do not tend to be organized around a package concept,
> 	  such as one sees in Ada or Modula-2 programs.  The programs are
> 	  small, so data abstraction and hiding seem inappropriate.  Also
> 	  the C language support for these concepts is cumbersome, forcing
> 	  the programmer to use clumsy mechanisms such as ".h" files and
> 	  "static" variables to accomplish packaging tasks.

There is no need to emulate Ada packages if module interfaces are
clean and well-defined.  The UNIX process interface usually is.
Within processes, the facilities C provides are generally adequate,
although some prefer to spiffy up intra-process module design via
"classes", "Objective-C", "C++", or some other preprocessing scheme.
We have not felt much need for this in our UNIX graphics work.
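
Those "clumsy" mechanisms do the packaging job.  A sketch of a tiny C
module: the file-scope static hides the representation, only the
functions form the interface, and their declarations would live in a
header.  The counter names are invented for illustration.

```c
/* counter.c -- a miniature "package" in plain C.  Nothing outside
   this file can touch count directly; the three functions below are
   the whole interface (declared in a hypothetical counter.h). */

static int count;                       /* private state */

void counter_reset(void) { count = 0; }
void counter_bump(void)  { count++; }
int  counter_value(void) { return count; }
```

One file per abstraction, statics for hidden state, a header for the
interface: that is data hiding, without any new language machinery.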

> 	o Programmers invent "homebrew" data access mechanisms to supplement
> 	  the lack of a standard Unix ISAM or other file management.  Much
> 	  of this code cannot be re-used because the programmer implemented
> 	  a primitive system to satisfy the needs of this one filter.

It is relatively rare that UNIX applications have to be concerned
with detailed file access mechanisms.  There is as yet no standard
UNIX DBMS, so portable UNIX applications have to either work without
one or provide their own.  Most graphics applications do not need the
complexity of a DBMS, but can work with simple data formats.
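
As a sketch of such a simple format, here is a reader for flat "x y"
coordinate pairs in text form -- often all the file management a
graphics filter needs.  The format and function are hypothetical
examples, not any standard.

```c
#include <stdio.h>

/* Read up to max coordinate pairs from fp;
   returns the number of pairs actually read. */
int read_points(FILE *fp, double *xs, double *ys, int max)
{
    int n = 0;
    while (n < max && fscanf(fp, "%lf %lf", &xs[n], &ys[n]) == 2)
        n++;
    return n;
}
```

A format this simple is also what lets separate filters share data
through a pipe: any tool that writes whitespace-separated numbers can
feed any tool that reads them.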

> Despite all this, the graphics community is settling in on using Unix as
> the operating system of choice.

That's because it supports rapid development of good, flexible systems
that can be ported widely with little additional expense.

> Are we being lulled into using an O/S and language that allows us to whip
> together quicky demos to demonstrate concepts, at the expense of long-term
> usefulness as a finished product ??

You should make your own decisions.
Do you have a better approach to suggest?

More information about the Comp.lang.c mailing list