UNIX-WIZARDS Digest V5#066
Mike Muuss
Unix-Wizards-Request at arpa.brl
Sun Jun 12 17:45:26 AEST 1988
UNIX-WIZARDS Digest Sun, 12 Jun 1988 V5#066
Today's Topics:
Re: Vax 11/780 performance vs Sun 4/280 performance
Smaller is better (Re: OSF (2) Why is it better than AT&T?)
Re: File System Type (statfs, sysfs)
Re: ksh incompatabilities with sh?
Re: redirection before wildcards
Re: Yet Another OSF Question (YAOQ)
Re: ksh weird files
Re: back to the (ivory) tower
COBOL-to-C translator wanted
Re: Stdio buffering question
Source License
Re: How do I use ksh TMOUT on 5.2
Re: Getting the OSF address
Re: OSF: A Desperation Move?
Re: alloca... (and setjmp/longjmp coroutines)
Re: grep replacement
Re: yacc and lex tutorials
Re: phys(2) under sVr3?
Re: Sun 4
alloca
Stupid Mistake!
I need to RENT COMPUTER TIME from someone!
SVVS user/system time tests
Re: SVVS user/system time tests
Re: In defense of BSD (was: something else)
-----------------------------------------------------------------
From: Greg Franks <greg at xios.xios.uucp>
Subject: Re: Vax 11/780 performance vs Sun 4/280 performance
Date: 6 Jun 88 15:28:55 GMT
To: unix-wizards at brl-sem.arpa
>I also tried forking 32 `for(;;) ;' loops on a 3/60 with 8-mb.
>Each process got about 3 percent of the CPU and the response was
>still quite good for interactive work. This stuff about a `knee'
>at 7 processes just isn't real...
However, the 32 processes do nothing but chew up CPU cycles. Add some
disk I/O, other random interrupts, and a desire for memory to your test.
--
Greg Franks XIOS Systems Corporation, 1600 Carling Avenue,
utzoo!dciem!nrcaer!xios!greg Ottawa, Ontario, Canada, K1Z 8R8. (613)725-5411.
ACME Electric: When you can't find your short, call us!
-----------------------------
From: Peter da Silva <peter at ficc.uucp>
Subject: Smaller is better (Re: OSF (2) Why is it better than AT&T?)
Date: 7 Jun 88 16:44:24 GMT
To: unix-wizards at brl-sem.arpa
What I want to know is:
Why does everyone have this big need to just merge System V
and BSD? A lot of the features that have been added since
Version 7 (and even a few features in Version 7) just don't
seem to fit well with the rest of UNIX.
System V IPC, for example... why a new name-space separate
from the file system?
I also can't get over the sneaking feeling that it should be
possible to build a general windowing system with realtime
support that will run in under a megabyte. After all, I use
one every day that does a very good job in half a meg. Yes,
I'm talking about the Amiga Exec. It's not up to UNIX standards,
but surely protected memory can't cause more than a factor of
two size differential?
Also:
When are AT&T and/or Berkeley going to knuckle down and make
all the commands use perror()? It's not perfect, but it's
a lot better than "foo.bar: can't open" and it's been around
for at least 8 years. Just this one change would do wonders
for users' impressions of UNIX as a hostile beast.
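For comparison, the whole idiom is tiny. A minimal sketch (the usage
message and file handling are illustrative; perror() does the rest):

    #include <stdio.h>
    #include <stdlib.h>

    int
    main(int argc, char **argv)
    {
        FILE *fp;

        if (argc != 2) {
            fprintf(stderr, "usage: %s file\n", argv[0]);
            exit(2);
        }
        if ((fp = fopen(argv[1], "r")) == NULL) {
            perror(argv[1]);    /* e.g. "foo.bar: No such file or directory" */
            exit(1);
        }
        /* ... read from fp ... */
        fclose(fp);
        return 0;
    }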
--
-- Peter da Silva, Ferranti International Controls Corporation.
-- Phone: 713-274-5180. Remote UUCP: uunet!nuchat!sugar!peter.
-----------------------------
From: David Grossman <dpg at abstl.uucp>
Subject: Re: File System Type (statfs, sysfs)
Date: 9 Jun 88 17:43:39 GMT
Keywords: Sys V Release 3
To: unix-wizards at SEM.BRL.MIL
RFS under Sys V Release 3 on the 3b2's is not fully implemented with file
system switch (fss). There are also some RFS specific system calls.
Apparently, fss and RFS were developed independently and only partially
merged.
Fss was originally developed under Research Version 8 for /proc, a special
type of file system where each active process appears as a file whose name
is the process id and contents contain kernel user structure info.
The idea to put fss in SVR3 was motivated by the need to merge SV, BSD, and
MS-DOS. In fact AT&T's 386 version of SVR3 does allow mounting of MS-DOS
diskettes. Fss works by having a switch table listing functions for each
fs type to perform various actions.
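For illustration only (the field names below are invented, not the real
SVR3 layout), such a switch boils down to a table of per-type operation
vectors:

    /* hypothetical sketch of a file system switch entry; the real
     * SVR3 structure has different names and members */
    struct fsswitch {
        char    *fs_name;               /* "s5", "rfs", "msdos", ... */
        int     (*fs_mount)();          /* mount a file system of this type */
        int     (*fs_umount)();         /* unmount it */
        int     (*fs_iread)();          /* fetch an inode */
        int     (*fs_readwrite)();      /* file I/O */
    };

    extern struct fsswitch fsswitch[];  /* one entry per file system type */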
The reason there is so little documentation on fss is that it needs some
major modifications to be completely generic. Perhaps someone from AT&T
can tell us what the plans for RFS and fss are in Release 4?
David Grossman ..!wucs2!abstl!dpg
Anheuser-Busch Companies 314/577-3125
One Busch Place, Bldg. 202-7
St. Louis, MO 63118
-----------------------------
From: Shankar Unni <shankar at hpclscu.hp.com>
Subject: Re: ksh incompatabilities with sh?
Date: 10 Jun 88 00:45:25 GMT
To: unix-wizards at SEM.BRL.MIL
>
> Many of our customers still use ^ for pipes, that's what they got used
> to. They have been using various versions of UNIX for at least 12 years.
> Old habits die hard as the saying goes.
>
Maybe a judicious dose of WARNINGS (a' la' the C compiler) might have turned
them off a long time ago...
--
Shankar.
-----------------------------
From: Shankar Unni <shankar at hpclscu.hp.com>
Subject: Re: redirection before wildcards
Date: 10 Jun 88 01:01:16 GMT
To: unix-wizards at brl-sem.arpa
> It's pretty well known that commands like "grep whatever * > out" can
> cause infinite loops, because C-shell will create the file "out"
> before expanding the asterisk wildcard, and grep never reaches EOF
> once it reaches the file "out".
>
> Try `grep whatever * > .out'. Small consolation if you didn't :-)
But, for curiosity's sake, why exactly are redirections performed *before*
wildcard expansions? For "historical" ( :-> ) reasons only? Or is there a
grander design behind it?
--
Shankar.
-----------------------------
From: Shankar Unni <shankar at hpclscu.hp.com>
Subject: Re: Yet Another OSF Question (YAOQ)
Date: 10 Jun 88 01:05:44 GMT
To: unix-wizards at brl-sem.arpa
> If OSF is really "open", why haven't I seen a phone number or address
> where I can contact them and ask questions rather than endlessly
> speculate? I realize that they are a relatively young organization
> but they will have to "open up" soon.
Here's the name and number AGAIN:
Deborah Siegel
Cohn & Wolfe
(212) 951-8300
OK?
--
scu
-----------------------------
From: Wayne Krone <wk at hpirs.hp.com>
Subject: Re: Yet Another OSF Question (YAOQ)
Date: 10 Jun 88 01:33:18 GMT
To: unix-wizards at SEM.BRL.MIL
> If OSF is really "open", why haven't I seen a phone number or address
> where I can contact them and ask questions rather than endlessly
> speculate? I realize that they are a relatively young organization
> but they will have to "open up" soon.
Open Software Foundation
P.O. Box 545
Billerica, Massachusetts 01821-0545
Wayne Krone
-----------------------------
From: Maarten Litmaath <maart at cs.vu.nl>
Subject: Re: ksh weird files
Date: 10 Jun 88 06:03:55 GMT
Keywords: ksh shell scripts, not weird
To: unix-wizards at brl-sem.arpa
In article <494 at philmds.UUCP> leo at philmds.UUCP (L.J.M. de Wit) writes:
\
\The reason for the late removing of the /tmp files I don't know. But I've some
\other questions. How do you get a sh script running at login (the password
\file entry should be an executable I thought). When login or getty or whoever
\goes to exec the sh script, how does it know which command interpreter to use
\(I don't think it knows of #! lines, in fact it doesn't even know the c.i.
\issue) ?
The magic number #! is well-known to the KERNEL!
\And another question: how do you display text with
\echo << END-OF-TEXT
\
\text here
\
\END-OF-TEXT
\
\? The screens you get are rather small .... 8-).
How about:
cat << scroll_of_enchant_weapon
text here
scroll_of_enchant_weapon
?
(You can also use scroll_of_enchant_armor, whichever you prefer.)
--
South-Africa: |Maarten Litmaath @ Free U Amsterdam:
revival of the Third Reich |maart at cs.vu.nl, mcvax!botter!ark!maart
-----------------------------
From: Leo de Wit <leo at philmds.uucp>
Subject: Re: back to the (ivory) tower
Date: 10 Jun 88 11:07:52 GMT
To: unix-wizards at brl-sem.arpa
In article <16018 at brl-adm.ARPA> ted%nmsu.csnet at relay.cs.net writes:
>
>The 4.3 manual entry for alloca says:
>
> BUGS
> Alloca is both machine- and compiler-dependent; its use is
> discouraged.
>
>On the other hand, alloca is often used and strongly supported by the
>gnu camp (n.b. heavy use in emacs and bison).
>
>It is true that proper use does simplify many aspects of avoiding hard
>limits for things like line lengths and such. Alloca is also very
>handy in implementing stack gaps so that setjmp/longjmp can be used to
>do a weak implementation of coroutines and lightweight processes.
>
>It is also true that alloca is almost trivial to implement on most
>machines (say with a macro which expands to a single machine
>instruction to in(de)crement the stack pointer).
>
>What is the opinion of the masses? Is alloca really such a problem
>across differing architectures? Is it really that useful?
The masses (at least one of them 8-) give their opinion:
When you need local variables whose sizes cannot be determined at compile
time, I think alloca() is ideal. Even when it is implemented as a function
instead of inline code, it will outspeed malloc() by orders of magnitude (I
think; BSD malloc is pretty nice though). (As for your single machine
instruction, the value of the new stack pointer also has to be taken. But I
will settle for 2).
Besides, there is no more need for free(); several modern architectures will restore
the stack pointer using the frame pointer.
The lack of dynamic arrays in C could be partly remedied by using alloca().
Also linked lists can benefit from it. The drawback is that you still have
to save things in global/static var's or on the heap when you want the function
to return (a pointer to) such a value.
Taking the MC68000 as an example:
If compilers follow the convention to address parameters by a positive
offset from the link pointer, the local variables by a negative offset, and
temporaries by an offset from the stack pointer (and most do that), and always
use LINK/UNLK instructions to build/undo the stack frame, it works. However, the
last 'always' is not true; several compilers do not generate these instructions
if the function has no local variables; they use stack pointer relative
addressing then to reach the parameters (saving a few instructions). In this
case it is impossible to determine the position of the stack pointer when the
function was called, as it is not saved and it is modified by alloca(); unless
someone makes a very clever alloca that keeps track of return addresses
and lets the calling function return to it (alloca) when it (the caller) is
done (for instance; I'm just philosophizing); but I doubt if it can be done
without breaking existing code, and if it can be done for each machine/compiler.
The problem with alloca() also is that, because it does not come in
pairs (like malloc() - free()) and some (or most ??!?) compilers cannot
support it (see above for some reasons), it is not portable.
But I have a nice alternative: let's always use alloca() and a new function,
call it freea(), in pairs. Freea() should be called before leaving the function
(although it does not need to be in this case).
For the compilers that support alloca():
#define freea() /* a dummy */
For those that don't:
#define alloca malloc /* the oldies */
#define freea free
In both cases memory is allocated dynamically and freed upon return from
the function.
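A minimal usage sketch of the proposal (HAVE_ALLOCA is a made-up
configuration symbol, and note that the dummy version must still accept
the pointer argument):

    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>

    #ifdef HAVE_ALLOCA                  /* compilers that support alloca() */
    extern char *alloca();
    #define freea(p)                    /* a dummy */
    #else                               /* the oldies */
    #define alloca  malloc
    #define freea(p)        free(p)
    #endif

    /* copy a line into scratch space whose size is only known at run time */
    void
    shout(const char *s)
    {
        char *buf = alloca(strlen(s) + 1);

        strcpy(buf, s);
        /* ... work on buf ... */
        printf("%s\n", buf);
        freea(buf);                     /* a no-op when alloca() is real */
    }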
Everybody happy ?
Leo.
-----------------------------
From: Bernard Lemercier <bl at sunbim.uucp>
Subject: COBOL-to-C translator wanted
Date: 10 Jun 88 11:49:54 GMT
Keywords: COBOL translator
To: unix-wizards at SEM.BRL.MIL
I have a huge amount of COBOL sources I would like to
translate into C, so I am looking for an automatic COBOL-to-C
translator. Would someone tell me where I can find one ?
Thanks in advance.
Bernie.
--
Bernard Lemercier tel: +32 2 7595925
B.I.M. fax: +32 2 7594795
Kwikstraat 4 email:{uunet!mcvax!prlb2!}sunbim!bl
3078 Everberg, Belgium
-----------------------------
From: Greg Pasquariello X1190 <gp at picuxa.uucp>
Subject: Re: Stdio buffering question
Date: 10 Jun 88 13:04:31 GMT
To: unix-wizards at brl-sem.arpa
In article <16124 at brl-adm.ARPA> ultra!wayne at ames.arc.nasa.gov (Wayne Hathaway) writes:
>I hate to bother Wizards with what is probably a very simple question,
>but it stumps me. I have this very simple program that periodically
>does an fprintf(stderr,...) to indicate progress.
>
> Wayne Hathaway ultra!wayne at Ames.ARPA
You can turn buffering off for stderr (or any stream) with setbuf(). The
call for stderr is setbuf(stderr, (char *)NULL) (where NULL is 0).
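A minimal sketch (the progress messages are made up; the setbuf() call is
the only point of interest):

    #include <stdio.h>

    int
    main(void)
    {
        int i;

        setbuf(stderr, (char *)NULL);   /* stderr unbuffered, even when piped */

        for (i = 0; i < 5; i++) {
            /* ... do some slow work ... */
            fprintf(stderr, "step %d done\n", i);   /* appears immediately */
        }
        return 0;
    }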
--
=========================================================================
Greg Pasquariello AT&T Product Integration Center
ihnp4!picuxa!gp 299 Jefferson Rd, Parsippany, NJ 07054
=========================================================================
-----------------------------
From: David Feldman <david at linc.cis.upenn.edu>
Subject: Re: Stdio buffering question
Date: 10 Jun 88 16:37:13 GMT
Sender: news at super.upenn.edu
To: unix-wizards at SEM.BRL.MIL
A self-proclaimed novice asked why stderr gets buffered when piped in
SunOS 3.3.
Well, I can give an answer based on Ultrix experience. When piping,
stderr gets buffered so that it may be separated from stdout. Stdout
goes through the pipe, and when it is closed, the stderr buffer gets flushed
through the pipe. That is assuming you have redirected stderr through the
pipe also. This is a documented feature, and I believe it is a csh thing.
I can't remember off hand. I don't think fflush() will help, especially
if it is done in csh. One way csh could implement this feature is to
attach stderr to a file and then throw the file down the pipe when stdout
closes. Any csh hackers know the details on this thing? I am guessin'.
Dave Feldman
david at linc.cis.upenn.edu
-----------------------------
From: Chris Torek <chris at mimsy.uucp>
Subject: Re: Stdio buffering question
Date: 11 Jun 88 02:42:08 GMT
To: unix-wizards at brl-sem.arpa
In article <4999 at super.upenn.edu> david at linc.cis.upenn.edu (David Feldman)
writes:
>A self proclaimed novice asked why stderr gets buffered when piped in
>Sun OS 3.3.
>
>Well, I can give an answer based on Ultrix experience. When piping,
>stderr gets buffered so that it may be separated from stdout.
No. Buffering is a user-level concept; piping and descriptor
merging is a kernel-level concept. The two are not supposed
to be mixed together.
Stderr gets buffered because it improves performance and raises
benchmark numbers. It also makes correct programs fail, and should
not be done casually.
>... That is assuming you have redirected stderr through the pipe also.
>This is a documented feature, and I believe it is a csh thing.
It has nothing to do with the shell being used.
>I don't think fflush() will help ...
It will.
>... One way csh could implement this feature is to attach stderr to
>a file and then throw the file down the pipe when stdout closes.
This would generally destroy performance, as the shell would have to
remain in the path of a pipeline. Currently `a |& b' simply runs a and
b `together' (under csh, a is started first) with a's stdout and stderr
(fds 1 and 2) both going to a pipe that is b's stdin (fd 0). The
pipe is created with kernel concepts (file descriptors) and the
programs are run without giving them any knowledge as to where those
descriptors connect.
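In file-descriptor terms the shell's job for `a |& b' looks roughly like
this (a bare sketch; error checking, waiting, and job control omitted):

    #include <unistd.h>

    /* rough sketch of wiring up "a |& b"; av_a and av_b are argv lists */
    void
    run_pipeline(char *av_a[], char *av_b[])
    {
        int pd[2];

        pipe(pd);
        if (fork() == 0) {              /* a: fds 1 and 2 both go to the pipe */
            dup2(pd[1], 1);
            dup2(pd[1], 2);
            close(pd[0]);
            close(pd[1]);
            execvp(av_a[0], av_a);
            _exit(127);
        }
        if (fork() == 0) {              /* b: fd 0 comes from the pipe */
            dup2(pd[0], 0);
            close(pd[0]);
            close(pd[1]);
            execvp(av_b[0], av_b);
            _exit(127);
        }
        close(pd[0]);                   /* parent keeps neither end; it would */
        close(pd[1]);                   /* wait() for both children here */
    }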
--
In-Real-Life: Chris Torek, Univ of MD Comp Sci Dept (+1 301 454 7163)
Domain: chris at mimsy.umd.edu Path: uunet!mimsy!chris
-----------------------------
From: "John F. Haugh II" <jfh at rpp386.uucp>
Subject: Re: Stdio buffering question
Date: 11 Jun 88 03:06:10 GMT
To: unix-wizards at brl-sem.arpa
the reason printf buffers when writing to a non-terminal is because it
is cheaper (cpu-wise) to wait until an entire buffer is full. if you
really want to see the output immediately regardless of the output
destination, you must use fflush. this is a very common situation
when using fprintf or printf for debugging a program which core dumps.
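for example (a contrived sketch, names made up; the point is the fflush
right after the diagnostic):

    #include <stdio.h>

    void
    risky(char *p)
    {
        fprintf(stderr, "about to write through p = %p\n", (void *)p);
        fflush(stderr);         /* get the message out before any core dump */
        *p = 'x';               /* may blow up if p is bad */
    }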
- john.
-----------------------------
From: Doug Gwyn <gwyn at brl-smoke.arpa>
Subject: Re: Stdio buffering question
Date: 11 Jun 88 09:56:53 GMT
To: unix-wizards at brl-sem.arpa
In article <4999 at super.upenn.edu> david at linc.cis.upenn.edu.UUCP (David Feldman) writes:
>Well, I can give an answer based on Ultrix experience. When piping,
>stderr gets buffered so that it may be separated from stdout. Stdout
>goes through the pipe, and when it is closed, the stderr buffer gets flushed
>through the pipe. That is assuming you have redirected stderr through the
>pipe also. This is a documented feature, and I believe it is a csh thing.
What the hell are you talking about? The whole reason for the invention
of the standard error output as distinct from standard output is to AVOID
error output getting mixed with legitimate output in pipelines etc.
I don't like csh, but I won't accuse it of directing standard error output
into pipes. Indeed I think you'd have to work fairly hard to get it to
do so.
My guess for the reason that stderr is (line-)buffered on some BSD-derived
systems is that Bill Shannon once thought it would be a good idea and just
did it. (Apologies if I'm maligning him.) It is also possible that
someone thought it would improve network performance for error output to
be sent in large chunks rather than a character at a time.
-----------------------------
From: Leroy Cain <lcain at cucstud.uucp>
Subject: Source License
Date: 10 Jun 88 14:00:46 GMT
Keywords: BSD Mach
To: unix-wizards at SEM.BRL.MIL
We are just about through the process of getting a System V source license.
Once we complete this process we are interested in getting a BSD and maybe a
Mach source license. Anyone out there know who to contact about the BSD and
Mach licensing?
Thanks
-------------------------------------------------------------------------------
MS-DOS Just say NO!!!!! OS/2 Why????????
Leroy Cain; Columbia Union College; Mathematical Sciences Department
7600 Flower Ave. WH406; Takoma Park, Md 20912
(301) 891-4172 netsys!cucstud!lcain
-----------------------------
From: "William E. Davidsen Jr" <davidsen at steinmetz.ge.com>
Subject: Re: How do I use ksh TMOUT on 5.2
Date: 10 Jun 88 14:40:42 GMT
Keywords: ksh timeout
To: unix-wizards at brl-sem.arpa
In article <130 at wash08.UUCP> txr98 at wash08.UUCP (Timothy Reed) writes:
| hopefully easy question: how do I access the TMOUT variable to zap an
| idle user. Ideally I'd like to trap it in a user's login shell. MKS
TMOUT is in seconds. For ten minutes set it to 600. You might set it in
/etc/rc and then make it readonly. I do something similar for my guest
users.
--
bill davidsen (wedu at ge-crd.arpa)
{uunet | philabs | seismo}!steinmetz!crdos1!davidsen
"Stupidity, like virtue, is its own reward" -me
-----------------------------
From: Chick Webb <ckw at hpcupt1.hp.com>
Subject: Re: Getting the OSF address
Date: 10 Jun 88 16:26:44 GMT
To: unix-wizards at brl-sem.arpa
The original OSF press release lists the Foundation Contact as:
Deborah Siegel
Cohn & Wolfe
(212) 951-8300
Chick Webb "I am sitting in the smallest room of
Hewlett-Packard Company the house. I have your letter in front
Cupertino, CA of me. It will soon be behind me."
UUCP: {ucbvax, etc.}!hpda!ckw
ARPA: ckw at hpda.HP.COM - A letter of reply, by Voltaire
-----------------------------
From: Godfather to putty-tats <matt at oddjob.uchicago.edu>
Subject: Re: OSF: A Desperation Move?
Date: 10 Jun 88 16:56:02 GMT
To: unix-wizards at SEM.BRL.MIL
Brandon, get real! IBM could afford to come out with fifty more
lines of micros and have them all flop utterly! Some heads would
roll, but the company would not be in trouble. There are plenty of
mainframe customers out there who would shoot their own dog before
they'd buy anything but IBM. (OK, maybe you could sell 'em an Amdahl
or two if you held their children hostage.)
Matt Crawford
-----------------------------
From: Larry McVoy <lm at arizona.edu>
Subject: Re: OSF: A Desperation Move?
Date: 10 Jun 88 18:53:12 GMT
To: unix-wizards at brl-sem.arpa
In article <23257 at bu-cs.BU.EDU> bzs at bu-cs.BU.EDU (Barry Shein) writes:
-
->(1) The past few new IBM machines have NOT done well in the marketplace.
-> We all know what happened to the RT PC and the PCJr. The jury's still
-> out on the PS/2, but...
-
-Brandon, don't take this *too* harshly, but get your head out of your
-ass.
-
-The PC/RT and the PC/Jr are probably as important as failures to IBM
-as the failure of a new breakfast cereal is to General Foods.
-
- -Barry Shein, Boston University
Heh, heh. I'm amused. I didn't think it was that bad, though, Barry.
You both seem to see that Unix is going to be the big $$$ in the future.
(hey, maybe I can use my Unix experience to get $ome. Nah, selling
hotdogs is where my future is :-)
What people seem to forget a lot, especially when comparing big iron to
fast micros, is I/O, or more generally, peripherals. Sure, my sun can do 4
MIPS. But that doesn't make it come even close to a 4 MIP mainframe. The
big iron have big I/O channels, I/O processors, etc, etc. When you put 100
users on an 8 MIP vax it's not so bad. Try putting 10 users on a 4 MIP
sun. The point is that the micros are still pretty close to single (ok,
double :-) user machines. So most of the comparisons floating around out
there would only make sense if you ran them in both single and multiple
forms (1, 4, 8, 16, 32, 64, 128 copies at a time). And you can't complain
because your sun won't let you have more than 8 contexts or more than 4
megs of ram. The comparisons are still valid, more so if you want a multi
user machine.
--
Larry McVoy lm at arizona.edu or ...!{uwvax,sun}!arizona.edu!lm
-----------------------------
From: Russ Nelson <nelson at sun.soe.clarkson.edu>
Subject: Re: OSF: A Desperation Move?
Date: 11 Jun 88 02:37:53 GMT
To: unix-wizards at brl-sem.arpa
In article <7941 at ncoast.UUCP> allbery at ncoast.UUCP (Brandon S. Allbery) writes:
>One more thought on this: the emergence of standards. We have the
>following armies shaping up in the 286/386 world:
>
> STANDARD UNIX IBM'S HOPE
>
> Any 386 system PS/2-80
> Unix for 386 OS/2 for 286/386
> X Windows Presentation Manager
> Any DBMS + Accell Sybase-like product(?)
>
>Notice the combinations. Unix runs on virtually any 386 box; so will OS/2
>(not using the full power of the 386!), but IBM really wants the PS/2. Unix
>does as much or more than OS/2; *that* discussion we had a month or so ago.
Here is the subject list from the most recent 386-users digest:
Inboard-386 problems
Re: XENIX on an Inboard '386 ?
Re: XENIX on an Inboard '386 ?
ega and kbd drivers for System V/386
Mixed DOS/UNIX environment on '386
Info request, Xenix, Uport, Bell Tech (SVR3/386)
Re: Computone Intelliport / Compaq 386/20
Re: SCO Xenix 2.2.1 lp problem
Re: Xenix/386 and VGA
Roadrunner (Sun 386i) and Targa boards
Chase Serial I/O boards
--
signed char *reply-to-russ(int network) { /* Why can't BITNET go */
if(network == BITNET) return "NELSON at CLUTX"; /* domainish? */
else return "nelson at clutx.clarkson.edu"; }
-----------------------------
From: Chris Torek <chris at mimsy.uucp>
Subject: Re: alloca... (and setjmp/longjmp coroutines)
Date: 10 Jun 88 16:57:41 GMT
To: unix-wizards at SEM.BRL.MIL
In article <16126 at brl-adm.ARPA> ted%nmsu.csnet at relay.cs.net writes:
[alloca + setjmp/longjmp for coroutines]
longjmp is not suitable for coroutines because it is valid for longjmp
to attempt to unwind the stack in order to find the corresponding
setjmp, and it is therefore legal for longjmp to abort if it is
attempting to jump the `wrong way' on the stack.
--
In-Real-Life: Chris Torek, Univ of MD Comp Sci Dept (+1 301 454 7163)
Domain: chris at mimsy.umd.edu Path: uunet!mimsy!chris
-----------------------------
From: Dave Jones <djones at megatest.uucp>
Subject: Re: alloca... (and setjmp/longjmp coroutines)
Date: 11 Jun 88 02:18:31 GMT
To: unix-wizards at sem.brl.mil
From article <11902 at mimsy.UUCP>, by chris at mimsy.UUCP (Chris Torek):
> In article <16126 at brl-adm.ARPA> ted%nmsu.csnet at relay.cs.net writes:
> [alloca + setjmp/longjmp for coroutines]
>
> longjmp is not suitable for coroutines because it is valid for longjmp
> to attempt to unwind the stack in order to find the corresponding
> setjmp, and it is therefore legal for longjmp to abort if it is
> attempting to jump the `wrong way' on the stack.
> --
> In-Real-Life: Chris Torek, Univ of MD Comp Sci Dept (+1 301 454 7163)
> Domain: chris at mimsy.umd.edu Path: uunet!mimsy!chris
I'm no C wizard -- BSD4.2 and Sun3-OS are the only C's I've ever used --
but it seems to me that longjmp is the most suitable technique going,
by default.
What else is there? You could use [abuse?] sigvec and kill. But if you
use separate stacks of fixed sizes, they can overflow with disastrous
consequences. And -- correct me if I'm wrong -- more systems have
setjmp/longjmp/alloca than have sigvec and kill.
Or you could use a smattering of assembler. But, it will certainly run
on more kinds of machines if written in C than it would if written in
assembler.
And to answer your objection about unwinding the stack, you can see to
it that the stack is restored _before_ you do the longjmp, so the longjmp
can "unwind" as it pleases.
I recently wrote a little discrete-event-simulator using
setjmp/longjmp/alloca to do lightweight processes.
We hope to run it on both Sun3s and IBM PCs. Haven't tried it on the PCs
yet, so I don't know if it works there or not. Do I have a surprise in
store for me?
Here's what I did. The main simulator loop calls alloca(1) to find
the bottom of the part of the stack that lightweight processes will
be using. It squirrels that address away in the variable {stack_bottom}.
To start a process initially, it just calls the process's procedure.
Then the simulator and the process trade setjmp/longjmp cycles through
a couple of jmpbufs.
Well, you'll see.
Is there some gotcha that will break this code on some systems?
If so, is there a better [more machine independent] way?
/***********************************************************************
** Run the simulation, stopping after some number of ticks, unless
** all processes exit, or some process calls PSim_stop() first.
***********************************************************************/
unsigned long
PSim_run(obj, ticks)
Simulation* obj;
unsigned long ticks;
{
obj->stack_bottom = (char*)alloca(1);
obj->stop_time += ticks;
while(!obj->quit)
{
/* Get a busy process from the busy-queue */
obj->active = (Process*) PQ_pop(&obj->busy);
/* If all processes are finished, or are waiting on
** a semaphore, we are blocked, and must exit the simulation.
*/
if(obj->active==0)
goto end_simulation;
{ register Process *active = obj->active;
/* Update the time to the time of the active process */
obj->time = active->busy_until;
if( obj->time >= obj->stop_time)
goto end_simulation;
if(setjmp(active->suspend) == 0)
if(active->stack_save == 0)
/* Process has not yet started. Call its start-procedure. */
active->return_value =
(*(active->start))(obj);
else
{ /* Process has been suspended, and will now be restarted. */
/* allocate the restarting process's stack. */
alloca( active->stack_size );
/* restore it */
bcopy( active->stack_save, active->stack_real,
active->stack_size);
sfree(active->stack_save);
active->stack_save = 0;
/* restart the process */
longjmp(active->restart, 1);
}
}
}
end_simulation:
cleanup(obj);
return obj->time;
}
static
suspend_active_proc(obj)
register Simulation* obj;
{
char* stack_top = (char*)alloca(1);
long size = abs(obj->stack_bottom - stack_top);
register Process* active = obj->active;
active->stack_save = (char*)smalloc(size);
active->stack_real = min(stack_top, obj->stack_bottom);
active->stack_size = size;
if(setjmp(active->restart) == 0)
{
/* copy the stack and return to the simulator. */
bcopy( active->stack_real, active->stack_save, size);
longjmp(active->suspend, 1);
}
}
-----------------------------
From: andrew at alice.uucp
Subject: Re: grep replacement
Date: 10 Jun 88 18:34:00 GMT
Posted: Fri Jun 10 14:34:00 1988
To: unix-wizards at SEM.BRL.MIL
The following is a summary of the somewhat plausible ideas
suggested for the new grep. I thank leo de witt particularly and others
for clearing up misconceptions and pointing out (correctly) that
existing tools like sed already do (or at least nearly do) what some people
asked for. The following points are in no particular order and no slight is
intended by my presentation. After that, I summarise the current flags.
1) named character classes, e.g. \alpha, \digit.
i think this is a hokey idea and dismissed it as unnecessary crud
but then found out it is part of the proposed regular expression
stuff for posix. it may creep in but i hope not.
2) matching multi-line patterns (\n as part of pattern)
this actually requires a lot of infrastructure support and thought.
i prefer to leave that to other more powerful programs such as sam.
3) print lines with context.
the second most requested feature but i'm not doing it. this is
just the job for sed. to be consistent, we just took the context
crap out of diff too. this is actually reasonable; showing context
is the job for a separate tool (pipeline difficulties apart).
4) print one (first matching) line and go on to the next file.
most of the justification for this seemed to be scanning
mail and/or netnews articles for the subject line; neither
of which gets any sympathy from me. but it is easy to do
and doesn't add an option; we add a new option (say -1)
and remove -s. -1 is just like -s except it prints the matching line.
then the old grep -s pattern is now grep -1 pattern > /dev/null
and within epsilon of being as efficient.
5) divert matching lines onto one fd, nonmatching onto another.
sorry, run grep twice.
6) print the Nth occurrence of the pattern (N is number or list).
it may be possible to think of a real reason for this (i couldn't)
but the answer is no.
7) -w (pattern matches only words)
the most requested feature. well, it turns out that -x (exact)
is there because doug mcilroy wanted to match words against a dictionary.
it seems to have no other use. Therefore, -x is being dropped
(after all, it only costs a quick edit to do it yourself) and is
replaced by -w == (^|[^_a-zA-Z0-9])pattern($|[^_a-zA-Z0-9]).
8) grep should work on binary files and kanji.
that it should work on kanji or any character set is a given
(at least, any character set supported by the system V international
character set stuff). binary files will work too, modulo the
following constraint: lines (between \n's) have to fit in a
buffer (current size 64K). violations are an error (exit 2).
9) -b has bogus units.
agreed. -b now is in bytes.
10) -B (add an ^ to the front of the given pattern, analogous to -x and -w)
-x (and -w) is enough. sorry.
11) recursively descend through argument lists
no. find | xargs is going to have to do.
12) read filenames on standard input
no. xargs will have to do.
13) should be as fast as bm.
no worries. in fact, our egrep is 3x faster than bm. i intend to be
competitive with woods' egrep. it should also be as fast as fgrep for
multiple keywords. the new grep incorporates boyer-moore
as a degenerate case of Commentz-Walter, a faster replacement
for the fgrep algorithm.
14) -lv (files that don't have any matching lines)
-lv means print names of files that have any nonmatching lines
(useful, say, for checking input syntax). -L will mean print
names of files without selected lines.
15) print the part of the line that matched.
no. that is available at the subroutine level.
16) compatibility with old grep/fgrep/egrep.
the current name for the new command is gre (aho chose it).
after a while, it will become our grep. there will be a -G
flag to take patterns a la old grep and a -F to take
patterns a la fgrep (that is, no metacharacters except \n == |).
gre is close enough to egrep to not matter.
17) fewer limits.
so far, gre will have only one limit, a line length of 64K.
(NO, i am not supporting arbitrary length lines (yet)!)
we foresee no need for any other limit. for example, the
current gre acts like fgrep. it is 4 times faster than
fgrep and has no limits; we can gre -f /usr/dict/words
(72K words, 600KB).
18) recognise file types (ignore binaries, unpack packed files etc).
get real. go back to your macintosh or pyramid. gre will just grep
files, not understand them.
19) handle patterns occurring multiple times per line
this is ill-defined (how many times does aaaa occur in a line of 20 'a's?
in order of decreasing correctness, the answers are >=1, 17, 5).
For the cases people mentioned (words), pipe it thru
tr to put the words one per line.
20) why use \{\} instead of \(\)?
this is not yet resolved (mcilroy&ritchie vs aho&pike&me).
grouping is an orthogonal issue to subexpressions so why
use the same parentheses? the latest suggestion (by ritchie)
is to allow both \(\) and \{\} as grouping operators but
the \3 would only count one type (say \(\)). this would be much
better for complicated patterns with much grouping.
21) subroutine versions of the pattern matching stuff.
in a deep sense, the new grep will have no pattern matching code in it.
all the pattern matching code will be in libc with a uniform
interface. the boyer-moore and commentz-walter routines have been
done. the other two are egrep and back-referencing egrep.
lastly, regexp will be reimplemented.
22) support a filename of - to mean standard input.
a unix without /dev/stdin is largely bogus but as a sop to the poor
bastards having to work on BSD, gre will support -
as stdin (at least for a while).
Thus, the current proposal is the following flags. it would take a GOOD
argument to change my mind on this list (unless it is to get rid of a flag).
-f file pattern is (`cat file`)
-v nonmatching lines are 'selected'
-i ignore alphabetic case
-n print line number
-c print count of selected lines only
-l print filenames which have a selected line
-L print filenames which do not have a selected line
-b print byte offset of line begin
-h do not print filenames in front of matching lines
-H always print filenames in front of matching lines
-w pattern is (^|[^_a-zA-Z0-9])pattern($|[^_a-zA-Z0-9])
-1 print only first selected line per file
-e expr use expr as the pattern
Andrew Hume
research!andrew
-----------------------------
From: Wietse Venema <wswietse at eutrc3.uucp>
Subject: Re: grep replacement
Date: 10 Jun 88 20:34:23 GMT
To: unix-wizards at brl-sem.arpa
In article <7207 at watdragon.waterloo.edu> tbray at watsol.waterloo.edu (Tim Bray) writes:
}Grep should, where reasonable, not be bound by the notion of a 'line'.
}As a concrete expression of this, the useful grep -l (prints the names of
}the files that contain the string) should work on any kind of file. More
}than one existing 'grep -l' will fail, for example, to tell you which of a
}bunch of .o files contain a given string. Scenario - you're trying to
}link 55 .o's together to build a program you don't know that well. You're
}on berklix. ld sez: "undefined: _memcpy". You say: "who's doing that?".
}The source is scattered inconveniently. The obvious thing to do is:
}grep -l _memcpy *.o
}That this often will not work is irritating.
}Tim Bray, New Oxford English Dictionary Project, U of Waterloo
nm -op *.o | grep memcpy
will work just fine, both with bsd and att unix.
Wietse
--
uucp: mcvax!eutrc3!wswietse | Eindhoven University of Technology
bitnet: wswietse at heithe5 | Dept. of Mathematics and Computer Science
surf: tuerc5::wswietse | Eindhoven, The Netherlands.
-----------------------------
From: Richard Boehm <boehmr at unioncs.uucp>
Subject: Re: yacc and lex tutorials
Date: 10 Jun 88 19:21:08 GMT
Keywords: yacc, lex, tutorials
To: unix-wizards at brl-sem.arpa
In article <184 at asiux1.UUCP> wjm at asiux1.UUCP (Bill Mania) writes:
>I am looking for a tutorial/textbook for yacc and for
>lex. Something similar to 'The C Programming Language'
>and 'The AWK Programming Language' would be great. I
>have had some beginning experience with lex and no
>experience with yacc and would appreciate any
>recommendations.
>
>
> Bill Mania, Ameritech Applied Technologies
>
> The band is just fantastic, USENET {ihnp4,hcfeams}!asiux1!wjm
> that is really what I think. VOICENET (312) 870-4574
> Oh by the way which one's Pink? PAPERNET 3030 Salt Creek Lane Fl 3 Rm C6
> Pink Floyd, 1975 Arlington Heights, IL 60005
I, also, would be interested in such references.
Richard Boehm ( boehmr at unioncs.UUCP )
-----------------------------
From: gwp at hcx3.ssd.harris.com
Subject: Re: phys(2) under sVr3?
Date: 10 Jun 88 21:54:00 GMT
Nf-ID: #R:brl-adm.ARPA:16090:hcx3:48300009:000:3760
Nf-From: hcx3.SSD.HARRIS.COM!gwp Jun 10 17:54:00 1988
To: unix-wizards at brl-sem.arpa
From: Andrew Klossner <andrew at frip.gwd.tek.com>
> I need to augment sys V release 3 so as to let a user process map a
> video frame buffer into its address space. Something like the version 7
> phys(2) call, or what the 4.2BSD mmap(2) call promised but didn't
> deliver, is what I'm looking for.
I went around and around with this problem and finally came up with
something called shmbind(2). This system service takes an existing
shared memory region (created via shmget(2)) and binds a chunk of
physical memory to that region. User processes may then attach this
chunk of physically-mapped virtual memory into their address space
with the shmat(2) service.
This sounds a bit complicated at first but I think it has several
advantages over phys(2), mmap(2) et al.
The first is security/usability. Allowing users direct access to
physical or I/O memory (for those architectures with memory mapped
I/O) is _extremely_ _dangerous_ (imagine Joe user mapping the device
controller registers for your root-partition disk into his address
space). The usual solution to this is to make phys(2), mmap(2) etc.
super-user only, which then causes everyone to write their
applications suid-root thus voiding _all_ user protections. By
binding a chunk of physical memory to a shared memory region you allow
access to that chunk to be controlled through the user-group-other
bits of the ipc_perm handle. Of course shmbind(2) must be restricted
to super-user, but the objects it creates can be accessed by anyone
you wish.
Furthermore multiple processes can attach and detach the same chunk of
physical memory in a simple and straightforward manner without
creating multiple regions or sets of page tables.
The second advantage is one of consistency (at least for Sys V types).
There already exists a system service for adding a chunk of virtual
memory to a users address space. That service is "shmat(2)". Why
should there be another service that does pretty much the same thing,
except that the chunk of virtual memory is now associated with a
specific range of physical memory ? Why not create a service that
performs this latter operation, and then leave the rest to shmat(2) ?
The interface to shmbind(2) as I have written it is:
int shmbind(int shmid, caddr_t p_addr)
Shmid is the id returned from shmget(2) and p_addr is the starting
physical address of the chunk you wish to map. The size of the
physical chunk mapped is the size of the shared memory region you are
mapping it into (the "size" argument to the shmget(2) call). The
physical addresses can lie in either "normal" RAM-space or in I/O
memory space (our machines use memory-mapped I/O). If the requested
physical memory lies in RAM-space the system will attempt to allocate
the appropriate pages at shmbind(2) time. If the desired pages of
physical memory are already allocated the shmbind(2) service returns
the ENOMEM error. Bind operations involving I/O memory are always
honored since I/O memory "pages" are not a critical resource (they're
really "ghost pages" consisting of page table entries pointing to I/O
locations). It is possible to reserve sections of RAM-memory for
later binding by placing "reserve" entries in the config file and
rebuilding the kernel.
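To make the sequence concrete, here is a rough sketch of setting up and
attaching such a region (shmbind(2) is our local extension as described
above; the key, physical address and size are made up):

    #include <sys/types.h>
    #include <sys/ipc.h>
    #include <sys/shm.h>

    #define FB_KEY   ((key_t)0x4642)        /* made-up IPC key */
    #define FB_ADDR  ((caddr_t)0xFC000000)  /* made-up physical address */
    #define FB_SIZE  (256 * 1024)           /* made-up region size */

    extern int shmbind();                   /* our extension, not stock SVR3 */

    caddr_t
    map_frame_buffer(void)
    {
        int shmid;

        /* privileged setup (normally done once, e.g. from /etc/rc) */
        shmid = shmget(FB_KEY, FB_SIZE, IPC_CREAT | 0660);
        if (shmid < 0 || shmbind(shmid, FB_ADDR) < 0)
            return (caddr_t)-1;

        /* any process passing the ipc_perm checks can now attach it */
        return (caddr_t)shmat(shmid, (char *)0, 0);
    }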
I also have a utility called shmconfig(1) (with more options than
you probably care about) that performs the shmget(2), shmbind(2), and
shmctl(2) operations necessary to create a physically-bound shared
memory region with the desired user, group and permission bits. This
utility is primarily for use in /etc/rc so that you can configure a
system with the desired "mappable objects" already present at init
time.
What do you think ?
Gil Pilz -=|=- Harris Computer Systems -=|=- gwp at ssd.harris.com
-----------------------------
From: Chris Torek <chris at mimsy.uucp>
Subject: Re: phys(2) under sVr3?
Date: 11 Jun 88 10:18:18 GMT
To: unix-wizards at SEM.BRL.MIL
>From: Andrew Klossner <andrew at frip.gwd.tek.com>
>>I need to augment sys V release 3 so as to let a user process map a
>>video frame buffer into its address space.
[note that Sun does this with mmap() in SunOS 2.x and 3.x, albeit a
bit clumsily, and with mmap() in SunOS 4.0 quite elegantly.]
In article <48300009 at hcx3> gwp at hcx3.SSD.HARRIS.COM writes:
>I went around and around with this problem and finally came up with
>something called shmbind(2). This system service takes an existing
>shared memory region (created via shmget(2)) and binds a chunk of
>physical memory to that region. ... [This] has several advantages
>over phys(2), mmap(2) et. al.
>
>The first is security/usability. ... [usually] mmap(2) etc. [are]
>super-user only, which then causes everyone to write their
>applications suid-root thus voiding _all_ user protections.
mmap() is not restricted to super-user. Anyone may call mmap;
but to map a device address, you must first open the device, then
pass any protection checks in the device driver. Hence the file
system provides the appropriate security (via user/group/other),
and specific devices can be further restricted if appropriate.
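The SunOS usage is roughly the following (a sketch only; the device name
and length are illustrative):

    #include <sys/types.h>
    #include <sys/mman.h>
    #include <fcntl.h>
    #include <unistd.h>

    /* map a frame buffer into the caller's address space */
    caddr_t
    map_fb(size_t len)
    {
        int fd;
        caddr_t va;

        fd = open("/dev/fb", O_RDWR);   /* normal file permissions apply here */
        if (fd < 0)
            return (caddr_t)-1;

        va = (caddr_t)mmap((caddr_t)0, len, PROT_READ | PROT_WRITE,
            MAP_SHARED, fd, (off_t)0);
        close(fd);                      /* the mapping survives the close */
        return va;                      /* (caddr_t)-1 means the map failed */
    }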
>The second advantage is one of consistency (at least for Sys V types).
Of course, mmap is consistent under SunOS (and someday 4BSD).
If you are stuck with System V's `new and wretched namespace'[*]
for memory regions, shmbind() is probably appropriate.
[*]An approximate quote from Dennis Ritchie, I think.
--
In-Real-Life: Chris Torek, Univ of MD Comp Sci Dept (+1 301 454 7163)
Domain: chris at mimsy.umd.edu Path: uunet!mimsy!chris
-----------------------------
From: Randy Orrison <randy at umn-cs.cs.umn.edu>
Subject: Re: phys(2) under sVr3?
Date: 11 Jun 88 15:37:56 GMT
Posted: Sat Jun 11 10:37:56 1988
To: unix-wizards at brl-sem.arpa
In article <11921 at mimsy.UUCP> chris at mimsy.UUCP (Chris Torek) writes:
|>The second advantage is one of consistency (at least for Sys V types).
|Of course, mmap is consistent under SunOS (and someday 4BSD).
and someday SysVR4? Come on, AT&T&SUN, you can do it!
|If you are stuck with System V's `new and wretched namespace'[*]
|for memory regions, shmbind() is probably appropriate.
Now if only they would fix this, too!
Me, I have great hopes...
|[*]An approximate quote from Dennis Ritchie, I think.
|In-Real-Life: Chris Torek, Univ of MD Comp Sci Dept (+1 301 454 7163)
-randy
--
Randy Orrison, Control Data, Arden Hills, MN randy at ux.acss.umn.edu
8-(OSF/Mumblix: Just say NO!)-8 {ihnp4, seismo!rutgers, sun}!umn-cs!randy
"I consulted all the sages I could find in Yellow Pages,
but there aren't many of them." -APP
-----------------------------
From: John Mashey <mash at mips.com>
Subject: Re: Sun 4 \"KNEE\" Wars
Date: 11 Jun 88 05:26:27 GMT
To: unix-wizards at brl-sem.arpa
In article <16138 at brl-adm.ARPA> weiser.pa at xerox.com writes:
>aglew at urbsdc.urbana.gould.com says:
>"I don't have Sun 4 source at hand right now, but if their scheduler
>is similar to the standard BSD scheduler a process coming out of
>a sleep may [*] have its priority boosted. ...[further explanation here...]"
>
>But this explanation doesn't explain why Sun's show this behavior and Vaxes
>don't, nor why Sun-3's, with 8 MMU contexts, show the knee at 8 fast sleepers,
>and Sun-4 with 16 MMU contexts show it at 16!
A VAX uses a TLB, whose user portion must be flushed upon each context
switch, i.e., in some sense, it has 1 context (although this is not
strictly the same kind of context, as it does not need to be mass
saved/restored). Hence, there is no reason for a VAX to have a
MMU-related knee anywhere.
--
-john mashey DISCLAIMER: <generic disclaimer, I speak for me only, etc>
UUCP: {ames,decwrl,prls,pyramid}!mips!mash OR mash at mips.com
DDD: 408-991-0253 or 408-720-1700, x253
USPS: MIPS Computer Systems, 930 E. Arques, Sunnyvale, CA 94086
-----------------------------
From: Chris Torek <chris at mimsy.uucp>
Subject: alloca
Date: 11 Jun 88 10:01:41 GMT
To: unix-wizards at sem.brl.mil
Put it this way: If you have a stack or an emulation of a stack (as
required by recursive functions), and if `alloca' is a compiler
builtin, the concept can be implemented. Hence alloca can be *made*
portable to any C compiler, if only by fiat (declare that henceforth
`alloca' is a keyword or is otherwise reserved).
Now the problem becomes one of convincing compiler writers that
alloca (possibly by some other name) is worth adding as a reserved
word, or (on some systems) simply writing it in assembly (either as
a routine or with `inline assembly').
Note that alloca is not a panacea, and that it can largely be simulated
with dynamically sized arrays, as in
int n = get_n();
{
char like_alloca[n];
...
}
These are not identical concepts, but they are interrelated. Whether
one is `more important' or `more useful' than another I will not venture
to say.
--
In-Real-Life: Chris Torek, Univ of MD Comp Sci Dept (+1 301 454 7163)
Domain: chris at mimsy.umd.edu Path: uunet!mimsy!chris
-----------------------------
From: Pete Holsberg <pjh at mccc.uucp>
Subject: Stupid Mistake!
Date: 11 Jun 88 14:52:53 GMT
To: unix-wizards at brl-sem.arpa
S--t! I just did a beauty! I had /usr/bin/ksh as the last field
in /etc/passwd for root, and changed it to /usr/bin/sh (I *said* it was
stupid!) So now of course I cannot log in as root because it says "no shell".
Can any of you SysV gurus help me out? All of my system logins are
locked out (specifically bin) so I can't link /bin/sh to /usr/bin/sh,
and I can't copy /bin/sh to /usr/bin! Is there anything I can do other
than restoring /etc/passwd from a backup tape? (I'd like to be able to
recover from this without physically going to the lab.)
Undying gratitude will be yours, even if your solution includes
asbestos-melting flames! But please, email, OK?
-----------------------------
From: Greg Corson <milo at ndmath.uucp>
Subject: I need to RENT COMPUTER TIME from someone!
Date: 11 Jun 88 17:42:51 GMT
To: unix-wizards at brl-sem.arpa
I need to rent computer time during non business hours (ie: evenings and
weekends) for a commercial project. I need to rent time on a system
capable of the following:
1. Must be able to support at least 20 simultaneous users (if you could
eventually support several hundred it would be a MAJOR plus but 20 would
be enough to start). The applications being run present a very LIGHT
per-user load, probably no more computer time than someone running
a simple line-oriented text editor or a write/phone-like program.
2. Must be on a commercial dial-up network such as telenet, tymnet or something
similar. ---OR--- must be within local calling distance of a city with
a population of 1 million or more in the immediate area and have at least
20 incoming dial-up lines with modems.
3. Must have plenty of computer time available evenings which is normally
not used. For example, some kind of on-line service bureau that only
uses its computers during business hours.
4. Must be willing to rent this unused and otherwise unprofitable computer
time at reasonable rates. Preferably with a SIMPLE billing based on
connect time/communications charges and disk space used.
5. The system must run UNIX (any version) or VAX/VMS and have a C compiler
available for use.
6. The system must be fairly reliable and well maintained.
The rented time will be used to provide an experimental state-of-the-art
consumer information service. This is an ideal application for anyone who
doesn't use their computer center much on evenings and weekends as it allows
you to make some money during a time when your computer normally sits idle.
I do NOT need to rent any time during the business hours so your normal daytime
operation will be unaffected.
I am mainly interested in renting time on computers in the US but if you meet
the above requirements and happen to be in any other English-speaking country
or area please contact me also.
Right now I mainly want to RENT computer time, but if the idea of operating
a high-tech information/entertainment service appeals to anyone out there
on a BUSINESS/INVESTMENT level please contact me and we could get a joint
venture going...
Hope this message isn't too commercial for anyone, I thought it would be ok
since I'm trying to BUY something rather than sell it. Sorry for posting
to several news groups but I needed to make sure the message would get through
to the maximum number of system administrators many of whom don't read
comp.misc.
Thanks!!
Greg Corson
19141 Summers Drive
South Bend, IN 46637
(219) 277-5306 (weekdays till 6 PM eastern)
{pur-ee,rutgers,uunet}!iuvax!ndmath!milo
-----------------------------
From: Jim Frost <madd at bu-cs.bu.edu>
Subject: SVVS user/system time tests
Date: 11 Jun 88 20:08:32 GMT
Followup-To: comp.unix.wizards
To: unix-wizards at brl-sem.arpa
In article <55906 at sun.uucp> guy at gorodish.Sun.COM (Guy Harris) writes:
|> In the file exec1.c, there are messages indicating that the child process
|> did not inherit the system and user times of its parent. However, the test
|> actually only checks to see that the times are NON-ZERO. This appears to
|> have been the sticking point for Apollo since our system time was always 0.
|
|In this case, they should test whether the *sum* of user+system time is
|non-zero.
Even this strikes me as bogus. It should check to see if the times
for the child are greater than or equal to those of the parent. I see no
reason why a sufficiently fast machine might not be able to execute
the test in less time than one clock tick, making it fail the actual
test and the one you propose. Of course, I can imagine cases where my
idea is inaccurate as well, but you get the idea.
jim frost
madd at bu-it.bu.edu
-----------------------------
From: Guy Harris <guy at gorodish.sun.com>
Subject: Re: SVVS user/system time tests
Date: 11 Jun 88 23:29:36 GMT
Sender: news at sun.uucp
To: unix-wizards at sem.brl.mil
> |In this case, they should test whether the *sum* of user+system time is
> |non-zero.
>
> Even this strikes me as bogus. It should check to see if the times
> for the child are more than or equal to those of the parent. I see no
> reason why a sufficiently fast machine might not be able to execute
> the test in less time than one clock tick, making it fail the actual
> test and the one you propose.
In my original article, I stated that the test should *also* loop until the
real time since the loop began *and* the CPU time (user+system) since the loop
began was non-zero. This ensures that the machine will take more than one
clock tick to perform that portion of the test, no matter how fast it is.
The test would have to do something such as cut the loop off after some large
amount of time, just in case the system really *is* broken and doesn't maintain
CPU time figures.
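A sketch of such a loop using times() (the giving-up threshold is
arbitrary; ticks are whatever HZ is on the machine):

    #include <sys/types.h>
    #include <sys/times.h>

    /* spin until both real time and user+system CPU time have advanced,
     * giving up after maxreal clock ticks in case the system is broken */
    int
    wait_for_cpu_tick(long maxreal)
    {
        struct tms t0, t;
        clock_t r0, r;

        r0 = times(&t0);
        for (;;) {
            r = times(&t);
            if (r - r0 > 0 &&
                (t.tms_utime + t.tms_stime) -
                (t0.tms_utime + t0.tms_stime) > 0)
                return 0;       /* CPU time is being charged */
            if (r - r0 > maxreal)
                return -1;      /* give up; times() looks broken */
        }
    }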
-----------------------------
From: Henry Spencer <henry at utzoo.uucp>
Subject: Re: In defense of BSD (was: something else)
Date: 11 Jun 88 23:21:40 GMT
To: unix-wizards at brl-sem.arpa
> ... This is not an attempt to pick on Henry Spencer...
Who, me, criticize Berkeley? Nah. :-)
> ...I am just about fed up with all of the
> gratuitous Berkeley-bashing that has been going on here the last couple
> of months. I know of quite a few people out there who put in long, hard
> hours (with little or no pay) to benefit every person who uses any Unix
> system today (even SV versions)...
Actually, I will (and do) admit that Berkeley has done a lot of useful
things. In particular, there is one VERY IMPORTANT thing they did that
they almost never get credit for, because it's not flashy and obvious.
It's easy to notice, and praise, new features (although a bit less of
that might be a good idea...). It's not so easy to notice and properly
appreciate a solid system. 32V, as released by AT&T, was a very raw and
incomplete port. After releasing it, AT&T basically spent several years
dithering over whether to do anything further for outside consumption.
At around that time, an awful lot of people were interested in Unix on a
VAX, since the good old 11's limitations were getting pretty painful.
However, most of these people wanted a *production* system, something
they could use to do real work, not a flaky experimental system.
The significant thing that Berkeley and its outside contributors (e.g. DEC)
did was to shake 32V down into a solid system that coped with and exploited
the VAX hardware effectively. This is NOT trivial, as anyone who's read
the VAX hardware manuals will testify. The eventual System V releases
for the VAX didn't do nearly as good a job on it. Note that I am not
talking about virtual memory; I'm referring to hardware error recovery,
configuration procedures, proper device handling for a wide variety of
devices, bad-block support for disks, and so on. None of this is glamorous
and sexy, but it makes an enormous difference to people who want a system
that *runs* reliably without endless tinkering.
In my opinion, this particular effort was what *really* established UCB
as a credible "supplier" of Unix. And it was Berkeley's willingness to
do this work, and AT&T's unwillingness to do it (or at least, to release
the result), that really led to the current schizophrenic situation in
the Unix world. For several years, 4BSD -- silly incompatibilities and
all -- was the only Unix that a sensible, production-oriented shop would
run on a VAX. When AT&T finally got around to doing something along
those lines, 4BSD had a large head start. AT&T has been fighting to
catch up ever since.
We now return you to our normal Berkeley-bashing... :-)
> Without them driving the development
> process through the 1980's, we'd all still be using V7 systems...
Frankly, with a couple of reservations, that doesn't strike me as an
enormously bad thing. Going that route would have avoided an awful lot
of unnecessary compatibility headaches. There wasn't a lot wrong with
V7 that couldn't have been fixed in a backward-compatible way.
> ... Sure, they introduced
> some incompatible changes and features that were difficult and awkward
> to use, but before you criticize them for it, remember that in many
> cases they had *no prior art* to use as a guideline.
I'll go along with that argument, more or less, for semi-botched new
features. I fail to see that it applies to silly, incompatible changes
to existing ones.
> They gave it the gift of paging. (Yes, I know that the Vax paging
> code originated with 32V. When was the last time you used 32V?)
Actually not true; 32V used the paging hardware in a limited way but did
not do virtual memory, which is what most people think of when they hear
the word "paging". By the way, 4BSD virtual memory is a mediocre design
with wretchedly messy innards that few dare touch, because they are
so cryptic and fragile. It's not an accident that the virtual memory
is much the most popular target for re-implementation by Unix-box makers.
> They added networking code that has
> become an indispensable part of today's mini and workstation setups.
However, they were not the first to do this, so this hardly counts as a
massive argument in their favor. They also did a number of things that
almost got them lynched by the rest of the TCP/IP community; we're still
living with the aftereffects of some of those botches.
To sum up: Berkeley has made some quite valuable contributions. However,
they have also introduced a lot of stupid, incompatible changes that have
made life much harder than it needs to be. If the effort that went into
unnecessary meddling with working software had gone into useful projects
instead -- or even into tossing a Frisbee around on the lawn -- we'd all
be even better off. AT&T's problem is inertia and lack of interest in
useful changes; Berkeley's problem is an excess of enthusiasm for new
and nifty ideas, without adequate consideration of whether they are
*good* ideas. This enthusiasm is no problem -- indeed, it's desirable --
in a research lab that produces papers instead of software. But when the
end product is software that thousands of sites end up depending on, one
could wish for a bit more restraint.
--
"For perfect safety... sit on a fence| Henry Spencer @ U of Toronto Zoology
and watch the birds." --Wilbur Wright| {ihnp4,decvax,uunet!mnetor}!utzoo!henry
-----------------------------
End of UNIX-WIZARDS Digest
**************************