Warning From uucp
Hermes Trisgesmis
uucp at att.att.com
Mon Dec 26 23:09:14 AEST 1988
We have been unable to contact machine 'wa015b' since you queued your job.
wa015b!mail james (Date 12/24)
The job will be deleted in several days if the problem is not corrected.
If you care to kill the job, execute the following command:
uustat -kwa015bN49e1
Sincerely,
att!uucp
#############################################
##### Data File: ############################
>From arpa!VM1.NoDak.EDU!BRL.MIL!UNIX-WIZARDS Sat Dec 24 07:43:02 1988 remote from att
Received: by att.ATT.COM (smail2.6 att-mt)
id AA01009; 24 Dec 88 07:43:02 EST (Sat)
Received: from NDSUVM1.BITNET by VM1.NoDak.EDU (IBM VM SMTP R1.2) with BSMTP id 0764; Sat, 24 Dec 88 06:37:30 CST
Received: by NDSUVM1 (Mailer X1.25) id 0760; Sat, 24 Dec 88 06:37:20 CST
Date: Sat, 24 Dec 88 02:45:35 EST
Reply-To: UNIX-WIZARDS%BRL.MIL at VM1.NoDak.EDU
Sender: Unix-Wizards Mailing List <UNIX-WIZ at VM1.NoDak.EDU>
From: Mike Muuss The Moderator <Unix-Wizards-Request%BRL.MIL at VM1.NoDak.EDU>
Subject: UNIX-WIZARDS Digest V6#058
X-To: UNIX-WIZARDS at BRL.MIL
To: James Anderson <wa015b!james at ATT.ATT.COM>
UNIX-WIZARDS Digest Sat, 24 Dec 1988 V6#058
Today's Topics:
faster name lookups for SysV (was libraries)
Re: libraries
Re: Echo
Re: unshar business
Surprising fact about STREAMS service modules
Re: password security
Re: SysVr3.2.1 /etc/mount problems
Re: libraries
Re: password security
Re: libraries
Re: password security
Re: Yet Another useful paper
Re: password security
Light relief (was Re: IEEE 1003.2)
FIX from UC Berkeley
Re: unshar business
Re: This is strange...
Re: password security
Re: rsh environment
-----------------------------------------------------------------
From: Chris Torek <chris at mimsy.uucp>
Subject: faster name lookups for SysV (was libraries)
Date: 23 Dec 88 07:17:24 GMT
To: unix-wizards at sem.brl.mil
In article <580 at redsox.UUCP> campbell at redsox.UUCP (Larry Campbell) writes:
>Why not keep directories sorted? In SysV filesystems this is easy and
>relatively inexpensive, since you can assume a fixed 16 bytes per name.
>I am also assuming that lookups outnumber creations to a huge degree,
>which I'm sure is the case.
As far as I know, this is true for everything except /tmp (where
lookups are only a few times more common than creations). Unfortunately, a
complete sorting would require nontrivial changes, as it is hard inside
the kernel to move text from one file system block to another. Still,
at 16 bytes per entry, and 512 or 1024 bytes per block, one could
easily keep each block sorted, and reduce each block scan from 32 or 64
string comparisons to 5 or 6.
>Then namei becomes a binary search.
Or a piecewise binary search (one block at a time).
If you are stuck with SysV file systems, and have source, try adding
partial sorting to your namei and see if performance improves.
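[The per-block sorting idea above can be sketched in user space as follows. This is an illustration only, not kernel code; the struct and function names are invented, and the 16-byte entry layout assumes the SysV directory format (2-byte i-number plus 14-byte name) mentioned in the quoted article.]

```c
#include <assert.h>
#include <stdint.h>
#include <string.h>

#define NAMELEN 14

struct dirent16 {
    uint16_t d_ino;              /* i-number; 0 would mean an empty slot */
    char     d_name[NAMELEN];    /* NUL-padded; not NUL-terminated if full */
};

/*
 * Binary-search `n` sorted, in-use entries of one directory block.
 * Returns the i-number, or 0 if the name is not present.
 */
uint16_t dirblk_lookup(const struct dirent16 *blk, int n, const char *name)
{
    int lo = 0, hi = n - 1;

    while (lo <= hi) {
        int mid = lo + (hi - lo) / 2;
        int cmp = strncmp(name, blk[mid].d_name, NAMELEN);

        if (cmp == 0)
            return blk[mid].d_ino;
        if (cmp < 0)
            hi = mid - 1;
        else
            lo = mid + 1;
    }
    return 0;
}
```

[With 512-byte blocks and 16-byte entries, this turns each 32-comparison linear scan into at most 5 or 6, per the estimate above.]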
--
In-Real-Life: Chris Torek, Univ of MD Comp Sci Dept (+1 301 454 7163)
Domain: chris at mimsy.umd.edu Path: uunet!mimsy!chris
-----------------------------
From: Norman Joseph <norm at oglvee.uucp>
Subject: Re: libraries
Date: 22 Dec 88 14:08:10 GMT
To: unix-wizards at sem.brl.mil
>From article <15080 at mimsy.UUCP>, by chris at mimsy.UUCP (Chris Torek):
# [...] A Unix `.a' `library' file is simply a file containing
# other files, plus (depending on your system) a symbol table (in the
# `sub-file' __.SYMDEF). Now then, what is a Unix directory?
# [...]
# If your answer was `a file containing other files', congratulations.
#
# Now, aside from the actual implementation, what is the difference between
# a library file that contains other files and a library directory that
# contains other files?
#
# If your answer was `none', congratulations again.
I probably won't be the only one to point this out, but...
I was taught that a `Unix directory' contained filename/i-node number
pairs, and that the actual contents of the files listed in the directory
existed -outside- of the directory itself.
This certainly -would- be different from a Unix `.a' file if, in fact,
the contents of the (object) files it `archives' are actually contained
within the `.a' file proper.
Now, most of this goes without saying, so I believe that I must have
missed the point you were trying to make by using this analogy.
\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\*//////////////////////////////////////
Norm Joseph | UUCP: ...!{pitt,cgh}!amanue!oglvee!norm
Oglevee Computer System, Inc. | "Everything's written in stone, until the
Connellsville, PA 15425 | next guy with a sledgehammer comes along."
/////////////////////////////////////*\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\
-----------------------------
From: Guy Harris <guy at auspex.uucp>
Subject: Re: Echo
Date: 23 Dec 88 09:08:35 GMT
To: unix-wizards at sem.brl.mil
>And choosing between /usr/bin and /usr/ucb seems wrong. I would think
>that the choice should be made based on /bin or /5bin.
Remember, this is S5R4 we're talking about, not SunOS.... I think AT&T
would object to stuffing the S5-compatible versions of commands into
"/5bin" or "/usr/5bin".
-----------------------------
From: Chris Lewis <clewis at ecicrl.uucp>
Subject: Re: unshar business
Date: 22 Dec 88 06:41:45 GMT
To: unix-wizards at sem.brl.mil
In article <397 at eda.com> jim at eda.com (Jim Budler) writes:
>In article <164 at ecicrl.UUCP> clewis at ecicrl.UUCP (Chris Lewis) writes:
>| In article <395 at eda.com> jim at eda.com (Jim Budler) writes:
>| >In article <7876 at well.UUCP> Jef Poskanzer <jef at rtsg.ee.lbl.gov> writes:
>| >| Well, I have looked at Cathy's program, all 93 lines of it, and unless
>| >| I'm reading it wrong she wasn't paying much attention either.....
>[...]
>| >I may modify the source to disallow any '/'.
>First, you totally ignored the statement above.
First, you said "may". That also means "may not".
>| How about placing the following into "../../../rnews"?
>| for i in /bin/*
>| do
>| od $i | mail root
>| done
>Second, though partially my fault since I failed to mention I run her
>program under chroot(2). So there is no od(1), and no mail(1), and now
>there is not even a sed(1) available.
Second, you left out one line of your article that *you* wrote (just
before the "may" line):
>Currently the damage is limited to the news hierarchy, plus the news library.
That is, you're implying that it *is* possible to damage the news
hierarchy, which rnews is a part of. I can only comment on the code as
presented. AND, more importantly, no one else running Cathy's program knows
that you're using chroot either - so *they* are insecure.
Thus, you're inventing excuses after the fact.
Your approach requires that something (mapsh if you are using uuhosts) has
to be setuid root so that chroot can be used. A lot of SA's out there
won't run setuid root programs if they can possibly help it.
With Jef Poskanzer's simple suggestions, Cathy's program wouldn't have to use
chroot. What's wrong with that? Why did you react to a very constructive
posting from Jef with a flame? Is it that you are simply a twit?
>Now, I'll get down to what I really feel about this whole subject:
> 1) Someone supplied some source code, presented as a possible
> solution to a problem.
For which I applaud her attempt. Not your flames in retaliation for
a couple of simple suggestions by Jef.
> 3) You supplied neither a better solution, nor helped to
> fix it in any positive way ( or did I miss your posting of
> the traditional Usenet source code assistance, a diff).
Yes I did. Ever since I got involved in this discussion I have been
telling everyone to use uuhosts or something similar. Cathy's program
enhanced with Jef's suggestions is even better - because you *don't*
need chroot and because you *don't* have to setuid root.
>Cathy's program, slightly modified, wrapped within an edit of
>Mr. Quartermain's uuhosts script and mapsh program, increased
>the security of unpacking the maps.
Which is dumb. If you're using mapsh why in the hell do you need Cathy's
program? mapsh is a setuid root chroot'd shar. Which is probably safe
(but undesirable). What would be even better is to remove mapsh and
replace it completely with Cathy's program.
>What did your postings really contribute?
Regarding postings (plural):
Lots. Since Larry Blair and I made asses of ourselves about this
issue, people actually *DID* something about it. I've been telling
people about this hole on and off for about three years. What good
did it do? Not much. Publishing holes in the net is frowned upon, some
people are dense about blunt hints, and other people say "it couldn't
happen to me".
In light of the Internet Worm, I was actually composing an article
to completely reveal this hole along with the *strong* suggestion that
they install uuhosts ASAP. Then Larry Blair beat me to it.
Jim, read my lips:
- There is no bug. THEREFORE patch input is useless. There's nothing
to patch.
- There are already several packages available that unpack maps safely.
THEREFORE we didn't need to post any of them.
- All we've been trying to do is hit SA's over the head hard enough
for them to pay attention and plug their own bloody holes with
software that ALREADY EXISTS.
Because Larry and I made fools of ourselves, Cathy wrote her program.
Many other people wrote similar programs. Many other people thought
that their pet unshars were safe. Most of them were wrong and found out.
And in the end:
MANY SA'S PLUGGED THE HOLE!!!!!
Which is exactly what we were intending! Cosmic wow! And I helped!
Take a bow Chris and Larry! And all of us (except possibly you)
learned something in the process!
regarding "posting" singular:
Because you obviously didn't know what you were doing. And are inventing
excuses post-facto.
>And no I haven't finished my mods to the program, yet, so I know
>it isn't perfect yet, and given your response to less than perfection
>I may never post it,
Which is no great loss considering how well you understand uuhosts and
what mapsh does.
>but instead sit here more secure, in the grand
>tradition of all those who sat back and said "I've known about that
>hole for years." Why post source, I'll just get flames from the
>perfect people out there. <----- *more sarcasm*
[gosh, I'd never have noticed!]
[ ^ this is sarcasm too! ]
Nah, you couldn't be referring to me. I post source.
>Like I said lighten up.
Interesting. You say that in almost all of your postings. Most of
which are rabid flames in response to what appear to be relatively mild
comments or suggestions. Have you some sort of psychological problem?
In contrast, I only flame twits. <-------- *personal insult*
[ ^ *more sarcasm* ]
--
Chris Lewis, Markham, Ontario, Canada
{uunet!attcan,utgpu,yunexus,utzoo}!lsuc!ecicrl!clewis
Ferret Mailing list: ...!lsuc!gate!eci386!ferret-request
(or lsuc!gate!eci386!clewis or lsuc!clewis)
-----------------------------
From: Larry Philps <larry at hcr.uucp>
Subject: Surprising fact about STREAMS service modules
Date: 22 Dec 88 20:35:09 GMT
Posted: Thu Dec 22 15:35:09 1988
To: unix-wizards at sem.brl.mil
Well, I just learned something about the STREAMS mechanism today. Since I
found it quite surprising, I thought I should mention it to the world before
some other poor sucker gets surprised also.
As anyone who has perused the STREAMS code at all knows, all the
non-interruptible parts of it run at "splstr", which on most systems is
spl5. This is lower than the buffer cache, but high enough to block all
devices, most importantly the terminals and the network. After looking
carefully at the STREAMS programming manuals I can't find any reference to
the spl at which the driver service modules are called. Silly me, I just
assumed that it would be at splstr. However, check out the following code
fragment from io/stream.c.
queuerun()
{
	....
	s = splstr();
	...
	if (q->q_qinfo->qi_srvp) {
		spl1();
		(*q->q_qinfo->qi_srvp)(q);
		splstr();
	}
	...
	splx(s);
}
Note the spl1()! Anybody else out there surprised? We were seeing some
really bizarre behaviour out of RFS when under heavy load the server would
reverse the order of a DUCOPYOUT and a DUREAD packet. RFS produces no
diagnostics whatsoever when the out of sequence DUCOPYOUT arrives and the
only visible effect was that part of the read returned all zeros.
We tracked this down using a network analyzer to capture a packet trace and
then analyzed the RFS headers. After much work I decided that "This can
never happen, but it did - well ... at least it can never happen if the code
is never reentered". After that it took very little time to discover the
preceding fragment.
So, the moral of this story is, if you have a STREAMS driver that talks to
any hardware, be sure to protect its code with an splstr() or else write that
code so that it can be reentered.
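[Since the real fix is kernel code, here is only a user-space simulation of the moral: spl levels are modeled as an integer, and a service procedure re-raises the priority itself rather than trusting the level it was entered at. Everything beyond the splstr/spl1/splx names quoted above is invented for illustration.]

```c
#include <assert.h>

static int cur_spl = 0;

/* Toy spl primitives: each returns the previous level. */
static int splstr(void) { int s = cur_spl; cur_spl = 5; return s; }
static int spl1(void)   { int s = cur_spl; cur_spl = 1; return s; }
static void splx(int s) { cur_spl = s; }

/* State shared (in a real driver) with the device interrupt handler. */
static int shared_count;

/*
 * A service procedure written the safe way: it assumes nothing about
 * the spl it was entered at and raises it around the critical section.
 */
static void safe_srvp(void)
{
    int s = splstr();        /* re-protect: queuerun() called us at spl1 */
    shared_count++;          /* critical section */
    splx(s);
}

/* Mimics the queuerun() fragment: service procedures run at spl1. */
static void queuerun_sim(void)
{
    int s = splstr();
    spl1();
    safe_srvp();
    assert(cur_spl == 1);    /* safe_srvp restored its caller's level */
    splstr();
    splx(s);
}
```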
---
Larry Philps HCR Corporation
130 Bloor St. West, 10th floor Toronto, Ontario. M5S 1N5
(416) 922-1937 {utzoo,utcsri,ihnp4}!hcr!larry
-----------------------------
From: 99700000 <haynes at ucscc.ucsc.edu>
Subject: Re: password security
Date: 23 Dec 88 05:01:34 GMT
Sender: usenet at saturn.ucsc.edu
To: unix-wizards at sem.brl.mil
In article <5005 at b-tech.ann-arbor.mi.us> zeeff at b-tech.ann-arbor.mi.us (Jon
Zeeff) writes:
>The simple solution seems to be to force users to use some non alpha
>character somewhere in the middle of their passwords. Users then tend
>to use a combination of two words which prevents the dictionary search.
The 4.3-tahoe-BSD version of passwd seems to do this. At least the last
time I logged into a tahoe system and tried to change my password it
wouldn't rest until I had put a non-alphabetic character into it.
Had the same experience on a Convex machine.
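[A minimal sketch of the kind of check being described: reject a candidate password unless a non-alphabetic character appears somewhere in its interior, not just tacked onto either end. The function name is invented; the real 4.3-tahoe passwd check differs in detail.]

```c
#include <assert.h>
#include <ctype.h>
#include <string.h>

/* Return 1 if `pw` has a non-alphabetic character strictly inside it. */
static int password_ok(const char *pw)
{
    size_t len = strlen(pw);

    if (len < 3)
        return 0;                         /* no interior to check */
    for (size_t i = 1; i + 1 < len; i++)  /* skip first and last char */
        if (!isalpha((unsigned char)pw[i]))
            return 1;
    return 0;
}
```

[Note this deliberately rejects "1secret": a leading or trailing digit is the first thing a dictionary attack tries.]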
haynes at ucscc.ucsc.edu
haynes at ucscc.bitnet
..ucbvax!ucscc!haynes
"Any clod can have the facts, but having opinions is an Art."
Charles McCabe, San Francisco Chronicle
-----------------------------
From: "Robert C. White Jr." <rwhite at nusdhub.uucp>
Subject: Re: SysVr3.2.1 /etc/mount problems
Date: 22 Dec 88 07:42:14 GMT
To: unix-wizards at sem.brl.mil
in article <1279 at nusdhub.UUCP>, rwhite at nusdhub.UUCP (Robert C. White Jr.) says:
> Does this sound like someone who has already been through all this? IT
> SHOULD! I spent the better part of the day dissecting the diskettes just
> to make sure. ACK.
No, not today, back then when I did it. ;-)
Neither AT&T nor anybody else could tell me whether the interrupt-the-load-
and-install-the-other-package method was safe when I tried this the first
time (back in 3.1.1).
Rob.
-----------------------------
From: "Robert C. White Jr." <rwhite at nusdhub.uucp>
Subject: Re: libraries
Date: 22 Dec 88 21:49:00 GMT
To: unix-wizards at sem.brl.mil
in article <15126 at mimsy.UUCP>, chris at mimsy.UUCP (Chris Torek) says:
>
> In article <1278 at nusdhub.UUCP> rwhite at nusdhub.UUCP (Robert C. White Jr.)
> writes:
>>Wrong-O kid! An archive library is a "File which contains the
>>original contents of zero-or-more external sources, usually text or
>>object files, which have been reduced to a single system object."
>
> This is an implementation detail.
Where do you get this "implementation has nothing to do with it" bull?
We are TALKING implementation, after all. "Implementation aside," your
comments on implementing archives as directories instead of files are
reduced to nothing.
>>As subjective proof of this address this question: "Can you 'archive'
>>a a device special file and then access the device or service through
>>direct refrence to the archive?" The answer is OF COURSE *NO* because ...
>
> Indeed? Is that creaking I hear coming from the limb upon which you
> have climbed? Perhaps I should break it off, lest others be tempted to
> go out on it as well:
No, no creaking here!
> Why, certainly, you can `archive' a device special file and then access
> the device via the archive. What would you say if I told you I had
> added ar-file searching to the directory scanning code in namei?
I would say that you do not understand "functional units" in terms
of real computer systems architecture. Why would you take a (bad)
directory search routine and increase its "badness coefficient"
by including archive searching?? And if you were to do that, wouldn't
it, BY DEFINITION, no longer be a directory search routine?
Who are you anyway?
> Insecure, yes, but on a single user workstation, so what? (Note that
Too bad life is not "single workstations" with no interest in security, isn't it?
> while `ar' ignores S_IFCHR and S_IFBLK on extraction, they do in fact
> appear in the header mode field. It is eminently possible to scan ar
> files in the kernel as if they were directories. Pointless, perhaps,
> but easy.)
>
>>(1) device special files have no "contents" per-se and (2) The archive
>>does not preserve the "file" concept on an individual-entry basis.
>
> % man 5 ar
>
> AR(5) UNIX Programmer's Manual AR(5)
> ...
> A file produced by ar has a magic string at the start, fol-
> lowed by the constituent files, each preceded by a file
> header. ...
^^^^^^
You will note that the archive precedes the "files" (meaning their contents)
with HEADER(s). *Headers* are NOT i-nodes. There is no "file" in an
archive, only the "contents of the original system object" and a
sufficient quantity of information to RE-CONSTITUTE a file which would
have the same *SYSTEM DEPENDENT* information.
The file is not preserved; only the contents (and, by induction, the
system-object level information, because its nature is part of its contents).
> Each file begins on a even (0 mod 2) boundary; a new-line is
> inserted between files if necessary. ...
>
> That should tell you that each `entry' is a `file'. [Argument from
> authority: the manual says so :-) ]
SHAME SHAME... Quoting out of context again. To wit: [ AR(4)]
DESCRIPTION:
The archive command ar(1) is used to combine several files into
one.
(...also...)
All information in the file member headers is in printable
ASCII.
NONE of this preserves the "file"ness of the entries, and it all states
that the contributors are all reduced to "one (file)," so it is prima
facie that you do not understand that of which you speak. If you don't
understand the difference between "a file" and "something that will let
you create a file," I suggest you compare some *.o file (as a concept)
to using the "cc" command and a *.c file. This is the same thing as saying
/usr/lib/libc.a is identical to using "ar" and the directory
/usr/src/libs/libc/*.o. NOT BLOODY LIKELY.
In terms of "practical authority" I suggest you compare the contents of
<inode.h> and <ar.h>. Archive entries are substantially different from
WHATEVER your, or anybody else's, computer's file-as-valid-system-object
concept is.
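[For readers without a manual handy, the member header being argued over looks roughly like this. Field widths follow the common System V ar format; the real header in <ar.h> names it struct ar_hdr, and widths vary slightly across Unixes, so treat this as a sketch. Every field is printable ASCII -- a description sufficient to re-create a file, not a live inode.]

```c
#include <assert.h>

/* Sketch of the System V ar(4) member header (real name: struct ar_hdr). */
struct ar_hdr_sketch {
    char ar_name[16];   /* member name, terminated by '/' or blanks */
    char ar_date[12];   /* modification time, decimal seconds */
    char ar_uid[6];     /* owner uid, decimal */
    char ar_gid[6];     /* group gid, decimal */
    char ar_mode[8];    /* file mode, octal */
    char ar_size[10];   /* member size in bytes, decimal */
    char ar_fmag[2];    /* header trailer magic, "`\n" */
};
```

[All-char fields mean no padding: the header is exactly 60 bytes on any machine, which is part of why the format is portable and why it carries so much less than an i-node does.]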
>>If you do not understand the difference between a "system object" file,
>>and "the contents of a" file go to FILE MGMT. THEORY 101. Do not pass
>>go, do not collect your next paycheck.
>
> The difference is an *implementation* question again. There is no
> fundamental reason that the kernel could not use `ar' format for
> directories that contain files with exactly one link.
I can think of things like "consistent referencing," "raw device
information," "file length extension (e.g. open for append),"
"stream/socket information," "open and closure tracking," and perhaps a
dozen reasons that a kernel could not use portable/common archive
formats for actual file manipulation.
The "FUNDAMENTAL PROBLEM" is that the ar format does not have the
flexibility or space to provide the kernel with the things it needs to
work with (adding things to/changing the format makes it no longer ar
format, so don't go off on an "I'll just add..." kick; it would make you
look like a fool).
Additional problems include: having to read/lseek past every entry
which precedes the entry you are interested in. No convenient method for
going backwards without altering the format. No "lookup" capability.
Non-tabular format. Inefficient storage method for random access of
contents. (This list could be longer, but I have a life to get on
with.)
You can't just say "that's implementation dependent and so not
important" because your statement is one on implementation.
> (Since the rest of the argument is built on this, I am going to skip
> ahead.)
Convenient way of avoiding personal failure: "I'll just skip it..."
Let me guess, you're a fundy, right?
>>As an exercise in intuition and deduction try the following:
>
> [steps left out: essentially, decide how much time it would take to
> link-edit by reading individual .o files instead of a .a file.]
Reading time/effort for a set number of bytes X is identical no matter
where the X originates. lseek is faster than open and close. lseek
does not require any additional file table entries. No steps were
omitted.
If I really had left anything out you would have mentioned them in some
detail instead of just deleting the entire thing and inserting a
"fuzzing" generality.
> I have already wasted more time and net bandwidth on this subject than
> I really cared to use; but here is a very simple timing comparison for
> a `hello world' program%. Viz:
>
> # N.B.: these were both run twice to prime the cache and
> # make the numbers settle.
>
> % time ld -X /lib/crt0.o -o hw0 hw.o -lc
> 0.5u 0.4s 0:02 52% 24+112k 41+3io 2pf+0w
> % time ld -X /lib/crt0.o -o hw1 *.o
> 0.2u 0.4s 0:01 48% 25+98k 30+10io 2pf+0w
> %
>
> Reading individual .o files is *FASTER*. It took the *same* amount of
> system time (to a first approximation) and *less* user time to read
> the needed .o files than it did to read (and ignore the unneeded parts
> of) the archive, for a total of 33% less time. It also took less memory
> space and fewer disk transfers.
While the extreme case, e.g. 1 object include, sometimes shows a
reduction, unfortunately most of us compile things slightly more complex
than "hello world" programs. Your second example also is a fraud in
that it didn't search through a directory containing all the *.o files
normally found in libc.a. If it had, your example would have failed. In
your "good case" example you only search for the hw.o file mentioned in
the "bad case" portion, not a directory containing many *.o files.
More clearly, there is no "selection and resolution" phase involved in
your second example; by manually including all the objects (with *.o)
you are instructing the loader to use the "minimum" objects specified. Your
example never invokes the unresolved-references code that does all the
time-consuming things we are discussing.
> -----
> % `hw.o' needs only a few .o files, but hey, I want the results to look
> good.
> -----
>
> Now, there were only a few .o files involved in this case: hw1 needed
> only the set
>
> _exit.o bcopy.o bzero.o calloc.o cerror.o close.o doprnt.o
> exit.o findiop.o flsbuf.o fstat.o getdtablesize.o getpagesize.o
> hw.o ioctl.o isatty.o lseek.o makebuf.o malloc.o printf.o
> read.o sbrk.o perror.o stdio.o write.o
>
> which is only 25 out of a potential 317 (that includes a few dozen
> compatibility routines, over one hundred syscalls, etc.). Real programs
> would need more .o files, and it will indeed require more open calls.
> There is another issue, which I shall address momentarily, and that
> is deciding which .o files are needed (which I did `by hand' above,
> so that it does not count in the output from `time').
So you admit that you didn't scan the full 317, nor the directory that
contained a full 317, you only took the files you needed. Invalidating
your example.
If you had scanned the full 317 in example 2 using the command indicated
the resulting executable would have been HUGE and this size difference
alone would be the penalty for the "speed" you "gained." You can, after
all, include any unrelated objects in a load that you choose, so the
loader doesn't have to think about the load much, so it runs faster, so
it wastes time.
>>>>How many times would you have to scan the contents of /usr/lib/*.o to
>>>>load one relatively complex c program (say vn).
>
>>>Either one time, or (preferably) zero times.
>
>>A library directory you never scan would be useless. Ncest' Pa?
>>[sic ;-)]
>
> (Ne c'est pa, if anyone cares... ne ~= not, c'est ~= is, pa ~= way:
> is not that the way.)
> Clearly we are not communicating.
>
> The linker need not `look at' any .o files. Its task is to link. To
> do this it must know which files define needed symbols, and which
> symbols those files need, and so forth, recursively, until all needed
> symbols are satisfied. Now, how might ld perform that task?
>
> For an archive random library---/lib/libc.a, for instance---it does not
> scan the entire archive. It pulls one sub-file out of the archive,
> __.SYMDEF. This file lists which symbols are defined by which files.
> It does not now list which symbols are needed by which files, but it is
> easy to imagine that, in a new scheme, the file that takes its place
> does.
>
> So what ld might do, then, is read the `.symtab' file and, using that
> information, recursively build a list of needed .o files. It could
> then open and link exactly those .o files---never touching any that are
> not needed. If your C program consists of `main() {}', all you need
> is exit.o. ld would read exactly two files from the C library. And
> hey presto! we have scanned the contents of /lib/libc/*.o zero times.
> If your C program was the hello-world example above, ld would read
> exactly 26 files (the 25 .o's plus .symtab)---and again, scan the
> contents of /lib/libc/*.o zero times.
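[The quoted scheme can be sketched concretely. The tables below are made-up miniature data, and the symbol-per-member simplification is mine; the point is only the recursion: mark every member needed to satisfy a symbol, transitively, without ever opening an unneeded one.]

```c
#include <assert.h>
#include <string.h>

#define NMEMB  4
#define MAXREF 3

/* One row per library member, as a __.SYMDEF-style table might record. */
struct member {
    const char *name;
    const char *defines;          /* one defined symbol, for brevity */
    const char *needs[MAXREF];    /* symbols this member references */
};

static const struct member lib[NMEMB] = {
    { "printf.o", "printf", { "write", 0 } },
    { "write.o",  "write",  { 0 } },
    { "exit.o",   "exit",   { 0 } },
    { "qsort.o",  "qsort",  { 0 } },   /* never pulled in below */
};

/* Recursively mark every member needed to satisfy `sym`. */
static void pull(const char *sym, int needed[NMEMB])
{
    for (int i = 0; i < NMEMB; i++) {
        if (strcmp(lib[i].defines, sym) != 0 || needed[i])
            continue;
        needed[i] = 1;
        for (int j = 0; j < MAXREF && lib[i].needs[j]; j++)
            pull(lib[i].needs[j], needed);
    }
}
```

[After pulling "printf" and "exit", exactly printf.o, write.o, and exit.o are marked; qsort.o is never touched, which is the "zero scans" claim in miniature.]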
Compare this to:
Read each .a file once. Juggle the same pointers necessary in both
examples. Write output. exit.
LD(1) says that: If any argument is a library, it is searched exactly
once at the point it is encountered in the argument list. (order is
only significant in symbol conflict, etc.)
>>I can guarantee at least two scans.
>
> Perhaps you mean something else here: the number of times the kernel
> must look at directory entries to find any given .o file. If
> directories were as fast as they should be, the answer would be `1'.
> (Consider, e.g., a Unix with hashed directories.)
(in terms of directory scanning)
Scan #1: Looking for the directories mentioned (e.g. scanning parent
directories)
Scan #2: Looking for the .Symtab file. Repeat #2 for each "archive"
named, e.g. /usr/lib/libcurses /usr/lib/libc /usr/lib/libm
or whatever
Scan #n: Looking for individual .o files (then of course there is the
opening and reading and closing of whatever.)
[Scanning can be reduced with potentially infinite disk buffering, but
who has infinite real memory to put it in?]
Please compare this to:
Scan #1: Looking for the files mentioned (then of course there is the
opening and reading and closing of whatever single files.)
Your example is artificially fast because you already did the search and
extract phase manually. What was the "time" on that?
>>There are a few things I do not have to *test* to know. ... I do not
>>have to implement a loser of a library scheme using a symbol table
>>file and individual object files to know that it is a dumb idea.
>>[ellipsis (...) represents conveniently removed example of keyed
>>directory scanning (usenet) and arguments as to its similarity to the
>>system proposed.]
>
> But if you do not test them, you may still be wrong. See the hello-
> world example above. Linking individual .o files is sometimes *faster*
> ---even in the current (4.3BSD-tahoe) system. And my original point
> still stands: if archives are a necessary efficiency hack, there may
> be something wrong with the underlying system. I think that, in
> principle, they *should* be unnecessary, and we should come up with
> a way to make them so. [But I must admit that it is not high on my
> priority list.]
As already stated, your "hello world" example is not accurate because
you did the extraction manually beforehand, instead of having the
linker do an intelligent extract based on a symbol table. The linker
will load as many arbitrary .o files as you like, and quite a lot faster
than the normal symbol referencing and lookup which is encountered in
both schemes. In your example there were no unresolved symbols nor
selective loading of objects done by the linker because you had done the
selecting beforehand. How long did the selecting take you? Was it
longer than the .3u?
Rob.
-----------------------------
From: Caleb Hess <hess at iuvax.cs.indiana.edu>
Subject: Re: password security
Date: 23 Dec 88 16:22:23 GMT
To: unix-wizards at sem.brl.mil
In article <5005 at b-tech.ann-arbor.mi.us> zeeff at b-tech.ann-arbor.mi.us (Jon
Zeeff) writes:
>The simple solution seems to be to force users to use some non alpha
>character somewhere in the middle of their passwords. Users then tend
>to use a combination of two words which prevents the dictionary search.
>
Pardon me, but I just had to ask: Somewhere in the first 7 chars? Or the
first 8 chars? Or anywhere in an arbitrarily long password? (Oh well,
maybe the average user's vocabulary doesn't include words of more than
6 letters anyway).
-----------------------------
From: Barry Shein <bzs at encore.com>
Subject: Re: libraries
Date: 23 Dec 88 17:17:49 GMT
Posting-Front-End: GNU Emacs 18.41.15 of Tue Jun 9 1987 on xenna
(berkeley-unix)
To: unix-wizards at sem.brl.mil
It seems to me the whole point is that I can create a file __.SYMDEF
in my object directory and have ld exploit it if it's there, at which
point almost all the complaints become moot.
-Barry Shein, ||Encore||
-----------------------------
From: Barry Shein <bzs at encore.com>
Subject: Re: password security
Date: 23 Dec 88 17:47:21 GMT
Posting-Front-End: GNU Emacs 18.41.15 of Tue Jun 9 1987 on xenna
(berkeley-unix)
To: unix-wizards at sem.brl.mil
From: prh at actnyc.UUCP (Paul R. Haas)
>In article <4444 at xenna.Encore.COM> bzs at Encore.COM (Barry Shein) writes:
>>The average secretary I know is bright enough to understand rules like
>>"use two short words with some upper-case letters and/or digits thrown
>>in and separated by a punctuation, like "Hey!Jude" "FidoIS#1". Very
>>hard to guess, very easy to remember, next...
>Give a thousand secretaries that same set of instructions and you will
>get far less than a thousand different passwords. Sort them in order
>of frequency and try them all on whatever system you are trying to
>crack. You certainly won't be able to break all the accounts, but you
>will get a few.
Is this based on *anything*? Or just a wild guess, sounds utterly
baseless to me. You honestly think if I told 1000 people to:
choose two short words separated by a punctuation character
and mix some upper-lower case into the words
I would frequently get the exact same result from different people?
Gads, and what might that result be? The world of human psychology
awaits your discovery! (the only exception I can imagine is that if
you gave an example they'd all use the example, but other than that,
you can check for that easily enough.)
>If people are allowed to create their own passwords, there should not be
>a way to try ten thousand different passwords on each account without
>triggering some alarm.
I doubt you can ever achieve this as someone only needs access to your
encryption algorithm.
>If security is really important it may be useful to put the shadow
>password file on a separate server machine. The server machine should be
>physically and electronically remote so that the only requests it
>services are "check password/username", "add password/username",
>"remove password/username" and "changepassword
>newpassword/oldpassword/username". This implies that backups and restores
>have to be done manually. A logical migration path to a secure password
>server is to use a shadow password file which is normally only accessible
>through a small well defined interface.
Unfortunately you now have to trust your network (eg. that I can't
send "password ok" messages from a different system.)
It's a hard problem; merely adding layers of complexity is not a
particularly compelling approach. That's my whole point.
-Barry Shein, ||Encore||
-----------------------------
From: Barry Shein <bzs at encore.com>
Subject: Re: Yet Another useful paper
Date: 23 Dec 88 18:02:50 GMT
Posting-Front-End: GNU Emacs 18.41.15 of Tue Jun 9 1987 on xenna
(berkeley-unix)
To: unix-wizards at sem.brl.mil
From: henry at utzoo.uucp (Henry Spencer)
>In article <12750 at bellcore.bellcore.com> karn at ka9q.bellcore.com (Phil Karn) writes:
>>I too have my doubts about the effectiveness of shadow password files. My
>>fear is that it will make administrators complacent; they'll reason that
>>since no one can get at the file, then there's no need to ensure on a
>>regular basis that people pick hard-to-guess passwords.
>
>Turn it around: would you suggest deleting shadow password files, from
>systems which already have them, just to keep the sysadmins alert?
Although I agree with Phil Karn I also agree with Henry that this
reasoning is not compelling.
I tend towards the concern that by making password files unreadable
we admit that system security demands their unreadability. We thereby
create a situation where, if there's any suspicion that the pw
file has gotten out, we have to admit a security crisis.
For example, consider discovering a software bug which allows any file
to be read by any user; I know of a few in many systems (they've been
discussed in the recent past, no secrets here.)
Right now that would be a major concern on some systems, minor on
others (eg. a system where all files are readable anyhow, not terribly
uncommon, or of no great consequence.)
By moving to shadow password files there's no choice, any bug which
permits reading of unreadable files must be admitted to be a major
security breach. Perhaps on your (universal "your") system you can
tell your management and users that it really doesn't matter if every
disgruntled employee now has a copy of the pw file but that sort of
complacency can't be counted on.
To turn it around, if you find a bug which allows anyone WRITE access
to any file on the system don't you immediately check the password
file? Unfortunately read access is more insidious, since you probably
can't tell if the pw file has been read by an unauthorized user, and
it leaves no tracks (that is, I can check the pw file against a
recent backup tape after a write breach; after a read breach there's
no modification to compare against.)
Or do we conclude that we'll make the pw files unreadable but not be
concerned if they happen to get read?
I claim it's a can of worms being created.
-Barry Shein, ||Encore||
-----------------------------
From: John Merrill <merrill at bucasb>
Subject: Re: password security
Date: 23 Dec 88 18:22:25 GMT
Followup-To: sci.crypt
To: unix-wizards at sem.brl.mil
In article <4469 at xenna.Encore.COM>, bzs at Encore (Barry Shein) writes:
>
>From: prh at actnyc.UUCP (Paul R. Haas)
>>In article <4444 at xenna.Encore.COM> bzs at Encore.COM (Barry Shein) writes:
>>>The average secretary I know is bright enough to understand rules like
>>>"use two short words with some upper-case letters and/or digits thrown
>>>in and separated by a punctuation, like "Hey!Jude" "FidoIS#1". Very
>>>hard to guess, very easy to remember, next...
>
>>Give a thousand secretaries that same set of instructions and you will
>>get far less than a thousand different passwords. Sort them in order
>>of frequency and try them all on whatever system you are trying to
>>crack. You certainly won't be able to break all the accounts, but you
>>will get a few.
>
>Is this based on *anything*? Or just a wild guess, sounds utterly
>baseless to me. You honestly think if I told 1000 people to:
>
> choose two short words separated by a punctuation character
> and mix some upper-lower case into the words
>
>I would frequently get the exact same result from different people?
Yes, Barry, you would. Why do I know this? Consider the following
modification of your paradigm:
choose an English word of at most eight characters, mixing
both upper and lower case in the word. You must be able to
recall this word easily---without writing the word down.
Guess what! There's a short list that covers the vast majority of
these words. This list is dominated by the hundred most common names
(in the local language), followed by a collection of folk names.
(For your test, I'd expect to see things like Frodo!Ba[ggins], at
least if the target audience consisted of CS nerds.)
Is the idea a bad one? No, not at all, if only because it might take
a while to extract the statistics of the process. But in the long
run, the two paradigms are probably equal.
-----------------------------
From: Dominic Dunlop <domo at riddle.uucp>
Subject: Light relief (was Re: IEEE 1003.2)
Date: 22 Dec 88 16:45:50 GMT
Keywords: tar, ar, cpio, death-wish
To: unix-wizards at sem.brl.mil
In article <4407 at xenna.Encore.COM> bzs at Encore.COM (Barry Shein) writes:
[relevant stuff deleted -- I'm making an irrelevant posting here...]
>For that matter why not just combine tar and ar and add a flag to tar
>to include an archive symbol table...
Better make that a flag to cpio. In System V, release 4, cpio will have
a new flag, -Htar, that makes it read and write tar-format archives...
--
Dominic Dunlop
domo at sphinx.co.uk domo at riddle.uucp
-----------------------------
From: Ning Zhang <zhang at zgdvda.uucp>
Subject: FIX from UC Berkeley
Date: 23 Dec 88 07:47:48 GMT
To: unix-wizards at sem.brl.mil
Hi UNIX Folks,
I just got a patch from Berkeley and it looks like this
> Subject: security problem in ?.
> Index: ? 4.3BSD
>
> Description:
> There's a security problem associated with the ? in all known
> Berkeley systems. This problem is also in most Berkeley derived
> systems, see your vendor for more information.
>
> Fix:
> Apply the following patch to the file ? and ? it.
> ......
I am very much afraid that if some crackers have seen the patch, they can
break into any 4.3bsd UNIX system.
Nowadays, computers have become very important in our daily life, and
the security problem has received more and more attention. But when I saw
that I could become super-user in such an easy way, I couldn't trust my
eyes, and couldn't trust UNIX either. Such a big hole has existed in the
4.3bsd system for at least 5 years! We all know the importance of
security; however, there was no security at all in the 4.3bsd UNIX
system. The weakness of UNIX has shown itself again. If a cracker, or a
worm or a virus, knew of the hole, it would be ... unbelievable! But we
UNIX folks are lucky: I am not a cracker, or RTM, Jr., and I reported it
to Berkeley. The UNIX community will become safe again very soon. But I
don't think we will be safe forever after the hole is plugged. What is
the real solution to the security problem in computer systems? At least,
in my mind, a cycle of bug-report and bug-fix is not a good way to solve
the security problem. Of course, we should think about the complexity of
computer systems, but I think that is another problem. It is worse that
we always make the same error in different places, at different times!
And most experienced programmers and experts do it too, just as
Gene Spafford said in his worm report. We all should consider this
security problem carefully and seriously.
The above is my opinion only. I think I have allies in this.
_______ -^-^-^-^-^-^-^-^-^-^-^-^-^-^-^-^-^-^-^-^-^-^-^-^-^-^-^-^-
/____ / Ning Zhang (zhang at zgdvda.uucp)
___/ / Zentrum fuer Graphische Datenverarbeitung e.V. (ZGDV)
/__ / Wilhelminenstrasse 7, D-6100 Darmstadt, F. R. of Germany
/ /____ Phone: +49/6151/1000-67 Telex: 4197367 agd d
/______/ -v-v-v-v-v-v-v-v-v-v-v-v-v-v-v-v-v-v-v-v-v-v-v-v-v-v-v-v-
P.S.: I have been a part-time system manager for 5 years in China.
P.S.: But now I am working on Computer Graphics (It's my major).
P.S.: If you give me a chance, I will do the number one for you :-)
-----------------------------
From: Jim Budler <jim at eda.com>
Subject: Re: unshar business
Date: 23 Dec 88 18:14:36 GMT
To: unix-wizards at sem.brl.mil
In article <167 at ecicrl.UUCP> clewis at ecicrl.UUCP (Chris Lewis) writes:
| In article <397 at eda.com> jim at eda.com (Jim Budler) writes:
| >[...]
| >| >I may modify the source to disallow any '/'.
|
| >First, you totally ignored the statement above.
|
| First, you said "may". That also means "may not".
OK
| >Second, though partially my fault since I failed to mention I run her
| >program under chroot(2). So there is no od(1), and no mail(1), and now
| >there is not even a sed(1) available.
|
| Thus, you're inventing excuses after the fact.
No I was not *inventing* anything.
| Your approach requires that something (mapsh if you are using uuhosts) has
| to be setuid root so that chroot can be used. A lot of SA's out there
| won't run setuid root programs if they can possibly help it.
That is their problem. A setuid program for which I have the source
seems relatively safe.
| With Jef Poskanzer simple suggestions, Cathy's program wouldn't have to use
| chroot. What's wrong with that? Why did you react to a very constructive
| posting from Jef with a flame? Is it that you are simply a twit?
You call this constructive?
| >| >In article <7876 at well.UUCP> Jef Poskanzer <jef at rtsg.ee.lbl.gov> writes:
| >| >| Well, I have looked at Cathy's program, all 93 lines of it, and unless
| >| >| I'm reading it wrong she wasn't paying much attention either.....
As I recall, in addition to the *constructive* comments above he
mentioned using uns to unpack something into /etc/passwd.
To which I replied that news was not allowed to write to /etc/passwd, and
that I might disallow '/'. Your analysis of this statement is above.
The other *constructive* comment was something like:
and the program uses gets().
Now *if* people have been watching news for a while, and if they
have caught the articles in question, that statement might be
amplified in their minds into a documentary on the security aspects
of using gets() instead of fgets().
|
| > 1) Someone supplied some source code, presented as a possible
| > solution to a problem.
|
| For which I applaud her attempt. Not your flames in retaliation for
| a couple of simple suggestions by Jef.
I don't and didn't feel that Jef's comments were constructive. I'll
agree they were simple.
|
| > 3) You supplied neither a better solution, nor helped to
| > fix it in any positive way ( or did I miss your posting of
| > the traditional Usenet source code assistance, a diff).
|
| Yes I did. Ever since I got involved in this discussion I have been
| telling everyone to use uuhosts or something similar. Cathy's program
| enhanced with Jef's suggestions is even better - because you *don't*
| need chroot and because you *don't* have to setuid root.
I've been running uuhosts as long as I've been on the net (this job)
and started using it when it first came out, (previous job). Wasn't that
your suggestion? uuhosts is better than cron running sh on the maps.
But it isn't perfect.
| >Cathy's program, slightly modified, wrapped within an edit of
| >Mr. Quartermain's uuhosts script and mapsh program, increased
| >the security of unpacking the maps.
|
| Which is dumb. If you're using mapsh why in the hell do you need Cathy's
| program? mapsh is a setuid root chroot'd shar. Which is probably safe
| (but undesirable).
Which is not dumb. First mapsh is not a shar. It is just
(cd $maps; chroot; sh). uuhosts pipes particular commands to it.
As was pointed out in these discussions, chroot() does
not prevent damage by using up the inodes.
| What would be even better is to remove mapsh and
| replace it completely with Cathy's program.
Probably, when I get the time to finish disallowing '/', and replacing
gets() with fgets(). At that time I'll probably eliminate uuhosts
entirely for unpacking maps, gut it and retain its other useful map display
and indexing features.
|
| >What did your postings really contribute?
|
| Regarding postings (plural):
|
[verbal self congratulations]
|
| Jim, read my lips:
|
| - There is no bug. THEREFORE patch input is useless. There's nothing
| to patch.
Make up your mind. Either Jef suggested fixes to the program, or there
is no bug. It can't be both. My request for patch input was a statement
about Jef's statements about Cathy's program. Was he making constructive
criticism or rude remarks? I felt he was making rude remarks, and hence
my posting.
|
| - There are already several packages available that unpack maps safely.
| THEREFORE we didn't need to post any of them.
|
| - All we've been trying to do is hit SA's over the head hard enough
| for them to pay attention and plug their own bloody holes with
| software that ALREADY EXISTS.
|
| Because Larry and I made fools of ourselves, Cathy wrote her program.
| Many other people wrote similar programs. Many other people thought
| that their pet unshars were safe. Most of them were wrong and found out.
| And in the end:
|
So what are you crying about? I posted about what I felt was Jef's
unhelpful attitude. You jumped on me, I responded. Classic Usenet
tradition.
| MANY SA'S PLUGGED THE HOLE!!!!!
|
| Which is exactly what we were intending! Cosmic wow! And I helped!
| Take a bow Chris and Larry! And all of us (except possibly you)
| learned something in the process!
Congratulations! Does that make you feel better? Some of us, including me,
learned from Cathy. Some of us, including me, were made aware by Jef
of two holes in Cathy's program. But Jef was not truly constructive in
the manner in which he presented those holes.
|
| regarding "posting" singular:
|
| Because you obviously didn't know what you were doing. And are inventing
| excuses post-facto.
Oh, calling me a liar again. And I obviously didn't know what I was doing?
Where did you get that from? There is nothing *wrong* about what I am
doing. Overkill, is probably the most descriptive word. But wrong?
|
| >And no I haven't finished my mods to the program, yet, so I know
| >it isn't perfect yet, and given your response to less than perfection
| >I may never post it,
|
| Which is no great loss considering how well you understand uuhosts and
| what mapsh does.
Thanks, I needed that. How do you know what I know about uuhosts? Oh,
that's right, I forgot, I lied about using it. And you obviously know
all about it. Quoting you:
| program? mapsh is a setuid root chroot'd shar. Which is probably safe
|
| >but instead sit here more secure, in the grand
| >tradition of all those who sat back and said "I've known about that
| >hole for years." Why post source, I'll just get flames from the
| >perfect people out there. <----- *more sarcasm*
| [gosh, I'd never have noticed!]
| [ ^ this is sarcasm too! ]
|
| Nah, you couldn't be referring to me. I post source.
|
That's nice, so do I.
| >Like I said lighten up.
|
| Interesting. You say that in almost all of your postings. Most of
| which are rabid flames in response to what appear to be relatively mild
| comments or suggestions. Have you some sort of psychological problem?
|
I doubt that you see most of my postings. I didn't feel that Jef's
statements were relatively mild comments or suggestions. I didn't
feel his suggestions were clear. And they were presented very
poorly.
| In contrast, I only flame twits. <-------- *personal insult*
| [ ^ *more sarcasm* ]
Try sending a few to yourself then. I felt, and I feel that Jef did
a very great disservice to a new source poster. In the process the
two suggestions hidden within his posting may assist the Usenet.
But he could have done the same service to Usenet in a manner which
did not put down the efforts of another. But maybe that is too
much to ask.
| --
| Chris Lewis, Markham, Ontario, Canada
Call me a twit if you like. The world around has an opinion of
all the players in this small drama. They undoubtedly have made
up their mind about Jim Budler, Chris Lewis, and Jef Poskanzer.
I can live with your opinion of me, and I'm sure you can live with my
opinion of you. And we probably will never know the opinions of
the great majority.
Merry Christmas.
jim
--
Jim Budler address = uucp: ...!{decwrl,uunet}!eda!jim OR domain: jim at eda.com
#define disclaimer "I do not speak for my employer"
Notice: I record license plate numbers of tailgaters
-----------------------------
From: Jim Budler <jim at eda.com>
Subject: Re: unshar business
Date: 23 Dec 88 21:14:52 GMT
To: unix-wizards at sem.brl.mil
In article <419 at eda.com> jim at eda.com (Jim Budler) writes:
| In article <167 at ecicrl.UUCP> clewis at ecicrl.UUCP (Chris Lewis) writes:
Chris doesn't like what I said, but one of the things I said was
that I intended to make a couple of changes to Cathy's uns.c and then
run it out from under uuhosts instead of under uuhosts/mapsh.
I'll put my mouth where my mouth was, since I am on vacation and
have been spurred to find the time. I do not do this because my previous
way of running it was insecure (under uuhosts and mapsh), but because
with these trivial changes the security is maintained, while the
processing is simplified.
An advantage gained compared to the original uuhosts,
with or without mapsh, is increased security. mapsh prevented most
problems, but could have been susceptible to malicious inode usage.
Uuhosts itself did *limited* checking of the map shar before passing it
to sh.
Another advantage over the original uuhosts is a single letter to
news (aliased to me) logging the actions, instead of a letter for
each map file.
The changes I made:
Lengthened the input filename buffer to allow the method I use,
detailed below.
Lengthened the line buffer to allow longer lined shars.
Disallowed '/' in the output filenames. It must be run in the
map directory.
Thank you Cathy Segedy <decvax!gsg!segedy> for uns.c
Details:
My news sys file entry related to maps:
=================
maps:world,comp.mail.maps:F:/usr/spool/news/maps/comp.mail.maps/Batch
=================
My crontab entry:
=================
30 5 * * * /usr/spool/news/maps/comp.mail.maps/Process > /dev/null 2>&1
=================
Note: I have a sysV type crontab with different crontabs for each user.
This crontab entry runs as news, not root.
A v7/BSD one *might* look like:
=================
30 5 * * * /bin/su news < /usr/spool/news/maps/comp.mail.maps/Process > /dev/null 2>&1
=================
I could be wrong about that, check your manual.
The script /usr/spool/news/maps/comp.mail.maps/Process :
=================
#! /bin/sh
# unbatch the maps, then make install paths
umask 2
cd /usr/spool/news/maps/comp.mail.maps
if [ -f Batch ]; then
# /usr/local/bin/uuhosts -unbatch
# using uns instead of uuhosts to unbatch
mv Batch Batch.working
for file in `cat Batch.working`
do
uns $file >> Batch.log
done
# use uuhosts to create the index file
/usr/local/bin/uuhosts -i
mail -s 'Map Process Log' postmaster < Batch.log
rm -f Batch.working Batch.log
make -s install
fi
=================
And finally the diff. By the way, for those of you who have been listening,
Cathy's program did not use gets(); it always used fgets().
=================
*** /tmp/,RCSt1a26060 Fri Dec 23 12:50:39 1988
--- uns.c Fri Dec 23 12:50:19 1988
***************
*** 26,35 ****
after the SHAR_EOF.
Someone might wish to shorten MAXLIN (do map files have a line limit?)
*/
#include <stdio.h>
! #define MAXLIN 256
main(argc,argv)
int argc;
--- 26,39 ----
after the SHAR_EOF.
Someone might wish to shorten MAXLIN (do map files have a line limit?)
*/
+ /* lengthened MAXLIN cause someone said they found longer lines in
+ * a shar file. I don't know if this was a map shar file.
+ * Is there a line length on a map shar file? - jim budler
+ */
#include <stdio.h>
! #define MAXLIN 1024
main(argc,argv)
int argc;
***************
*** 38,50 ****
FILE *fp, *fp2;
char buffer[MAXLIN];
int at_beginning, at_end;
! char filename[20], file2[20];
at_beginning = 0;
at_end = 0;
if(argc != 2){
! printf("bad arguements\n");
exit(1);
}
--- 42,58 ----
FILE *fp, *fp2;
char buffer[MAXLIN];
int at_beginning, at_end;
! char filename[1024], file2[20];
! /* lengthened the buffer for filename. The full path for filename is
! * presented by my method of passing the input name to uns, so
! * a longer buffer was required than 20 char. - jim budler.
! */
at_beginning = 0;
at_end = 0;
if(argc != 2){
! printf("bad arguments\n");
exit(1);
}
***************
*** 68,73 ****
--- 76,86 ----
}
printf("removing end-of-line while copying\n");
strncpy(file2,&buffer[20],(strlen(&buffer[20]) - 1));
+ /* check for / in output filenames. Disallow such files - jim budler */
+ if ( rindex ( file2, '/') != NULL ) {
+ printf ("%s contains /, aborting.\n", file2);
+ exit(1);
+ }
printf("opening file {%s}\n",file2);
if((fp2 = fopen(file2, "w")) == NULL) {
printf("can not open file {%s}\n",file2);
=================
--
Jim Budler address = uucp: ...!{decwrl,uunet}!eda!jim OR domain: jim at eda.com
#define disclaimer "I do not speak for my employer"
Notice: I record license plate numbers of tailgaters
-----------------------------
From: James Logan III <logan at vsedev.vse.com>
Subject: Re: This is strange...
Date: 23 Dec 88 16:10:32 GMT
Keywords: sed awk pipe
To: unix-wizards at sem.brl.mil
In article <1652 at ektools.UUCP> mcapron at ektools.UUCP (M. Capron) writes:
#
# CODE A: This works.
# incs=`egrep '^#[ ]*include[ ]*"' $i | awk '{printf "%s ", $2}'`
# incs=`echo "$incs" | sed 's/"//g'`
#
# CODE B: This does not work.
# incs=`egrep '^#[ ]*include[ ]*"' $i | awk '{printf "%s ", $2}' | sed 's/"//g'`
#
Someone else already answered your question correctly, but I have another
version for you that handles all of the following cases:
#include <file>
#include "file"
# include <file>
# include "file"
and runs a little faster, since it is all contained in one awk script
and does not require additional processing by sed.
incs=`
awk '
/^#[ ]*include[ ]*/ {
if (NF == 3) {
# line is like "# include <file>"
INCFILE=$3;
} else {
# line is like "#include <file>"
INCFILE=$2;
}
print substr(INCFILE, 2, length(INCFILE) - 2);
}
' <$i;
`;
-Jim
--
Jim Logan logan at vsedev.vse.com
(703) 892-0002 uucp: ..!uunet!vsedev!logan
inet: logan%vsedev.vse.com at uunet.uu.net
-----------------------------
From: Maarten Litmaath <maart at cs.vu.nl>
Subject: Re: This is strange...
Date: 22 Dec 88 09:59:33 GMT
Keywords: sed awk pipe
To: unix-wizards at sem.brl.mil
mcapron at ektools.UUCP (M. Capron) writes:
\#!/bin/sh
\for i in *.c
\do
\#Place a list of include files in $incs seperated by spaces.
\#CODE A or CODE B goes here.
\ echo "$i : $incs"
\done
\CODE A: This works.
\incs=`egrep '^#[ ]*include[ ]*"' $i | awk '{printf "%s ", $2}'`
\incs=`echo "$incs" | sed 's/"//g'`
\CODE B: This does not work.
\incs=`egrep '^#[ ]*include[ ]*"' $i | awk '{printf "%s ", $2}' | sed 's/"//g'`
Compare your example with the following:
% echo -n 'merry Xmas' | sed 's/.*/&, happy new year/'
%
Now get rid of the `-n' and suddenly everything works! The problem: sed won't
do anything with unfinished lines! You explicitly didn't append a newline in
the awk script. See how far that got you! :-)
Solution:
incs=`egrep '^#[ ]*include[ ]*"' $i |
awk ' {printf "%s ", $2}
END {printf "\n"}' |
sed 's/"//g'`
BTW, it's not forbidden to use newlines between backquotes!
Another interesting case:
$ cat > merry_Xmas
happy
1989
$ card=`cat merry_Xmas`
$ echo $card
happy 1989
$ echo "$card"
happy
1989
Csh hasn't got this anomaly.
--
if (fcntl(merry, X_MAS, &a)) |Maarten Litmaath @ VU Amsterdam:
perror("happy new year!"); |maart at cs.vu.nl, mcvax!botter!maart
-----------------------------
From: Leo de Wit <leo at philmds.uucp>
Subject: Re: This is strange...
Date: 23 Dec 88 11:38:43 GMT
Keywords: sed awk pipe
To: unix-wizards at sem.brl.mil
In article <1652 at ektools.UUCP> mcapron at ektools.UUCP (M. Capron) writes:
|
|Here is some bizareness I found. Below is a subset of a Bourne Shell script I
|am writing on a Sun 3/60 running SunOS 4.0. This segment generates dependency
|lists for makefiles. Note that the egrep brackets should contain a space and
|a tab.
|
|#!/bin/sh
|for i in *.c
|do
|#Place a list of include files in $incs seperated by spaces.
|#CODE A or CODE B goes here.
| echo "$i : $incs"
|done
|
|CODE A: This works.
|incs=`egrep '^#[ ]*include[ ]*"' $i | awk '{printf "%s ", $2}'`
|incs=`echo "$incs" | sed 's/"//g'`
|
|CODE B: This does not work.
|incs=`egrep '^#[ ]*include[ ]*"' $i | awk '{printf "%s ", $2}' | sed 's/"//g'`
|
|With CODE B, $incs comes out to be nil. I can't figure out what the difference
|is, nor do I have the patience to play with it any further. I present it as an
|oddity to any interested parties.
There certainly is a difference (although it may not be very obvious).
The awk script does not append a newline to the header file list it is
generating. In the case of CODE A that is not a problem: echo will send
one down the pipe to sed. In the case of CODE B sed is attached
directly to awk's output, so it will never get a newline. And since sed
needs a newline as 'input record marker', it will exit without having
recognized a valid input record - and hence not supply any output.
The solution is simple: add a trailing print statement to the awk script,
as follows:
CODE C: This does also work.
incs=`egrep '^#[ ]*include[ ]*"' $i |
awk '{printf "%s ", $2} END {print}' | sed 's/"//g'`
Furthermore I would like to make some remarks about the script; maybe they
are of some use to someone.
1) The use of a 3 process pipeline for such a simple task seems a
little bit overdone; it all lies well within the capabilities of one,
e.g. with sed:
CODE D: This does also work.
incs=`sed -n '
/^[ ]*#[ ]*include[ ]*"/{
s/[^"]*"\([^"]*\)".*/\1/
H
}
${
g
s/\n/ /gp
}' $i`
It is even possible to avoid the echo, the `` and incs, since sed can
handle that as well:
CODE E: This does also work (omit the echo in this case).
sed -n '
/^[ ]*#[ ]*include[ ]*"/{
s/[^"]*"\([^"]*\)".*/\1/
H
}
${
g
s/\n/ /g
s/^/'$i' : /p
}' $i
The other points are more of a C issue, but I will present them here
since the script was posted here also:
2) When searching for '#include' lines one should allow leading white space.
There is nothing that I could find that forbids white space before the #.
Some programmers even use it to clarify nested conditionals (with #ifdef).
The CODE D,E examples allow leading white space.
3) Source files do not depend on the header files they name. This
is a commonly made mistake. To understand this, you must realize that
the source file will not change due to a modification in a header file.
The object file however will, since code is generated from the expanded
source file (the output of the preprocessor phase).
So the dependencies should contain lines like:
file.o : incl.h (or perhaps: file.o : file.c incl.h)
instead of
file.c : incl.h
The easiest way is to strip off the .c, and use the filename without
extension:
for i in `echo *.c|sed 's/\.c//g'`
do
#CODE X goes here, using file $i.c
echo "$i.o : $incs"
done
4) Be aware that the script does not handle header files containing
header files. Note that an object (amongst others) depends upon all
(nested) included files. To handle this well, you may perhaps also
want to detect illegal recursion; this is not easy in case of
conditional inclusion, since it depends on preprocessor expressions.
Hope this helps -
Leo.
-----------------------------
From: Paul De Bra <debra at alice.uucp>
Subject: Re: password security
Date: 23 Dec 88 21:20:07 GMT
To: unix-wizards at sem.brl.mil
In article <5835 at saturn.ucsc.edu> haynes at ucscc.UCSC.EDU (Jim Haynes) writes:
}In article <5005 at b-tech.ann-arbor.mi.us> zeeff at b-tech.ann-arbor.mi.us (Jon Zeeff) writes:
}>The simple solution seems to be to force users to use some non alpha
}>character somewhere in the middle of their passwords. Users then tend
}>to use a combination of two words which prevents the dictionary search.
}
}the 4.3-tahoe-BSD version of passwd seems to do this. At least the last
}time I logged into a tahoe system and tried to change my password it
}wouldn't rest until I had put a non-alphabetic character into it.
}Had the same experience on a Convex machine.
}
Requiring the use of a non-alphanumeric character is not at all sufficient.
Many people react to this by just putting a special character (usually ".")
in front of their old password...
Now, if you instead force users to put the non-alphanumeric char somewhere
in the middle of the password, this trick no longer works, but users will still
come up with passwords that are a lot easier to guess than zXk.4;ur...
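[A sketch of the check described above, assuming the rule is "some
character strictly inside the password is neither a letter nor a
digit"; the function name is invented here. As noted, passing this
test says nothing about the password being hard to guess:]

```c
#include <ctype.h>
#include <string.h>

/* Accept pw only if some character strictly inside it (neither first
 * nor last) is non-alphanumeric.  This defeats the "just prepend a
 * dot to the old password" dodge, but is still far from sufficient. */
int has_inner_special(const char *pw)
{
    size_t i, len = strlen(pw);

    for (i = 1; i + 1 < len; i++)
        if (!isalnum((unsigned char)pw[i]))
            return 1;
    return 0;
}
```

[So "Hey!Jude" passes, while ".password" and "password." are rejected.]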
Paul.
--
------------------------------------------------------
|debra at research.att.com | uunet!research!debra |
------------------------------------------------------
-----------------------------
From: "James C. Benz" <jcbst3 at cisunx.uucp>
Subject: Re: rsh environment
Date: 23 Dec 88 23:30:18 GMT
Keywords: no /etc/profile sourced?
To: unix-wizards at sem.brl.mil
In article <1276 at uwbull.uwbln.UUCP> ckl at uwbln.UUCP (Christoph Kuenkel) writes:
>Is there any way to alter the default environment setting used when
>rsh (the bsd remote shell) executes commands?
>
>our rsh (bull sps9 with spix os) sets up an default environment
>
HUH? (cr,h,...)ackers anyone? Isn't rsh the RESTRICTED shell? Anyway,
why not just set these in .profile using standard UNIX syntax, a la
HOME=/usr/mydirectory; export HOME
That is, if you have write permission on .profile.
Or is YOUR UNIX *different* than mine (AT&T)?
-----------------------------
End of UNIX-WIZARDS Digest
**************************