libraries
Robert C. White Jr.
rwhite at nusdhub.UUCP
Wed Dec 21 10:28:37 AEST 1988
in article <15080 at mimsy.UUCP>, chris at mimsy.UUCP (Chris Torek) says:
>
> thinking about it.% A Unix `.a' `library' file is simply a file containing
> other files, plus (depending on your system) a symbol table (in the
> `sub-file' __.SYMDEF). Now then, what is a Unix directory?
> If your answer was `a file containing other files', congratulations.
Wrong-O, kid! An archive library is a "file which contains the
original contents of zero or more external sources, usually text or
object files, which have been reduced to a single system object."
As subjective proof, address this question: "Can you 'archive'
a device special file and then access the device or service through
direct reference to the archive?" The answer is OF COURSE *NO*, because
(1) device special files have no "contents" per se, and (2) the archive
does not preserve the "file" concept on an individual-entry basis. If
you do not understand the difference between a "system object" file
and "the contents of a" file, go to FILE MGMT. THEORY 101. Do not pass
go, do not collect your next paycheck.
> -----
> % Especially if it is one of my articles. :-) I might also add a cheap
> shot here about using `spell'...
> -----
I might counter with a cheap shot about doing research. ;-)
> Now, aside from the actual implementation, what is the difference between
> a library file that contains other files and a library directory that
> contains other files?
>
> If your answer was `none', congratulations again.
Wrong-O again, kid. An archive is a single system object which
contains the contents of something which may or may not have ever been
a system object. A directory is a system object which organizes
(by inclusive internal grouping) referents to other system objects.
System objects are files (or other things on non-UNIX system
architectures); the contents of system objects are not.
(I say referent because of the whole i-node/multiple-link issue; I say
may-not-have-been a system object because having the ability to recreate
a file because you have an image of its contents does not mean that
you had the file in the first place. See the cpio or tar specs and
compare them to the inode structure of your machine.)
As an exercise in intuition and deduction try the
following: (1) get a listing of the modules in /lib/libc.a (if you
can't do this you might as well leave the concept of libraries
completely alone from here on out); (2) compare the number of entries
in this library to the number of files you may have open at the same
time; (3) multiply the number of entries in libc.a by the amount
of memory required by your system to manage a single open file; (4)
multiply the entries by the amount of time necessary to open, read,
and close a 30-to-3000 byte file; (5) calculate about how much the
buffer collision of all this filing would cost each of your users.
Now take all that system performance information and multiply it by
three or four libraries (average for a large job) and then multiply
that by the number of programmers.
You can't just say "aside from the implementation", because in these
things implementation is everything. After all, "aside from the
implementation" Faster-Than-Light travel is a workable solution to
space flight.
>>How many times would you have to scan the contents of /usr/lib/*.o to
>>load one relatively complex c program (say vn).
>
> Either one time, or (preferably) zero times.
A library directory you never scan would be useless. Ncest' Pa?
[sic ;-)]
>>As modules called modules that the program itself didn't use, you introduce
>>the probability that the directory would have to be searched multiple times.
>>If you tried to aleviate that the files would have to be ordered by names
>>that reflected dependancies instead of content.
>
> This is all quite false. Even without using a ranlib (symbol table
> file) scheme, the directory need only be searched once, and every file
> within it opened once to build the linker's symbol table; then, or
> after reading the symdef file, those files that contained needed
> routines would have to be opened once to read their contents.
How many i-node caches do you think your system has? Try "sar -a"
sometime and then compare the result to the number of entries in
/lib/libc.a and ... (see above).
I can guarantee at least two scans. More if more than one person is
compiling and the compiles are not in *PERFECT* sync.
>>Then you would have all the extra system calls that would spring up
>>to open, search, and close all those files.
>
> The extra system calls argument is valid: if you needed N object
> modules out of the `-lc' library, you would have to open N+1 files (1
> for the symtab file) rather than 1 file. It is, however, the very same
> argument that `proves' that fork()+exec() is wrong. I claim that the
> open and close calls---there are no `search' calls, though there may be
> a higher percentage of read()s, along with fewer or no lseek()s---
> *should* not be so expensive. I also find it very likely that those
> who claim it is too expensive have not tested it.
There are a few things I do not have to *test* to know. They can be
proved by induction. In the same sense that I have dropped things
that have fallen on my foot, and heavy things do damage when dropped,
I do not need to drop an anvil on my foot to find out whether my
foot would be damaged by that. I may say I know what directory searches
cost when run solo because I have run expire (from usenet); I have
also run expire when others were using the system and have seen that
it eats performance to hell and runs slowly; therefore I do not have
to implement a loser of a library scheme, using a symbol table file
and individual object files, to know that it is a dumb idea.
If you would like to see an example of what a dog it would be to
create/replace a symbol table, run "expire -r -h -i", which will open
every article once and create a history entry for it. (How did you
think news worked, anyway? If you had to read /usr/lib/news/history to
get your articles by group you would never last out to reading the
articles. While I will cede that there may(?) be more articles in the
system at one moment than there would be objects via your approach, I
strongly suspect that the analogy is stronger than you will choose to
admit.)
Rob.
p.s. You try being dyslexic for a few years and then make comments
about spelling.
More information about the Comp.unix.wizards
mailing list