compare.G
jsq at ut-sally.UUCP
Tue Aug 9 07:13:36 AEST 1983
6.1 System V
6.1.1 _New file system block size_  System V has introduced a
revised file system which allows a choice of a 512- or 1K-byte
block.  The information concerning the type of a file
system is recorded in its superblock, so it is possible to
have both kinds of file system on the same system.
Robustness is enhanced by carefully controlling the
order in which inode and directory information is written
out in order to prevent serious file system inconsistencies
in the event of a crash.
6.1.2 _Faster access_  Other enhancements claimed to improve
efficiency include multiple (3-7) physical I/O buffers (upon
which _dfsck_, a multi-drive version of _fsck_, depends); a
larger number of system buffers (up to 400); free list
management of the file table; and hashing of the in-core
inode table.
A utility, _dcopy_, is provided to allow reordering of a
file system to optimize access time by compressing directory
``holes'' and spacing file blocks at the disk rotational
gap. Its frequent use is recommended.
6.2 4.1C BSD
6.2.1 _Reimplementation for efficiency_  4.1C has a file
system that uses a block size and a fragment size that are
settable per file system. The basic block size (usually
4096 or 8192 bytes) is the largest block size used in a
file, and all blocks but the last are this size. The last
one may be any multiple of the fragment size (usually 512 or
1024, and no more than a factor of eight less than the basic
block size). Inodes are divided among several cylinder
groups on a file system, and blocks in a file are usually
localized in a single cylinder group. In-core inode copies
are hashed.
The standard I/O library has been modified to use the
block size returned by the modified _stat_ call to determine
the size of its transfers.
Various changes were made for robustness, as well,
beyond those found in 4.1. For example, static information
from the superblock (such as the block and fragment sizes)
is duplicated in each cylinder group.
Measurements made at Berkeley indicate the new file
system is up to a factor of 16 faster than the old (4.1) one
under ideal conditions, and a factor of 10 is not unusual in
actual use.
4.1C keeps defaults for the various parameters needed
by the new file system in /etc/disktab (a _termcap_-like
file), where _newfs_ (a frontend to _mkfs_) uses them in
constructing a file system, storing them in the superblock.
Various other programs, such as _bad144_, which handles bad-
sector marking, also use /etc/disktab.  This file is a
kludge, used because the information is not yet kept on the
disk and accessible by an _ioctl_.
6.2.2 _Other modifications_  In addition, 4.1C has very long
file names (a compile-time limit of 255 characters),
analogous to the long C identifiers, a reworked directory
implementation, symbolic links, and _mkdir_, _rmdir_, and _rename_
as system calls.
The use of file names that are actually 255 characters
long is not, of course, recommended. The idea is to set the
limit high enough that ordinary use will never hit it.
A simulation library for the new directory format has
been distributed several times over USENET; it is a good
idea to use it even if conversion to 4.2 is never planned,
since it solves several old Unix directory access problems
(e.g., ensuring null termination of file names extracted
from a directory).
A symbolic link is simply a file containing a pathname,
which is interpreted by the kernel after the pathname of the
link itself. Thus cross-device links and links to
directories are possible.
The motivation for moving _m_k_d_i_r, _r_m_d_i_r, and _r_e_n_a_m_e into
the kernel was to make them extensible in a network
environment. In the case of _r_e_n_a_m_e, robustness during
system crashes was also a factor.
6.2.3 _Extended (network) file system_  Neither 4.1C nor 4.2
has the extended file system that makes it possible to
mount, on one machine, a file system existing on a disk
connected to another machine, with file transfers then
proceeding over the network connecting the two machines.
There are several implementations of such a facility but
none will appear in 4.2.
7. Interprocess Communications (IPC)
This is one of the areas where the systems diverge the
most.
7.1 System V
System V provides several somewhat different paths for
achieving interprocess communication, mostly developed for
real time support.
The fifo, or named pipe, has been retained from
System III, allowing a process to open a pipe by name rather
than needing a parent to set up appropriate file
descriptors.
The message queue operations associate a unique
identifier with a system message queue and data structure
that includes information about the last processes to send
and receive messages, the times at which these events
occurred, etc.
The semaphore operations associate a unique identifier
with a set of semaphores and a data structure that includes
time and pid of last operation, number of processes
suspended while waiting for a particular change in the
semaphore's value, etc.
The shared memory operations associate a unique
identifier with a shared memory segment (which may be
attached to the data segment of a process) and a data
structure containing the size of the segment, time and pid,
etc.
As an adjunct to the above, process segment locking
(text, data, or both) via the _plock_ system call is also
provided.
The number of message queues, size of each queue,
number of semaphores, number of shared memory segments, etc.
are all parameters which are determined by the system
administrator at system configuration time.
7.2 4.1C BSD
4.1C has dropped the V7 multiplexed files (MPXs) that
were retained in 4.1 in favor of a new interprocess
communication facility. This new socket IPC integrates the
pipes, file and device I/O, and network I/O into one
interface, which allows blocking or non-blocking I/O,
multiplexing several I/O streams in one process by use of
non-blocking I/O and the _select_ system call, and
scatter/gather I/O.
The socket IPC solves most of the traditional Unix IPC
problems, and is more general than the various mechanisms
which have preceded it, such as pipes, MPXs, Rand ports, BBN
await/capac, etc.
The _mmap_ shared memory facility described in the 4.2BSD
System Manual is not supported in 4.1C, and will not be in
4.2.  It will, however, appear in 4.3BSD, along with the
revised _fork_ system call that makes _vfork_ obsolete by
copying pages only when they are modified.  Various other
memory-management-related changes will also come with 4.3.
8. Networks
With the increased use of networks of small
workstations and larger file or compute servers, this
subject is gaining importance.
8.1 System V
While it is said that System VI will incorporate the
Berkeley network code, most network support in System V is
implemented using KMC-11Bs.
8.1.1 _X.25_  System V documents the use of its VPM facility
to support X.25 in a KMC-11B peripheral processor, and the
same technique can be used for other networks. However, the
X.25 support package was not included on our distribution
tape, and the documentation leads one to believe this was
intentional.
Rumor has it that there is a current project to
implement X.25 under the 4.2 network framework.
8.1.2 _PCL network_  System V provides a driver for the PCL-
11B network bus, used to interconnect multiple CPUs for fast
parallel communications.  A local network of UNIX machines
is made practical by the inclusion of the _net_ command, which
allows commands to be executed on a remote system.  It is
very reminiscent of _berknet_.
4.2 has a PCL driver.
8.1.3 _NSC network_  System V documents an interface
specification for the NSC A-410 processor and its associated
software, used to access an NSC local net (Hyperchannel).
Neither a driver nor applications software was provided with
the distribution, however.
4.1C and 4.2 have an NSC driver.
8.1.4 _RJE to IBM_  System V implements software which
communicates with IBM JES by emulating a 360 remote
workstation. It relies on a VPM script running in a PCD,
say, the KMC-11B. Facilities are provided for queueing
jobs, monitoring the status of the RJE, and notifying users
of the arrival of output.
8.2 4.1C BSD
Networking is one of the strongest points of 4.1C.
8.2.1 _General networking framework_  The network mechanisms
were designed with the intention of supporting a variety of
network protocols and hardware.
The socket IPC provides an interface common to both
networks (the internet domain in particular) and internal
Unix facilities (the Unix domain).
The internal networking mechanisms support easy
implementation of further protocols or interface drivers,
and are clearly documented.
8.2.2 _Variety of hardware and protocols supported_  Hardware
currently supported includes several kinds of ethernet*
interfaces (3COM, Interlan, Xerox 3Mb experimental), several
ARPANET IMP interfaces (ACC LH/DH, DEC IMP11-A, SRI), a ring
network interface (Proteon 10Mb), and various others, such
as DMC-11, NSC Hyperchannel, and Ungerman-Bass with DR-11/W.
4.2 (but not 4.1C) has a PCL driver.
ISO/OSI** Network, Transport, and lower layer protocols
supported include 3Mb and 10Mb ethernet, Proteon proNET 10Mb
ring, and the DoD internet family (TCP/IP and relatives).
__________
* Ethernet is a trademark of Xerox Corporation.
** International Standards Organization Open Systems
Interconnection: a meta-protocol designed to promote
compatibility among networks.