From asbesto at freaknet.org Fri Jul 4 20:01:27 2014 From: asbesto at freaknet.org (asbesto) Date: Fri, 4 Jul 2014 12:01:27 +0200 Subject: [TUHS] Museo dell'Informatica Funzionante - CONTRIBUTE! Message-ID: <20140704100127.GA11054@freaknet.org> Hi there, Sorry for an eventual Offtopic but this is strictly related to our Computer Museum activity... We launched a campaign to get some help for our upcoming initiative, that will be the starting point for a big step forward to a new Museum! So please take time to read it, and please share everywhere to anyone interested! http://igg.me/at/insertcoin love asbesto From dnied at tiscali.it Mon Jul 7 08:48:33 2014 From: dnied at tiscali.it (Dario Niedermann) Date: Mon, 07 Jul 2014 00:48:33 +0200 Subject: [TUHS] 1st Unix w/ ISO-8859-1 support? Message-ID: <53b9d241.Lh/7GB72hPbYEMtZ%dnied@tiscali.it> Keywords: encoding, charset, latin, roman, accented, diacritical Hi! Does anyone know which was the earliest Unix release to support the ISO-8859-1 character set? From tim.newsham at gmail.com Thu Jul 10 04:08:51 2014 From: tim.newsham at gmail.com (Tim Newsham) Date: Wed, 9 Jul 2014 08:08:51 -1000 Subject: [TUHS] soviet computing Message-ID: slightly off-topic but probably of interest to mailing list members: Pioneers of Soviet Computing http://www.sigcis.org/files/SIGCISMC2010_001.pdf -- Tim Newsham | www.thenewsh.com/~newsham | @newshtwit | thenewsh.blogspot.com From crossd at gmail.com Thu Jul 10 04:11:40 2014 From: crossd at gmail.com (Dan Cross) Date: Wed, 9 Jul 2014 14:11:40 -0400 Subject: [TUHS] soviet computing In-Reply-To: References: Message-ID: Loading rather slowly; must be UUCP latency from kremvax. On Wed, Jul 9, 2014 at 2:08 PM, Tim Newsham wrote: > slightly off-topic but probably of interest to mailing list members: > Pioneers of Soviet Computing > http://www.sigcis.org/files/SIGCISMC2010_001.pdf > > -- > Tim Newsham | www.thenewsh.com/~newsham | @newshtwit | > thenewsh.blogspot.com > _______________________________________________ > TUHS mailing list > TUHS at minnie.tuhs.org > https://minnie.tuhs.org/mailman/listinfo/tuhs > -------------- next part -------------- An HTML attachment was scrubbed... URL: From aps at ieee.org Thu Jul 10 05:03:59 2014 From: aps at ieee.org (Armando Stettner) Date: Wed, 9 Jul 2014 12:03:59 -0700 Subject: [TUHS] soviet computing In-Reply-To: References: Message-ID: <068B51FD-7179-489E-8004-59A468206F5A@ieee.org> If I've said once, I've said it a million times: DECvax is NOT connected to kremvax. :) Begin forwarded message: > From: Dan Cross > Subject: Re: [TUHS] soviet computing > Date: July 9, 2014 at 11:11:40 AM PDT > To: Tim Newsham > Cc: "tuhs at minnie.tuhs.org" > > Loading rather slowly; must be UUCP latency from kremvax. 
> > > On Wed, Jul 9, 2014 at 2:08 PM, Tim Newsham wrote: > slightly off-topic but probably of interest to mailing list members: > Pioneers of Soviet Computing > http://www.sigcis.org/files/SIGCISMC2010_001.pdf > > -- > Tim Newsham | www.thenewsh.com/~newsham | @newshtwit | thenewsh.blogspot.com > _______________________________________________ > TUHS mailing list > TUHS at minnie.tuhs.org > https://minnie.tuhs.org/mailman/listinfo/tuhs > > _______________________________________________ > TUHS mailing list > TUHS at minnie.tuhs.org > https://minnie.tuhs.org/mailman/listinfo/tuhs From ron at ronnatalie.com Thu Jul 10 05:13:51 2014 From: ron at ronnatalie.com (ron at ronnatalie.com) Date: Wed, 9 Jul 2014 15:13:51 -0400 (EDT) Subject: [TUHS] soviet computing In-Reply-To: <068B51FD-7179-489E-8004-59A468206F5A@ieee.org> References: <068B51FD-7179-489E-8004-59A468206F5A@ieee.org> Message-ID: <58430.20.132.68.133.1404933231.squirrel@webmail.tuffmail.net> For those of you feeling nostalgic... Relay-Version: version B 2.10 5/3/83; site utzoo.UUCP Posting-Version: version B 2.10.1 4/1/83 (SU840401); site kremvax.UUCP Path: utzoo!linus!philabs!mcvax!moskvax!kremvax!chernenko From: chernenko at kremvax.UUCP (K. Chernenko) Newsgroups: net.general,eunet.general,net.politics,eunet.politics Subject: USSR on Usenet Message-ID: <0001 at kremvax.UUCP> Date: Sun, 1-Apr-84 11:02:52 EST Article-I.D.: kremvax.0001 Posted: Sun Apr 1 11:02:52 1984 Date-Received: Tue, 3-Apr-84 19:42:40 EST Organization: MIIA, Moscow <.....> Well, today, 840401, this is at last the Socialist Union of Soviet Republics joining the Usenet network and saying hallo to everybody. One reason for us to join this network has been to have a means of having an open discussion forum with the American and European people and making clear to them our strong efforts towards attaining peaceful coexistence between the people of the Soviet Union and those of the United States and Europe. We have been informed that on this network many people have given strong anti-Russian opinions, but we believe they have been misguided by their leaders, especially the American administration, who is seeking for war and domination of the world. By well informing those people from our side we hope to have a possibility to make clear to them our intentions and ideas. Some of those in the Western world, who believe in the truth of what we say have made possible our entry on this network; to them we are very grateful. We hereby invite you to freely give your comments and opinions. Here are the data for our backbone site: Name: moskvax Organization: Moscow Institute for International Affairs Contact: K. Chernenko Phone: +7 095 840401 Postal-Address: Moscow, Soviet Union Electronic-Address: mcvax!moskvax!kremvax!chernenko News: mcvax kremvax kgbvax Mail: mcvax kremvax kgbvax And now, let's open a flask of Vodka and have a drink on our entry on this network. So: NA ZDAROVJE! -- K. Chernenko, Moscow, USSR ...{decvax,philabs}!mcvax!moskvax!kremvax!chernenko From dave at horsfall.org Thu Jul 10 05:14:53 2014 From: dave at horsfall.org (Dave Horsfall) Date: Thu, 10 Jul 2014 05:14:53 +1000 (EST) Subject: [TUHS] soviet computing In-Reply-To: References: Message-ID: On Wed, 9 Jul 2014, Dan Cross wrote: > Loading rather slowly; must be UUCP latency from kremvax. Came to Australia OK, comrade; ciavax!nsavax maybe? 
-- Dave

From cowan at mercury.ccil.org Thu Jul 10 05:50:02 2014
From: cowan at mercury.ccil.org (John Cowan)
Date: Wed, 9 Jul 2014 15:50:02 -0400
Subject: [TUHS] soviet computing
In-Reply-To: <068B51FD-7179-489E-8004-59A468206F5A@ieee.org>
References: <068B51FD-7179-489E-8004-59A468206F5A@ieee.org>
Message-ID: <20140709195002.GM6016@mercury.ccil.org>

Armando Stettner scripsit:
> If I've said once, I've said it a million times: DECvax is NOT connected
> to kremvax. :)

I thought that was only true on April 1.

-- John Cowan http://www.ccil.org/~cowan cowan at ccil.org
Lope de Vega: "It wonders me I can speak at all. Some caitiff rogue did rudely
yerk me on the knob, wherefrom my wits yet wander." An Englishman: "Ay, belike
a filchman to the nab'll leave you crank for a spell." --Harry Turtledove, Ruled Britannia

From norman at oclsc.org Thu Jul 10 12:20:47 2014
From: norman at oclsc.org (Norman Wilson)
Date: Wed, 9 Jul 2014 22:20:47 -0400 (EDT)
Subject: [TUHS] soviet computing
Message-ID: <20140710022047.AB64F1DE37C@lignose.oclsc.org>

Armando P. Stettner:
If I've said once, I've said it a million times: DECvax is NOT connected to kremvax. :)
====
Not any more, anyway.
Norman Wilson
Toronto ON
(once upon a time, research!norman)

From doug at cs.dartmouth.edu Thu Jul 10 12:49:22 2014
From: doug at cs.dartmouth.edu (Doug McIlroy)
Date: Wed, 09 Jul 2014 22:49:22 -0400
Subject: [TUHS] Subject: unpipe It's easy for a process to insert a new process into a pipeline either upstream or down unpipe
Message-ID: <201407100249.s6A2nMh3017869@coolidge.cs.dartmouth.edu>

It's easy for a process to insert a new process into a pipeline either upstream or downstream. Was there ever a flavor of Unix in which a process could excise itself from a pipeline without breaking the pipeline?

Doug

From wkt at tuhs.org Thu Jul 10 14:52:23 2014
From: wkt at tuhs.org (Warren Toomey)
Date: Thu, 10 Jul 2014 14:52:23 +1000
Subject: [TUHS] Excise process from a pipe
In-Reply-To: <201407100249.s6A2nMh3017869@coolidge.cs.dartmouth.edu>
References: <201407100249.s6A2nMh3017869@coolidge.cs.dartmouth.edu>
Message-ID: <20140710045223.GA19076@www.oztivo.net>

On Wed, Jul 09, 2014 at 10:49:22PM -0400, Doug McIlroy wrote:
> It's easy for a process to insert a new process into a
> pipeline either upstream or downstream. Was there ever a
> flavor of Unix in which a process could excise itself
> from a pipeline without breaking the pipeline?

If in the middle of a pipeline, all I can think of is:

close fd 0 and fd 1
dup() read end of pipe 1 to be stdin (fd 0)
dup() write end of pipe 2 to be stdout (fd 1)
exec("/bin/cat")

Cheers, Warren

From dave at horsfall.org Thu Jul 10 15:00:41 2014
From: dave at horsfall.org (Dave Horsfall)
Date: Thu, 10 Jul 2014 15:00:41 +1000 (EST)
Subject: [TUHS] Excise process from a pipe
In-Reply-To: <20140710045223.GA19076@www.oztivo.net>
References: <201407100249.s6A2nMh3017869@coolidge.cs.dartmouth.edu> <20140710045223.GA19076@www.oztivo.net>
Message-ID: 

On Thu, 10 Jul 2014, Warren Toomey wrote:
> If in the middle of a pipeline, all I can think of is:
>
> close fd 0 and fd 1

My Unix kernel knowledge is a little rusty (shame on me!), but wouldn't that generate pipe errors on both sides i.e. EPIPE and the infamous ENOTOBACCO?
-- Dave From cjsvance at gmail.com Thu Jul 10 15:06:11 2014 From: cjsvance at gmail.com (Christopher Vance) Date: Thu, 10 Jul 2014 15:06:11 +1000 Subject: [TUHS] Excise process from a pipe In-Reply-To: <20140710045223.GA19076@www.oztivo.net> References: <201407100249.s6A2nMh3017869@coolidge.cs.dartmouth.edu> <20140710045223.GA19076@www.oztivo.net> Message-ID: On Thu, Jul 10, 2014 at 2:52 PM, Warren Toomey wrote: > On Wed, Jul 09, 2014 at 10:49:22PM -0400, Doug McIlroy wrote: > > It's easy for a process to insert a new process into a > > pipeline either upstream or downstream. Was there ever a > > flavor of Unix in which a process could excise itself > > from a pipeline without breaking the pipeline? > > If in the middle of a pipeline, all I can think of is: > > close fd 0 and fd 1 > dup() read end of pipe 1 to be stdin (fd 0) > dup() write end of pipe 2 to be stdout (fd 1) > exec("/bin/cat") > > Cheers, Warren > _______________________________________________ > TUHS mailing list > TUHS at minnie.tuhs.org > https://minnie.tuhs.org/mailman/listinfo/tuhs > Hi, Warren. That still leaves a process, even if it is a relatively lean one. Besides your fd 0 is presumably already the read end of the input pipe, and fd 1 is already the write end of the output pipe. You could probably reduce the whole thing to the last line. I don't think Doug's request can be done without some new kernel call (or other kernelly goodness) to munge file table entries (nomenclature?) for the process on at least one side (or more likely both) of the self-excisor. Someone may have done it, since I have heard rumours of some novel hacks, but it presumably didn't get very far. Assuming you have pipes on each side, consider what to do with any buffered data. -- Christopher Vance -------------- next part -------------- An HTML attachment was scrubbed... URL: From wkt at tuhs.org Thu Jul 10 18:43:57 2014 From: wkt at tuhs.org (Warren Toomey) Date: Thu, 10 Jul 2014 18:43:57 +1000 Subject: [TUHS] Excise process from a pipe In-Reply-To: References: <201407100249.s6A2nMh3017869@coolidge.cs.dartmouth.edu> <20140710045223.GA19076@www.oztivo.net> Message-ID: <20140710084357.GA27008@www.oztivo.net> > On Thu, Jul 10, 2014 at 2:52 PM, Warren Toomey <[1]wkt at tuhs.org> wrote: > Â Â Â Â close fd 0 and fd 1 > Â Â Â Â dup() read end of pipe 1 to be stdin (fd 0) > Â Â Â Â dup() write end of pipe 2 to be stdout (fd 1) > Â Â Â Â exec("/bin/cat") On Thu, Jul 10, 2014 at 03:06:11PM +1000, Christopher Vance wrote: > Hi, Warren. > That still leaves a process, even if it is a relatively lean one. Hi Chris! Very true. > Besides your fd 0 is presumably already the read end of the input pipe, > and fd 1 is already the write end of the output pipe. You could > probably reduce the whole thing to the last line. Of course. If the shell set up the pipeline then we only have to exec("cat") and leave /bin/cat shuffling the data from one pipe-end to the other. As there are two distinct pipes, each with their own buffers, I can't see a way of coalescing them into a single pipe without, as Chris suggests, some kernelly goodness. Indeed, ugliness and complexity kernel-wise! 
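For concreteness, here is a minimal sketch of that exec("cat") reduction (an illustration added here, not code from the original messages; it assumes the shell has already wired fd 0 to the upstream pipe and fd 1 to the downstream pipe):

    #include <unistd.h>

    int
    main(void)
    {
            char c;

            /* do the real filtering first: pass one line through unchanged,
             * reading a byte at a time with read(2) so no input is left
             * stranded in a stdio buffer when we hand over */
            while (read(0, &c, 1) == 1) {
                    write(1, &c, 1);
                    if (c == '\n')
                            break;
            }

            /* now bow out: stop transforming, but keep the bytes flowing */
            execl("/bin/cat", "cat", (char *)0);
            return 1;       /* reached only if the exec fails */
    }
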
Cheers, Warren From doug at cs.dartmouth.edu Thu Jul 10 22:03:58 2014 From: doug at cs.dartmouth.edu (Doug McIlroy) Date: Thu, 10 Jul 2014 08:03:58 -0400 Subject: [TUHS] Excise process from a pipe In-Reply-To: <20140710045223.GA19076@www.oztivo.net> References: <201407100249.s6A2nMh3017869@coolidge.cs.dartmouth.edu> <20140710045223.GA19076@www.oztivo.net> Message-ID: <201407101203.s6AC3w1K026596@coolidge.cs.dartmouth.edu> In the suggested answer, the code changes but the process survives. From doug at cs.dartmouth.edu Thu Jul 10 22:04:43 2014 From: doug at cs.dartmouth.edu (Doug McIlroy) Date: Thu, 10 Jul 2014 08:04:43 -0400 Subject: [TUHS] Excise process from a pipe In-Reply-To: <20140710045223.GA19076@www.oztivo.net> References: <201407100249.s6A2nMh3017869@coolidge.cs.dartmouth.edu> <20140710045223.GA19076@www.oztivo.net> Message-ID: <201407101204.s6AC4hKQ026601@coolidge.cs.dartmouth.edu> In the suggested answer, the code changes but the process survives. I suspect the answer to my original question is no, but I know only a tiny fraction of the cumulative API of the extended Unix family. Doug >> Was there ever a >> flavor of Unix in which a process could excise itself >> from a pipeline without breaking the pipeline? > > If in the middle of a pipeline, all I can think of is: > > close fd 0 and fd 1 > dup() read end of pipe 1 to be stdin (fd 0) > dup() write end of pipe 2 to be stdout (fd 1) > exec("/bin/cat") From lm at mcvoy.com Fri Jul 11 00:45:03 2014 From: lm at mcvoy.com (Larry McVoy) Date: Thu, 10 Jul 2014 07:45:03 -0700 Subject: [TUHS] Excise process from a pipe In-Reply-To: <201407101204.s6AC4hKQ026601@coolidge.cs.dartmouth.edu> References: <201407100249.s6A2nMh3017869@coolidge.cs.dartmouth.edu> <20140710045223.GA19076@www.oztivo.net> <201407101204.s6AC4hKQ026601@coolidge.cs.dartmouth.edu> Message-ID: <20140710144502.GA24876@mcvoy.com> I'm pretty aware of the various flavors of Unix and unless the process in question is willing to help I can't see how this could work. There are system calls for passing file descriptors but you have the problem that the pipe itself is a buffer of some size and you'd have the problem of draining it. Every utility that you put in a pipeline would have to be reworked to pass file descriptors around, it would be really unpleasant and not at all Unix like. On Thu, Jul 10, 2014 at 08:04:43AM -0400, Doug McIlroy wrote: > In the suggested answer, the code changes but the process survives. > > I suspect the answer to my original question is no, but I know only a tiny > fraction of the cumulative API of the extended Unix family. > > Doug > > >> Was there ever a > >> flavor of Unix in which a process could excise itself > >> from a pipeline without breaking the pipeline? 
> > > > If in the middle of a pipeline, all I can think of is: > > > > close fd 0 and fd 1 > > dup() read end of pipe 1 to be stdin (fd 0) > > dup() write end of pipe 2 to be stdout (fd 1) > > exec("/bin/cat") > _______________________________________________ > TUHS mailing list > TUHS at minnie.tuhs.org > https://minnie.tuhs.org/mailman/listinfo/tuhs -- --- Larry McVoy lm at mcvoy.com http://www.mcvoy.com/lm From jnc at mercury.lcs.mit.edu Fri Jul 11 01:10:21 2014 From: jnc at mercury.lcs.mit.edu (Noel Chiappa) Date: Thu, 10 Jul 2014 11:10:21 -0400 (EDT) Subject: [TUHS] Excise process from a pipe Message-ID: <20140710151021.3ABE018C09F@mercury.lcs.mit.edu> > From: Larry McVoy > Every utility that you put in a pipeline would have to be reworked to > pass file descriptors around Unless the whole operation is supported in the OS directly: if ((pipe1 = process1->stdout) == process2->stdin) && ((pipe2 = process2->stdout) == process3->stdin) { prepend_buffer_contents(pipe1, pipe2); process1->stdout = process2->stdout; kill_pipe(pipe1); } to be invoked from the chain's parent (e.g. shell). (The code would probably want to do something with process2's stdin and stdout, like close them; I wouldn't have the call kill process2 directly, that could be left to the parent, except in the rare cases where it might have some use for the spliced-out process.) Noel From lm at mcvoy.com Fri Jul 11 01:11:11 2014 From: lm at mcvoy.com (Larry McVoy) Date: Thu, 10 Jul 2014 08:11:11 -0700 Subject: [TUHS] Excise process from a pipe In-Reply-To: <20140710151021.3ABE018C09F@mercury.lcs.mit.edu> References: <20140710151021.3ABE018C09F@mercury.lcs.mit.edu> Message-ID: <20140710151111.GE24876@mcvoy.com> In BitKeeper we've got lots of code that deals with jiggering where stdout goes. Making that work on all the Unix variants and Windows has been, um, challenging. Making what you are talking about work is gonna be a mess of buffer management and it's going to be hard to design system calls that would work and still give you reasonable semantics on the pipe. Consider calls that want to know if there is data in the pipe followed by a reconnect. If you really think that this could be done I'd suggest trying to write the man page for the call. I'm not trying to be snarky, in my personal experience I've found the best way to prove out my own ideas is to try and document them for other programmers. If the docs feel like they make sense then the idea usually has merit. I don't know how I'd write the docs for this stuff and have it work with the existing semantics. On Thu, Jul 10, 2014 at 11:10:21AM -0400, Noel Chiappa wrote: > > From: Larry McVoy > > > Every utility that you put in a pipeline would have to be reworked to > > pass file descriptors around > > Unless the whole operation is supported in the OS directly: > > if ((pipe1 = process1->stdout) == process2->stdin) && > ((pipe2 = process2->stdout) == process3->stdin) { > prepend_buffer_contents(pipe1, pipe2); > process1->stdout = process2->stdout; > kill_pipe(pipe1); > } > > to be invoked from the chain's parent (e.g. shell). 
> > (The code would probably want to do something with process2's stdin and > stdout, like close them; I wouldn't have the call kill process2 directly, that > could be left to the parent, except in the rare -- --- Larry McVoy lm at mcvoy.com http://www.mcvoy.com/lm From jnc at mercury.lcs.mit.edu Fri Jul 11 02:06:58 2014 From: jnc at mercury.lcs.mit.edu (Noel Chiappa) Date: Thu, 10 Jul 2014 12:06:58 -0400 (EDT) Subject: [TUHS] Excise process from a pipe Message-ID: <20140710160658.DCB1318C09E@mercury.lcs.mit.edu> > From: Larry McVoy > Making what you are talking about work is gonna be a mess of buffer > management and it's going to be hard to design system calls that would > work and still give you reasonable semantics on the pipe. Consider > calls that want to know if there is data in the pipe Oh, I didn't say it would work well, and cleanly! :-) I mean, taking one element in an existing, operating, chain, and blowing it away, is almost bound to cause problems. My previous note was merely to say that the file descriptor/pipe re-arrangement involved might be easier done with a system call - in fact, now that I think about it, as someone has already sort of pointed out, without a system call to merge the two pipes into one, you have to keep the middle process around, and have it turn into a 'cat'. Thinking out loud for a moment, though, along the lines you suggest.... Here's one problem - suppose process2 has read some data, but not yet processed it and output it towards process3, when you go to do the splice. How would the anything outside the process (be it the OS, or the command interpreter or whatever is initiating the splice) even detect that, much less retrieve the data? Even using a heuristic such as 'wait for process2 to try and read data, at which point we can assume that it no longer has any internally buffered data, and it's OK to do the splice' fails, because process2 may have decided it didn't have a complete semantic unit in hand (e.g. a complete line), and decided to go back and get the rest of the unit before outputting the complete, processed semantic unit (i.e. including data it had previously buffered internally). And suppose the reads _never_ happen to coincide with the semantic units being output; i.e. process2 will _always_ have some buffered data inside it, until the whole chain starts to shut down with EOFs from the first stage? In short, maybe this problem isn't solvable in the general case. In which case I guess we're back to your "Every utility that you put in a pipeline would have to be reworked". Stages would have to have some way to say 'I am not now holding any buffered data', and only when that state was true could they be spliced out. Or there could be some signal defined which means 'go into "not holding any buffered data" state'. At which point my proposed splice() system call might be some use... :-) Noel From lm at mcvoy.com Fri Jul 11 02:04:54 2014 From: lm at mcvoy.com (Larry McVoy) Date: Thu, 10 Jul 2014 09:04:54 -0700 Subject: [TUHS] Excise process from a pipe In-Reply-To: <20140710160658.DCB1318C09E@mercury.lcs.mit.edu> References: <20140710160658.DCB1318C09E@mercury.lcs.mit.edu> Message-ID: <20140710160454.GI24876@mcvoy.com> On Thu, Jul 10, 2014 at 12:06:58PM -0400, Noel Chiappa wrote: > Stages would have to have some way to say 'I am not now holding any buffered > data', and only when that state was true could they be spliced out. Or there > could be some signal defined which means 'go into "not holding any buffered > data" state'. 
At which point my proposed splice() system call might be some > use... :-) Heh. I already claimed splice(2) back in 1998; the Linux guys did implement part of it but never really carried to the logical end I envisioned: http://www.mcvoy.com/lm/bitmover/lm/papers/splice.ps -- --- Larry McVoy lm at mcvoy.com http://www.mcvoy.com/lm From jnc at mercury.lcs.mit.edu Fri Jul 11 02:12:01 2014 From: jnc at mercury.lcs.mit.edu (Noel Chiappa) Date: Thu, 10 Jul 2014 12:12:01 -0400 (EDT) Subject: [TUHS] Excise process from a pipe Message-ID: <20140710161201.6878718C09B@mercury.lcs.mit.edu> PS: I see I have over-generalized the problem. Doug's original message say "a process could excise itself from a pipeline". So presumably the initiation would come from process2 itself, and it would know when it had no internally-buffered data. So now we're back to the issue of 'either we need a system call to merge two pipes into one, or the process has to hang around and turn itself into a cat'. Noel From wkt at tuhs.org Fri Jul 11 11:31:31 2014 From: wkt at tuhs.org (Warren Toomey) Date: Fri, 11 Jul 2014 11:31:31 +1000 Subject: [TUHS] Compiling the unix v5 kernel Message-ID: <20140711013131.GA15385@www.oztivo.net> All, just received this from a fellow who isn't on the TUHS mail list (yet). I've answered him about using mknod (after reading the 6e docs: we don't have 5e docs). I thought I'd forward the e-mail here as a record of an attempt to rebuild the 5e kernel. Cheers, Warren ----- Forwarded message from Mark ----- I hope you don't mind me asking you about compiling the unix v5 kernel. I haven't been able to find any documentation for it. I tried this: ./mkconf rk tm tc dc lp ctrl-d # as mch.s # mv a.out mch.o # cc -c c.c # as l.s # ld -x a.out mch.o c.o ../lib1 ../lib2 There was no m40.s in v5 so I substituted mch.s for m40.s and that seemed to create a kernel and it booted but I can't access /dev/mt0. Any pointers are appreciated. Thanks for all your work on early unix, I thought it was very interesting. Mark ----- End forwarded message ----- From wkt at tuhs.org Fri Jul 11 14:30:09 2014 From: wkt at tuhs.org (Warren Toomey) Date: Fri, 11 Jul 2014 14:30:09 +1000 Subject: [TUHS] Compiling the unix v5 kernel Message-ID: <20140711043009.GB21711@www.oztivo.net> here's the e-mail that I sent on to Mark in the hope that it would give him enough information to get his 5th Edition kernel working with a tape device. He has also now joined the list. Welcome aboard, Mark. Warren ----- Forwarded message from Warren Toomey ----- On Thu, Jul 10, 2014 at 05:56:04PM -0400, Mark Longridge wrote: > There was no m40.s in v5 so I substituted mch.s for m40.s and that > seemed to create a kernel and it booted but I can't access /dev/mt0. Mark, glad to hear you were able to rebuild the kernel. I've never tried on 5th Edition. Just reading through the 6th Edition docs, it says this: ----- Next you must put in all of the special files in the directory /dev using mknod‐VIII. Print the configuration file c.c created above. This is the major device switch of each device class (block and character). There is one line for each device configured in your system and a null line for place holding for those devices not configured. The block special devices are put in first by executing the fol‐ lowing generic command for each disk or tape drive. (Note that some of these files already exist in the directory /dev. Examine each file with ls‐I with −l flag to see if the file should be removed.) 
/etc/mknod /dev/NAME b MAJOR MINOR The NAME is selected from the following list: c.c NAME device rf rf0 RS fixed head disk tc tap0 TU56 DECtape rk rk0 RK03 RK05 moving head disk tm mt0 TU10 TU16 magtape rp rp0 RP moving head disk hs hs0 RS03 RS04 fixed head disk hp hp0 RP04 moving head disk The major device number is selected by counting the line number (from zero) of the device’s entry in the block con‐ figuration table. Thus the first entry in the table bdevsw would be major device zero. The minor device is the drive number, unit number or partition as described under each device in section IV. The last digit of the name (all given as 0 in the table above) should reflect the minor device number. For tapes where the unit is dial selectable, a special file may be made for each possible selection. The same goes for the character devices. Here the names are arbitrary except that devices meant to be used for teletype access should be named /dev/ttyX, where X is any character. The files tty8 (console), mem, kmem, null are already correctly configured. The disk and magtape drivers provide a ‘raw’ interface to the device which provides direct transmission between the user’s core and the device and allows reading or writing large records. The raw device counts as a character device, and should have the name of the corresponding standard block special file with ‘r’ prepended. Thus the raw magtape files would be called /dev/rmtX. When all the special files have been created, care should be taken to change the access modes (chmod‐I) on these files to appropriate values. ----- Looking at the c.c generated, it has: int (*bdevsw[])() { &nulldev, &nulldev, &rkstrategy, &rktab, &tmopen, &tmclose, &tmstrategy, &tmtab, /* 1 */ &nulldev, &tcclose, &tcstrategy, &tctab, 0 }; int (*cdevsw[])() { &klopen, &klclose, &klread, &klwrite, &klsgtty, &nulldev, &nulldev, &mmread, &mmwrite, &nodev, &nulldev, &nulldev, &rkread, &rkwrite, &nodev, &tmopen, &tmclose, &tmread, &tmwrite, &nodev, /* 3 */ &dcopen, &dcclose, &dcread, &dcwrite, &dcsgtty, &lpopen, &lpclose, &nodev, &lpwrite, &nodev, 0 }; Following on from the docs, you should be able to make the /dev/mt0 device file by doing: /etc/mknod /dev/tm0 b 1 0 And possibly also: /etc/mknod /dev/rmt0 c 3 0 Cheers, Warren From cubexyz at gmail.com Sun Jul 13 14:36:45 2014 From: cubexyz at gmail.com (Mark Longridge) Date: Sun, 13 Jul 2014 00:36:45 -0400 Subject: [TUHS] Unix v5 and beyond Message-ID: Hi folks, I'm interested in comparing notes with C programmers who have written programs for Unix v5, v6 and v7. Also I'm interested to know if there's anything similar to the scanf function for unix v5. Stdio and iolib I know well enough to do file IO but v5 predates iolib. Back in 1988 I tried to write a universal rubik's cube program which I called unirubik and after discovering TUHS I tried to backport it to v7 (which was easy) and v6 (which was a bit harder) and now I'm trying to backport it to v5. The v5 version currently doesn't have the any file IO capability as yet. Here are a few links to the various versions: http://www.maxhost.org/other/unirubik.c.v7 http://www.maxhost.org/other/unirubik.c.v6 http://www.maxhost.org/other/unirubik.c.v5 Also I've compiled the file utility from v6 in v5 and it seemed to work fine. Once I got /dev/mt0 working for unix v5 (thanks to Warren's help) I transferred the binary for the paging utility pg into it. This version of pg I believe was from 1BSD. 
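On the scanf question above: lacking stdio or iolib, one way is simply to do it by hand with read(2). A rough illustrative sketch (not code from the original message, and untested on a real v5 system; on a modern system add #include <unistd.h>):

    /* pull one decimal integer from fd 0 using only read(2);
     * returns -1 if end of input arrives before any digit */
    int
    readnum()
    {
            char c;
            int n, got;

            n = 0;
            got = 0;
            while (read(0, &c, 1) == 1) {
                    if (c >= '0' && c <= '9') {
                            n = n * 10 + (c - '0');
                            got = 1;
                    } else if (got)
                            break;  /* first non-digit after the digits ends the number */
            }
            return got ? n : -1;
    }
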
I did some experimenting with math functions which can be seen here: http://www.maxhost.org/other/math1.c This will compile on unix v5. My initial impression of Unix v5 was that it was a primitive and almost unusable version of Unix but now that I understand it a bit better it seems a fairly complete system. I'm a bit foggy on what the memory limits are with v5 and v6. Unix v7 seems to run under simh emulating a PDP-11/70 with 2 megabytes of ram (any more than that and the kernel panics). Also I'd be interested in seeing the source code for Ken Thompson's APL interpreter for Unix v5. I know it does exist as it is referenced in the Unix v5 manual. The earliest version I could find was dated Oct 1976 and I've written some notes on it here: http://apl.maxhost.org/getting-apl-11-1976-to-work.txt Ok, that's about it for now. Is there any chance of going further back to v4, v3, v2 etc? Mark From dave at horsfall.org Sun Jul 13 15:02:20 2014 From: dave at horsfall.org (Dave Horsfall) Date: Sun, 13 Jul 2014 15:02:20 +1000 (EST) Subject: [TUHS] Unix v5 and beyond In-Reply-To: References: Message-ID: On Sun, 13 Jul 2014, Mark Longridge wrote: > I'm interested in comparing notes with C programmers who have written > programs for Unix v5, v6 and v7. I'll try and remember, but this was about 40 years ago... > Also I'm interested to know if there's anything similar to the scanf > function for unix v5. Stdio and iolib I know well enough to do file IO > but v5 predates iolib. Not a chance; about all it had were the system calls. Portable I/O came with either Edition 6 or PWB, then Standard I/O replaced it. I could be wrong, of course... Ed5 may have had getc()/putc() - I dunno. > Back in 1988 I tried to write a universal rubik's cube program which I > called unirubik and after discovering TUHS I tried to backport it to v7 > (which was easy) and v6 (which was a bit harder) and now I'm trying to > backport it to v5. The v5 version currently doesn't have the any file IO > capability as yet. Here are a few links to the various versions: > > http://www.maxhost.org/other/unirubik.c.v7 > http://www.maxhost.org/other/unirubik.c.v6 > http://www.maxhost.org/other/unirubik.c.v5 Hmmm... I must have a peek at them, and for laughs port the v7 one to BSD/Linux/Mac. [...] > My initial impression of Unix v5 was that it was a primitive and almost > unusable version of Unix but now that I understand it a bit better it > seems a fairly complete system. I'm a bit foggy on what the memory > limits are with v5 and v6. Unix v7 seems to run under simh emulating a > PDP-11/70 with 2 megabytes of ram (any more than that and the kernel > panics). Well, complete for the day... Memory limits were basically 64kw for each space (I'm not even sure whether Ed5 had sep/id space). The irony of the PDP-11 was that it could support virtual memory in theory, but simply didn't have enough address registers. Or am I thinking of some other box? > Also I'd be interested in seeing the source code for Ken Thompson's APL > interpreter for Unix v5. I know it does exist as it is referenced in the > Unix v5 manual. The earliest version I could find was dated Oct 1976 and > I've written some notes on it here: > > http://apl.maxhost.org/getting-apl-11-1976-to-work.txt Gawd; I'd love to see APL again! I used it on the IBM-360. > Ok, that's about it for now. Is there any chance of going further back > to v4, v3, v2 etc? Very little; Ed5 was the first public release, so unless an old-timer has them squirreled away somewhere... 
-- Dave From wkt at tuhs.org Sun Jul 13 16:00:01 2014 From: wkt at tuhs.org (Warren Toomey) Date: Sun, 13 Jul 2014 16:00:01 +1000 Subject: [TUHS] Unix v5 and beyond In-Reply-To: References: Message-ID: <816d95f0-0f6b-467e-aa69-7d0084ccbf88@email.android.com> Mark, we did resurrect the 1st Edition kernel and with it the C compiler from 2nd Edition. I think from memory the address space for each process is 16K and there are no structs on the C language at this point. Not sure what your program needs in terms of language support. Cheers, Warren -- Sent from my Android phone with K-9 Mail. Please excuse my brevity. -------------- next part -------------- An HTML attachment was scrubbed... URL: From wkt at tuhs.org Sun Jul 13 16:04:48 2014 From: wkt at tuhs.org (Warren Toomey) Date: Sun, 13 Jul 2014 16:04:48 +1000 Subject: [TUHS] Unix v5 and beyond In-Reply-To: <816d95f0-0f6b-467e-aa69-7d0084ccbf88@email.android.com> References: <816d95f0-0f6b-467e-aa69-7d0084ccbf88@email.android.com> Message-ID: <8f25f085-8427-4d8b-bdde-feacbee7eaf3@email.android.com> The URL for the 1st edition stuff is http://code.google.com/p/unix-jun72/. Warren On 13 July 2014 16:00:01 AEST, Warren Toomey wrote: >Mark, we did resurrect the 1st Edition kernel and with it the C >compiler from 2nd Edition. I think from memory the address space for >each process is 16K and there are no structs on the C language at this >point. Not sure what your program needs in terms of language support. >Cheers, Warren >-- >Sent from my Android phone with K-9 Mail. Please excuse my brevity. > >------------------------------------------------------------------------ > >_______________________________________________ >TUHS mailing list >TUHS at minnie.tuhs.org >https://minnie.tuhs.org/mailman/listinfo/tuhs -- Sent from my Android phone with K-9 Mail. Please excuse my brevity. -------------- next part -------------- An HTML attachment was scrubbed... URL: From cubexyz at gmail.com Mon Jul 14 06:38:02 2014 From: cubexyz at gmail.com (Mark Longridge) Date: Sun, 13 Jul 2014 16:38:02 -0400 Subject: [TUHS] Hello World compiled in v1/v2 Message-ID: Hi folks, Yes I have managed to compile Hello World on v1/v2. the cp command seems different from all other versions, I'm not sure I understand it so I used the mv command instead which worked as expected. I had to "as crt0.s" and put crt0.o in /usr/lib and then it compiled without issue. Is the kernel in /etc? I saw a core file in /etc that looked like it would be about the right size. No unix file in the root directory which surprised me. At least I know what crt0.s does now. I guess a port of unirubik to v1/v2 is in the cards (maybe). Mark From dave at horsfall.org Mon Jul 14 07:05:20 2014 From: dave at horsfall.org (Dave Horsfall) Date: Mon, 14 Jul 2014 07:05:20 +1000 (EST) Subject: [TUHS] Hello World compiled in v1/v2 In-Reply-To: References: Message-ID: On Sun, 13 Jul 2014, Mark Longridge wrote: > Yes I have managed to compile Hello World on v1/v2. Congrats! > the cp command seems different from all other versions, I'm not sure I > understand it so I used the mv command instead which worked as expected. I'm intrigued; in what way is it different? [...] > At least I know what crt0.s does now. I guess a port of unirubik to > v1/v2 is in the cards (maybe). crt0.s -> C Run Time (support). It jiggers the stack pointer in some obscure manner which I never did quite grok. 
-- Dave From jnc at mercury.lcs.mit.edu Mon Jul 14 09:21:20 2014 From: jnc at mercury.lcs.mit.edu (Noel Chiappa) Date: Sun, 13 Jul 2014 19:21:20 -0400 (EDT) Subject: [TUHS] Hello World compiled in v1/v2 Message-ID: <20140713232120.CB77818C0BE@mercury.lcs.mit.edu> > From: Dave Horsfall > crt0.s -> C Run Time (support). It jiggers the stack pointer in some > obscure manner It's the initial startup; it sets up the arguments into the canonical C form, and then calls main(). (It does not do the initial stack frame, a canonical call to CSV from inside main() will do that.) Here are the exact details: On an exec(), once the exec() returns, the arguments are available at the very top of memory: the arguments themselves are at the top, as a sequence of zero-terminated byte strings. Below them is an array of word pointers to the arguments, with a -1 in the last entry. (I.e. if there are N arguments, the array of pointers has N+1 entries, with the last being -1.) Below that is a word containing the size of that array (i.e. N+1). The Stack Pointer register points to that count word; all other registers (including the PC) are cleared. All CRT0.s does is move that argument count word down one location on the stack, adjust the SP to point to it, and put a pointer to the argument pointer table in the now-free word (between the argument count, and the first element of the argument pointer table). Hence the canonical C main() argument list of: int argc; int **argv; If/when main() returns, it takes the return value (passed in r0) and calls exit() with it. (If using the stdio library, that exit() flushes the buffers and closes all open files.) Should _that_ return, it does a 'sys exit'. There are two variant forms: fcrt0.s arranges for the floating point emulation to be loaded, and hooked up; mcrt0.s (much more complicated) arranges for process monitoring to be done. Noel From cubexyz at gmail.com Mon Jul 14 10:16:46 2014 From: cubexyz at gmail.com (Mark Longridge) Date: Sun, 13 Jul 2014 20:16:46 -0400 Subject: [TUHS] Unix v1/v2 cp command Message-ID: >> the cp command seems different from all other versions, I'm not sure I >> understand it so I used the mv command instead which worked as expected. > > I'm intrigued; in what way is it different? It seems that one must first cp a file to another file then do a mv to actually put it into a different directory: e.g. while in /usr/src as ctr0.s cp a.out ctr0.o mv ctr0.o /usr/lib ...rather than trying to just "cp a.out /usr/lib/ctr0.o" Mark From dave at horsfall.org Mon Jul 14 10:32:35 2014 From: dave at horsfall.org (Dave Horsfall) Date: Mon, 14 Jul 2014 10:32:35 +1000 (EST) Subject: [TUHS] Unix v1/v2 cp command In-Reply-To: References: Message-ID: On Sun, 13 Jul 2014, Mark Longridge wrote: > > I'm intrigued; in what way is it different? > > It seems that one must first cp a file to another file then do a mv to > actually put it into a different directory: That generally means that you don't have write permission on the file; I assume that you checked for that? -- Dave From doug at cs.dartmouth.edu Tue Jul 15 00:13:06 2014 From: doug at cs.dartmouth.edu (Doug McIlroy) Date: Mon, 14 Jul 2014 10:13:06 -0400 Subject: [TUHS] Excise process from a pipe Message-ID: <201407141413.s6EED6D7015657@coolidge.cs.dartmouth.edu> Larry wrote in separate emails > If you really think that this could be done I'd suggest trying to > write the man page for the call. > I already claimed splice(2) back in 1998; the Linux guys did > implement part of it ... 
I began to write the following spec without knowing that Linux had appropriated the name "splice" for a capability that was in DTSS over 40 years ago under a more accurate name, "copy". The spec below isn't hard: just hook two buffer chains together and twiddle a couple of file desciptors. For stdio, of course, one would need fsplice(3), which must flush the in-process buffers--penance for stdio's original sin of said buffering. Incidentally, the question is not abstract. I have code that takes quadratic time because it grows a pipeline of length proportional to the input, though only a bounded number of the processes are usefully active at any one time; the rest are cats. Splicing out the cats would make it linear. Linear approaches that don't need splice are not nearly as clean. Doug SPLICE(2) SYNOPSIS int splice(int fd0, int fd1); DESCRIPTION Splice connects the source for a reading file descriptor fd0 directly to the destination for a writing file descriptor fd1 and closes both fd0 and fd1. Either the source or the destination must be another process (via a pipe). Data buffered for fd0 at the time of splicing follows such data for fd1. If both source and destination are processes, they become connected by a pipe. If the source (destination) is a process, the file descriptor in that process becomes write-only (read-only). If file descriptor fd0 is associated with a pipe and fd1 is not, then fd1 is updated to reflect the effect of buffered data for fd0, and the pipe's other descriptor is replaced with a duplicate of fd1. The same statement holds when "fd0" is exchanged with "fd1" and "write" is exchanged with "read". Splice's effect on any file descriptor propagates to shared file descriptors in all processes. NOTES One file must be a pipe lest the spliced data stream have no controlling process. It might seem that a socket would suffice, ceding control to a remote system; but that would allow the uncontrolled connection file-socket-socket-file. The provision about a file descriptor becoming either write-only or read-only sidesteps complications due to read-write file descriptors. From jnc at mercury.lcs.mit.edu Tue Jul 15 01:12:00 2014 From: jnc at mercury.lcs.mit.edu (Noel Chiappa) Date: Mon, 14 Jul 2014 11:12:00 -0400 (EDT) Subject: [TUHS] Excise process from a pipe Message-ID: <20140714151200.5CBE918C0C2@mercury.lcs.mit.edu> > From: Doug McIlroy > The spec below isn't hard: just hook two buffer chains together and > twiddle a couple of file desciptors. How amusing! I was about to send a message with almost the exact same description - it even had the exact same syntax for the splice() call! A couple of points from my thoughts which were not covered in your message: In thinking about how to implement it, I was thinking that if there was any buffered data in an output pipe, that the process doing the splice() would wait (inside the splice() system call) on all the buffered data being read by the down-stream process. The main point of this is for the case where the up-stream is the head of the chain (i.e. it's reading from a file), where one more or less has to wait, because one will want to set the down-streams' file descriptor to point to the file - but one can't really do that until all the buffered data was consumed (else it will be lost - one can't exactly put it into the file :-). 
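(For concreteness, a hypothetical sketch of how a pass-through stage might invoke the splice() specified above to excise itself; the call exists in no released Unix, and the usual -1-on-error return convention is assumed, since the spec does not say:)

    #include <stdio.h>
    #include <stdlib.h>

    extern int splice(int fd0, int fd1);    /* the proposed call, per the spec above */

    int
    main(void)
    {
            /* fd 0 is the upstream pipe, fd 1 the downstream pipe; per the
             * spec, a successful splice joins them and closes both for us */
            if (splice(0, 1) == -1) {
                    perror("splice");
                    exit(1);
            }
            exit(0);
    }
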
As a side-benefit, if one adopted that line, one wouldn't have to deal with the case (in the middle of the chain) of a pipe-pipe splice with buffered data in both pipes (where one would have to copy the data across); instead one could just use the exact same code for both cases, and in that case the wait would be until the down-stream pipe can simply be discarded. One thing I couldn't decide is what to do if the upstream is a pipe with buffered data, and the downstream is a file - does one discard the buffered data, write it to the file, abort the system call so the calling process can deal with the buffered data, or what? Perhaps there could be a flag argument to control the behaviour in such cases. Speaking of which, I'm not sure I quite grokked this: > If file descriptor fd0 is associated with a pipe and fd1 is not, then > fd1 is updated to reflect the effect of buffered data for fd0, and the > pipe's other descriptor is replaced with a duplicate of fd1. But what happens to the data? Is it written to the file? (That's the implication, but it's not stated directly.) > The same statement holds when "fd0" is exchanged with "fd1" and "write" > is exchanged with "read". Ditto - what happens to the data? One can't simply stuff it into the input file? I think the 'wait in the system call until it drains' approach is better. Also, it seemed to me that the right thing to do was to bash the entry in the system-wide file table (i.e. not the specific pointers in the u area). That would automatically pick up any children. Finally, there are 'potential' security issues (I say 'potential' because I'm not sure they're really problems). For instance, suppose that an end process (i.e. reading/writing a file) has access to that file (e.g. because it executed a SUID program), but its neighbour process does not. If the end process wants to go away, should the neighbour process be allowed access to the file? A 'simple' implementation would do so (since IIRC file permissions are only checked at open time, not read/write time). I don't pretend that this is a complete list of issues - just what I managed to think up while considering the new call. > For stdio, of course, one would need fsplice(3), which must flush the > in-process buffers--penance for stdio's original sin of said buffering. Err, why is buffering data in the process a sin? (Or was this just a humourous aside?) Noel From doug at cs.dartmouth.edu Tue Jul 15 12:31:27 2014 From: doug at cs.dartmouth.edu (Doug McIlroy) Date: Mon, 14 Jul 2014 22:31:27 -0400 Subject: [TUHS] the sin of buffering [offshoot of excise process from a pipeline] Message-ID: <201407150231.s6F2VRK0022875@coolidge.cs.dartmouth.edu> > Err, why is buffering data in the process a sin? (Or was this just a humourous aside?) Process A spawns process B, which reads stdin with buffering. B gets all it deserves from stdin and exits. What's left in the buffer, intehded for A, is lost. Sinful. From lm at mcvoy.com Tue Jul 15 12:40:42 2014 From: lm at mcvoy.com (Larry McVoy) Date: Mon, 14 Jul 2014 19:40:42 -0700 Subject: [TUHS] the sin of buffering [offshoot of excise process from a pipeline] In-Reply-To: <201407150231.s6F2VRK0022875@coolidge.cs.dartmouth.edu> References: <201407150231.s6F2VRK0022875@coolidge.cs.dartmouth.edu> Message-ID: <20140715024042.GD13698@mcvoy.com> On Mon, Jul 14, 2014 at 10:31:27PM -0400, Doug McIlroy wrote: > > Err, why is buffering data in the process a sin? (Or was this just a > humourous aside?) > > Process A spawns process B, which reads stdin with buffering. 
B gets > all it deserves from stdin and exits. What's left in the buffer, > intehded for A, is lost. Sinful. It really depends on what you want. That buffering is a big win for some use cases. Even on today's processors reading a byte at a time via read(2) is costly. Like 5000x more costly on the laptop I'm typing on: calvin:~/tmp lmdd opat=1 move=100m of=XXX 104.8576 MB in 0.1093 secs, 959.5578 MB/sec calvin:~/tmp time a.out fd < XXX real 0m14.754s user 0m1.516s sys 0m13.201s calvin:~/tmp time a.out stdio < XXX real 0m0.003s user 0m0.000s sys 0m0.000s calvin:~/tmp bc 14.754/.003 4918.00000000000000000000 #include #include #include #define unless(x) if (!(x)) #define streq(a, b) !strcmp(a, b) main(int ac, char **av) { char c; unless (ac == 2) exit(1); if (streq(av[1], "stdio")) { while ((c = fgetc(stdin)) != EOF) ; } else { while (read(0, &c, 1) == 1) ; } exit(0); } From scj at yaccman.com Wed Jul 16 04:55:20 2014 From: scj at yaccman.com (scj at yaccman.com) Date: Tue, 15 Jul 2014 11:55:20 -0700 Subject: [TUHS] the sin of buffering [offshoot of excise process from a pipeline] In-Reply-To: <201407150231.s6F2VRK0022875@coolidge.cs.dartmouth.edu> References: <201407150231.s6F2VRK0022875@coolidge.cs.dartmouth.edu> Message-ID: <5f3101eb46bd1515ea3182d9c2ef89c5.squirrel@webmail.yaccman.com> Bah! This is a bug in Unix, IMHO. We would consider it a bug if a buffered output file refused to dump it's output buffer upon exit. It seems to me to be just as much a bug if a buffered input file refuses to push back its unused input on exit. Unix should have provided a mechanism to permit this... Steve >> Err, why is buffering data in the process a sin? (Or was this just a > humourous aside?) > > Process A spawns process B, which reads stdin with buffering. B gets > all it deserves from stdin and exits. What's left in the buffer, > intehded for A, is lost. Sinful. > _______________________________________________ > TUHS mailing list > TUHS at minnie.tuhs.org > https://minnie.tuhs.org/mailman/listinfo/tuhs > From doug at cs.dartmouth.edu Wed Jul 16 09:43:49 2014 From: doug at cs.dartmouth.edu (Doug McIlroy) Date: Tue, 15 Jul 2014 19:43:49 -0400 Subject: [TUHS] the sin of buffering [offshoot of excise process from a pipeline] Message-ID: <201407152343.s6FNhnUT001960@coolidge.cs.dartmouth.edu> Yes, an evil necessary to get things going. The very definition of original sin. Doug Larry McVoy wrote: >>>> For stdio, of course, one would need fsplice(3), which must flush the >>>> in-process buffers--penance for stdio's original sin of said buffering. >>> Err, why is buffering data in the process a sin? (Or was this just a >>> humourous aside?) >> Process A spawns process B, which reads stdin with buffering. B gets >> all it deserves from stdin and exits. What's left in the buffer, >> intehded for A, is lost. Sinful. > It really depends on what you want. That buffering is a big win for > some use cases. Even on today's processors reading a byte at a time via > read(2) is costly. Like 5000x more costly on the laptop I'm typing on: From lm at mcvoy.com Wed Jul 16 10:32:20 2014 From: lm at mcvoy.com (Larry McVoy) Date: Tue, 15 Jul 2014 17:32:20 -0700 Subject: [TUHS] the sin of buffering [offshoot of excise process from a pipeline] In-Reply-To: <201407152343.s6FNhnUT001960@coolidge.cs.dartmouth.edu> References: <201407152343.s6FNhnUT001960@coolidge.cs.dartmouth.edu> Message-ID: <20140716003220.GA24974@mcvoy.com> I dunno, we have a distributed source management system that uses a lot of network I/O. 
We've carefully layered stdio on top of it because we had many cases where it was a performance bummer. Personally, I've come to really love stdio, at least our version of it. Want your stream compressed or uncompressed? fpush(&stdin, fopen_vzip(stdin, "r")); Want your stream integrity checked with a CRC per block and an XOR block at the end so you can correct any single block error? fpush(&stdout, fopen_crc(stdout, "w", 0, 0)); I'm a performance guy for the most part and while read/write seem like the fastest way to move stuff around that's only true for really nicely formed data, page sized blocks or bigger. Fine for benchmarking but if you want to approach that performance with poorly formed data, like small blocks, different sized blocks, that buffering layer smooths things out. You pay an extra bcopy() but that's typically lost in the noise. I used to hate the idea of stdio but working in real world applications where I can't control the size of the data coming at me, yeah, I've come to love stdio. It's pretty darn useful. On Tue, Jul 15, 2014 at 07:43:49PM -0400, Doug McIlroy wrote: > Yes, an evil necessary to get things going. > The very definition of original sin. > > Doug > > Larry McVoy wrote: > > >>>> For stdio, of course, one would need fsplice(3), which must flush the > >>>> in-process buffers--penance for stdio's original sin of said buffering. > > >>> Err, why is buffering data in the process a sin? (Or was this just a > >>> humourous aside?) > > >> Process A spawns process B, which reads stdin with buffering. B gets > >> all it deserves from stdin and exits. What's left in the buffer, > >> intehded for A, is lost. Sinful. > > > It really depends on what you want. That buffering is a big win for > > some use cases. Even on today's processors reading a byte at a time via > > read(2) is costly. Like 5000x more costly on the laptop I'm typing on: -- --- Larry McVoy lm at mcvoy.com http://www.mcvoy.com/lm From cowan at mercury.ccil.org Wed Jul 16 13:53:03 2014 From: cowan at mercury.ccil.org (John Cowan) Date: Tue, 15 Jul 2014 23:53:03 -0400 Subject: [TUHS] the sin of buffering [offshoot of excise process from a pipeline] In-Reply-To: <20140716003220.GA24974@mcvoy.com> References: <201407152343.s6FNhnUT001960@coolidge.cs.dartmouth.edu> <20140716003220.GA24974@mcvoy.com> Message-ID: <20140716035303.GO10065@mercury.ccil.org> Larry McVoy scripsit: > Want your stream compressed or uncompressed? > > fpush(&stdin, fopen_vzip(stdin, "r")); Me, I would have done it with freopen(stdin, "rv"). -- John Cowan http://www.ccil.org/~cowan cowan at ccil.org Si hoc legere scis, nimium eruditionis habes. From lm at mcvoy.com Wed Jul 16 14:05:09 2014 From: lm at mcvoy.com (Larry McVoy) Date: Tue, 15 Jul 2014 21:05:09 -0700 Subject: [TUHS] the sin of buffering [offshoot of excise process from a pipeline] In-Reply-To: <20140716035303.GO10065@mercury.ccil.org> References: <201407152343.s6FNhnUT001960@coolidge.cs.dartmouth.edu> <20140716003220.GA24974@mcvoy.com> <20140716035303.GO10065@mercury.ccil.org> Message-ID: <20140716040509.GA27375@mcvoy.com> On Tue, Jul 15, 2014 at 11:53:03PM -0400, John Cowan wrote: > Larry McVoy scripsit: > > > Want your stream compressed or uncompressed? > > > > fpush(&stdin, fopen_vzip(stdin, "r")); > > Me, I would have done it with freopen(stdin, "rv"). We tried that but the problem is that you can't encode all the options you want in just a character. 
Compression doesn't take options, the CRC/XOR layer wants to know how big you might think the file is (because we support blocksizes from about 256B to 256K and we want to know the file size to guess the block size). -- --- Larry McVoy lm at mcvoy.com http://www.mcvoy.com/lm From cowan at mercury.ccil.org Wed Jul 16 16:03:58 2014 From: cowan at mercury.ccil.org (John Cowan) Date: Wed, 16 Jul 2014 02:03:58 -0400 Subject: [TUHS] the sin of buffering [offshoot of excise process from a pipeline] In-Reply-To: <20140716040509.GA27375@mcvoy.com> References: <201407152343.s6FNhnUT001960@coolidge.cs.dartmouth.edu> <20140716003220.GA24974@mcvoy.com> <20140716035303.GO10065@mercury.ccil.org> <20140716040509.GA27375@mcvoy.com> Message-ID: <20140716060358.GP10065@mercury.ccil.org> Larry McVoy scripsit: > We tried that but the problem is that you can't encode all the options you > want in just a character. Compression doesn't take options, the CRC/XOR > layer wants to know how big you might think the file is (because we > support blocksizes from about 256B to 256K and we want to know the > file size to guess the block size). It's a string: you can have as many characters as you want. -- John Cowan http://www.ccil.org/~cowan cowan at ccil.org Dievas dave dantis; Dievas duos duonos --Lithuanian proverb Deus dedit dentes; deus dabit panem --Latin version thereof Deity donated dentition; deity'll donate doughnuts --English version by Muke Tever God gave gums; God'll give granary --Version by Mat McVeagh From lm at mcvoy.com Thu Jul 17 00:30:39 2014 From: lm at mcvoy.com (Larry McVoy) Date: Wed, 16 Jul 2014 07:30:39 -0700 Subject: [TUHS] the sin of buffering [offshoot of excise process from a pipeline] In-Reply-To: <20140716060358.GP10065@mercury.ccil.org> References: <201407152343.s6FNhnUT001960@coolidge.cs.dartmouth.edu> <20140716003220.GA24974@mcvoy.com> <20140716035303.GO10065@mercury.ccil.org> <20140716040509.GA27375@mcvoy.com> <20140716060358.GP10065@mercury.ccil.org> Message-ID: <20140716143039.GA31888@mcvoy.com> On Wed, Jul 16, 2014 at 02:03:58AM -0400, John Cowan wrote: > Larry McVoy scripsit: > > > We tried that but the problem is that you can't encode all the options you > > want in just a character. Compression doesn't take options, the CRC/XOR > > layer wants to know how big you might think the file is (because we > > support blocksizes from about 256B to 256K and we want to know the > > file size to guess the block size). > > It's a string: you can have as many characters as you want. I understand your desire to have one API. We tried and it just wasn't practical. Imagine pushing an encryption layer that wants a key, XOR layer that wants block size, etc. From crossd at gmail.com Thu Jul 17 00:56:57 2014 From: crossd at gmail.com (Dan Cross) Date: Wed, 16 Jul 2014 10:56:57 -0400 Subject: [TUHS] the sin of buffering [offshoot of excise process from a pipeline] In-Reply-To: <20140716143039.GA31888@mcvoy.com> References: <201407152343.s6FNhnUT001960@coolidge.cs.dartmouth.edu> <20140716003220.GA24974@mcvoy.com> <20140716035303.GO10065@mercury.ccil.org> <20140716040509.GA27375@mcvoy.com> <20140716060358.GP10065@mercury.ccil.org> <20140716143039.GA31888@mcvoy.com> Message-ID: Why can't those be embedded in the relevant string? freopen(fp, "rx{128}") or something? 
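As a purely made-up illustration of what that embedding could look like (the brace syntax and the helper below are invented for the example, not an API from this thread):

    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>

    /* invented helper: pull a numeric option such as "{128}" out of an
     * fopen-style mode string, falling back to a default if none given */
    static long
    mode_option(const char *mode, long dflt)
    {
            const char *p = strchr(mode, '{');

            return p ? strtol(p + 1, NULL, 10) : dflt;
    }

    int
    main(void)
    {
            printf("%ld\n", mode_option("rx{128}", 0));     /* prints 128 */
            printf("%ld\n", mode_option("r", 4096));        /* prints 4096 */
            return 0;
    }
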
On Wed, Jul 16, 2014 at 10:30 AM, Larry McVoy wrote: > On Wed, Jul 16, 2014 at 02:03:58AM -0400, John Cowan wrote: > > Larry McVoy scripsit: > > > > > We tried that but the problem is that you can't encode all the options > you > > > want in just a character. Compression doesn't take options, the > CRC/XOR > > > layer wants to know how big you might think the file is (because we > > > support blocksizes from about 256B to 256K and we want to know the > > > file size to guess the block size). > > > > It's a string: you can have as many characters as you want. > > I understand your desire to have one API. We tried and it just wasn't > practical. Imagine pushing an encryption layer that wants a key, > XOR layer that wants block size, etc. > _______________________________________________ > TUHS mailing list > TUHS at minnie.tuhs.org > https://minnie.tuhs.org/mailman/listinfo/tuhs > -------------- next part -------------- An HTML attachment was scrubbed... URL: From lm at mcvoy.com Thu Jul 17 01:41:53 2014 From: lm at mcvoy.com (Larry McVoy) Date: Wed, 16 Jul 2014 08:41:53 -0700 Subject: [TUHS] the sin of buffering [offshoot of excise process from a pipeline] In-Reply-To: References: <201407152343.s6FNhnUT001960@coolidge.cs.dartmouth.edu> <20140716003220.GA24974@mcvoy.com> <20140716035303.GO10065@mercury.ccil.org> <20140716040509.GA27375@mcvoy.com> <20140716060358.GP10065@mercury.ccil.org> <20140716143039.GA31888@mcvoy.com> Message-ID: <20140716154153.GC31888@mcvoy.com> What is being provided is a generic layering system on top of stdio. Any sort of conversion you want. Encryption, CRC, XOR block, compression (we support gzip and lz4). Those are just the layers we use right now, it's easy to imagine others being added. The point is that there is no one API that is going to pleasantly encode all of the options to all of those layers and any that may come later. Are you seriously suggesting that you want to read the freopen(3) man page and see all of these options explained? That's the classic open source way, dump everything in one poorly thought out man page. It's not the Unix way, people think about it harder. For the record, I pushed for the single string encoding as well but got pushed off it as I realized the API wasn't as simple as I imagined. While you could do it that way you shouldn't do it that way, it's just not a good API. I'm very pleased with how it turned out in our code, other than a handful of fpush() calls, it just looks like stock stdio. On Wed, Jul 16, 2014 at 10:56:57AM -0400, Dan Cross wrote: > Why can't those be embedded in the relevant string? freopen(fp, "rx{128}") > or something? > > > On Wed, Jul 16, 2014 at 10:30 AM, Larry McVoy wrote: > > > On Wed, Jul 16, 2014 at 02:03:58AM -0400, John Cowan wrote: > > > Larry McVoy scripsit: > > > > > > > We tried that but the problem is that you can't encode all the options > > you > > > > want in just a character. Compression doesn't take options, the > > CRC/XOR > > > > layer wants to know how big you might think the file is (because we > > > > support blocksizes from about 256B to 256K and we want to know the > > > > file size to guess the block size). > > > > > > It's a string: you can have as many characters as you want. > > > > I understand your desire to have one API. We tried and it just wasn't > > practical. Imagine pushing an encryption layer that wants a key, > > XOR layer that wants block size, etc. 
> > _______________________________________________ > > TUHS mailing list > > TUHS at minnie.tuhs.org > > https://minnie.tuhs.org/mailman/listinfo/tuhs > > -- --- Larry McVoy lm at mcvoy.com http://www.mcvoy.com/lm From cubexyz at gmail.com Thu Jul 17 04:14:36 2014 From: cubexyz at gmail.com (Mark Longridge) Date: Wed, 16 Jul 2014 14:14:36 -0400 Subject: [TUHS] Unix v1/v2 cp command In-Reply-To: References: Message-ID: On 7/13/14, Dave Horsfall wrote: > On Sun, 13 Jul 2014, Mark Longridge wrote: > >> > I'm intrigued; in what way is it different? >> >> It seems that one must first cp a file to another file then do a mv to >> actually put it into a different directory: > > That generally means that you don't have write permission on the file; I > assume that you checked for that? > > -- Dave Hi Dave, The version 1 manual actually mentions that: A directory convention as used in mv should be adopted to cp. ken, dmr Also I was root when I used cp. Mark From cubexyz at gmail.com Thu Jul 17 04:55:41 2014 From: cubexyz at gmail.com (Mark Longridge) Date: Wed, 16 Jul 2014 14:55:41 -0400 Subject: [TUHS] shutdown for pre-v7 unix Message-ID: Hi folks, I've been typing sync;sync at the shell prompt then hitting ctrl-e to get out of simh to shutdown v5 and v6 unix. So far this has worked fairly well but I was wondering if there might be a better way to do a shutdown on early unix. There's a piece of code for Unix v7 that I came across for doing a shutdown: http://www.maxhost.org/other/shutdown.c I doesn't work on pre-v7 unix, but maybe it could be modified to work? Mark From jnc at mercury.lcs.mit.edu Thu Jul 17 05:47:51 2014 From: jnc at mercury.lcs.mit.edu (Noel Chiappa) Date: Wed, 16 Jul 2014 15:47:51 -0400 (EDT) Subject: [TUHS] shutdown for pre-v7 unix Message-ID: <20140716194751.2291F18C0A3@mercury.lcs.mit.edu> > From: Mark Longridge > I was wondering if there might be a better way to do a shutdown on > early unix. Not really; I don't seem to recall our having one on the MIT V6 machine. (We did add a 'reboot' system call so we could reboot the machine without having to take the elevator up to the machine room [the console was on our floor, and the reboot() call just jumped into the hardware bootstrap], but in the source it doesn't even bother to do an update(). Well, I should't say that: I only have the source for the kernel, which doesn't; I don't at the moment have access to the source for the rest of the system - although I do have some full dump tapes, once I can work out how to read them. Anyway, so maybe the user command for rebooting the system did a sync() first.) I suppose you could set the switch register to 173030 and send a 'kill -1 1', which IIRC kills of all shells except the one on the console, but somehow I doubt you're running multi-user anyway... :-) Noel From brantley at coraid.com Thu Jul 17 06:52:06 2014 From: brantley at coraid.com (Brantley Coile) Date: Wed, 16 Jul 2014 20:52:06 +0000 Subject: [TUHS] shutdown for pre-v7 unix In-Reply-To: References: Message-ID: <10F2AC96-0CB3-4EB6-808F-A7567A7C7916@coraid.com> I never used shutdown, always three sync commands. And when you type sync, you should type three sync commands on separate lines hitting newline after each. Only one is needed, but the extras makes sure the operators didn’t type ‘sync’ then halt the box before the buffers were flushed. On Jul 16, 2014, at 2:55 PM, Mark Longridge wrote: > Hi folks, > > I've been typing sync;sync at the shell prompt then hitting ctrl-e to > get out of simh to shutdown v5 and v6 unix. 
> > So far this has worked fairly well but I was wondering if there might > be a better way to do a shutdown on early unix. > > There's a piece of code for Unix v7 that I came across for doing a shutdown: > > http://www.maxhost.org/other/shutdown.c > > I doesn't work on pre-v7 unix, but maybe it could be modified to work? > > Mark > _______________________________________________ > TUHS mailing list > TUHS at minnie.tuhs.org > https://minnie.tuhs.org/mailman/listinfo/tuhs From dave at horsfall.org Thu Jul 17 07:12:43 2014 From: dave at horsfall.org (Dave Horsfall) Date: Thu, 17 Jul 2014 07:12:43 +1000 (EST) Subject: [TUHS] shutdown for pre-v7 unix In-Reply-To: References: Message-ID: On Wed, 16 Jul 2014, Mark Longridge wrote: > I've been typing sync;sync at the shell prompt then hitting ctrl-e to > get out of simh to shutdown v5 and v6 unix. The "correct" way used to be: sync sync sync -- Dave From dave at horsfall.org Thu Jul 17 07:23:52 2014 From: dave at horsfall.org (Dave Horsfall) Date: Thu, 17 Jul 2014 07:23:52 +1000 (EST) Subject: [TUHS] shutdown for pre-v7 unix In-Reply-To: References: Message-ID: And on that note, there was some debate over whether it was safer to write-protect the RK-05s before or after hitting HALT. Something to do with the hardware on the -11 waiting for something else... This probably belongs over on PUPS :-) -- Dave From jnc at mercury.lcs.mit.edu Thu Jul 17 07:31:31 2014 From: jnc at mercury.lcs.mit.edu (Noel Chiappa) Date: Wed, 16 Jul 2014 17:31:31 -0400 (EDT) Subject: [TUHS] Excise process from a pipe Message-ID: <20140716213131.825F918C0A2@mercury.lcs.mit.edu> >> From: Doug McIlroy >> The spec below isn't hard: just hook two buffer chains together and >> twiddle a couple of file desciptors. > In thinking about how to implement it, I was thinking that if there was > any buffered data in an output pipe, that the process doing the > splice() would wait (inside the splice() system call) on all the > buffered data being read by the down-stream process. > ... > As a side-benefit, if one adopted that line, one wouldn't have to deal > with the case (in the middle of the chain) of a pipe-pipe splice with u > buffered data in both pipes (where one would have to copy the data > across); instead one could just use the exact same code for both cases So a couple of days ago I suffered a Big Hack Attack and actually wrote the code for splice() (for V6, of course :-). It took me a day or so to get 'mostly' running. (I got tripped up by pointer arithmetic issues in a number of places, because V6 declares just about _everything_ to be "int *", so e.g. "ip + 1" doesn't produce the right value for sleep() if ip is declared to be "struct inode *", which is what I did automatically.) My code only had one real bug so far (I forgot to mark the user's channels as closed, which resulted in their file entries getting sub-zero usage counts when the middle (departing) process exited). However, now I have run across a real problem: I was just copying the system file table entry for the middle process' input channel over to the entry for the downstream's input (so further reads on its part would read the channel the middle process used to be reading). Copying the data from one entry to another meant I didn't have to go chase down file table pointers in the other process' U structure, etc. Alas, this simple approach doesn't work. 
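The reason it doesn't work, spelled out just below, has a close modern analogue that is easy to try: a reader that has gone to sleep inside read(2) on a pipe has already resolved its descriptor to that pipe, and re-plumbing the descriptor table behind its back changes nothing it will ever look at again. A self-contained sketch in today's POSIX C (threads and dup2, not V6 code; the behaviour shown is what typical implementations do):

    /*
     * Modern analogue, not V6: a reader blocked in read(2) keeps waiting on
     * the pipe it originally resolved, no matter how the descriptor table is
     * rearranged underneath it.  Build with: cc demo.c -pthread
     */
    #include <pthread.h>
    #include <stdio.h>
    #include <unistd.h>

    static int shared_fd;                   /* descriptor the reader blocks on */

    static void *
    reader(void *arg)
    {
        char buf[32];
        ssize_t n = read(shared_fd, buf, sizeof buf);   /* resolves fd -> pipe A here */
        printf("reader woke with %zd byte(s): %.*s\n", n, (int)n, buf);
        return arg;
    }

    int
    main(void)
    {
        int a[2], b[2];
        pthread_t t;

        pipe(a);
        pipe(b);
        shared_fd = a[0];
        pthread_create(&t, NULL, reader, NULL);
        sleep(1);               /* let the reader block inside read(2) on pipe A */

        dup2(b[0], a[0]);       /* re-plumb the descriptor: the fd now names pipe B */
        write(b[1], "new", 3);  /* ...but the sleeping reader never sees this */
        sleep(1);

        write(a[1], "old", 3);  /* only data on the ORIGINAL pipe wakes it */
        pthread_join(t, NULL);
        return 0;
    }

In V6 terms, the blocked reader is holding the pipe's inode rather than the file-table slot, which is why shuffling file-table entries is not enough and readp() itself has to cooperate.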
Using the approach I outlined (where the middle channel waits for the downstream pipe to be empty, so it can discard it and do the splice by copying the file table entries) doesn't work, because the downstream process is in the middle of a read call (waiting for more data to be put in the pipe), and it has already computed a pointer to the pipe's inode, and it's looping waiting for that inode to have data. So now I have to regroup and figure out how to deal with that. My most likely approach is to copy the inode data across (so I don't have to go mess with the downstream process to get it to go look at another inode), but i) I want to think about it a bit first, and ii) I have to check that it won't screw anything else up if I move the inode data to another slot. Noel From jnc at mercury.lcs.mit.edu Thu Jul 17 07:46:32 2014 From: jnc at mercury.lcs.mit.edu (Noel Chiappa) Date: Wed, 16 Jul 2014 17:46:32 -0400 (EDT) Subject: [TUHS] the sin of buffering [offshoot of excise process from a pipeline] Message-ID: <20140716214632.6F3DB18C0A2@mercury.lcs.mit.edu> > From: Doug McIlroy > Process A spawns process B, which reads stdin with buffering. B gets > all it deserves from stdin and exits. What's left in the buffer, > intehded for A, is lost. Ah. Got it. The problem is not with buffering as a generic approach, the problem is that you're trying to use a buffering package intended for simple, straight-forward situations in one which doesn't fall into that category! :-) Clearly, either B has to i) be able to put back data which was not for it ('ungets' as a system call), or ii) not read the data that's not for it - but that may be incompatible with the concept of buffering the input (depending on the syntax, and thus the ability to predict the approaching of the data B wants, the only way to avoid the need for ungetc() might be to read a byte at a time). If B and its upstream (U) are written together, that could be another way to deal with it: if U knows where B's syntatical boundaries are, it can give it advance warning, and B could then use a non-trivial buffering package to do the right thing. E.g. if U emits 'records' with a header giving the record length X, B could tell its buffering package 'don't read ahead more than X bytes until I tell you to go ahead with the next record'. Of course, that's not a general solution; it only works with prepared U's. Really, the only general, efficient way to deal with that situation that I can see is to add 'ungets' to the operating system... Noel From drsalists at gmail.com Thu Jul 17 12:09:11 2014 From: drsalists at gmail.com (Dan Stromberg) Date: Wed, 16 Jul 2014 19:09:11 -0700 Subject: [TUHS] shutdown for pre-v7 unix In-Reply-To: References: Message-ID: On Wed, Jul 16, 2014 at 2:12 PM, Dave Horsfall wrote: > On Wed, 16 Jul 2014, Mark Longridge wrote: > >> I've been typing sync;sync at the shell prompt then hitting ctrl-e to >> get out of simh to shutdown v5 and v6 unix. > > The "correct" way used to be: > > sync > sync > sync 3 sync's was net.wisdom for a long time, but some discussions in the Linux mailing lists suggested that 2 was enough all along. The first schedules all dirty buffers to be flushed to disk. The second does the same, but to provide an ordering guarantee, doesn't return until the dirty buffers from the first are finished. Hence the 2. But I'm a relative newcomer to *ix - I didn't get involved until SunOS 4. 
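The loss Doug describes is easy to reproduce on any current system; before the possible remedies continue below, a self-contained demonstration (today's stdio and modern C, nothing V6-specific):

    /*
     * The buffering sin: child B reads one line of the shared standard input
     * through stdio, stdio reads ahead a whole bufferful to get it, and the
     * data meant for parent A disappears when B exits.
     */
    #include <stdio.h>
    #include <stdlib.h>
    #include <unistd.h>
    #include <sys/wait.h>

    int
    main(void)
    {
        char line[128];
        ssize_t n;

        if (fork() == 0) {              /* B: wants exactly one line */
            if (fgets(line, sizeof line, stdin) != NULL)
                printf("B consumed: %s", line);
            exit(0);                    /* stdio's read-ahead dies with B */
        }

        wait(NULL);                     /* A: now wants the rest of the input */
        n = read(0, line, sizeof line - 1);
        if (n < 0)
            n = 0;
        line[n] = '\0';
        printf("A got %zd byte(s) afterwards: %s\n", n, line);
        return 0;
    }

Fed three lines on a pipe (printf 'one\ntwo\nthree\n' | ./a.out), A usually gets nothing back at all; had B read its line a byte at a time with read(2), or been able to "ungets" the excess, A would have seen "two" and "three".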
From treese at acm.org Thu Jul 17 12:29:20 2014 From: treese at acm.org (Win Treese) Date: Wed, 16 Jul 2014 22:29:20 -0400 Subject: [TUHS] shutdown for pre-v7 unix In-Reply-To: <10F2AC96-0CB3-4EB6-808F-A7567A7C7916@coraid.com> References: <10F2AC96-0CB3-4EB6-808F-A7567A7C7916@coraid.com> Message-ID: <9908552D-48AC-4069-A410-32FFC791D2AD@acm.org> On Jul 16, 2014, at 4:52 PM, Brantley Coile wrote: > I never used shutdown, always three sync commands. And when you type sync, you should type three sync commands on separate lines hitting newline after each. Only one is needed, but the extras makes sure the operators didn’t type ‘sync’ then halt the box before the buffers were flushed. From MIT Project Athena, in the mid-80s (when we were actually running 4.2BSD, but many of the Athena hackers were well familiar with earlier versions): When thou shuttest down the system, thou shalt sync three times. No more, no less. Three shall be the number of the syncing, and the number of the syncing shall be three. Four times shalt thou not sync, neither sync twice, except that thou proceedest to sync a third time… - Win From norman at oclsc.org Thu Jul 17 12:40:06 2014 From: norman at oclsc.org (Norman Wilson) Date: Wed, 16 Jul 2014 22:40:06 -0400 Subject: [TUHS] shutdown for pre-v7 unix Message-ID: <1405564809.9840.for-standards-violators@oclsc.org> After a day and an evening of fighting with modern hardware, the modern tangle that passes for UNIX nowadays, and modern e-merchandising, I am too lazy to go look up the details. But as I remember it, two syncs was indeed probably enough. I believe that when sync(2) returned, all unflushed I/O had been queued to the device driver, but not necessarily finished, so the second sync was just a time-filling no-op. If all the disks were in view, it probably sufficed just to watch them until all the lights (little incandescent bulbs in those days, not LEDs) had stopped blinking. I usually typed sync three or four times myself. It gave me a comfortable feeling (the opposite of a syncing feeling, I suppose). I still occasionally type `sync' to the shell as a sort of comfort word while thinking about what I'm going to do next. Old habits die hard. (sync; sync; sync) Norman Wilson Toronto ON From jnc at mercury.lcs.mit.edu Thu Jul 17 13:55:59 2014 From: jnc at mercury.lcs.mit.edu (Noel Chiappa) Date: Wed, 16 Jul 2014 23:55:59 -0400 (EDT) Subject: [TUHS] shutdown for pre-v7 unix Message-ID: <20140717035559.0375D18C0BE@mercury.lcs.mit.edu> > From: Norman Wilson > I believe that when sync(2) returned, all unflushed I/O had been queued > to the device driver, but not necessarily finished Yes. I have just looked at update() (the internal version of 'sync') again, and it does three things: writes out super-blocks, any modified inodes, and (finally) any cached disk blocks (in that order). In all three cases, the code calls (either directly or indirectly) bwrite(), the exact operation of which (wait for completion, or merely schedule the operation) on any given buffer depends on the flag bits on that buffer. At least one of the cases (the third), it sets the 'ASYNC' bit on the buffer, i.e. it doesn't wait for the I/O to complete, merely schedules it. For the first two, though, it looks like it probably waits. > so the second sync was just a time-filling no-op. If all the disks were > in view, it probably sufficed just to watch them until all the lights > ... had stopped blinking. Yes. 
If the system is single-user, and you say 'sync', if you wait a bit for the I/O to complete, any later syncs won't actually do anything. I don't know of any programmatic way to make sure that all the disk I/O has completed (although obviously one could be written); even the 'unmount' call doesn't check to make sure all the I/O is completed (it just calls update()). Watching the lights was as good as anything. > I usually typed sync three or four times myself. I usually just type it once, wait a moment, and then halt the machine. I've never experienced disk corruption from so doing. With modern ginormous disk caches, you might have to wait more than a moment, but we're talking older machines here... Noel From cubexyz at gmail.com Thu Jul 17 15:28:23 2014 From: cubexyz at gmail.com (Mark Longridge) Date: Thu, 17 Jul 2014 01:28:23 -0400 Subject: [TUHS] Program compiled on unix v6 works on unix v5 Message-ID: Ok, this is cheating a bit but I was wondering if I could possibly compile my unix v6 version of unirubik which has working file IO and run it under unix v5. At first I couldn't figure out how to send a binary from unix v6 to unix v5 but I did some experimenting and found: tp m1r unirubik which would output unirubik to mag tape #1 and tp m1x unirubik which would input unirubik from mag tape #1. I don't know what cc does exactly but I thought "well if it compiles to PDP-11 machine code and it's statically linked it could work". And it actually does work! I still want to try to get unirubik to compile under Unix v5 cc but it's interesting that a program that uses iolib functions can work under unix v5. Mark From jnc at mercury.lcs.mit.edu Fri Jul 18 01:42:05 2014 From: jnc at mercury.lcs.mit.edu (Noel Chiappa) Date: Thu, 17 Jul 2014 11:42:05 -0400 (EDT) Subject: [TUHS] Excise process from a pipe Message-ID: <20140717154205.5061418C0D2@mercury.lcs.mit.edu> > the downstream process is in the middle of a read call (waiting for > more data to be put in the pipe), and it has already computed a pointer > to the pipe's inode, and it's looping waiting for that inode to have > data. > So now I have to regroup and figure out how to deal with that. My most > likely approach is to copy the inode data across So I've had a good look at the pipe code, and it turns out that the simple hack won't work, for two reasons. First, the pipe on the _other_ side of the middle process is _also_ probably in the middle of a write call, and so you can't snarf its inode out from underneath it. (This whole problem reminds me of 'musical chairs' - I just want the music to stop so everything will go quiet so I can move things around! :-) Second, if the process that wants to close down and do a splice is either the start or end process, its neighbour is going to go from having a pipe to having a plain file - and the pipe code knows the inode for a pipe has two users, etc. So I think it would be necessary to make non-trivial adjustments to the pipe and file reading/writing code to make this work; either i) some sort of flag bit to say 'you've been spliced, take appropriate action' which the pipe code would have to check on being woken up, and then back out to let the main file reading/writing code take another crack at it, or ii) perhaps some sort of non-local goto to forcefully back out the call to readp()/writep(), back to the start of the read/write sequence. 
(Simply terminating the read/write call will not work, I think, because that will often, AFAICT, return with 0 bytes transferred, which will look like an EOF, etc; so the I/O will have to be restarted.) I'm not sure I want to do the work to make this actually work - it's not clear if anyone is really that interested? And it's not something that I'm interested in having for my own use. Anyway, none of this is in any way a problem with the fundamental service model - it's purely kernel implementation issues. Noel From imp at bsdimp.com Fri Jul 18 01:58:52 2014 From: imp at bsdimp.com (Warner Losh) Date: Thu, 17 Jul 2014 09:58:52 -0600 Subject: [TUHS] shutdown for pre-v7 unix In-Reply-To: References: Message-ID: <699EC97F-61D6-4102-99E1-8752E8CBD381@bsdimp.com> On Jul 16, 2014, at 8:09 PM, Dan Stromberg wrote: > On Wed, Jul 16, 2014 at 2:12 PM, Dave Horsfall wrote: >> On Wed, 16 Jul 2014, Mark Longridge wrote: >> >>> I've been typing sync;sync at the shell prompt then hitting ctrl-e to >>> get out of simh to shutdown v5 and v6 unix. >> >> The "correct" way used to be: >> >> sync >> sync >> sync > > 3 sync's was net.wisdom for a long time, but some discussions in the > Linux mailing lists suggested that 2 was enough all along. > > The first schedules all dirty buffers to be flushed to disk. > > The second does the same, but to provide an ordering guarantee, > doesn't return until the dirty buffers from the first are finished. > > Hence the 2. > > But I'm a relative newcomer to *ix - I didn't get involved until SunOS 4. But it wasn’t about the ordering… The reason three syncs were recommended was that you needed time to pass to flush the buffers, and some early versions would only schedule the I/O and not wait for it to actually complete before returning. Later versions waited, but by then the disks had buffers of their own that would get missed up if you didn’t wait a smidge before turning them off. So what the second two syncs really accomplished was the passage of time before you did anything stupid. I’m not so sure about the “ordering guarantee” logic presented in that thread. There’s no “barrier” that the first sync puts in that the second sync waits for. Earlier discussions have suggested the first sync flushes all the dirty blocks and sent out the superblock marked dirty as well. The second sync would see no I/O has happened and send out the superblock clean. And the third sync was because of {lots of theories here}, but mostly was for the passage of time to keep the rule simple. Warner -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 842 bytes Desc: Message signed with OpenPGP using GPGMail URL: From clemc at ccc.com Fri Jul 18 02:15:28 2014 From: clemc at ccc.com (Clem Cole) Date: Thu, 17 Jul 2014 12:15:28 -0400 Subject: [TUHS] shutdown for pre-v7 unix In-Reply-To: <699EC97F-61D6-4102-99E1-8752E8CBD381@bsdimp.com> References: <699EC97F-61D6-4102-99E1-8752E8CBD381@bsdimp.com> Message-ID: But if you look through at the FS code of everything until after Gobble worked on 4.1 & did the reordering work, unless you were careful with the syncs, it was easy to have a corrupted FS. Ted's fsck(1) program made cleaned up of corrupted FS much easier. But it was really ghg's work that got the kernel right. When Kirk's BSD FFS was done (aka UFS) and went into 4.1A (maybe B) you needed the Purdue mods if you wanted to have a 4.1 or V7 that could reasonable survive a power hit without FS damage. 
It funny, I still type: syncsyncsync before I type reboot it was burned into the ROMs in the fingers so long ago. My CS major daughter once asked me my I type that ;-) Clem On Thu, Jul 17, 2014 at 11:58 AM, Warner Losh wrote: > > On Jul 16, 2014, at 8:09 PM, Dan Stromberg wrote: > > > On Wed, Jul 16, 2014 at 2:12 PM, Dave Horsfall > wrote: > >> On Wed, 16 Jul 2014, Mark Longridge wrote: > >> > >>> I've been typing sync;sync at the shell prompt then hitting ctrl-e to > >>> get out of simh to shutdown v5 and v6 unix. > >> > >> The "correct" way used to be: > >> > >> sync > >> sync > >> sync > > > > 3 sync's was net.wisdom for a long time, but some discussions in the > > Linux mailing lists suggested that 2 was enough all along. > > > > The first schedules all dirty buffers to be flushed to disk. > > > > The second does the same, but to provide an ordering guarantee, > > doesn't return until the dirty buffers from the first are finished. > > > > Hence the 2. > > > > But I'm a relative newcomer to *ix - I didn't get involved until SunOS 4. > > But it wasn’t about the ordering… > > The reason three syncs were recommended was that you needed time to pass > to flush the buffers, and some early versions would only schedule the I/O > and not wait for it to actually complete before returning. Later versions > waited, but by then the disks had buffers of their own that would get > missed up if you didn’t wait a smidge before turning them off. So what the > second two syncs really accomplished was the passage of time before you did > anything stupid. > > I’m not so sure about the “ordering guarantee” logic presented in that > thread. There’s no “barrier” that the first sync puts in that the second > sync waits for. Earlier discussions have suggested the first sync flushes > all the dirty blocks and sent out the superblock marked dirty as well. The > second sync would see no I/O has happened and send out the superblock > clean. And the third sync was because of {lots of theories here}, but > mostly was for the passage of time to keep the rule simple. > > Warner > > > _______________________________________________ > TUHS mailing list > TUHS at minnie.tuhs.org > https://minnie.tuhs.org/mailman/listinfo/tuhs > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From ron at ronnatalie.com Fri Jul 18 04:04:37 2014 From: ron at ronnatalie.com (Ronald Natalie) Date: Thu, 17 Jul 2014 14:04:37 -0400 Subject: [TUHS] shutdown for pre-v7 unix In-Reply-To: References: <699EC97F-61D6-4102-99E1-8752E8CBD381@bsdimp.com> Message-ID: Sync works like this: 1. If the update-lock is already set, just return. 2. Set the lock 3. Write any superblocks that are marked as modified 4. Wirte any inodes that are marked as needing update 5. Clear the lock. 5 Write all the dirty blocks in the buffer cache (which it does at spl6()); Once it returned you should be good to go. The only time typing multiple ones helps is if there was other activity going on while you were trying to do all this. From clemc at ccc.com Fri Jul 18 06:16:29 2014 From: clemc at ccc.com (Clem Cole) Date: Thu, 17 Jul 2014 16:16:29 -0400 Subject: [TUHS] shutdown for pre-v7 unix In-Reply-To: References: <699EC97F-61D6-4102-99E1-8752E8CBD381@bsdimp.com> Message-ID: I think that's is a problem in that it needs to be data blocks, inodes, and finally superblocks to do the least damage in a crash. It was a nice piece of work on George's part at the time. 
I remember the USENIX when he talked about it and many of us had an aha style moment. I remember talking to dmr about it dinner that night and he made a comment about it being slightly embarrassing that nobody had looked it / paid attention to it before. Those were the days of UNIX vs RSX or UNIX vs VMS and remember one of the prime knocks that the UNIX haters would say was that the UNIX (FS) was not reliable. Between Ted's fsck to replace the Xcheck() family and ghg's kernel changes people started to stop making that claim. UNIX was just as good if not better than the "commercial" OSses. Clem The other tool that showed up around then was fsdb, but I admit I never really felt comfortable working with it. I did it so rarely and I always had the manual open. But it was so easy to do more damage with it. Fortunately, fsck usually was good enough. On Thu, Jul 17, 2014 at 2:04 PM, Ronald Natalie wrote: > Sync works like this: > > 1. If the update-lock is already set, just return. > 2. Set the lock > 3. Write any superblocks that are marked as modified > 4. Wirte any inodes that are marked as needing update > 5. Clear the lock. > 5 Write all the dirty blocks in the buffer cache (which it does at > spl6()); > > Once it returned you should be good to go. The only time typing multiple > ones helps is if there was other activity going on while you were trying to > do all this. > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From ron at ronnatalie.com Fri Jul 18 12:26:55 2014 From: ron at ronnatalie.com (Ronald Natalie) Date: Thu, 17 Jul 2014 22:26:55 -0400 Subject: [TUHS] shutdown for pre-v7 unix In-Reply-To: References: <699EC97F-61D6-4102-99E1-8752E8CBD381@bsdimp.com> Message-ID: <2DF13A78-6D26-4E01-A65A-7746A51F44FB@ronnatalie.com> On Jul 17, 2014, at 4:16 PM, Clem Cole wrote: > I think that's is a problem in that it needs to be data blocks, inodes, and finally superblocks to do the least damage in a crash. That is definitely the case and that was perhaps the biggest fix in BSD (and other later) was to make the file system writing more consistent so at least you didn't get trashed filesystems but at worst got some orphaned blocks that needed intervention to reclaim. It was mandatory for operators at JHU to understand how the file system was laid out on disk, and what icheck/dcheck reported and what the options to fix things. Link counts that were too low and dups in free should NEVER happen with an intelligently ordered set of I/O operations, but thats not what Version 6 UNIX had. It wasn't uncommon to find several errors in the file system that would be degenerate system faults if not corrected. But all that aside, even in those shakey days, typing sync multiple times really didn't accomplish anything and it because less useful as the file systems became more stable. -------------- next part -------------- An HTML attachment was scrubbed... URL: From tim.newsham at gmail.com Fri Jul 18 12:52:01 2014 From: tim.newsham at gmail.com (Tim Newsham) Date: Thu, 17 Jul 2014 16:52:01 -1000 Subject: [TUHS] shutdown for pre-v7 unix In-Reply-To: <2DF13A78-6D26-4E01-A65A-7746A51F44FB@ronnatalie.com> References: <699EC97F-61D6-4102-99E1-8752E8CBD381@bsdimp.com> <2DF13A78-6D26-4E01-A65A-7746A51F44FB@ronnatalie.com> Message-ID: One sync for the disks and two for the operator's peace of mind... 
On Thu, Jul 17, 2014 at 4:26 PM, Ronald Natalie wrote: > > On Jul 17, 2014, at 4:16 PM, Clem Cole wrote: > > I think that's is a problem in that it needs to be data blocks, inodes, and > finally superblocks to do the least damage in a crash. > > > That is definitely the case and that was perhaps the biggest fix in BSD (and > other later) was to make the file system writing more consistent so at least > you didn't get trashed filesystems but at worst got some orphaned blocks > that needed intervention to reclaim. > > It was mandatory for operators at JHU to understand how the file system was > laid out on disk, and what icheck/dcheck reported and what the options to > fix things. Link counts that were too low and dups in free should NEVER > happen with an intelligently ordered set of I/O operations, but thats not > what Version 6 UNIX had. It wasn't uncommon to find several errors in the > file system that would be degenerate system faults if not corrected. > > But all that aside, even in those shakey days, typing sync multiple times > really didn't accomplish anything and it because less useful as the file > systems became more stable. > > > > _______________________________________________ > TUHS mailing list > TUHS at minnie.tuhs.org > https://minnie.tuhs.org/mailman/listinfo/tuhs > -- Tim Newsham | www.thenewsh.com/~newsham | @newshtwit | thenewsh.blogspot.com From milov at cs.uwlax.edu Fri Jul 18 12:58:00 2014 From: milov at cs.uwlax.edu (Milo Velimirovic) Date: Thu, 17 Jul 2014 21:58:00 -0500 Subject: [TUHS] shutdown for pre-v7 unix In-Reply-To: References: <699EC97F-61D6-4102-99E1-8752E8CBD381@bsdimp.com> <2DF13A78-6D26-4E01-A65A-7746A51F44FB@ronnatalie.com> Message-ID: Three for the Elven-kings under the sky, On Jul 17, 2014, at 9:52 PM, Tim Newsham wrote: > One sync for the disks and two for the operator's peace of mind... > > On Thu, Jul 17, 2014 at 4:26 PM, Ronald Natalie wrote: >> >> On Jul 17, 2014, at 4:16 PM, Clem Cole wrote: >> >> I think that's is a problem in that it needs to be data blocks, inodes, and >> finally superblocks to do the least damage in a crash. >> >> >> That is definitely the case and that was perhaps the biggest fix in BSD (and >> other later) was to make the file system writing more consistent so at least >> you didn't get trashed filesystems but at worst got some orphaned blocks >> that needed intervention to reclaim. >> >> It was mandatory for operators at JHU to understand how the file system was >> laid out on disk, and what icheck/dcheck reported and what the options to >> fix things. Link counts that were too low and dups in free should NEVER >> happen with an intelligently ordered set of I/O operations, but thats not >> what Version 6 UNIX had. It wasn't uncommon to find several errors in the >> file system that would be degenerate system faults if not corrected. >> >> But all that aside, even in those shakey days, typing sync multiple times >> really didn't accomplish anything and it because less useful as the file >> systems became more stable. 
>> >> >> >> _______________________________________________ >> TUHS mailing list >> TUHS at minnie.tuhs.org >> https://minnie.tuhs.org/mailman/listinfo/tuhs >> > > > > -- > Tim Newsham | www.thenewsh.com/~newsham | @newshtwit | thenewsh.blogspot.com > _______________________________________________ > TUHS mailing list > TUHS at minnie.tuhs.org > https://minnie.tuhs.org/mailman/listinfo/tuhs From imp at bsdimp.com Fri Jul 18 13:42:18 2014 From: imp at bsdimp.com (Warner Losh) Date: Thu, 17 Jul 2014 21:42:18 -0600 Subject: [TUHS] shutdown for pre-v7 unix In-Reply-To: References: <699EC97F-61D6-4102-99E1-8752E8CBD381@bsdimp.com> <2DF13A78-6D26-4E01-A65A-7746A51F44FB@ronnatalie.com> Message-ID: <6CA9C4BB-AA25-4C3C-AFEA-7C7C3D89C749@bsdimp.com> One sync to rule them all, One sync to find them, One sync to bring them all and in the darkness bind them In the Land of Kernel where the buffers lie. On Jul 17, 2014, at 8:58 PM, Milo Velimirovic wrote: > Three for the Elven-kings under the sky, > > On Jul 17, 2014, at 9:52 PM, Tim Newsham wrote: > >> One sync for the disks and two for the operator's peace of mind... >> >> On Thu, Jul 17, 2014 at 4:26 PM, Ronald Natalie wrote: >>> >>> On Jul 17, 2014, at 4:16 PM, Clem Cole wrote: >>> >>> I think that's is a problem in that it needs to be data blocks, inodes, and >>> finally superblocks to do the least damage in a crash. >>> >>> >>> That is definitely the case and that was perhaps the biggest fix in BSD (and >>> other later) was to make the file system writing more consistent so at least >>> you didn't get trashed filesystems but at worst got some orphaned blocks >>> that needed intervention to reclaim. >>> >>> It was mandatory for operators at JHU to understand how the file system was >>> laid out on disk, and what icheck/dcheck reported and what the options to >>> fix things. Link counts that were too low and dups in free should NEVER >>> happen with an intelligently ordered set of I/O operations, but thats not >>> what Version 6 UNIX had. It wasn't uncommon to find several errors in the >>> file system that would be degenerate system faults if not corrected. >>> >>> But all that aside, even in those shakey days, typing sync multiple times >>> really didn't accomplish anything and it because less useful as the file >>> systems became more stable. >>> >>> >>> >>> _______________________________________________ >>> TUHS mailing list >>> TUHS at minnie.tuhs.org >>> https://minnie.tuhs.org/mailman/listinfo/tuhs >>> >> >> >> >> -- >> Tim Newsham | www.thenewsh.com/~newsham | @newshtwit | thenewsh.blogspot.com >> _______________________________________________ >> TUHS mailing list >> TUHS at minnie.tuhs.org >> https://minnie.tuhs.org/mailman/listinfo/tuhs > -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 842 bytes Desc: Message signed with OpenPGP using GPGMail URL: From doug at cs.dartmouth.edu Fri Jul 18 15:31:35 2014 From: doug at cs.dartmouth.edu (Doug McIlroy) Date: Fri, 18 Jul 2014 01:31:35 -0400 Subject: [TUHS] Hazards of open source. 
Message-ID: <201407180531.s6I5VZ1k030838@coolidge.cs.dartmouth.edu> From jnc at mercury.lcs.mit.edu Sat Jul 19 01:33:45 2014 From: jnc at mercury.lcs.mit.edu (Noel Chiappa) Date: Fri, 18 Jul 2014 11:33:45 -0400 (EDT) Subject: [TUHS] Excise process from a pipe Message-ID: <20140718153345.DC5EA18C0B4@mercury.lcs.mit.edu> >> the downstream process is in the middle of a read call (waiting for >> more data to be put in the pipe), and it has already computed a pointer >> to the pipe's inode, and it's looping waiting for that inode to have >> data. > I think it would be necessary to make non-trivial adjustments to the > pipe and file reading/writing code to make this work; either i) some > sort of flag bit to say 'you've been spliced, take appropriate action' > which the pipe code would have to check on being woken up, and then > back out to let the main file reading/writing code take another crack > at it > ... > I'm not sure I want to do the work to make this actually work - it's > not clear if anyone is really that interested? And it's not something > that I'm interested in having for my own use. So I decided that it was silly to put all that work into this, and not get it to work. I did 'cut a corner', by not handling the case where it's the first or last process which is bailing (which requires a file-pipe splice, not a pipe-pipe; the former is more complex); i.e. I was just doing a 'working proof of concept', not a full implementation. I used the 'flag bit on the inode' approach; the pipe-pipe case could be dealt with entirely inside pipe.c/readp(). Here's the added code in readp() (at the loop start): if ((ip->i_flag & ISPLICE) != 0) { closei(ip, 0); ip = rp->f_inode; } It worked first time! In more detail, I had written a 'splicetest' program that simply passed input to its output, looking for a line with a single keyword ("stop"); at that point, it did a splice() call and exited. When I did "cat input | splicetest | cat > output", with appropriate test data in "input", all of the test data (less the "stop" line) appeared in the output file! For the first time (AFAIK) a process succesfully departed a pipeline, which continued to operate! So it is do-able. (If anyone has any interest in the code, let me know.) Noel From cubexyz at gmail.com Sun Jul 27 12:37:41 2014 From: cubexyz at gmail.com (Mark Longridge) Date: Sat, 26 Jul 2014 22:37:41 -0400 Subject: [TUHS] First Unix that could run on a PDP-11 with QBUS Message-ID: Hi folks, I was digging around trying to figure out which Unixes would run on a PDP-11 with QBUS. It seems that the very early stuff like v5 was strictly UNIBUS and that the first version of Unix that supported QBUS was v7m (please correct me if this is wrong). I was thinking that the MicroPDP-11's were all QBUS and that it would be easier to run a Unix on a MicroPDP because they are the most compact. So I figured I would try to obtain a Unix v7m distribution tape image. I see the Jean Huens files on tuhs but I'm not sure what to do with them. I have hopes to eventually run a Unix on real hardware but for now I'm going to stick with simh. It seems like DEC just didn't make a desktop that could run Bell Labs Unix, e.g. we can't just grab a DEC Pro-350 and stick Unix v7 on it. Naturally I'll still have fun checking out Unix v5 on the emulator but it would be nice to eventually run a Unix with all the source code at hand on a real machine. 
Mark From jnc at mercury.lcs.mit.edu Sun Jul 27 13:26:39 2014 From: jnc at mercury.lcs.mit.edu (Noel Chiappa) Date: Sat, 26 Jul 2014 23:26:39 -0400 (EDT) Subject: [TUHS] First Unix that could run on a PDP-11 with QBUS Message-ID: <20140727032639.46B6718C0DE@mercury.lcs.mit.edu> > From: Mark Longridge > I was digging around trying to figure out which Unixes would run on a > PDP-11 with QBUS. It seems that the very early stuff like v5 was > strictly UNIBUS and that the first version of Unix that supported QBUS > was v7m (please correct me if this is wrong). That may or may not be true; let me explain. The 11/23 is almost indistinguishable, in programming terms, from an 11/40. There is only one very minor difference (which UNIX would care about) that I know of - the 11/23 does not have a hardware switch register. Yes, UNIBUS devices can't be plugged into a QBUS, and vice versa, _but_ i) there a programming-compatible QBUS versions of many UNIBUS devices, and ii) there were UNIBUS-QBUS converters which actually allowed a QBUS processor to have UNIBUS peripherals. So I don't know which version of Unix was the first run on an 11/23 - but it could have been almost any. It is quite possible to run V6 on an 11/23, provided you make a very small number of very minor changes, to avoid use of the CSWR. I have done this, and run V6 on a simulated 11/23 (I have a short note explaining what one needs to do, if anyone is interested.) Admittedly, this is not the same as running it on a real 11/23, but I see no resons the latter would not be doable. I had started in on the work needed to get V6 running on a real 11/23, which was the (likely) need to load Unix into the machine over a serial line. WKT has done this for V7: http://www.tuhs.org/Archive/PDP-11/Tools/Tapes/Vtserver/ but it needs a little tweaking for V6; I was about to start in on that. > I have hopes to eventually run a Unix on real hardware As do a lot of us... :-) > It seems like DEC just didn't make a desktop that could run Bell Labs > Unix, e.g. we can't just grab a DEC Pro-350 and stick Unix v7 on it. I'm not sure about that; I'd have to check into the Pro-350. If it has memory mapping, it should not be hard. Also, even if it doesn't have memory mapping, there was a Mini-Unix done for PDP-11's without memory mapping; I can dig up some URLs if you're interested. The feeling is, I gather, very similar. > it would be nice to eventually run a Unix with all the source code at > hand on a real machine. Having done that 'back in the day', I can assure you that it doesn't feel that different from the simulated experience (except that the latter are noticeably faster :-). In fact, even if/when I do have a real 11, I'll probably still mostly use the simulator, for a variety of reasons; e.g. the ability to edit source with a nice modern editor, etc, etc is just too nice to pass up! :-) Noel From norman at oclsc.org Sun Jul 27 15:39:36 2014 From: norman at oclsc.org (Norman Wilson) Date: Sun, 27 Jul 2014 01:39:36 -0400 (EDT) Subject: [TUHS] First Unix that could run on a PDP-11 with QBUS Message-ID: <20140727053936.4AAB91DE381@lignose.oclsc.org> Many Q-bus devices were indeed programmed exactly as if on a UNIBUS. This isn't surprising: Digital wanted their own operating systems to port easily as well. That won't help make UNIX run on a Pro-350 or Pro-380, though. 
Those systems had standard single-chip PDP-11 CPUs (F11, like that in the 11/23, for the 350; J11, like that in the 11/73, for the 380), but they didn't have a Q-bus; they used the CTI (`computing terminal interconnect'), a bus used only for the Pro-series systems. DEC's operating systems wouldn't run on the Pro either without special hacks. I think the P/OS, the standard OS shipped with those systems, was a hacked-up RSX-11M. I don't know whether there was ever an RT-11 for the Pro. There were UNIX ports but they weren't just copies of stock V7. I vaguely remember, from my days at Caltech > 30 years ago, helping someone get a locally-hacked-up V7 running on an 11/24, the same as an 11/23 except is has a UNIBUS instead of a Q-bus. I don't think they chose the 11/24 over the 11/23 to make it easier to get UNIX running; probably it had something to do with specific peripherals they wanted to use. It was a long time ago and I didn't keep notebooks back then, so the details may be unrecoverable. Norman Wilson Toronto ON From dave at horsfall.org Sun Jul 27 15:49:47 2014 From: dave at horsfall.org (Dave Horsfall) Date: Sun, 27 Jul 2014 15:49:47 +1000 (EST) Subject: [TUHS] First Unix that could run on a PDP-11 with QBUS In-Reply-To: <20140727032639.46B6718C0DE@mercury.lcs.mit.edu> References: <20140727032639.46B6718C0DE@mercury.lcs.mit.edu> Message-ID: On Sat, 26 Jul 2014, Noel Chiappa wrote: > That may or may not be true; let me explain. The 11/23 is almost > indistinguishable, in programming terms, from an 11/40. There is only > one very minor difference (which UNIX would care about) that I know of - > the 11/23 does not have a hardware switch register. I recall that there were other differences as well, but only minor. In my paper in AUUGN titled "Unix on the LSI-11/23" it will reveal all about porting V6 to the thing. I vaguely remember that the LTC had to be disabled during the boot process, for example, with an external switch. Then again, I could be thinking of some other weird box to which I'd ported V6. As far as I know, it was the first such port in Australia (if there were others then I never heard about them). -- Dave From cowan at mercury.ccil.org Sun Jul 27 16:02:02 2014 From: cowan at mercury.ccil.org (John Cowan) Date: Sun, 27 Jul 2014 02:02:02 -0400 Subject: [TUHS] First Unix that could run on a PDP-11 with QBUS In-Reply-To: <20140727053936.4AAB91DE381@lignose.oclsc.org> References: <20140727053936.4AAB91DE381@lignose.oclsc.org> Message-ID: <20140727060201.GA8700@mercury.ccil.org> Norman Wilson scripsit: > I think the P/OS, the standard OS shipped with those systems, was a > hacked-up RSX-11M. Several sources agree that it was, and speak of a menu shell. > I don't know whether there was ever an RT-11 for the Pro. claims that RT-11 ran: whether stock or modified, the page doesn't say. This is confirmed by a squib in InfoWorld 6:23 (June 4, 1984) on p. 84 , which also speaks of a V7 derivative called VII-M. Venix 2.0 (aka System III) was definitely available. -- John Cowan http://www.ccil.org/~cowan cowan at ccil.org Evolutionary psychology is the theory that men are nothing but horn-dogs, and that women only want them for their money. 
--Susan McCarthy (adapted) From pechter at gmail.com Mon Jul 28 00:10:08 2014 From: pechter at gmail.com (Bill Pechter) Date: Sun, 27 Jul 2014 10:10:08 -0400 Subject: [TUHS] First Unix that could run on a PDP-11 with QBUS In-Reply-To: <20140727060201.GA8700@mercury.ccil.org> References: <20140727053936.4AAB91DE381@lignose.oclsc.org> <20140727060201.GA8700@mercury.ccil.org> Message-ID: Version 5.x supported the Pro325/350. I assume the 380 also worked. They did the emulation for the screen. IIRC they did VT52 console support so K52 worked. I don't think they added VT100 support. The Pro was a pretty nice RT box. Too bad they didn't use that instead of the annoying menu driven POS. Venix for the Pro was a free download on the net when I last looked. It's still mentioned here as a download. http://www.vintage-computer.com/dec_pro_350.shtml Bill -- d|i|g|i|t|a|l had it THEN. Don't you wish you could still buy it now! pechter-at-gmail.com On Sun, Jul 27, 2014 at 2:02 AM, John Cowan wrote: > Norman Wilson scripsit: > > > I think the P/OS, the standard OS shipped with those systems, was a > > hacked-up RSX-11M. > > Several sources agree that it was, and speak of a menu shell. > > > I don't know whether there was ever an RT-11 for the Pro. > > claims that > RT-11 ran: whether stock or modified, the page doesn't say. This > is confirmed by a squib in InfoWorld 6:23 (June 4, 1984) on p. 84 > , which also > speaks of a V7 derivative called VII-M. Venix 2.0 (aka System III) was > definitely available. > > -- > John Cowan http://www.ccil.org/~cowan cowan at ccil.org > Evolutionary psychology is the theory that men are nothing but horn-dogs, > and that women only want them for their money. --Susan McCarthy (adapted) > _______________________________________________ > TUHS mailing list > TUHS at minnie.tuhs.org > https://minnie.tuhs.org/mailman/listinfo/tuhs > -------------- next part -------------- An HTML attachment was scrubbed... URL: From dave at horsfall.org Mon Jul 28 03:16:19 2014 From: dave at horsfall.org (Dave Horsfall) Date: Mon, 28 Jul 2014 03:16:19 +1000 (EST) Subject: [TUHS] First Unix that could run on a PDP-11 with QBUS In-Reply-To: References: <20140727053936.4AAB91DE381@lignose.oclsc.org> <20140727060201.GA8700@mercury.ccil.org> Message-ID: On Sun, 27 Jul 2014, Bill Pechter wrote: > Version 5.x supported the Pro325/350. I assume the 380 also worked.They > did the emulation for the screen. IIRC they did VT52 console support so > K52 worked. I don't think they added VT100 support. I vaguely recall getting Minix to run on a 350. It was as slow as all get out and paddle, so the project was abandoned. > -- > d|i|g|i|t|a|l had it THEN. Don't you wish you could still buy it now! > pechter-at-gmail.com Indeed... -- Dave From jnc at mercury.lcs.mit.edu Mon Jul 28 23:27:17 2014 From: jnc at mercury.lcs.mit.edu (Noel Chiappa) Date: Mon, 28 Jul 2014 09:27:17 -0400 (EDT) Subject: [TUHS] First Unix that could run on a PDP-11 with QBUS Message-ID: <20140728132717.73DD218C0B2@mercury.lcs.mit.edu> > From: Dave Horsfall > I recall that there were other differences as well, but only minor. In > my paper in AUUGN titled "Unix on the LSI-11/23" it will reveal all > about porting V6 to the thing. I did a google for that, but couldn't find it. Is it available anywhere online? (I'd love to read it.) I seem to recall vaguely that AUUGN stuff were online, but if so, I'm not sure why the search didn't turn it up. 
> I vaguely remember that the LTC had to be disabled during the boot > process, for example, with an external switch. I think you might be right, which means the simulated 11/23 I tested on wasn't quite right - but keep reading! I remember being worried about this when I started doing the V6 11/23 version a couple of months back, because I remembered the 11/03's didn't have a programmable clock, just a switch. So I was reading through the 11/23 documentation (I had used 11/23s, but on this point my memory had faded), trying to see if they too did not have a programmable clock. As best I can currently make out, the answer is 'yes/no, depending on the exact model'! E.g. the 11/23-PLUS _does_ seem to have a programmable clock (see pg. 610 of the 1982 edition of "microcomputers and memories"), but the base 11/23 _apparently_ does not. Anyway, the simulated 11/23 (on Ersatz11) does have the LTC (I just checked, and 'lks' contains '0177546', so it thinks it has one :-). But this will be easy to code around; if no link clock is found (in main.c), I'd probably set 'lks' to point somewhere harmless (054, say - I'm using 050/052 to hold the pointer to the CSW, and the software CSW if there isn't a hardware one). That way I can limit the changes to be in main.c, I won't have to futz with clock.c too. Noel PS: On at least the 11/40 (and maybe the /45 too), the line clock was an option! It was a single-height card, IIRC. From ron at ronnatalie.com Tue Jul 29 01:57:21 2014 From: ron at ronnatalie.com (Ron Natalie) Date: Mon, 28 Jul 2014 10:57:21 -0500 Subject: [TUHS] First Unix that could run on a PDP-11 with QBUS In-Reply-To: References: Message-ID: <12864D2F-74BC-4B9C-B7C4-10C8E6F99BB5@ronnatalie.com> We had miniunix that we were running on a 11/40 without mm moved to the 11/03 in the lab at JHU. We replaces that with an 11/23 running our own v6-derived kernel with little difficulty. Sent from my iPhone > On Jul 26, 2014, at 9:37 PM, Mark Longridge wrote: > > Hi folks, > > I was digging around trying to figure out which Unixes would run on a > PDP-11 with QBUS. It seems that the very early stuff like v5 was > strictly UNIBUS and that the first version of Unix that supported QBUS > was v7m (please correct me if this is wrong). > > I was thinking that the MicroPDP-11's were all QBUS and that it would > be easier to run a Unix on a MicroPDP because they are the most > compact. So I figured I would try to obtain a Unix v7m distribution > tape image. I see the Jean Huens files on tuhs but I'm not sure what > to do with them. > > I have hopes to eventually run a Unix on real hardware but for now I'm > going to stick with simh. It seems like DEC just didn't make a desktop > that could run Bell Labs Unix, e.g. we can't just grab a DEC Pro-350 > and stick Unix v7 on it. Naturally I'll still have fun checking out > Unix v5 on the emulator but it would be nice to eventually run a Unix > with all the source code at hand on a real machine. 
> > Mark > _______________________________________________ > TUHS mailing list > TUHS at minnie.tuhs.org > https://minnie.tuhs.org/mailman/listinfo/tuhs From wkt at tuhs.org Tue Jul 29 08:04:47 2014 From: wkt at tuhs.org (Warren Toomey) Date: Tue, 29 Jul 2014 08:04:47 +1000 Subject: [TUHS] First Unix that could run on a PDP-11 with QBUS In-Reply-To: References: <20140727032639.46B6718C0DE@mercury.lcs.mit.edu> Message-ID: <20140728220447.GA19660@www.oztivo.net> On Sun, Jul 27, 2014 at 03:49:47PM +1000, Dave Horsfall wrote: > I recall that there were other differences as well, but only minor. In my > paper in AUUGN titled "Unix on the LSI-11/23" it will reveal all about > porting V6 to the thing. http://minnie.tuhs.org/Archive/Documentation/AUUGN/AUUGN-V03.2.pdf page 11 :-) Warren From dave at horsfall.org Tue Jul 29 08:23:12 2014 From: dave at horsfall.org (Dave Horsfall) Date: Tue, 29 Jul 2014 08:23:12 +1000 (EST) Subject: [TUHS] First Unix that could run on a PDP-11 with QBUS In-Reply-To: <20140728132717.73DD218C0B2@mercury.lcs.mit.edu> References: <20140728132717.73DD218C0B2@mercury.lcs.mit.edu> Message-ID: On Mon, 28 Jul 2014, Noel Chiappa wrote: > > I recall that there were other differences as well, but only minor. In > > my paper in AUUGN titled "Unix on the LSI-11/23" it will reveal all > > about porting V6 to the thing. > > I did a google for that, but couldn't find it. Is it available anywhere > online? (I'd love to read it.) I seem to recall vaguely that AUUGN stuff > were online, but if so, I'm not sure why the search didn't turn it up. There was a project a few years ago to scan all issues of AUUGN (Australian Unix Users Group Newsletter); the last I heard was that all issues had been obtained, and handed over to some Google mob for archiving. Apparently the scanning process is destructive but makes for an ideal copy for as many as you like. The originals, being up to 40 years or so old, would have been in bad shape anyway. A search for "auugn" reveals a few pointers, but AUUG itself dissolved a few years ago because we had achieved our purpose i.e. bring Unix to the mass market in Australia (its competition at the time was RSTS, RSX, and PICK of all things). Guess which one survived? Concurrent CP/M never really had a hold, MS-DOS thankfully died (I was still using CP/M at the time; heck, I even had UUCP on it, which was pretty impressive considering that the Microbee didn't have a serial port), and I predict that Windoze will go the way of the Irish potato crop and for the same reason. Warren may know more about the archived issues. > > I vaguely remember that the LTC had to be disabled during the boot > > process, for example, with an external switch. > > I think you might be right, which means the simulated 11/23 I tested on > wasn't quite right - but keep reading! It was hilarious, in a morbid sort of way. I cottoned on when the bootstrap process crapped itself for no apparent reason (it got interrupted when no ISR was in place), and we'd occasionally forget to enable it... > I remember being worried about this when I started doing the V6 11/23 > version a couple of months back, because I remembered the 11/03's didn't > have a programmable clock, just a switch. So I was reading through the > 11/23 documentation (I had used 11/23s, but on this point my memory had > faded), trying to see if they too did not have a programmable clock. > > As best I can currently make out, the answer is 'yes/no, depending on > the exact model'! E.g. 
the 11/23-PLUS _does_ seem to have a programmable > clock (see pg. 610 of the 1982 edition of "microcomputers and > memories"), but the base 11/23 _apparently_ does not. I never saw the -PLUS, so I can't help you there, and my shelf of DEC and Unix etc manuals disappeared during several moves. > Anyway, the simulated 11/23 (on Ersatz11) does have the LTC (I just > checked, and 'lks' contains '0177546', so it thinks it has one :-). Quite likely. I came up with a battery of tests at boot time, in order to determine just what sort of a model it was e.g. did it have the SLR and so on. Same thing for illegal instructions, such as floating point. We had /40s all over the place (some dedicated ones had no MMU, and ran a custom program to talk 200-UT to a remote Cyber), two or three /70s (I had no responsibility for those, but we shared code a lot), a /60 (interesting box), and a sprinkling of /23s. > But this will be easy to code around; if no link clock is found (in > main.c), I'd probably set 'lks' to point somewhere harmless (054, say - > I'm using 050/052 to hold the pointer to the CSW, and the software CSW > if there isn't a hardware one). That way I can limit the changes to be > in main.c, I won't have to futz with clock.c too. Speaking of the CSW, we came up with some amusing idle patterns. The boxes with the octal display displayed rotating 1s (I had to determine whether it had an octal display or a real one somehow; I've long since forgotten). > PS: On at least the 11/40 (and maybe the /45 too), the line clock was an > option! It was a single-height card, IIRC. Yeah; the aforementioned low-end /40s had quite an impressive program that scheduled by the use of co-routines (no LTC either). It emulated the CDC Remote Batch Station (we briefly had one of those too; it was S L O W). Fun days! -- Dave From imp at bsdimp.com Tue Jul 29 08:38:55 2014 From: imp at bsdimp.com (Warner Losh) Date: Mon, 28 Jul 2014 16:38:55 -0600 Subject: [TUHS] First Unix that could run on a PDP-11 with QBUS In-Reply-To: <20140728220447.GA19660@www.oztivo.net> References: <20140727032639.46B6718C0DE@mercury.lcs.mit.edu> <20140728220447.GA19660@www.oztivo.net> Message-ID: <47087BA7-0BF9-4CB1-806D-4C695C746CAC@bsdimp.com> On Jul 28, 2014, at 4:04 PM, Warren Toomey wrote: > On Sun, Jul 27, 2014 at 03:49:47PM +1000, Dave Horsfall wrote: >> I recall that there were other differences as well, but only minor. In my >> paper in AUUGN titled "Unix on the LSI-11/23" it will reveal all about >> porting V6 to the thing. > > http://minnie.tuhs.org/Archive/Documentation/AUUGN/AUUGN-V03.2.pdf > page 11 Back in the days when people were succinct. There have been posts in this thread that are longer… Warner :) -------------- next part -------------- A non-text attachment was scrubbed... 
Name: signature.asc Type: application/pgp-signature Size: 842 bytes Desc: Message signed with OpenPGP using GPGMail URL: From spedraja at gmail.com Tue Jul 29 19:06:52 2014 From: spedraja at gmail.com (SPC) Date: Tue, 29 Jul 2014 11:06:52 +0200 Subject: [TUHS] First Unix that could run on a PDP-11 with QBUS In-Reply-To: <47087BA7-0BF9-4CB1-806D-4C695C746CAC@bsdimp.com> References: <20140727032639.46B6718C0DE@mercury.lcs.mit.edu> <20140728220447.GA19660@www.oztivo.net> <47087BA7-0BF9-4CB1-806D-4C695C746CAC@bsdimp.com> Message-ID: 2014-07-29 0:38 GMT+02:00 Warner Losh : > > On Jul 28, 2014, at 4:04 PM, Warren Toomey wrote: > > > On Sun, Jul 27, 2014 at 03:49:47PM +1000, Dave Horsfall wrote: > >> I recall that there were other differences as well, but only minor. In > my > >> paper in AUUGN titled "Unix on the LSI-11/23" it will reveal all about > >> porting V6 to the thing. > > > > http://minnie.tuhs.org/Archive/Documentation/AUUGN/AUUGN-V03.2.pdf > > page 11 > > Back in the days when people were succinct. There have been posts in this > thread that are longer… > > Warner :) > > I got one operative PDP-11/23-PLUS. with 4MB. I'm open to (and I'd like too) install V6 or even V7M on it. Gracias | Regards - Saludos | Greetings | Freundliche Grüße | Salutations ​ -- *Sergio Pedraja* -- mobile: +34-699-996568 twitter: @sergio_pedraja | skype: Sergio Pedraja -- http://plus.google.com/u/0/101292256663392735405 http://www.linkedin.com/in/sergiopedraja http://spedraja.wordpress.com ----- No crea todo lo que ve, ni crea que está viéndolo todo -------------- next part -------------- An HTML attachment was scrubbed... URL: From wkt at tuhs.org Tue Jul 29 19:46:48 2014 From: wkt at tuhs.org (Warren Toomey) Date: Tue, 29 Jul 2014 19:46:48 +1000 Subject: [TUHS] First Unix that could run on a PDP-11 with QBUS In-Reply-To: References: <20140728132717.73DD218C0B2@mercury.lcs.mit.edu> Message-ID: <46036519-b33d-48ec-af84-ed336e5df306@email.android.com> The old AUUG newsletters are all at http://minnie.tuhs.org/Archive/Documentation/AUUGN/ Cheers, Warren On 29 July 2014 08:23:12 AEST, Dave Horsfall wrote: >On Mon, 28 Jul 2014, Noel Chiappa wrote: > >> > I recall that there were other differences as well, but only minor. >In >> > my paper in AUUGN titled "Unix on the LSI-11/23" it will reveal all > >> > about porting V6 to the thing. >> >> I did a google for that, but couldn't find it. Is it available >anywhere >> online? (I'd love to read it.) I seem to recall vaguely that AUUGN >stuff >> were online, but if so, I'm not sure why the search didn't turn it >up. > >There was a project a few years ago to scan all issues of AUUGN >(Australian Unix Users Group Newsletter); the last I heard was that all > >issues had been obtained, and handed over to some Google mob for >archiving. Apparently the scanning process is destructive but makes >for >an ideal copy for as many as you like. The originals, being up to 40 >years or so old, would have been in bad shape anyway. > >A search for "auugn" reveals a few pointers, but AUUG itself dissolved >a >few years ago because we had achieved our purpose i.e. bring Unix to >the >mass market in Australia (its competition at the time was RSTS, RSX, >and >PICK of all things). Guess which one survived? 
Concurrent CP/M never >really had a hold, MS-DOS thankfully died (I was still using CP/M at >the >time; heck, I even had UUCP on it, which was pretty impressive >considering >that the Microbee didn't have a serial port), and I predict that >Windoze >will go the way of the Irish potato crop and for the same reason. > >Warren may know more about the archived issues. > >> > I vaguely remember that the LTC had to be disabled during the boot >> > process, for example, with an external switch. >> >> I think you might be right, which means the simulated 11/23 I tested >on >> wasn't quite right - but keep reading! > >It was hilarious, in a morbid sort of way. I cottoned on when the >bootstrap process crapped itself for no apparent reason (it got >interrupted when no ISR was in place), and we'd occasionally forget to >enable it... > >> I remember being worried about this when I started doing the V6 11/23 > >> version a couple of months back, because I remembered the 11/03's >didn't >> have a programmable clock, just a switch. So I was reading through >the >> 11/23 documentation (I had used 11/23s, but on this point my memory >had >> faded), trying to see if they too did not have a programmable clock. >> >> As best I can currently make out, the answer is 'yes/no, depending on > >> the exact model'! E.g. the 11/23-PLUS _does_ seem to have a >programmable >> clock (see pg. 610 of the 1982 edition of "microcomputers and >> memories"), but the base 11/23 _apparently_ does not. > >I never saw the -PLUS, so I can't help you there, and my shelf of DEC >and >Unix etc manuals disappeared during several moves. > >> Anyway, the simulated 11/23 (on Ersatz11) does have the LTC (I just >> checked, and 'lks' contains '0177546', so it thinks it has one :-). > >Quite likely. I came up with a battery of tests at boot time, in order >to >determine just what sort of a model it was e.g. did it have the SLR and >so >on. Same thing for illegal instructions, such as floating point. We >had >/40s all over the place (some dedicated ones had no MMU, and ran a >custom >program to talk 200-UT to a remote Cyber), two or three /70s (I had no >responsibility for those, but we shared code a lot), a /60 (interesting > >box), and a sprinkling of /23s. > >> But this will be easy to code around; if no link clock is found (in >> main.c), I'd probably set 'lks' to point somewhere harmless (054, say >- >> I'm using 050/052 to hold the pointer to the CSW, and the software >CSW >> if there isn't a hardware one). That way I can limit the changes to >be >> in main.c, I won't have to futz with clock.c too. > >Speaking of the CSW, we came up with some amusing idle patterns. The >boxes with the octal display displayed rotating 1s (I had to determine >whether it had an octal display or a real one somehow; I've long since >forgotten). > >> PS: On at least the 11/40 (and maybe the /45 too), the line clock was >an >> option! It was a single-height card, IIRC. > >Yeah; the aforementioned low-end /40s had quite an impressive program >that >scheduled by the use of co-routines (no LTC either). It emulated the >CDC >Remote Batch Station (we briefly had one of those too; it was S L O W). > >Fun days! > >-- Dave >_______________________________________________ >TUHS mailing list >TUHS at minnie.tuhs.org >https://minnie.tuhs.org/mailman/listinfo/tuhs -- Sent from my Android phone with K-9 Mail. Please excuse my brevity. -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From dave at horsfall.org Tue Jul 29 19:56:55 2014 From: dave at horsfall.org (Dave Horsfall) Date: Tue, 29 Jul 2014 19:56:55 +1000 (EST) Subject: [TUHS] First Unix that could run on a PDP-11 with QBUS In-Reply-To: References: <20140728132717.73DD218C0B2@mercury.lcs.mit.edu> Message-ID: On Tue, 29 Jul 2014, I wrote: > We had /40s all over the place (some dedicated ones had no MMU, and ran > a custom program to talk 200-UT to a remote Cyber), two or three /70s (I > had no responsibility for those, but we shared code a lot), a /60 > (interesting box), and a sprinkling of /23s. Oops; upon re-reading my article, we had a sprinkling of /34s, with just the one /23. I think. I'd like to believe that I was the first in Australia to port V6 to the /34, the /23, and the /60 (I did a paper on that as well), but if others in the rest of the world beat me to it then I never heard about it. -- Dave From milov at cs.uwlax.edu Tue Jul 29 23:10:41 2014 From: milov at cs.uwlax.edu (=?utf-8?Q?Milo_Velimirovi=C4=87?=) Date: Tue, 29 Jul 2014 08:10:41 -0500 Subject: [TUHS] First Unix that could run on a PDP-11 with QBUS In-Reply-To: References: <20140728132717.73DD218C0B2@mercury.lcs.mit.edu> Message-ID: On Jul 28, 2014, at 5:23 PM, Dave Horsfall wrote: [snip] > There was a project a few years ago to scan all issues of AUUGN > (Australian Unix Users Group Newsletter); the last I heard was that all > issues had been obtained, and handed over to some Google mob for > archiving. Apparently the scanning process is destructive but makes for > an ideal copy for as many as you like. The originals, being up to 40 > years or so old, would have been in bad shape anyway. Sounds like what Vernor Vinge was describing in 'Rainbows End' -- a cautionary tale for librarians about destructive digitization/digitisation, among other themes. - Milo From clemc at ccc.com Tue Jul 29 23:28:37 2014 From: clemc at ccc.com (Clem Cole) Date: Tue, 29 Jul 2014 09:28:37 -0400 Subject: [TUHS] First Unix that could run on a PDP-11 with QBUS In-Reply-To: <20140728220447.GA19660@www.oztivo.net> References: <20140727032639.46B6718C0DE@mercury.lcs.mit.edu> <20140728220447.GA19660@www.oztivo.net> Message-ID: Warren, Thanks for the pointer. Boy that takes me back. I loved looking at the old ads including the one for the BBN C/70 - with it's "resistor to resistor" instructions [check out the typo about page 59]. You made my day, Clem On Mon, Jul 28, 2014 at 6:04 PM, Warren Toomey wrote: > On Sun, Jul 27, 2014 at 03:49:47PM +1000, Dave Horsfall wrote: > > I recall that there were other differences as well, but only minor. In > my > > paper in AUUGN titled "Unix on the LSI-11/23" it will reveal all about > > porting V6 to the thing. > > http://minnie.tuhs.org/Archive/Documentation/AUUGN/AUUGN-V03.2.pdf > page 11 > > :-) Warren > _______________________________________________ > TUHS mailing list > TUHS at minnie.tuhs.org > https://minnie.tuhs.org/mailman/listinfo/tuhs > -------------- next part -------------- An HTML attachment was scrubbed... URL: From clemc at ccc.com Wed Jul 30 00:33:06 2014 From: clemc at ccc.com (Clem Cole) Date: Tue, 29 Jul 2014 10:33:06 -0400 Subject: [TUHS] First Unix that could run on a PDP-11 with QBUS In-Reply-To: References: <20140728132717.73DD218C0B2@mercury.lcs.mit.edu> Message-ID: I can not speak for Australia. I lived the Fifth, Sixth and Seventh Edition changes in the mid-late 1970s. 
At CMU, in the USA, we had an early /34 running Fifth Edition in the EE Department in the summer of '75/'76-ish - right after the /34 came out. There were lots of /40s around campus, and at the time they were popular because CMU hacked them (i.e. the E or Extended version of the 40), but I remember Gordon Bell got the EE Dept a "deal" on the new single-board 11 - i.e. the /34 - and it was significantly cheaper. We got V6 shortly thereafter, when a couple of folks made a road trip to NJ to see Ken, and the EE system did not run V5 very long [frankly, I do not remember much about it]. I do remember that we had bootstrapped the /34 from some 11/40Es in CS - I remember the lack-of-switch-register issue, but I also seem to remember it was an easy fix in m40.s. I've forgotten all the differences, but I recall that they were small. I do have memories of going back and forth between the CS and EE bldgs a couple of times until we got it right. The bigger issue that bit us was that we had to be careful about the fact that the CS folks had implemented CSAVE/CRET in the WCS of the 11/40Es and hacked on the C compiler to generate that - so binaries from CS could not move to EE unless that hack was turned off [BTW: that job in EE was my first experience with C]. The late Ted Kowalski showed up at CMU a few weeks later for his OYOC year and updated the EE machine to be a V6++ system [he also had copies of the proofs of Dennis's upcoming book on C]. Being CMU, we had a lot of BLISS hackers at the time - so Ted singing C's praises caused quite a stir -- I was not yet really indoctrinated with the merits of C, and I remember arguing with him since at the time the C compilers were not nearly as polished as the CMU BLISS compilers [although C ran natively, which BLISS/11 could not do]. At the time, a big thing Ted did was introduce us to stdio. Until then most of our C code was pretty hackneyed - I seem to remember something called the portable C library for V6 (I do not remember it as part of V5, but it may have been). However, Ted's V6+ compilers used stdio, and it quickly became the EE standard. They must have migrated to CS, but I've forgotten, because by then most of my hacking was on the EE system. Ted was working on this really cool program to fix the file system [fsck] - a number of us worked with him, and it was working with that code that I started to see what a great language it was; I ended up writing way more C than anything else. I would clone the EE /34 system for a new job at CMU's Mellon Institute, but by then DEC had created the /34A [I've forgotten the differences]. Mellon was the first time I ran into a UNIX vs. XX war. One of the grad students wanted to run RSX-11 - since DEC "supported it." As a lowly undergrad I was really pleased I won, when the EE prof behind Mellon Institute agreed we "seemed to have something" working really well in the Dept [plus it must have helped that the Biomed team also decided to clone the EE system for their research and not use RSX]. As for the 11/60: upon graduation, I spent my first weekend in Oregon in late 1979 helping Steve Glaser update the Teklabs /60 to be more CMU-like - but that was likely based on V6. I do not remember when we cut V7 in at Tek; it had been released, and I don't remember the details for CMU - we had to be running V7 by the time I left. At Tek, by the end of 1979, we had done such a good job with the /60 that we managed to get an 11/70 (without any budget for it, which was quite a trick). 
So, I spent my first Oregon Christmas bringing V7 up on that system - with the infinite storage capacity of 3 RP06s [oh boy]. My faint memory is that Tek had a V7 license, but Steve had not yet managed to get V7 running on the /60, only V6. Also, early in the Winter/Spring of '79, before I had left, Dan Klein and I had gone on strike to force CMU to buy a commercial license for Mellon Institute -- which was using UNIX not for teaching but for paid research. CMU would be the first university to get a commercial license; I believe Case-Western followed suit the next fall, when Fred Park returned from Tek-Labs after hearing me talk about what CMU did that summer. Clem On Tue, Jul 29, 2014 at 5:56 AM, Dave Horsfall wrote: > On Tue, 29 Jul 2014, I wrote: > > > We had /40s all over the place (some dedicated ones had no MMU, and ran > > a custom program to talk 200-UT to a remote Cyber), two or three /70s (I > > had no responsibility for those, but we shared code a lot), a /60 > > (interesting box), and a sprinkling of /23s. > > Oops; upon re-reading my article, we had a sprinkling of /34s, with just > the one /23. I think. > > I'd like to believe that I was the first in Australia to port V6 to the > /34, the /23, and the /60 (I did a paper on that as well), but if others > in the rest of the world beat me to it then I never heard about it. > > -- Dave > _______________________________________________ > TUHS mailing list > TUHS at minnie.tuhs.org > https://minnie.tuhs.org/mailman/listinfo/tuhs > -------------- next part -------------- An HTML attachment was scrubbed... URL:
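
For readers following the line-clock exchange earlier in this thread: the approach Noel describes - probe for the KW11-L at boot, and if none is found point 'lks' at a harmless spare word such as 054 so that clock.c never has to change - can be sketched in a few lines of C. This is only an illustration, not the actual V6 main.c change; clock_present() is a hypothetical stub standing in for the real probe, which on a PDP-11 means touching 0177546 and catching the bus-error trap taken when the register is absent.

#include <stdio.h>

#define LKS_ADDR   0177546  /* KW11-L line-clock status register, per the thread */
#define SPARE_ADDR 054      /* "somewhere harmless", as suggested above */

/*
 * Hypothetical stand-in for the real probe.  On the hardware the kernel
 * would reference 0177546 and catch the bus-error trap that results when
 * no clock is fitted; here it is a stub so the sketch compiles and runs.
 */
static int
clock_present(void)
{
        return 0;       /* pretend the probe failed, i.e. no KW11-L */
}

/* Address that a clock.c-style routine would poke, V6 fashion. */
static unsigned lks;

int
main(void)
{
        if (clock_present())
                lks = LKS_ADDR;         /* real line-clock register */
        else
                lks = SPARE_ADDR;       /* harmless spare word; clock.c untouched */

        printf("lks = 0%o\n", lks);
        return 0;
}

The point of the trick is that the rest of the kernel keeps writing through 'lks' unconditionally; only the address it writes to changes, so the modification stays confined to main.c.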