From m.douglas.mcilroy at dartmouth.edu Mon Feb 1 01:26:22 2021 From: m.douglas.mcilroy at dartmouth.edu (M Douglas McIlroy) Date: Sun, 31 Jan 2021 10:26:22 -0500 Subject: [TUHS] Qed vs ed Message-ID: I used Ken's qed in pre-Unix days. I understand its big departure from the original was regular expressions. Unix ed was the same, with multi-file capability dropped. Evidently the lost function was not much missed, for it didn't come back when machines got bigger. I remember that fairly early in PDP-11 development ed gained three features: & in the rhs of substitutions plus k and t commands. (I'm not sure about &--that was 50 years ago.) With hindsight it's surprising that a "minimalist" design had m but not t, for m can be built from t but not vice versa. A cheat sheet for Multics qed is at http://www.bitsavers.org/pdf//honeywell/multics/swenson/6906.multics-condensed-guide.pdf. It had two commands I don't remember: sort(!) and transform, which I assume is like y in sed. Doug -------------- next part -------------- An HTML attachment was scrubbed... URL: From chet.ramey at case.edu Mon Feb 1 04:22:09 2021 From: chet.ramey at case.edu (Chet Ramey) Date: Sun, 31 Jan 2021 13:22:09 -0500 Subject: [TUHS] More archeology In-Reply-To: <202101281951.10SJpw9Y138529@darkstar.fourwinds.com> References: <202101281951.10SJpw9Y138529@darkstar.fourwinds.com> Message-ID: On 1/28/21 2:51 PM, Jon Steinhart wrote: > Another stack of old notebooks. I can scan these in if anyone is interested > and if they're not available elsewhere. In addition to what's below, I have > a fat notebook with the BRL CAD package docs. I don't know if anyone else has spoken up yet, but I'd love to see these made available.
Chet -- ``The lyf so short, the craft so long to lerne.'' - Chaucer ``Ars longa, vita brevis'' - Hippocrates Chet Ramey, UTech, CWRU chet at case.edu http://tiswww.cwru.edu/~chet/ From m.douglas.mcilroy at dartmouth.edu Mon Feb 1 14:24:05 2021 From: m.douglas.mcilroy at dartmouth.edu (M Douglas McIlroy) Date: Sun, 31 Jan 2021 23:24:05 -0500 Subject: [TUHS] Qed vs ed Message-ID: > fairly early in PDP-11 development ed gained three features: & in the > rhs of substitutions plus k and t commands. (I'm not sure about & .... Oh, and backreferencing, which took regular expressions way up the complexity hierarchy--into NP-complete territory were it not for the limit of 9 backreferenced substrings. (Proof hint: reduce the knapsack problem to an ed regex.) Also g and s were generalized to allow escaped newlines. I was indeed wrong about &. It was in v1. Doug -------------- next part -------------- An HTML attachment was scrubbed... URL: From arnold at skeeve.com Mon Feb 1 16:00:01 2021 From: arnold at skeeve.com (Arnold Robbins) Date: Mon, 01 Feb 2021 08:00:01 +0200 Subject: [TUHS] QED archive READMEs updated Message-ID: Hello All. I have updated various READMEs in the QED archive I set up a while back: https://github.com/arnoldrobbins/qed-archive. Now included is a link to Leah's blog, mention that the SDS files came from Al Kossow, and Doug's link to the Multics QED cheat sheet. Thanks, Arnold From woods at robohack.ca Tue Feb 2 12:20:57 2021 From: woods at robohack.ca (Greg A. Woods) Date: Mon, 01 Feb 2021 18:20:57 -0800 Subject: [TUHS] reboot(2) system call In-Reply-To: References: Message-ID: At Sun, 31 Jan 2021 09:27:10 +1100 (EST), Dave Horsfall wrote: Subject: Re: [TUHS] reboot(2) system call > > On Tue, 26 Jan 2021, Greg A. Woods wrote: > > > The lore I was told at the time was that you always ran three and > > that it didn't matter if they were all on the same line with > > semicolons or not because of the very fact that the second one would > > block.
> > What I was taught was: > > % sync > % sync > % sync > > and never: > > % sync; sync; sync > > The theory was that by waiting for the shell prompt each time, it gave > the buffer pool enough time to be flushed. If waiting was the true reason, then any sane person would have put a sleep in there instead so as to avoid any variance in typing (and terminal) speed. On at least a large number of old systems I've used, either the first or the second invocation did block and not return if there were still any dirty blocks when it made the sync() call. It was trivial to see that the system was busy writing while one waited for the shell prompt to re-appear if one could see the disk activity lights (or hear them) from the console, as was usually easy to do on desktop systems. Since many of those old systems I used were Xenix of one flavour or another, perhaps it was only those that waited for sync I/O to complete. -- Greg A. Woods Kelowna, BC +1 250 762-7675 RoboHack Planix, Inc. Avoncote Farms -------------- next part -------------- A non-text attachment was scrubbed... Name: not available Type: application/pgp-signature Size: 195 bytes Desc: OpenPGP Digital Signature URL: From imp at bsdimp.com Tue Feb 2 12:30:47 2021 From: imp at bsdimp.com (Warner Losh) Date: Mon, 1 Feb 2021 19:30:47 -0700 Subject: [TUHS] reboot(2) system call In-Reply-To: References: Message-ID: On Mon, Feb 1, 2021, 7:22 PM Greg A. Woods wrote: > At Sun, 31 Jan 2021 09:27:10 +1100 (EST), Dave Horsfall > wrote: > Subject: Re: [TUHS] reboot(2) system call > > > > On Tue, 26 Jan 2021, Greg A. Woods wrote: > > > > > The lore I was told at the time was that you always ran three and > > > that it didn't matter if they were all on the same line with > > > semicolons or not because of the very fact that the second one would > > > block.
> > > > What I was taught was: > > > > % sync > > % sync > > % sync > > > > and never: > > > > % sync; sync; sync > > > > The theory was that by waiting for the shell prompt each time, it gave > > the buffer pool enough time to be flushed. > > If waiting was the true reason, then any sane person would have put a > sleep in there instead so as to avoid any variance in typing (and > terminal) speed. > > On at least a large number of old systems I've used either the first or > the second invocation did block and not return if there were still any > dirty blocks it made the sync() call. It was trivial to see that the > system was busy writing while one waited for the shell prompt to > re-appear if one could see the disk activity lights (or hear them) from > the console, as was usually easy to do on desktop systems. > > Since many of those old systems I used were Xenix of one flavour or > another, perhaps it was only those that waited for sync I/O to complete. > Would be nice to know which one so I can go check. I've not seen leaked xenix code though, so it may be possible. Warner -- > Greg A. Woods > > Kelowna, BC +1 250 762-7675 RoboHack > Planix, Inc. Avoncote Farms > -------------- next part -------------- An HTML attachment was scrubbed... URL: From dave at horsfall.org Tue Feb 2 13:35:30 2021 From: dave at horsfall.org (Dave Horsfall) Date: Tue, 2 Feb 2021 14:35:30 +1100 (EST) Subject: [TUHS] reboot(2) system call In-Reply-To: References: Message-ID: On Mon, 1 Feb 2021, Greg A. Woods wrote: > If waiting was the true reason, then any sane person would have put a > sleep in there instead so as to avoid any variance in typing (and > terminal) speed. I dunno; that's merely what I was taught in the early Unix days and it got ingrained into me. Nowadays I just issue "shutdown -[r|h] now" (I'm the only user)... More Unix lore: sleep(1) could return straight away at one time (the granularity was 1 second). 
-- Dave, who remembers when sleep(3) used to be sleep(2) From woods at robohack.ca Wed Feb 3 06:30:01 2021 From: woods at robohack.ca (Greg A. Woods) Date: Tue, 02 Feb 2021 12:30:01 -0800 Subject: [TUHS] reboot(2) system call In-Reply-To: References: Message-ID: At Mon, 1 Feb 2021 19:30:47 -0700, Warner Losh wrote: Subject: Re: [TUHS] reboot(2) system call > > Would be nice to know which one so I can go check. I've not seen leaked > xenix code though, so it may be possible. Well the first versions I used were on 286 machines, and then 386. I think there may be binaries available in the darker corners (maybe even on archive.org) if one wanted to test empirically on either emulation or real hardware. Maybe someday I could do that, but not soon. On a related note, one of the weird things about Unix System V, from the beginning more or less, and right up to and including the last SysVr4, the shutdown scripts use "sync; sync; sync" all on one line like that (on r4 they use the full path to "/sbin/sync", but still three times). I think on some versions the subsequent call will block briefly while the first call finishes scheduling buffer writes (i.e. the call will wait instead of just returning if another process is in the critical section), they all still just schedule async writes. If I'm not mistaken though calling "sync" in any way at all shouldn't normally be necessary during a proper shutdown as all filesystems, including the root fs, will be unmounted during the process, thus forcing all dirty buffers to be flushed first, and this is in fact the case from SysVr3 and on (i.e. after uadmin(2) with A_SHUTDOWN was introduced). Also, on SysVr4 the fsflush daemon runs as a kernel daemon process, so if I understand correctly it isn't killed during shutdown or in single user mode, thus it'll still be doing the equivalent of sync(2) right up to the end. 
On the other hand in the particular SysVr4-i386 version I have the code that would actually wait for all async buffer writes during umounts, i.e. in bdwait(), is commented out and replaced by a "delay(200)". I can't imagine why, nor can I imagine trying to debug that without source! -- Greg A. Woods Kelowna, BC +1 250 762-7675 RoboHack Planix, Inc. Avoncote Farms -------------- next part -------------- A non-text attachment was scrubbed... Name: not available Type: application/pgp-signature Size: 195 bytes Desc: OpenPGP Digital Signature URL: From woods at robohack.ca Wed Feb 3 08:09:41 2021 From: woods at robohack.ca (Greg A. Woods) Date: Tue, 02 Feb 2021 14:09:41 -0800 Subject: [TUHS] GBACA In-Reply-To: <738c95bb-8d13-6d80-bdaf-4eb2804077d2@mhorton.net> References: <738c95bb-8d13-6d80-bdaf-4eb2804077d2@mhorton.net> Message-ID: At Fri, 18 Sep 2020 14:22:22 -0700, Mary Ann Horton wrote: Subject: [TUHS] GBACA > > The topic of GBACA (Get Back At Corporate America), the video game for > the BLIT/5620, has come up on a Facebook group. > > Does anyone happen to have any details about it, source code, author, > screen shots, ...? It was written by Pat Autilio: https://www.linkedin.com/in/patautilio/ I do have a copy of the source. It's all marked up with comments like: * Copyright (c) 1984, 1985 AT&T * All Rights Reserved * THIS IS UNPUBLISHED PROPRIETARY SOURCE * CODE OF AT&T. * The copyright notice above does not * evidence any actual or intended * publication of such source code. But then again so was all the other dmd/layers stuff at the time, and most of that was opened up, if I'm not mistaken, by Dave Dykstra (e.g. the dev5620 package even came with a GPLv2 COPYING file). So I suppose I could make it available.... -- Greg A. Woods Kelowna, BC +1 250 762-7675 RoboHack Planix, Inc. Avoncote Farms -------------- next part -------------- A non-text attachment was scrubbed... 
Name: not available Type: application/pgp-signature Size: 195 bytes Desc: OpenPGP Digital Signature URL: From woods at robohack.ca Wed Feb 3 09:08:42 2021 From: woods at robohack.ca (Greg A. Woods) Date: Tue, 02 Feb 2021 15:08:42 -0800 Subject: [TUHS] reviving a bit of WWB In-Reply-To: References: <202009190151.08J1pYnb066792@tahoe.cs.dartmouth.edu> <202009201842.08KIgn2f022401@freefriends.org> <04211470-AD63-452A-A0BB-6A7A6FD85AAE@gmail.com> <202009202026.08KKQ2x6137303@tahoe.cs.dartmouth.edu> Message-ID: At Sun, 20 Sep 2020 17:35:52 -0400, John Cowan wrote: Subject: Re: [TUHS] reviving a bit of WWB > > When 0 is coerced implicitly or explicitly to a pointer type, it becomes a > null pointer. That's true even on architectures where all-bits-zero is > *not* a null pointer. However, in contexts where there is no expected > type, as in a call to execl(), the null at the end of the args list has to > be explicitly cast to (char *)0 or some other null pointer. Yeah, that's more to do with the good/bad choice in C to do or not do integer promotion in various situations, and to default parameter types to 'int' unless they are, or are cast to, a wider type (and of course with the rather tricky and almost non-portable way C allows variable length argument lists, along with the somewhat poor way C was cajoled into offering function prototypes to support separate compilation of code units and the exceedingly poor way prototypes deal with variable length argument lists). -- Greg A. Woods Kelowna, BC +1 250 762-7675 RoboHack Planix, Inc. Avoncote Farms -------------- next part -------------- A non-text attachment was scrubbed... 
Name: not available Type: application/pgp-signature Size: 195 bytes Desc: OpenPGP Digital Signature URL: From lm at mcvoy.com Wed Feb 3 09:47:03 2021 From: lm at mcvoy.com (Larry McVoy) Date: Tue, 2 Feb 2021 15:47:03 -0800 Subject: [TUHS] reviving a bit of WWB In-Reply-To: References: <202009190151.08J1pYnb066792@tahoe.cs.dartmouth.edu> <202009201842.08KIgn2f022401@freefriends.org> <04211470-AD63-452A-A0BB-6A7A6FD85AAE@gmail.com> <202009202026.08KKQ2x6137303@tahoe.cs.dartmouth.edu> Message-ID: <20210202234703.GH4227@mcvoy.com> On Tue, Feb 02, 2021 at 03:08:42PM -0800, Greg A. Woods wrote: > At Sun, 20 Sep 2020 17:35:52 -0400, John Cowan wrote: > Subject: Re: [TUHS] reviving a bit of WWB > > > > When 0 is coerced implicitly or explicitly to a pointer type, it becomes a > > null pointer. That's true even on architectures where all-bits-zero is > > *not* a null pointer. However, in contexts where there is no expected > > type, as in a call to execl(), the null at the end of the args list has to > > be explicitly cast to (char *)0 or some other null pointer. > > Yeah, that's more to do with the good/bad choice in C to do or not do > integer promotion in various situations, and to default parameter types > to 'int' unless they are, or are cast to, a wider type I've dealt with this, here is a story of a super computer where native pointers pointed at bits but C pointers pointed at bytes and you can shake your head at the promotion problems: https://minnie.tuhs.org/pipermail/tuhs/2017-September/012050.html From woods at robohack.ca Wed Feb 3 10:07:46 2021 From: woods at robohack.ca (Greg A. 
Woods) Date: Tue, 02 Feb 2021 16:07:46 -0800 Subject: [TUHS] Fwd: Choice of Unix for 11/03 and 11/23+ Systems In-Reply-To: <20200922155943.EF32A18C09D@mercury.lcs.mit.edu> References: <20200922155943.EF32A18C09D@mercury.lcs.mit.edu> Message-ID: At Tue, 22 Sep 2020 11:59:43 -0400 (EDT), jnc at mercury.lcs.mit.edu (Noel Chiappa) wrote: Subject: Re: [TUHS] Fwd: Choice of Unix for 11/03 and 11/23+ Systems > > V6, as distributed, had no networking at all. There are two V6 systems with > networking in TUHS: > > https://minnie.tuhs.org//cgi-bin/utree.pl?file=SRI-NOSC > https://minnie.tuhs.org//cgi-bin/utree.pl?file=BBN-V6 > > The first is an 'NCP' Unix (useless unless you have an ARPANet); the second is > a fairly early TCP/IP from BBN (ditto, out of the box; although one could write > an Ethernet driver for it). > > There's also a fairly nice Internet-capable V6 (well, PWB1, actually) from MIT > which I keep meaning to upload; it includes SMTP, FTP, etc, etc. I also have > visions of porting an ARP I wrote to it, and bringing up an Ethernet driver > for the DEQNA/DELQA, but I've yet to get to any of that. There's a "v6net" directory in this repository. https://www.sqliteconcepts.org/cgi-bin/9995/doc/tip/doc/index.wiki I accidentally found it all when searching for something completely different the other day. I wonder if it is from either of the two ports you mention. -- Greg A. Woods Kelowna, BC +1 250 762-7675 RoboHack Planix, Inc. Avoncote Farms -------------- next part -------------- A non-text attachment was scrubbed...
Name: not available Type: application/pgp-signature Size: 195 bytes Desc: OpenPGP Digital Signature URL: From dave at horsfall.org Wed Feb 3 10:11:44 2021 From: dave at horsfall.org (Dave Horsfall) Date: Wed, 3 Feb 2021 11:11:44 +1100 (EST) Subject: [TUHS] reviving a bit of WWB In-Reply-To: <20210202234703.GH4227@mcvoy.com> References: <202009190151.08J1pYnb066792@tahoe.cs.dartmouth.edu> <202009201842.08KIgn2f022401@freefriends.org> <04211470-AD63-452A-A0BB-6A7A6FD85AAE@gmail.com> <202009202026.08KKQ2x6137303@tahoe.cs.dartmouth.edu> <20210202234703.GH4227@mcvoy.com> Message-ID: On Tue, 2 Feb 2021, Larry McVoy wrote: > I've dealt with this, here is a story of a super computer where native > pointers pointed at bits but C pointers pointed at bytes and you can > shake your head at the promotion problems: > > https://minnie.tuhs.org/pipermail/tuhs/2017-September/012050.html Holy smoking inodes! I'd forgotten that story... And yes, I really was approached by Pr1me, and really did turn them down on the basis that if they thought that "1" was prime then what else did they get wrong?[*] Oh, they went belly-up shortly afterwards, so it was probably just as well (plainly the innumerate marketoids were in charge). [*] If you assume that "1" is prime then it breaks all sorts of higher (and obscure) maths. -- Dave From woods at robohack.ca Wed Feb 3 10:12:33 2021 From: woods at robohack.ca (Greg A. Woods) Date: Tue, 02 Feb 2021 16:12:33 -0800 Subject: [TUHS] Fwd: Choice of Unix for 11/03 and 11/23+ Systems In-Reply-To: <3D6D5DB5-A500-470F-868D-A49F80B617E6@planet.nl> References: <3D6D5DB5-A500-470F-868D-A49F80B617E6@planet.nl> Message-ID: At Thu, 24 Sep 2020 13:02:53 +0200, Paul Ruizendaal wrote: Subject: Re: [TUHS] Fwd: Choice of Unix for 11/03 and 11/23+ Systems > > I’ve also done a port of the BBN VAX stack to V6 (running on a TI990 clone), using a serial > PPP interface to connect. 
Experimental, but may have the OP's interest: > https://www.jslite.net/cgi-bin/9995/dir?ci=tip Ah ha! Sorry, I should have read further in the thread before posting about the "v6net" directory I found. Unfortunately as with many threads on TUHS, this one is completely broken and not easily put into proper tree form due to missing or garbled 'in-reply-to' and 'references' headers. Too many broken half-baked MUAs seem to still be widely used. -- Greg A. Woods Kelowna, BC +1 250 762-7675 RoboHack Planix, Inc. Avoncote Farms -------------- next part -------------- A non-text attachment was scrubbed... Name: not available Type: application/pgp-signature Size: 195 bytes Desc: OpenPGP Digital Signature URL: From lm at mcvoy.com Wed Feb 3 10:19:00 2021 From: lm at mcvoy.com (Larry McVoy) Date: Tue, 2 Feb 2021 16:19:00 -0800 Subject: [TUHS] reviving a bit of WWB In-Reply-To: References: <202009190151.08J1pYnb066792@tahoe.cs.dartmouth.edu> <202009201842.08KIgn2f022401@freefriends.org> <04211470-AD63-452A-A0BB-6A7A6FD85AAE@gmail.com> <202009202026.08KKQ2x6137303@tahoe.cs.dartmouth.edu> <20210202234703.GH4227@mcvoy.com> Message-ID: <20210203001900.GI4227@mcvoy.com> On Wed, Feb 03, 2021 at 11:11:44AM +1100, Dave Horsfall wrote: > On Tue, 2 Feb 2021, Larry McVoy wrote: > > >I've dealt with this, here is a story of a super computer where native > >pointers pointed at bits but C pointers pointed at bytes and you can shake > >your head at the promotion problems: > > > >https://minnie.tuhs.org/pipermail/tuhs/2017-September/012050.html > > Holy smoking inodes! I'd forgotten that story... I tend to log in to forums these days as "luckydude" and that story is just the first of many examples. I seem to have gotten lucky, building up the right experiences and then falling into a job that can use them. It really made me look better than I am, things just sort of flowed. And having almost 6 months to work on whatever I wanted and deciding to port the networking stack?
Priceless down the road. No way they would have given me that task, I was too green. Fun times. From jnc at mercury.lcs.mit.edu Wed Feb 3 11:25:49 2021 From: jnc at mercury.lcs.mit.edu (Noel Chiappa) Date: Tue, 2 Feb 2021 20:25:49 -0500 (EST) Subject: [TUHS] Fwd: Choice of Unix for 11/03 and 11/23+ Systems Message-ID: <20210203012549.2077F18C086@mercury.lcs.mit.edu> > From: Greg A. Woods > There's a "v6net" directory in this repository. > ... > I wonder if it is from either of the two ports you mention. No; the NOSC system is an NCP system, not TCP; and this one has mbufs (which the BBN v6 one did not have), so it's _probably_ a Berkeleyism of some sort (or did the BBN VAX code have mbuf's too; I don't recall - yes, it did: https://minnie.tuhs.org//cgi-bin/utree.pl?file=BBN-Vax-TCP see bbnnet/mbuf.c). It might also be totally new code which just chose to re-use that meme. I don't have time to look closely to see if I see any obvious descent. > Too many broken half-baked MUAs seem to still be widely used. I'm one of the offenders! Hey, this is a vintage computing list, so what's the problem with vintage mail readers? :-) Noel PS: I'm just about done collecting up the MIT PWB1 TCP system; I only have the Server FTP left to go. (Alas, it was a joint project between a student and a staffer, who left just at the end, so half the source is in one's personal area, and the other half's in the other's. So I have to find all the pieces, and put them in the system's source area.) Once that's done, I'll get it to WKT to add to the repository. (Getting it to _actually run_ will take a while, and will happen later: I have to write a device driver for it, the code uses a rare, long-extinct board.)
From rich.salz at gmail.com Wed Feb 3 12:04:56 2021 From: rich.salz at gmail.com (Richard Salz) Date: Tue, 2 Feb 2021 21:04:56 -0500 Subject: [TUHS] reviving a bit of WWB In-Reply-To: <20210203001900.GI4227@mcvoy.com> References: <202009190151.08J1pYnb066792@tahoe.cs.dartmouth.edu> <202009201842.08KIgn2f022401@freefriends.org> <04211470-AD63-452A-A0BB-6A7A6FD85AAE@gmail.com> <202009202026.08KKQ2x6137303@tahoe.cs.dartmouth.edu> <20210202234703.GH4227@mcvoy.com> <20210203001900.GI4227@mcvoy.com> Message-ID: > pointer to a bit BBN made a machine "optimized" for C. It was used in the first generation ARPAnet gateways. A word was 10bits. The amount of masking we had to do for some portable software was unreal. -------------- next part -------------- An HTML attachment was scrubbed... URL: From dave at horsfall.org Wed Feb 3 13:32:09 2021 From: dave at horsfall.org (Dave Horsfall) Date: Wed, 3 Feb 2021 14:32:09 +1100 (EST) Subject: [TUHS] reviving a bit of WWB In-Reply-To: References: <202009190151.08J1pYnb066792@tahoe.cs.dartmouth.edu> <202009201842.08KIgn2f022401@freefriends.org> <04211470-AD63-452A-A0BB-6A7A6FD85AAE@gmail.com> <202009202026.08KKQ2x6137303@tahoe.cs.dartmouth.edu> <20210202234703.GH4227@mcvoy.com> <20210203001900.GI4227@mcvoy.com> Message-ID: On Tue, 2 Feb 2021, Richard Salz wrote: > BBN made a machine "optimized" for C.  It was used in the first > generation ARPAnet gateways. > > A word was 10bits.  The amount of masking we had to do for some portable > software was unreal. I'm trying to get my head around a 10-bit machine optimised for C... Well, if you accept that chars are 10 bits wide then there shouldn't be (much of) a problem; just forget about the concept of powers of 2, I guess. Shades of the 60-bit CDC series, as handling strings was a bit of a bugger; at least the 12-bit PDP-8 was sort of manageable. 
-- Dave From m.douglas.mcilroy at dartmouth.edu Wed Feb 3 14:32:29 2021 From: m.douglas.mcilroy at dartmouth.edu (M Douglas McIlroy) Date: Tue, 2 Feb 2021 23:32:29 -0500 Subject: [TUHS] reviving a bit of WWB In-Reply-To: References: <202009190151.08J1pYnb066792@tahoe.cs.dartmouth.edu> <202009201842.08KIgn2f022401@freefriends.org> <04211470-AD63-452A-A0BB-6A7A6FD85AAE@gmail.com> <202009202026.08KKQ2x6137303@tahoe.cs.dartmouth.edu> <20210202234703.GH4227@mcvoy.com> <20210203001900.GI4227@mcvoy.com> Message-ID: > I'm trying to get my head around a 10-bit machine optimised for C. How about 23 bits? That was one of the early ESS machines, evidently optimized to make every bit count. (Maybe a prime word width helps with hashing?) Whirlwind II (built in 1952) was 16 bits. It took a long while for that to become common wisdom. Doug On Tue, Feb 2, 2021 at 10:32 PM Dave Horsfall wrote: > On Tue, 2 Feb 2021, Richard Salz wrote: > > > BBN made a machine "optimized" for C. It was used in the first > > generation ARPAnet gateways. > > > > A word was 10bits. The amount of masking we had to do for some portable > > software was unreal. > > I'm trying to get my head around a 10-bit machine optimised for C... > Well, if you accept that chars are 10 bits wide then there shouldn't be > (much of) a problem; just forget about the concept of powers of 2, I > guess. > > Shades of the 60-bit CDC series, as handling strings was a bit of a > bugger; at least the 12-bit PDP-8 was sort of manageable. > > -- Dave -------------- next part -------------- An HTML attachment was scrubbed... URL: From arnold at skeeve.com Wed Feb 3 15:45:29 2021 From: arnold at skeeve.com (arnold at skeeve.com) Date: Tue, 02 Feb 2021 22:45:29 -0700 Subject: [TUHS] GBACA In-Reply-To: References: <738c95bb-8d13-6d80-bdaf-4eb2804077d2@mhorton.net> Message-ID: <202102030545.1135jTxY017828@freefriends.org> It'd be cool if you would give this to Warren.
There are even 5620 emulators out there; maybe this game could be revived? I remember playing it, it was fun. Arnold "Greg A. Woods" wrote: > At Fri, 18 Sep 2020 14:22:22 -0700, Mary Ann Horton wrote: > Subject: [TUHS] GBACA > > > > The topic of GBACA (Get Back At Corporate America), the video game for > > the BLIT/5620, has come up on a Facebook group. > > > > Does anyone happen to have any details about it, source code, author, > > screen shots, ...? > > It was written by Pat Autilio: https://www.linkedin.com/in/patautilio/ > > I do have a copy of the source. It's all marked up with comments like: > > * Copyright (c) 1984, 1985 AT&T > * All Rights Reserved > > * THIS IS UNPUBLISHED PROPRIETARY SOURCE > * CODE OF AT&T. > * The copyright notice above does not > * evidence any actual or intended > * publication of such source code. > > But then again so was all the other dmd/layers stuff at the time, and > most of that was opened up, if I'm not mistaken, by Dave Dykstra > (e.g. the dev5620 package even came with a GPLv2 COPYING file). > > So I suppose I could make it available.... > > -- > Greg A. Woods > > Kelowna, BC +1 250 762-7675 RoboHack > Planix, Inc. Avoncote Farms From ches at cheswick.com Wed Feb 3 17:22:37 2021 From: ches at cheswick.com (william cheswick) Date: Wed, 3 Feb 2021 02:22:37 -0500 Subject: [TUHS] GBACA In-Reply-To: <202102030545.1135jTxY017828@freefriends.org> References: <202102030545.1135jTxY017828@freefriends.org> Message-ID: I’ve always thought that a number of these would be fun, and pretty easy to make into apps. And GBACA might have a little historical interest. ches > On Feb 3, 2021, at 12:46 AM, arnold at skeeve.com wrote: > > It'd be cool if you would give this to Warren. There are even 5620 > emulators out there; maybe this game could be revived? I remember > playing it, it was fun. 
> > Arnold From arnold at skeeve.com Wed Feb 3 17:49:50 2021 From: arnold at skeeve.com (arnold at skeeve.com) Date: Wed, 03 Feb 2021 00:49:50 -0700 Subject: [TUHS] GBACA In-Reply-To: References: <202102030545.1135jTxY017828@freefriends.org> Message-ID: <202102030749.1137nofU009145@freefriends.org> The thing to do, "obviously", is to get the 5620 emulator running as an app on Android... :-) Retrocomputing with a vengeance. Arnold william cheswick wrote: > I’ve always thought that a number of these would be fun, and pretty easy > to make into apps. And GBACA might have a little historical interest. > > ches > > > On Feb 3, 2021, at 12:46 AM, arnold at skeeve.com wrote: > > > > It'd be cool if you would give this to Warren. There are even 5620 > > emulators out there; maybe this game could be revived? I remember > > playing it, it was fun. > > > > Arnold From arnold at skeeve.com Wed Feb 3 17:59:07 2021 From: arnold at skeeve.com (arnold at skeeve.com) Date: Wed, 03 Feb 2021 00:59:07 -0700 Subject: [TUHS] AT&T 3B1 - Emulation available In-Reply-To: References: Message-ID: <202102030759.1137x7C2013543@freefriends.org> emanuel stiebler wrote: > On 2021-01-29 05:49, Arnold Robbins wrote: > > Hello All. > > > > I have made a pre-installed disk image available with a fair amount > > of software, see https://www.skeeve.com/3b1/. > > Thanks for doing & making the disk images, was an easy start! You're welcome. It's a fun side project. I think I finally get the enjoyment of retrocomputing with emulated versions of systems one used in one's youth. :-) > Do you remember, ho to set up the system to have four disk drives? > > Cheers & thanks again! I don't think it can support more than 2 drives. Certainly the emulator cannot. I don't know about real hardware. You can split a big drive into partitions when formatting with the diagnostics disk, but I don't think that's what you're asking. 
Sorry, Arnold From emu at e-bbes.com Wed Feb 3 17:53:38 2021 From: emu at e-bbes.com (emanuel stiebler) Date: Wed, 3 Feb 2021 02:53:38 -0500 Subject: [TUHS] AT&T 3B1 - Emulation available In-Reply-To: References: Message-ID: On 2021-01-29 05:49, Arnold Robbins wrote: > Hello All. > > I have made a pre-installed disk image available with a fair amount > of software, see https://www.skeeve.com/3b1/. Thanks for doing & making the disk images, was an easy start! Do you remember how to set up the system to have four disk drives? Cheers & thanks again! From egbegb2 at gmail.com Wed Feb 3 18:53:20 2021 From: egbegb2 at gmail.com (Ed Bradford) Date: Wed, 3 Feb 2021 02:53:20 -0600 Subject: [TUHS] AT&T 3B1 - Emulation available In-Reply-To: <202102030759.1137x7C2013543@freefriends.org> References: <202102030759.1137x7C2013543@freefriends.org> Message-ID: It seems to me today's 2GHz processors should be able to emulate a 3B (*3B or not 3B, that is the question*) at a performance that far exceeds an actual 3B. Is the instruction set definition and architecture of a 3B available anywhere? Just wondering. I did such emulations for 68K machines and Cray machines. Ed Bradford ex-BTL, ex Silicon Valley, and ex IBM retiree. On Wed, Feb 3, 2021 at 2:00 AM wrote: > emanuel stiebler wrote: > > > On 2021-01-29 05:49, Arnold Robbins wrote: > > > Hello All. > > > > > > I have made a pre-installed disk image available with a fair amount > > > of software, see https://www.skeeve.com/3b1/. > > > > Thanks for doing & making the disk images, was an easy start! > > You're welcome. It's a fun side project. I think I finally get the > enjoyment of retrocomputing with emulated versions of systems one > used in one's youth. :-) > > > Do you remember how to set up the system to have four disk drives? > > > > Cheers & thanks again! > > I don't think it can support more than 2 drives. Certainly the emulator > cannot. I don't know about real hardware.
> > You can split a big drive into partitions when formatting with the > diagnostics disk, but I don't think that's what you're asking. > > Sorry, > > Arnold > -- Advice is judged by results, not by intentions. Cicero -------------- next part -------------- An HTML attachment was scrubbed... URL: From arnold at skeeve.com Wed Feb 3 18:58:56 2021 From: arnold at skeeve.com (arnold at skeeve.com) Date: Wed, 03 Feb 2021 01:58:56 -0700 Subject: [TUHS] AT&T 3B1 - Emulation available In-Reply-To: References: <202102030759.1137x7C2013543@freefriends.org> Message-ID: <202102030858.1138wuqd011051@freefriends.org> The 3B1 had an MC 68010. I don't truly remember how fast the real system ran. The emulated system seems to run more or less the same as the hardware did, taking my poor memory into account. The 5620 used the same processor as the 3B2, IIRC. There are emulators for both (maybe done by the same guy, I don't remember). I don't know of emulators for the 3B5 or 3B20. Arnold Ed Bradford wrote: > It seems to me today's 2GHz processors should be able to emulate a 3B (*3B > or not 3B, that is the question*) at a performance that far exceeds an > actual 3B. Is the instruction set definition and architecture of a 3B > available anywhere? > > Just wondering. I did such emulations for 68K machines and Cray machines. > > Ed Bradford ex-BTL, ex Silcon Valley, and ex IBM retiree. > > > On Wed, Feb 3, 2021 at 2:00 AM wrote: > > > emanuel stiebler wrote: > > > > > On 2021-01-29 05:49, Arnold Robbins wrote: > > > > Hello All. > > > > > > > > I have made a pre-installed disk image available with a fair amount > > > > of software, see https://www.skeeve.com/3b1/. > > > > > > Thanks for doing & making the disk images, was an easy start! > > > > You're welcome. It's a fun side project. I think I finally get the > > enjoyment of retrocomputing with emulated versions of systems one > > used in one's youth. :-) > > > > > Do you remember, ho to set up the system to have four disk drives? 
> > > > > > Cheers & thanks again! > > > > I don't think it can support more than 2 drives. Certainly the emulator > > cannot. I don't know about real hardware. > > > > You can split a big drive into partitions when formatting with the > > diagnostics disk, but I don't think that's what you're asking. > > > > Sorry, > > > > Arnold > > > > > -- > Advice is judged by results, not by intentions. > Cicero From egbegb2 at gmail.com Wed Feb 3 20:13:09 2021 From: egbegb2 at gmail.com (Ed Bradford) Date: Wed, 3 Feb 2021 04:13:09 -0600 Subject: [TUHS] AT&T 3B1 - Emulation available In-Reply-To: <202102030858.1138wuqd011051@freefriends.org> References: <202102030759.1137x7C2013543@freefriends.org> <202102030858.1138wuqd011051@freefriends.org> Message-ID: Hay, Arnold, MC 68K was created in 1980 or thereabouts. We talked about 10's of Megahertz, I think, in those times. I was involved (slightly) with the Zilog Z80,000 which would have competed with the 68K, NS32K and the Intel 80386. Of the instruction sets (architectures) I was most happy with, the Zilog 32-bit processor architecture was to me, the most minimalist and thorough. At the time, I managed software development for the Zilog company's Z8000 computers. It was a fun era. I bought a z8000 system and developed a CRAY simulator on it when I left Zilog and went to work for American Supercomputer Company (another interesting Silicon Valley story). The 1980's were a very interesting time in Silicon Valley. One of the saddest stories I recall is when "Eagle Computer" went public. The CEO died on the IPO day after he had become a very rich person when he crashed a Ferrari during a test drive. Eagle Computer died with the CEO. Ed On Wed, Feb 3, 2021 at 2:59 AM wrote: > The 3B1 had an MC 68010. I don't truly remember how fast the real > system ran. The emulated system seems to run more or less the same as > the hardware did, taking my poor memory into account. > > The 5620 used the same processor as the 3B2, IIRC. 
There are emulators > for both (maybe done by the same guy, I don't remember). I don't know > of emulators for the 3B5 or 3B20. > > Arnold > > Ed Bradford wrote: > > > It seems to me today's 2GHz processors should be able to emulate a 3B > (*3B > > or not 3B, that is the question*) at a performance that far exceeds an > > actual 3B. Is the instruction set definition and architecture of a 3B > > available anywhere? > > > > Just wondering. I did such emulations for 68K machines and Cray machines. > > > > Ed Bradford ex-BTL, ex Silcon Valley, and ex IBM retiree. > > > > > > On Wed, Feb 3, 2021 at 2:00 AM wrote: > > > > > emanuel stiebler wrote: > > > > > > > On 2021-01-29 05:49, Arnold Robbins wrote: > > > > > Hello All. > > > > > > > > > > I have made a pre-installed disk image available with a fair amount > > > > > of software, see https://www.skeeve.com/3b1/. > > > > > > > > Thanks for doing & making the disk images, was an easy start! > > > > > > You're welcome. It's a fun side project. I think I finally get the > > > enjoyment of retrocomputing with emulated versions of systems one > > > used in one's youth. :-) > > > > > > > Do you remember, ho to set up the system to have four disk drives? > > > > > > > > Cheers & thanks again! > > > > > > I don't think it can support more than 2 drives. Certainly the emulator > > > cannot. I don't know about real hardware. > > > > > > You can split a big drive into partitions when formatting with the > > > diagnostics disk, but I don't think that's what you're asking. > > > > > > Sorry, > > > > > > Arnold > > > > > > > > > -- > > Advice is judged by results, not by intentions. > > Cicero > -- Advice is judged by results, not by intentions. Cicero -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From emu at e-bbes.com Wed Feb 3 20:46:47 2021 From: emu at e-bbes.com (emanuel stiebler) Date: Wed, 3 Feb 2021 05:46:47 -0500 Subject: [TUHS] AT&T 3B1 - Emulation available In-Reply-To: <202102030759.1137x7C2013543@freefriends.org> References: <202102030759.1137x7C2013543@freefriends.org> Message-ID: On 2021-02-03 02:59, arnold at skeeve.com wrote: > I don't think it can support more than 2 drives. Certainly the emulator > cannot. I don't know about real hardware. I remember seeing a 3B1 with 4 hard drives. The guy used an external enclosure (looked like a 5150 PC) to have the other three drives in it. From arnold at skeeve.com Wed Feb 3 21:13:09 2021 From: arnold at skeeve.com (arnold at skeeve.com) Date: Wed, 03 Feb 2021 04:13:09 -0700 Subject: [TUHS] AT&T 3B1 - Emulation available In-Reply-To: References: <202102030759.1137x7C2013543@freefriends.org> Message-ID: <202102031113.113BD9ou006435@freefriends.org> emanuel stiebler wrote: > On 2021-02-03 02:59, arnold at skeeve.com wrote: > > > I don't think it can support more than 2 drives. Certainly the emulator > > cannot. I don't know about real hardware. > > I remember seeing a 3b1, with 4 hard drives. The guy used an external > enclosure (looked like a 5150 PC), to have the other three drives in it. I'll take your word for it. :-) Maybe open an issue on the FreeBee GitHub asking about it. A serious understanding of the hardware is definitely beyond me. Thanks, Arnold From peter at rulingia.com Wed Feb 3 21:27:42 2021 From: peter at rulingia.com (Peter Jeremy) Date: Wed, 3 Feb 2021 22:27:42 +1100 Subject: [TUHS] reviving a bit of WWB In-Reply-To: References: <202009202026.08KKQ2x6137303@tahoe.cs.dartmouth.edu> <20210202234703.GH4227@mcvoy.com> <20210203001900.GI4227@mcvoy.com> Message-ID: On 2021-Feb-02 23:32:29 -0500, M Douglas McIlroy wrote: >> I'm trying to get my head around a 10-bit machine optimised for C. >How about 23-bits?
That was one of the early ESS machines, evidently >optimized to make every bit count. (Maybe a prime wordwidth helps >with hashing?) >Whirlwind II (built in 1952), was 16 bits. It took a long while for that >to become common wisdom. I'm not sure that 16 (or any other 2^n) bits is that obvious up front. Does anyone know why the computer industry wound up standardising on 8-bit bytes? Scientific computers were word-based and the number of bits in a word is more driven by the desired float range/precision. Commercial computers needed to support BCD numbers and typically 6-bit characters. ASCII (when it turned up) was 7 bits and so 8-bit characters wasted ⅛ of the storage. Minis tended to have shorter word sizes to minimise the amount of hardware. -- Peter Jeremy -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 963 bytes Desc: not available URL: From clemc at ccc.com Thu Feb 4 00:43:40 2021 From: clemc at ccc.com (Clem Cole) Date: Wed, 3 Feb 2021 09:43:40 -0500 Subject: [TUHS] for now [COFF] to follow up - why a byte is 8-bits Message-ID: I will ask Warren's indulgence here - as this probably should be continued in COFF, which I have CC'ed, but since it was asked in TUHS I will answer. On Wed, Feb 3, 2021 at 6:28 AM Peter Jeremy via TUHS wrote: > I'm not sure that 16 (or any other 2^n) bits is that obvious up front. > Does anyone know why the computer industry wound up standardising on > 8-bit bytes? > Well, 'standardizing' is a little strong. Check out my QUORA answer: How many bits are there in a byte and What is a bit? Why are 8 bits considered as 1 byte? Why not 7 bit or 9 bit? for my details, but the 8-bit part of the tale is here (cribbed from those posts): The industry followed IBM with the S/360. The story of why a byte is 8 bits for the S/360 is one of my favorites since the number of bits in a byte is defined for each computer architecture.
Simply put, Fred Brooks (who led the IBM System 360 project) overruled the chief hardware designer, Gene Amdahl, and told him to make things powers of two to make it easier on the SW writers. Amdahl famously thought it was a waste of hardware, but Brooks had the final authority. My friend Russ Robeleon, who was the lead HW guy on the 360/50 and later the ASP (*a.k.a.* project X), and who was in the room as it were, tells his yarn this way: You need to remember that the 360 was designed to be IBM's first *ASCII machine*, (not EBCDIC as it ended up - a different story)[1] Amdahl was planning for a 24-bit word size and a 7-bit byte size for cost reasons. Fred kept throwing him out of his office and told him not to come back “until a byte and word are powers of two, as we just don’t know how to program it otherwise.” Brooks would eventually relent: the original pointer on the System 360 became 24 bits, as long as it was stored in a 32-bit “word”.[2] As a result (and to answer your original question), a byte first widely became 8 bits with IBM’s System 360. It should be noted that it still took some time before the 8-bit byte occurred more widely, in almost all systems as we see it today. Many systems like the DEC PDP-6/10 systems used 5, 7-bit bytes packed into a 36-bit word (with a single bit leftover) for a long time. I believe that the real widespread use of the 8-bit byte did not really occur until the rise of the minis such as the PDP-11 and the DG Nova in the late 1960s/early 1970s and eventually the mid-1970s’ microprocessors such as 8080/Z80/6502. Clem [1] While IBM did lead the effort to create ASCII, and the System 360 actually supported ASCII in hardware, because the software was so late, IBM marketing decided not to switch from BCD and instead used EBCDIC (their own code). Most IBM software was released using that code for the System 360/370 over the years.
It was not until IBM released their Series 1 minicomputer in the late 1970s that IBM finally supported an ASCII-based system as the natural code for the software, although it had a lot of support for EBCDIC as they were selling them to interface to their ‘Mainframe’ products. [2] Gordon Bell would later observe that those two choices (32-bit word and 8-bit byte) were what made the IBM System 360 architecture last in the market, as neither would have been ‘fixable’ later. -------------- next part -------------- An HTML attachment was scrubbed... URL: From clemc at ccc.com Thu Feb 4 00:47:45 2021 From: clemc at ccc.com (Clem Cole) Date: Wed, 3 Feb 2021 09:47:45 -0500 Subject: [TUHS] GBACA In-Reply-To: References: <202102030545.1135jTxY017828@freefriends.org> Message-ID: 👍 (another retirement project) ᐧ ᐧ On Wed, Feb 3, 2021 at 2:28 AM william cheswick wrote: > I’ve always thought that a number of these would be fun, and pretty easy > to make into apps. And GBACA might have a little historical interest. > > ches > > > On Feb 3, 2021, at 12:46 AM, arnold at skeeve.com wrote: > > > > It'd be cool if you would give this to Warren. There are even 5620 > > emulators out there; maybe this game could be revived? I remember > > playing it, it was fun. > > > > Arnold > -------------- next part -------------- An HTML attachment was scrubbed... URL: From m.douglas.mcilroy at dartmouth.edu Thu Feb 4 00:55:13 2021 From: m.douglas.mcilroy at dartmouth.edu (M Douglas McIlroy) Date: Wed, 3 Feb 2021 09:55:13 -0500 Subject: [TUHS] 2^n-bit operands (Was reviving a bit of WWB) Message-ID: > Does anyone know why the computer industry wound up standardising on 8-bit bytes? I give the credit to the IBM Stretch, aka 7030, and the Harvest attachment they made for NSA. For autocorrelation on bit streams--a fundamental need in codebreaking--the hardware was bit-addressable. But that was overkill for other supercomputing needs, so there was coarse-grained addressability too. 
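[Editorial aside: a quick sketch of the arithmetic behind this point, my illustration rather than anything from the Stretch or the 360. With power-of-two operand sizes, converting among bit, byte, and word addresses is a cheap shift; a 7-bit byte would force a genuine division.]

```c
#include <assert.h>

/* Illustrative only (not Stretch or S/360 code): address conversion
 * with power-of-two operand sizes compiles to a shift, while a
 * hypothetical 7-bit byte needs a real divide. */
unsigned bit_to_byte_addr(unsigned bitaddr)   { return bitaddr >> 3; }  /* 8-bit bytes  */
unsigned byte_to_word_addr(unsigned byteaddr) { return byteaddr >> 2; } /* 32-bit words */
unsigned bit_to_7bit_byte_addr(unsigned bitaddr) { return bitaddr / 7; } /* no shift possible */
```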
Address conversion among various operand sizes made power of two a natural, lest address conversion entail division. The Stretch project also coined the felicitous word "byte" for the operand size suitable for character sets of the era. With the 360 series, IBM fully committed to multiple operand sizes. DEC followed suit and C naturalized the idea into programmers' working vocabulary. The power-of-2 word length had the side effect of making the smallest reasonable size for floating-point be 32 bits. Someone on the Apollo project once noted that the 36-bit word on previous IBM equipment was just adequate for planning moon orbits; they'd have had to use double-precision if the 700-series machines had been 32-bit. And double-precision took 10 times as long. That observation turned out to be prescient: double has become the norm. Doug -------------- next part -------------- An HTML attachment was scrubbed... URL: From clemc at ccc.com Thu Feb 4 00:58:37 2021 From: clemc at ccc.com (Clem Cole) Date: Wed, 3 Feb 2021 09:58:37 -0500 Subject: [TUHS] AT&T 3B1 - Emulation available In-Reply-To: References: <202102030759.1137x7C2013543@freefriends.org> <202102030858.1138wuqd011051@freefriends.org> Message-ID: On Wed, Feb 3, 2021 at 5:14 AM Ed Bradford wrote: > Hay, Arnold, > > MC 68K was created in 1980 or thereabouts. We talked about 10's of > Megahertz, I think, in those times. > The original X series part was originally unnumbered but a sticker was later set for the lids that said X68000 (I had one on my desk - which was used for the Tektronix Magnolia prototype).[1] The X series ran at 8 Mhz, but the original released (distributed - MC68000) part was binned at 8 and 10 as were the later versions with the updated paging microcode called the MC68010 a year later. When the 68020 was released Moto got the speeds up to 16Mhz and later 20. By the '040 I think they were running at 50MHz [1] I think I still have the draft list of issues in my files. 
There was a halt and catch fire style error you had to be careful about (I've forgotten the details - but if you executed it in supervisor mode, the ucode turned on the address drivers in such a manner that they stopped working after that and the chip was ruined). I only did that once ;-) when we were debugging Magix (the OS for Magnolia) -------------- next part -------------- An HTML attachment was scrubbed... URL: From henry.r.bent at gmail.com Thu Feb 4 01:33:37 2021 From: henry.r.bent at gmail.com (Henry Bent) Date: Wed, 3 Feb 2021 10:33:37 -0500 Subject: [TUHS] AT&T 3B1 - Emulation available In-Reply-To: References: <202102030759.1137x7C2013543@freefriends.org> <202102030858.1138wuqd011051@freefriends.org> Message-ID: On Wed, 3 Feb 2021 at 09:59, Clem Cole wrote: > > > On Wed, Feb 3, 2021 at 5:14 AM Ed Bradford wrote: > >> Hay, Arnold, >> >> MC 68K was created in 1980 or thereabouts. We talked about 10's of >> Megahertz, I think, in those times. >> > The original X series part was originally unnumbered but a sticker was > later set for the lids that said X68000 (I had one on my desk - which was > used for the Tektronix Magnolia prototype).[1] The X series ran at 8 Mhz, > but the original released (distributed - MC68000) part was binned at 8 and > 10 as were the later versions with the updated paging microcode called > the MC68010 a year later. When the 68020 was released Moto got the speeds > up to 16Mhz and later 20. By the '040 I think they were running at 50MHz > > Was the "X" prefix always used for prototypes? I remember having an XC68020 in something - might have been an Sun 3/60, or an early Mac IIcx? -Henry -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From emu at e-bbes.com Thu Feb 4 01:20:34 2021 From: emu at e-bbes.com (emanuel stiebler) Date: Wed, 3 Feb 2021 10:20:34 -0500 Subject: [TUHS] AT&T 3B1 - Emulation available In-Reply-To: References: <202102030759.1137x7C2013543@freefriends.org> <202102030858.1138wuqd011051@freefriends.org> Message-ID: <8df6eebd-05e9-1bf7-953d-5a147db7713a@e-bbes.com> On 2021-02-03 05:13, Ed Bradford wrote: > Hay, Arnold, > > MC 68K was created in 1980 or thereabouts. We talked about 10's > of Megahertz, I think, in those times. I was involved (slightly) with > the Zilog Z80,000 which would have competed with the 68K, NS32K and the > Intel 80386. ... > The 1980's were a very interesting time in Silicon Valley. > > One of the saddest stories I recall is when "Eagle Computer" went > public. The CEO died on the IPO day after he had become a very rich > person when he crashed a Ferrari during a test drive. Eagle Computer > died with the CEO. Did "eagle" not make a 68k emulation in bit slices? Am I mixing them up with another company? From clemc at ccc.com Thu Feb 4 02:53:45 2021 From: clemc at ccc.com (Clem Cole) Date: Wed, 3 Feb 2021 11:53:45 -0500 Subject: [TUHS] AT&T 3B1 - Emulation available In-Reply-To: References: <202102030759.1137x7C2013543@freefriends.org> <202102030858.1138wuqd011051@freefriends.org> Message-ID:
The problem for what would become the 68000 (according to my friend Les Crudele - who was on the 3 or 4 people on the original team) since it was midnight job (*i.e.* not sanctioned and basically 'off book') it did not even have an assigned experimental part number. The MC6809 was the official replacement for the MC6800. It was not even an "A step" part - it literally ran as a test run at the fab, as a favor by a few folks. I wasn't there, but I have been under the impression that Nick, Tom and Les got back a couple of test wafers and had to cut the dice and mount them in the engineering lab. You have to understand the whole project was a reaction some of the engineers had to the MC6809 and made a bet with their boss they could build a PDP-11 on a die. Since DEC had just put CalData out of business when Ken O'Mundro did the CD500, Nick and Les were careful not to directly copy the ISA, just modeled it after them. They had an PDP-11/70 running ISC's Unix port and a bunch of custom fox terminals with the Rand Editor partially running in the terminal. The rest iof history as it were. When the chip worked the first time, the team had a few (??hundred??) dice that they bonded and the >>engineers<< gave them to a couple of their partners to see what they thought. I do not know which firms all got them, but I know some folks in IBM did, and we got 10 of them in Tek Labs Computer Research in late winter '79 IIRC. We were working on a 29000 bitslice system called Tina which eventually died. I got asked by my boss if I wanted to play with these chips we had been given that so far nobody had messed with. The documental was on a lineprinter paper (clearly nroff output BTW). Roger Bates and I started to build the personal computer for ourselves. Paul Blattner wrote an assembler for it, and I hacked on the Ritchie PDP-11 C compiler [as I have said in other posts, the code it generated sucked and even put out PDP-11 code in a few cases - like for FP which I never redid). 
Steve Glaser and I started hacking. This would eventually become Magnolia, which turned into the Tek 4404 Smalltalk system a few years later. I'm not sure when the original chip got the XC series #, but somebody (??Roger??) got a bunch of stickers that we put on the lids, I want to say June/July of 1979, but I might not be remembering everything. Before that, the chips had been marked with some date code by hand with a sharpie or equivalent and were in a clear plastic snap case between anti-static sponge. BTW: As an amusing side note, since we were talking about 'BourneGol' a while back: Roger (being ex-Xerox PARC and recently of the Dorado) was used to BCPL, so he wrote a set of BCPL macros in C similar to Bourne's hack for sh and adb. The CAD tool he wrote to design the boards was written in it and ran originally on V7 with a Tek 4014, then was moved to Magnolia when we had a stable OS and his new graphics display. Clem On Wed, Feb 3, 2021 at 10:33 AM Henry Bent wrote: > On Wed, 3 Feb 2021 at 09:59, Clem Cole wrote: >> >> >> On Wed, Feb 3, 2021 at 5:14 AM Ed Bradford wrote: >>> Hay, Arnold, >>> >>> MC 68K was created in 1980 or thereabouts. We talked about 10's of >>> Megahertz, I think, in those times. >>> >> The original X series part was originally unnumbered but a sticker was >> later set for the lids that said X68000 (I had one on my desk - which was >> used for the Tektronix Magnolia prototype).[1] The X series ran at 8 Mhz, >> but the original released (distributed - MC68000) part was binned at 8 and >> 10 as were the later versions with the updated paging microcode called >> the MC68010 a year later. When the 68020 was released Moto got the speeds >> up to 16Mhz and later 20. By the '040 I think they were running at 50MHz >> >> > Was the "X" prefix always used for prototypes? I remember having an > XC68020 in something - might have been an Sun 3/60, or an early Mac IIcx?
> > -Henry > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From merlyn at geeks.org Thu Feb 4 02:48:55 2021 From: merlyn at geeks.org (Doug McIntyre) Date: Wed, 3 Feb 2021 10:48:55 -0600 Subject: [TUHS] AT&T 3B1 - Emulation available In-Reply-To: <202102030858.1138wuqd011051@freefriends.org> References: <202102030759.1137x7C2013543@freefriends.org> <202102030858.1138wuqd011051@freefriends.org> Message-ID: The 3B1 ran at 10 MHz. Pretty common for the era on MC68010 type hardware. Maybe I'll have to get mine out, fix it up to actually be runnable again, and compare. I know the MFM drive kicked the bucket decades ago, and the hardware MFM emulators just haven't come down to the price level where I'd just go buy one or two. The 3B2s all used the WE 32100 CPU. Seth Morabito wrote an emulator for the 3B2/DMD, and collects all the information about 3B2s here: https://archives.loomcom.com/3b2/ On Wed, Feb 03, 2021 at 01:58:56AM -0700, arnold at skeeve.com wrote: > The 3B1 had an MC 68010. I don't truly remember how fast the real > system ran. The emulated system seems to run more or less the same as > the hardware did, taking my poor memory into account. > > The 5620 used the same processor as the 3B2, IIRC. There are emulators > for both (maybe done by the same guy, I don't remember). I don't know > of emulators for the 3B5 or 3B20. > > Arnold > > Ed Bradford wrote: > > > It seems to me today's 2GHz processors should be able to emulate a 3B (*3B > > or not 3B, that is the question*) at a performance that far exceeds an > > actual 3B. Is the instruction set definition and architecture of a 3B > > available anywhere? > > > > Just wondering. I did such emulations for 68K machines and Cray machines. > > > > Ed Bradford ex-BTL, ex Silcon Valley, and ex IBM retiree. > > > > > > On Wed, Feb 3, 2021 at 2:00 AM wrote: > > > > > emanuel stiebler wrote: > > > > > > > On 2021-01-29 05:49, Arnold Robbins wrote: > > > > > Hello All.
> > > > > > > > > > I have made a pre-installed disk image available with a fair amount > > > > > of software, see https://www.skeeve.com/3b1/. > > > > > > > > Thanks for doing & making the disk images, was an easy start! > > > > > > You're welcome. It's a fun side project. I think I finally get the > > > enjoyment of retrocomputing with emulated versions of systems one > > > used in one's youth. :-) > > > > > > > Do you remember, ho to set up the system to have four disk drives? > > > > > > > > Cheers & thanks again! > > > > > > I don't think it can support more than 2 drives. Certainly the emulator > > > cannot. I don't know about real hardware. > > > > > > You can split a big drive into partitions when formatting with the > > > diagnostics disk, but I don't think that's what you're asking. > > > > > > Sorry, > > > > > > Arnold > > > > > > > > > -- > > Advice is judged by results, not by intentions. > > Cicero From cowan at ccil.org Thu Feb 4 06:07:46 2021 From: cowan at ccil.org (John Cowan) Date: Wed, 3 Feb 2021 15:07:46 -0500 Subject: [TUHS] 2^n-bit operands (Was reviving a bit of WWB) In-Reply-To: References: Message-ID: On Wed, Feb 3, 2021 at 9:55 AM M Douglas McIlroy < m.douglas.mcilroy at dartmouth.edu> wrote: With the 360 series, IBM fully committed to multiple operand sizes. DEC > followed suit and C naturalized the idea into programmers' working > vocabulary. > The steady expansion of character set sizes also had a great deal to do with it. The various 6-bit character sets were fine as long as the industry was okay with English-only SHOUTING. When that was outgrown, 7-bit ASCII and 8-bit EBCDIC on multiple-of-6 word sizes (as were found on the big-endian DEC machines up to the PDP-10) were annoying to use. On the 12-bit PDP-8, where I cut my teeth, ASCII was stored as HHHHAAAAAAAA followed by LLLLBBBBBBBB, where the As represent the first character, the Bs the second, and the Hs and Ls the third. 
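[Editorial aside: the layout just described can be sketched like this. This is my illustration of the description, not DEC code; the 12-bit words are held in ordinary unsigned ints.]

```c
#include <assert.h>

/* Three 8-bit characters packed into two 12-bit PDP-8 words, laid out
 * HHHHAAAAAAAA LLLLBBBBBBBB: the high nibble of the third character
 * rides on top of the first word, the low nibble on the second. */
void pack3(unsigned a, unsigned b, unsigned c, unsigned *w1, unsigned *w2)
{
    *w1 = (((c >> 4) & 017) << 8) | (a & 0377);
    *w2 = ((c & 017) << 8) | (b & 0377);
}

void unpack3(unsigned w1, unsigned w2, unsigned *a, unsigned *b, unsigned *c)
{
    *a = w1 & 0377;
    *b = w2 & 0377;
    *c = (((w1 >> 8) & 017) << 4) | ((w2 >> 8) & 017);
}
```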
Padding was done with NUL, which meant that, for example, the TTY driver simply filled its read buffer with 0000AAAAAAAA 0000BBBBBBBB, which made rubout handling much simpler. Textual programs reading from it would already be set up to ignore NULs. On the 36-bit PDP-10, things were better: the sign bit was mostly ignored and five 7-bit ASCII characters were packed into each word, again with NUL padding. (Line editors turned on the sign bit to indicate that this word held an explicit ASCII line number.) John Cowan http://vrici.lojban.org/~cowan cowan at ccil.org Original line from The Warrior's Apprentice by Lois McMaster Bujold: "Only on Barrayar would pulling a loaded needler start a stampede toward one." English-to-Russian-to-English mangling thereof: "Only on Barrayar you risk to lose support instead of finding it when you threat with the charged weapon." -------------- next part -------------- An HTML attachment was scrubbed... URL: From dave at horsfall.org Thu Feb 4 06:09:04 2021 From: dave at horsfall.org (Dave Horsfall) Date: Thu, 4 Feb 2021 07:09:04 +1100 (EST) Subject: [TUHS] reviving a bit of WWB In-Reply-To: References: <202009202026.08KKQ2x6137303@tahoe.cs.dartmouth.edu> <20210202234703.GH4227@mcvoy.com> <20210203001900.GI4227@mcvoy.com> Message-ID: On Wed, 3 Feb 2021, Peter Jeremy wrote: > I'm not sure that 16 (or any other 2^n) bits is that obvious up front. > Does anyone know why the computer industry wound up standardising on > 8-bit bytes? Best reason I can think of is System/360 with 8-bit EBCDIC (Ugh! Who said that "J" should follow "I"?). I'm told that you could coerce it into using ASCII, although I've never seen it. > Scientific computers were word-based and the number of bits in a word is > more driven by the desired float range/precision. Commercial computers > needed to support BCD numbers and typically 6-bit characters. ASCII > (when it turned up) was 7 bits and so 8-bit characters wasted ⅛ of the > storage. 
Minis tended to have shorter word sizes to minimise the amount > of hardware. Why would you want to have a 7-bit symbol? Powers of two seem to be natural on a binary machine (although there is a running joke that CDC boxes had 7-1/2 bit bytes...). I guess the real question is why did we move to binary machines at all; were there ever any ternary machines? -- Dave From nikke.karlsson at gmail.com Thu Feb 4 06:13:05 2021 From: nikke.karlsson at gmail.com (Niklas Karlsson) Date: Wed, 3 Feb 2021 21:13:05 +0100 Subject: [TUHS] reviving a bit of WWB In-Reply-To: References: <202009202026.08KKQ2x6137303@tahoe.cs.dartmouth.edu> <20210202234703.GH4227@mcvoy.com> <20210203001900.GI4227@mcvoy.com> Message-ID: According to Wikipedia: The first modern, electronic ternary computer, Setun, was built in 1958 in the Soviet Union at the Moscow State University by Nikolay Brusentsov, and it had notable advantages over the binary computers that eventually replaced it, such as lower electricity consumption and lower production cost. In 1970 Brusentsov built an enhanced version of the computer, which he called Setun-70. In the United States, the ternary computing emulator Ternac working on a binary machine was developed in 1973. The ternary computer QTC-1 was developed in Canada. Doesn't seem like they caught on otherwise, though. Niklas On Wed, 3 Feb 2021 at 21:10, Dave Horsfall wrote: > On Wed, 3 Feb 2021, Peter Jeremy wrote: > > > I'm not sure that 16 (or any other 2^n) bits is that obvious up front. > > Does anyone know why the computer industry wound up standardising on > > 8-bit bytes? > > Best reason I can think of is System/360 with 8-bit EBCDIC (Ugh! Who said > that "J" should follow "I"?). I'm told that you could coerce it into > using ASCII, although I've never seen it. > > > Scientific computers were word-based and the number of bits in a word is > > more driven by the desired float range/precision. Commercial computers > > needed to support BCD numbers and typically 6-bit characters. ASCII > > (when it turned up) was 7 bits and so 8-bit characters wasted ⅛ of the > > storage.
Commercial computers > > needed to support BCD numbers and typically 6-bit characters. ASCII > > (when it turned up) was 7 bits and so 8-bit characters wasted ⅛ of the > > storage. Minis tended to have shorter word sizes to minimise the amount > > of hardware. > > Why would you want to have a 7-bit symbol? Powers of two seem to be > natural on a binary machine (although there is a running joke that CDC > boxes has 7-1/2 bit bytes... > > I guess the real question is why did we move to binary machines at all; > were there ever any ternary machines? > > -- Dave -------------- next part -------------- An HTML attachment was scrubbed... URL: From lars at nocrew.org Thu Feb 4 06:51:51 2021 From: lars at nocrew.org (Lars Brinkhoff) Date: Wed, 03 Feb 2021 20:51:51 +0000 Subject: [TUHS] 2^n-bit operands (Was reviving a bit of WWB) In-Reply-To: (John Cowan's message of "Wed, 3 Feb 2021 15:07:46 -0500") References: Message-ID: <7wk0rolvrs.fsf@junk.nocrew.org> John Cowan wrote: > On the 36-bit PDP-10, things were better: the sign bit was mostly ignored > and five 7-bit ASCII characters were packed into each word, again with NUL > padding. (Line editors turned on the sign bit to indicate that this word > held an explicit ASCII line number.) It was not the sign bit but the least significant bit. The ILDB/IDPB byte instructions prefer it that way. From rdm at cfcl.com Thu Feb 4 07:10:02 2021 From: rdm at cfcl.com (Rich Morin) Date: Wed, 3 Feb 2021 13:10:02 -0800 Subject: [TUHS] 2^n-bit operands (Was reviving a bit of WWB) In-Reply-To: References: Message-ID: <3268852D-6845-4F4B-8EFC-3A1D78A11059@cfcl.com> > On Feb 3, 2021, at 12:07, John Cowan wrote: > > On the 36-bit PDP-10, things were better: the sign bit was mostly ignored and five 7-bit ASCII characters were packed into each word, again with NUL padding. (Line editors turned on the sign bit to indicate that this word held an explicit ASCII line number.) 
The PDP-7, 9, and 15 used 18-bit words, but used the same "5/7 IOPS ASCII" packing strategy. That is, five 7-bit ASCII characters were packed into a word pair. Unfortunately, they didn't have the convenient character manipulation instructions found on the PDP-10, so programmers had to do shifts, masks, etc. Grumble. -r From dave at horsfall.org Thu Feb 4 08:19:04 2021 From: dave at horsfall.org (Dave Horsfall) Date: Thu, 4 Feb 2021 09:19:04 +1100 (EST) Subject: [TUHS] reviving a bit of WWB In-Reply-To: References: <202009190151.08J1pYnb066792@tahoe.cs.dartmouth.edu> <202009201842.08KIgn2f022401@freefriends.org> <04211470-AD63-452A-A0BB-6A7A6FD85AAE@gmail.com> <202009202026.08KKQ2x6137303@tahoe.cs.dartmouth.edu> <20210202234703.GH4227@mcvoy.com> <20210203001900.GI4227@mcvoy.com> Message-ID: On Tue, 2 Feb 2021, M Douglas McIlroy wrote: > > I'm trying to get my head around a 10-bit machine optimised for C. > > How about 23-bits? That was one of the early ESS machines, evidently > optimized to make every bit count. (Maybe a prime wordwidth helps with > hashing?) 23 bits? I think I'm about to throw up... Yeah, being prime I suppose it would help with hashing (and other crypto stuff). > Whirlwind II (built in 1952), was 16 bits. It took a long while for that > to become common wisdom. Now that goes back... -- Dave From m.douglas.mcilroy at dartmouth.edu Thu Feb 4 08:55:08 2021 From: m.douglas.mcilroy at dartmouth.edu (M Douglas McIlroy) Date: Wed, 3 Feb 2021 17:55:08 -0500 Subject: [TUHS] reviving a bit of WWB In-Reply-To: References: <202009190151.08J1pYnb066792@tahoe.cs.dartmouth.edu> <202009201842.08KIgn2f022401@freefriends.org> <04211470-AD63-452A-A0BB-6A7A6FD85AAE@gmail.com> <202009202026.08KKQ2x6137303@tahoe.cs.dartmouth.edu> <20210202234703.GH4227@mcvoy.com> <20210203001900.GI4227@mcvoy.com> Message-ID: >> Whirlwind II (built in 1952), was 16 bits. It took a long while for that >> to become common wisdom. > Now that goes back... Yup. Before my time. 
I didn't get to use it until 1954. Doug On Wed, Feb 3, 2021 at 5:19 PM Dave Horsfall wrote: > On Tue, 2 Feb 2021, M Douglas McIlroy wrote: > > > I'm trying to get my head around a 10-bit machine optimised for C. > > > > How about 23-bits? That was one of the early ESS machines, evidently > > optimized to make every bit count. (Maybe a prime wordwidth helps with > > hashing?) > > 23 bits? I think I'm about to throw up... Yeah, being prime I suppose it > would help with hashing (and other crypto stuff). > > > Whirlwind II (built in 1952), was 16 bits. It took a long while for that > > to become common wisdom. > > Now that goes back... > > -- Dave > -------------- next part -------------- An HTML attachment was scrubbed... URL: From pugs at ieee.org Thu Feb 4 09:46:20 2021 From: pugs at ieee.org (Tom Lyon) Date: Wed, 3 Feb 2021 15:46:20 -0800 Subject: [TUHS] reviving a bit of WWB In-Reply-To: References: <202009202026.08KKQ2x6137303@tahoe.cs.dartmouth.edu> <20210202234703.GH4227@mcvoy.com> <20210203001900.GI4227@mcvoy.com> Message-ID: System/360s, or at least 370s, could do ASCII perfectly well. When we started UNIX on VM/370, it was clear to us that we wanted to run with ASCII. But some otherwise intelligent people told us that it *just couldn't be done* - the instructions depended on EBCDIC. But I think there was only 1 machine instruction with any hint of EBCDIC - and it was an instruction that no-one could imagine being used by a compiler. Of course, plenty of EBCDIC/ASCII conversions went on in drivers, etc, but that was easy. On Wed, Feb 3, 2021 at 12:09 PM Dave Horsfall wrote: > On Wed, 3 Feb 2021, Peter Jeremy wrote: > > > I'm not sure that 16 (or any other 2^n) bits is that obvious up front. > > Does anyone know why the computer industry wound up standardising on > > 8-bit bytes? > > Best reason I can think of is System/360 with 8-bit EBCDIC (Ugh! Who said > that "J" should follow "I"?).
I'm told that you could coerce it into > using ASCII, although I've never seen it. > > > Scientific computers were word-based and the number of bits in a word is > > more driven by the desired float range/precision. Commercial computers > > needed to support BCD numbers and typically 6-bit characters. ASCII > > (when it turned up) was 7 bits and so 8-bit characters wasted ⅛ of the > > storage. Minis tended to have shorter word sizes to minimise the amount > > of hardware. > > Why would you want to have a 7-bit symbol? Powers of two seem to be > natural on a binary machine (although there is a running joke that CDC > boxes has 7-1/2 bit bytes... > > I guess the real question is why did we move to binary machines at all; > were there ever any ternary machines? > > -- Dave -- - Tom -------------- next part -------------- An HTML attachment was scrubbed... URL: From gnu at toad.com Thu Feb 4 10:41:45 2021 From: gnu at toad.com (John Gilmore) Date: Wed, 03 Feb 2021 16:41:45 -0800 Subject: [TUHS] 68k prototypes & microcode In-Reply-To: References: <202102030759.1137x7C2013543@freefriends.org> <202102030858.1138wuqd011051@freefriends.org> Message-ID: <27567.1612399305@hop.toad.com> Clem Cole wrote: > > MC 68K was created in 1980 or thereabouts. Wikimedia Commons has a pic of a 1979 XC68000L: https://commons.wikimedia.org/wiki/File:XC68000.agr.jpg https://en.wikipedia.org/wiki/File:XC68000.agr.jpg After a USENET posting pointed me at them, I browsed the Sunnyvale Patent Library to bring home the patents for the Motorola 68000. They include a full listing of the entire microcode! I ended up copying it, taping the sheets together to reconstitute Nick Tredennick's large-format "hardware flowcharts", and hanging them in the hallway near my office at Sun. Fascinating! I never saw X68000 parts; Sun started in 1981, so Moto had production parts by then. 
But Sun did get early prototypes of the 68010, which we were very happy for, since we and our customers were running a swapping Unisoft UNIX because the 68000 couldn't do paging and thus couldn't run the BSD UNIX that we were porting from the Vax. Later, I was part of the Sun bringup team using the XC68020. We built a big spider-like daughterboard adapter that would let it be plugged into a 64-pin 68010 socket, so we could debug the 68020 in a Sun-2 CPU board while building 32-bit-wide boards for the Sun-3 bringup. We had it successfully running UNIX within a day of receiving it! (We later heard that our Moto rep was intending to give that precious early part to another customer, but decided during their meeting with us to give it to us, because we were so ready to get it running.) When the 68000 was announced, it was obviously head-and-shoulders better than the other clunky 8-bit and 16-bit systems, with a clean 32-bit architecture and a large address space. It seems like the designers of the other chips (e.g. the 8088) had never actually worked with real computers (mainframes and minicomputers) and kept not-learning from computing history. Some of my early experience was in APL implementation on the IBM 360 series. I knew the 68000 would be a great APL host, since its autoincrement addressing was perfect for implementing vector operations. In the process of designing an APL for it (which was never built), I wrote up a series of short suggestions to Motorola on how to improve the design. This was published in Computer Architecture News. For the 68010 they actually did one of the ideas, the "loop mode" that would detect a 1-instruction backward decrement-and-branch loop, and stop continually re-fetching the two instructions. This made memory-to-memory or register-vs-memory instruction loops run at almost the speed of memory, which was a big improvement for bcopy, bzero, add-up-a-vector-of-integers, etc. 
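A rough C sketch (an illustration added here, not part of the original message) of the kind of loop in question: a 68000 compiler turns the body below into the two-instruction pair MOVE.B (A0)+,(A1)+ / DBRA, exactly the backward decrement-and-branch shape that 68010 loop mode detects and stops re-fetching.

```c
#include <stddef.h>

/* bcopy-style byte copy. On the 68000 family the loop body compiles to
 * MOVE.B (A0)+,(A1)+ followed by a DBRA decrement-and-branch; 68010
 * loop mode recognizes that one-instruction backward loop and keeps it
 * in the execution unit instead of re-fetching both instructions from
 * memory on every iteration. */
void byte_copy(unsigned char *dst, const unsigned char *src, size_t n)
{
    while (n--)
        *dst++ = *src++;    /* autoincrement addressing on both operands */
}
```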
I'll append a USENET posting about the 68000 patents, followed by my addendum after visiting the Patent office. John >From decwrl!decvax!harpo!npoiv!npois!houxm!houxa!houxk!tdl (T.LOVETT) Tue Mar 15 16:55:28 1983 Subject: 68000: 16 bits. With references Newsgroups: net.micro.68k With due respect to Henry Spencer I feel that I must correct some of his statements regarding the 68000. He is correct in saying that the 68000 is basically 16 bits wide; however, his explanation of the segmented bus is incorrect. The datapath of the 68000 is divided into three pieces, each of which has two busses, address and data, running through it. Six busses total. There are muxes which can be switched so that all address busses are connected and all data busses are connected. The three sections of the datapath are the data section (includes low 16 bits of all data registers and ALU), the "low" section (contains the low 16 bits of address registers and the low half of the Address Adder(AAU)), and the "high" section (contains high 16 bits of all address and data registers and the upper half of the AAU). Theoretically they could do 6 16 bit transfers simultaneously, but in looking through the microcode I don't remember seeing more than three transfers at a time. The "low" and "high" sections can be cascaded to provide a 32 bit arithmetic unit for address calculations. 32 bit data calculations must be done in two passes through the ALU. For the masochists out there, you can learn more than you ever wanted to know about the 68000 by reading Motorola's patents on it. They are available for some nominal fee (~ one dollar) from the Office of Patents and Trademarks in Arlington. The relevant patents are: 1 - #4,307,445 "Microprogrammed Control Apparatus Having a Two Level Control Store for Data Processor", Tredennick, et al. First design of 68000 which was scrapped? 2 - #4,296,469 "Execution Unit for Data Processor using Segmented Bus structure", Gunter, et al. 
All about the 16 bit data path 3 - #4,312,034 "ALU and Condition Code Control Unit for Data Processor", Gunter, et al. Boring. 4 - #4,325,121 "Two-Level Control Store for Microprogrammed Data Processor", Gunter et al. Bonanza! Full of block diagrams and everything you ever wanted to know. Includes complete listing of microcode with Tredennick's "hardware flowcharts". Hope this clears things up. Tom Lovett BTL Holmdel harpo!houxk!tdl 201-949-0056 My [gnu] notes on additional 68000 patents: Pat # Appl # Filed date Issued date Inventors 4,338,661 041,201 May 21, 1979 Jul 6, 1982 Tredennick & Gunter Conditional Branch Unit for Microprogrammed Data Processor 4,342,078 041,202 May 21, 1979 Jul 27, 1982 Tredennick & Gunter Instruction Register Sequence Decoder for Microprogrammed Data Processor and Method 4,312,034 041,203 May 21, 1979 Jan 19, 1982 Gunter, Hobbs, Spak, Tredennick ALU and Condition Code Control Unit for Data Processor 4,325,121 041,135 May 21, 1979 Apr 13, 1982 Gunter, Tredennick Two-Level Control Store for Microprogrammed Data Processor Bonanza! Full of block diagrams and everything you ever wanted to know. Includes complete listing of microcode with Tredennick's "hardware flowcharts". 4,296,469 961,798 Nov 17, 1978 Oct 20, 1981 Gunter, Tredennick, McAlister Execution Unit for Data Processor Using Segmented Bus Structure All about the 16 bit data path 4,348,722 136,845 Apr 3, 1980 Sep 7, 1982 Gunter, Crudele, Zolnowsky, Mothersole Bus Error Recognition for Microprogrammed Data Processor 4,349,873 136,593 Apr 2, 1980 Sep 14, 1982 Gunter, Zolnowsky, Crudele Microprocessor Interrupt Processing 4,524,415 447,721 Dec 7, 1982 Jun 18, 1985 Mills, Moyer, MacGregor, Zolnowsky Virtual Machine Data Processor 68010 changes to 68000 4,348,741 169,558 Jul 17, 1980 Sep 7, 1982 McAlister, Gunter, Spak, Schriber Priority Encoder Used to decode the bit masks for MOVEM. 
XXXXXXXXX 446,801 Dec 7, 1982 Crudele, Zolnowsky, Moyer, MacGregor Virtual Memory Data Processor XXXXXXXXX 447,600 Dec 7, 1982 MacGregor, Moyer, Mills Jr, Zolnowsky Data Processor Version Validation About how bus errors store a CPU mask version # to prevent their being restarted on a different CPU mask in a multiprocessor system XXXXXXXXX 961,796 Nov 17, 1978 Tredennick et al Microprogrammed Control Apparatus for Data Processor (continued into 4,325,121, probably never issued) XXXXXXXXX 961,797 Nov 17, 1978 McAlister et al Multi-port RAM Structure for Data Processor Registers 4,307,445 961,796 Nov 17, 1978 Tredennick, et al Microprogrammed Control Apparatus Having a Two Level Control Store for Data Processor First design of 68000 which was scrapped? From aek at bitsavers.org Thu Feb 4 10:52:12 2021 From: aek at bitsavers.org (Al Kossow) Date: Wed, 3 Feb 2021 16:52:12 -0800 Subject: [TUHS] 68k prototypes & microcode In-Reply-To: <27567.1612399305@hop.toad.com> References: <202102030759.1137x7C2013543@freefriends.org> <202102030858.1138wuqd011051@freefriends.org> <27567.1612399305@hop.toad.com> Message-ID: On 2/3/21 4:41 PM, John Gilmore wrote: > Clem Cole wrote: >>> MC 68K was created in 1980 or thereabouts. > > Wikimedia Commons has a pic of a 1979 XC68000L: > > https://commons.wikimedia.org/wiki/File:XC68000.agr.jpg > https://en.wikipedia.org/wiki/File:XC68000.agr.jpg > > After a USENET posting pointed me at them, I browsed the Sunnyvale > Patent Library to bring home the patents for the Motorola 68000. They > include a full listing of the entire microcode! I ended up copying it, > taping the sheets together to reconstitute Nick Tredennick's > large-format "hardware flowcharts", and hanging them in the hallway near > my office at Sun. Fascinating! 
Oliver Galibert's work on reverse-engineering the 68000 https://og.kervella.org/m68k/ http://gendev.spritesmind.net/forum/viewtopic.php?t=3023 From mah at mhorton.net Thu Feb 4 11:12:04 2021 From: mah at mhorton.net (Mary Ann Horton) Date: Wed, 3 Feb 2021 17:12:04 -0800 Subject: [TUHS] Tiny VT100 running 2.11BSD Message-ID: This is fun to watch! https://hackaday.com/2021/01/18/a-miniature-vt102-running-a-miniature-pdp11/ Mary Ann From clemc at ccc.com Thu Feb 4 11:14:15 2021 From: clemc at ccc.com (Clem Cole) Date: Wed, 3 Feb 2021 20:14:15 -0500 Subject: [TUHS] 68k prototypes & microcode In-Reply-To: <27567.1612399305@hop.toad.com> References: <202102030759.1137x7C2013543@freefriends.org> <202102030858.1138wuqd011051@freefriends.org> <27567.1612399305@hop.toad.com> Message-ID: Hey John. Bad cut/paste. I did not say 1980. That was from Ed's msg. I said we got what would become X-series parts in winter '79. As I understand it from Les: he, Nick, and Tom built the TTL prototype in early '78, with Les and Tom turning it into Si later that year while Nick was writing ucode and all of them writing AVTs which they ran against the TTL system. Les says Tom did a masterful job of keeping management out of their hair such that they stayed under the radar. From what I understand there was so much focus on countering the Z80 with the 6809 that management thought they were just experimenting with a 16-bit 6809. But what they were doing was an AD experiment. The fact that it worked was amazing. On Wed, Feb 3, 2021 at 7:41 PM John Gilmore wrote: > Clem Cole wrote: > > > MC 68K was created in 1980 or thereabouts. > > Wikimedia Commons has a pic of a 1979 XC68000L: > > https://commons.wikimedia.org/wiki/File:XC68000.agr.jpg > https://en.wikipedia.org/wiki/File:XC68000.agr.jpg > > After a USENET posting pointed me at them, I browsed the Sunnyvale > Patent Library to bring home the patents for the Motorola 68000. They > include a full listing of the entire microcode!
> [... remainder of quoted message trimmed ...]
-- Sent from a handheld expect more typos than usual -------------- next part -------------- An HTML attachment was scrubbed...
URL: From krewat at kilonet.net Thu Feb 4 11:10:58 2021 From: krewat at kilonet.net (Arthur Krewat) Date: Wed, 3 Feb 2021 20:10:58 -0500 Subject: [TUHS] 68k prototypes & microcode In-Reply-To: <27567.1612399305@hop.toad.com> References: <202102030759.1137x7C2013543@freefriends.org> <202102030858.1138wuqd011051@freefriends.org> <27567.1612399305@hop.toad.com> Message-ID: On 2/3/2021 7:41 PM, John Gilmore wrote: > When the 68000 was announced, it was obviously head-and-shoulders better > than the other clunky 8-bit and 16-bit systems, with a clean 32-bit > architecture and a large address space. The 68K always reminded me of the VAX. art k. From clemc at ccc.com Thu Feb 4 11:20:05 2021 From: clemc at ccc.com (Clem Cole) Date: Wed, 3 Feb 2021 20:20:05 -0500 Subject: [TUHS] 68k prototypes & microcode In-Reply-To: References: <202102030759.1137x7C2013543@freefriends.org> <202102030858.1138wuqd011051@freefriends.org> <27567.1612399305@hop.toad.com> Message-ID: Bad iPhone autocorrect sigh... They all ran the AVTs on the TTL prototype. On Wed, Feb 3, 2021 at 8:14 PM Clem Cole wrote: > Hey John. Bad cut/paste. I did not say 1980. That was from Eds msg. I > said we got what would become X series parts in winter 79. As I > understand it from Les; he, Nick and Tom built try the TTL prototype in > Early 78 with Les and Tom turning it into Si later that year while Nick was > writing ucode and all of the writing AVTs we high ran against then TTL > system. > > Les says Tom did a masterful job of keeping management out of their hair > such they they stayed under the radar. > > From what I understand there was so much focus on countering the Z80 with > the 6809 that management thought they were just experimenting with a more > 16 bit 6809. But what they were doing was an AD experiment. The fact > that it worked was amazing. > > On Wed, Feb 3, 2021 at 7:41 PM John Gilmore wrote: > >> Clem Cole wrote: >> > > MC 68K was created in 1980 or thereabouts. 
>> [... remainder of quoted exchange trimmed ...]
-- Sent from a handheld expect more typos than usual -------------- next part -------------- An HTML attachment was scrubbed... URL: From lm at mcvoy.com Thu Feb 4 11:33:56 2021 From: lm at mcvoy.com (Larry McVoy) Date: Wed, 3 Feb 2021 17:33:56 -0800 Subject: [TUHS] 68k prototypes & microcode In-Reply-To: References: <202102030759.1137x7C2013543@freefriends.org> <202102030858.1138wuqd011051@freefriends.org> <27567.1612399305@hop.toad.com> Message-ID: <20210204013356.GA16541@mcvoy.com> On Wed, Feb 03, 2021 at 08:10:58PM -0500, Arthur Krewat wrote: > On 2/3/2021 7:41 PM, John Gilmore wrote: > >When the 68000 was announced, it was obviously head-and-shoulders better > >than the other clunky 8-bit and 16-bit systems, with a clean 32-bit > >architecture and a large address space. > The 68K always reminded me of the VAX. I'm not sure if that is a compliment or not. The NS320XX always reminded me more of the PDP-11 (which is by *far* my favorite assembler, so uniform, I had a TA that could read the octal dump of a PDP-11 like it was C).
I wasn't that good but I could sort of see what he was seeing, and I never saw that in the VAX. 68K was closer, but I felt like the NS320xx was closer yet. Pity they couldn't produce bug-free chips. Someone mentioned the Z80000; I stopped at the Z80, so I don't know if that was also a pleasant ISA. The x86 stuff is about as far away from PDP-11 as you can get. Required to know it, but so unpleasant. I have to admit that I haven't looked at ARM assembler; the M1 is making me rethink that. Anyone have an opinion on where ARM lies in the pleasant to unpleasant scale? --lm who misses comp.arch back when CPU people hung out there From clemc at ccc.com Thu Feb 4 11:35:59 2021 From: clemc at ccc.com (Clem Cole) Date: Wed, 3 Feb 2021 20:35:59 -0500 Subject: [TUHS] 68k prototypes & microcode In-Reply-To: References: <202102030759.1137x7C2013543@freefriends.org> <202102030858.1138wuqd011051@freefriends.org> <27567.1612399305@hop.toad.com> Message-ID: No, that was the 16032 from National Semi. I believe at least Nick and Les had been building oil-drilling control systems using PDP-11s and custom TTL for Schlumberger before they came to Moto. They were definitely PDP-11 fans. But they had used 360 systems at UT in college, so the idea of a 24-bit pointer stored in a 32-bit word they took from it. Remember, the chip was a 16-bit chip inside. It had a 16-bit barrel shifter, and all 32-bit operations took 2 ticks. And int was naturally 16 bits, which is what it was in my compiler. I think it was Jack Test's compiler from MIT that was the first ILP32 one for the 68k. I used to commute to work at Stellar with Les and he told me many of the stories, btw. One of them I remember was that they were worried about the barrel shifter because it was one of the pieces that was different between the TTL prototype and the MOS implementation, and it was the largest pattern on the die. It was one of the few parts of the chip that had full SPICE simulation.
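An illustrative C sketch (added here, not from the mail) of what a 16-bit chip inside means for 32-bit arithmetic: the add is done in two passes through a 16-bit ALU, low halves first, with the carry from the low pass feeding the high pass, hence the extra tick.

```c
#include <stdint.h>

/* Model of a 32-bit add on a 16-bit ALU, as on the 68000: pass 1 adds
 * the low halves, pass 2 adds the high halves plus the carry out of
 * pass 1. The result matches an ordinary 32-bit wrapping add. */
uint32_t add32_two_passes(uint32_t a, uint32_t b)
{
    uint32_t lo = (a & 0xFFFFu) + (b & 0xFFFFu);              /* pass 1: low 16 bits */
    uint16_t carry = (uint16_t)(lo >> 16);                    /* carry out of the low pass */
    uint16_t hi = (uint16_t)((a >> 16) + (b >> 16) + carry);  /* pass 2: high 16 bits */
    return ((uint32_t)hi << 16) | (lo & 0xFFFFu);
}
```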
Clem On Wed, Feb 3, 2021 at 8:18 PM Arthur Krewat wrote: > On 2/3/2021 7:41 PM, John Gilmore wrote: > > When the 68000 was announced, it was obviously head-and-shoulders better > > than the other clunky 8-bit and 16-bit systems, with a clean 32-bit > > architecture and a large address space. > The 68K always reminded me of the VAX. > > art k. > > -- Sent from a handheld expect more typos than usual -------------- next part -------------- An HTML attachment was scrubbed... URL: From aek at bitsavers.org Thu Feb 4 11:47:38 2021 From: aek at bitsavers.org (Al Kossow) Date: Wed, 3 Feb 2021 17:47:38 -0800 Subject: [TUHS] 68k prototypes & microcode In-Reply-To: <20210204013356.GA16541@mcvoy.com> References: <202102030759.1137x7C2013543@freefriends.org> <202102030858.1138wuqd011051@freefriends.org> <27567.1612399305@hop.toad.com> <20210204013356.GA16541@mcvoy.com> Message-ID: On 2/3/21 5:33 PM, Larry McVoy wrote: > Anyone have an opinion on where ARM lies in the pleasant > to unpleasant scale? Don't look at how they implemented bit manipulation From aek at bitsavers.org Thu Feb 4 11:57:12 2021 From: aek at bitsavers.org (Al Kossow) Date: Wed, 3 Feb 2021 17:57:12 -0800 Subject: [TUHS] 68k prototypes & microcode In-Reply-To: References: <202102030759.1137x7C2013543@freefriends.org> <202102030858.1138wuqd011051@freefriends.org> <27567.1612399305@hop.toad.com> <20210204013356.GA16541@mcvoy.com> Message-ID: On 2/3/21 5:47 PM, Al Kossow wrote: > On 2/3/21 5:33 PM, Larry McVoy wrote: >> Anyone have an opinion on where ARM lies in the pleasant >> to unpleasant scale? 
> > Don't look at how they implemented bit manipulation > https://spin.atomicobject.com/2013/02/08/bit-banding/ From dave at horsfall.org Thu Feb 4 12:18:38 2021 From: dave at horsfall.org (Dave Horsfall) Date: Thu, 4 Feb 2021 13:18:38 +1100 (EST) Subject: [TUHS] 68k prototypes & microcode In-Reply-To: References: <202102030759.1137x7C2013543@freefriends.org> <202102030858.1138wuqd011051@freefriends.org> <27567.1612399305@hop.toad.com> Message-ID: On Wed, 3 Feb 2021, Arthur Krewat wrote: > The 68K always reminded me of the VAX. Pretty much the same instruction set, but on steroids :-) -- Dave, wondering whether anyone has ever used every VAX instruction From dave at horsfall.org Thu Feb 4 15:43:55 2021 From: dave at horsfall.org (Dave Horsfall) Date: Thu, 4 Feb 2021 16:43:55 +1100 (EST) Subject: [TUHS] FreeBSD behind the times? (was: Favorite unix design principles?) In-Reply-To: <20210131022500.GU4227@mcvoy.com> References: <202101301950.10UJoWeA456408@darkstar.fourwinds.com> <20210130222854.GN4227@mcvoy.com> <20210130231119.GA33905@eureka.lemis.com> <20210131022500.GU4227@mcvoy.com> Message-ID: On Sat, 30 Jan 2021, Larry McVoy wrote: [ Usual insightful... insights ] > If you like ZFS you don't understand operating systems design. I do. Indeed... > Jeff Bonwick was a stats student at Stanford when he took my OS class, I > convinced him to come to Sun. Bill Moore worked for me. That's the two > main ZFS guys and I thought I had taught them well but they let me down. { ... ] There's no way that I'd use ZFS; lose a block in an ordinary file, well, you now have a hole (but not in the file-system sense); lose a block in a compressed system, well... Or perhaps I'm becoming conservative in my old age; I remember when I once rewrote utilities that when writing a zero block merely did a seek instead (or something like that; you had to remember to actually write out the last block). I wouldn't try it these days, as Unix file systems were simple back then. [ ... 
] > Let's try it this way. Get back to me when you can show me 40 people > who have installed FreeBSD on their own, with no help. In the same > time, I can show you 40,000 people who have installed Linux on their > own, with no help. Probably 400,000. Well, I did (but without ZFS) on several boxes, with zero help. Having had SunOS experience (4.4 was the best) helped :-) I can't stand Penguin/OS; it looks too much like Windoze for my liking (and does its best to be almost-Unix-but-not-quite). -- Dave, a grey-beard From angus at fairhaven.za.net Thu Feb 4 16:10:08 2021 From: angus at fairhaven.za.net (Angus Robinson) Date: Thu, 4 Feb 2021 08:10:08 +0200 Subject: [TUHS] FreeBSD behind the times? (was: Favorite unix design principles?) In-Reply-To: References: <202101301950.10UJoWeA456408@darkstar.fourwinds.com> <20210130222854.GN4227@mcvoy.com> <20210130231119.GA33905@eureka.lemis.com> <20210131022500.GU4227@mcvoy.com> Message-ID: Not entirely sure, as it's been a while since I have used it, but last I remember, TrueOS, which was based on FreeBSD, was easy for a newbie to use. There are other FreeBSD distros, like GhostBSD, etc., that have an installer like Linux. My 2c is that FreeBSD is not trying to bring in newbies; it's for the server environment, and it works extremely well. Kind Regards, Angus Robinson On Thu, Feb 4, 2021 at 7:45 AM Dave Horsfall wrote: > On Sat, 30 Jan 2021, Larry McVoy wrote: > > [ Usual insightful... insights ] > > > If you like ZFS you don't understand operating systems design. I do. > > Indeed... > > > Jeff Bonwick was a stats student at Stanford when he took my OS class, I > > convinced him to come to Sun. Bill Moore worked for me. That's the two > > main ZFS guys and I thought I had taught them well but they let me down. > > { ... ] > > There's no way that I'd use ZFS; lose a block in an ordinary file, well, > you now have a hole (but not in the file-system sense); lose a block in a > compressed system, well...
> > Or perhaps I'm becoming conservative in my old age; I remember when I once > rewrote utilities that when writing a zero block merely did a seek instead > (or something like that; you had to remember to actually write out the > last block). I wouldn't try it these days, as Unix file systems were > simple back then. > > [ ... ] > > > Let's try it this way. Get back to me when you can show me 40 people > > who have installed FreeBSD on their own, with no help. In the same > > time, I can show you 40,000 people who have installed Linux on their > > own, with no help. Probably 400,000. > > Well, I did (but without ZFS) on several boxes, with zero help. Having > had SunOS experience (4.4 was the best) helped :-) > > I can't stand Penguin/OS; it looks too much like Windoze for my liking > (and does its best to be almost-Unix-but-not-quite). > > -- Dave, a grey-beard > -------------- next part -------------- An HTML attachment was scrubbed... URL: From arno.griffioen at ieee.org Thu Feb 4 17:23:26 2021 From: arno.griffioen at ieee.org (Arno Griffioen) Date: Thu, 4 Feb 2021 08:23:26 +0100 Subject: [TUHS] 68k prototypes & microcode In-Reply-To: <20210204013356.GA16541@mcvoy.com> References: <202102030759.1137x7C2013543@freefriends.org> <202102030858.1138wuqd011051@freefriends.org> <27567.1612399305@hop.toad.com> <20210204013356.GA16541@mcvoy.com> Message-ID: <20210204072326.GZ4829@ancienthardware.org> On Wed, Feb 03, 2021 at 05:33:56PM -0800, Larry McVoy wrote: > I have to admit that I haven't looked at ARM assembler, the M1 is making > me rethink that. Anyone have an opinion on where ARM lies in the pleasant > to unpleasant scale? 'Different' is what I would call it.. Years ago I did a bunch of assembly hacking on the original ARM2 used in the Archimedes A3000, which was an amazingly fast CPU for the time. 
The thing that stood out on these CPUs to me, which was wildly different from what I was used to (M68K, 6502, Z80/8080, VAX), was the fact that many instructions were (somewhat) composable. Aka. you can/could add various logical operations like AND, OR, etc. 'into' an instruction like a load or store, and it would take the same number of clock cycles to execute it all in one go. That was great for doing data manipulation at very high rates for the time compared to the common CISC CPUs, as you did not need to go through multiple fetch-and-modify cycles. Reminiscent of some VLIW setups, but still more 'human readable' :) The original ARM1/2/3 design did have some oddities, like status bits being encoded in the top of the (26) address bits, which meant that later versions of the original design had to do some memory tricks to use a bigger address space and keep compatibility with the original code. I suspect the current common ARM revisions since the move to the StrongARM (ARMv4) architecture, when DEC got involved and ARM became a standalone chip design firm, have long fixed those oddities. Probably still retains the way in which it encodes its instructions to make a lot of common logic operations while shuffling data more efficient though.. Having said that.. (and bringing it more back to TUHS instead of COFF ;) ) The ARM2 and ARM3 based machines could already run UNIX, with Acorn selling RISC iX for a short time, which was a 4.3BSD port done in the late 80's and early 90's. Very few of those were ever used/sold though, as the Acorn Archimedes series of machines were quite a bit more expensive than more widespread CISC machines. Most were found in the UK, and often in universities and the like. Bye, Arno.
In-Reply-To: References: <202101301950.10UJoWeA456408@darkstar.fourwinds.com> <20210130222854.GN4227@mcvoy.com> <20210130231119.GA33905@eureka.lemis.com> <20210131022500.GU4227@mcvoy.com> Message-ID: On 2/4/21, Angus Robinson wrote: > it's for the server environment and it works extremely well. > Well...20 years ago it was a catchy phrase, when FreeBSD was still in stiff competition with Linux. Now, 20 years later, we all see who won the Unix server wars. Literally every corporate entity is using some form of commercial Red Hat Enterprise Linux now. It is running everywhere. There are some individual companies, like Netflix, that still utilize FreeBSD, but the overall majority of server systems are based on Linux. I am not saying it is better; I am just pointing out the facts. I generally despise all modern Unix - be it Linux with systemd or FreeBSD - as being too bloated and complex. My personal opinion is that Linux peaked around the late 90s to early/mid 2000s. It was still pretty logical and simple, but excelled in performance. It has gone downhill since then... --Andy From torek at elf.torek.net Thu Feb 4 17:46:37 2021 From: torek at elf.torek.net (Chris Torek) Date: Wed, 3 Feb 2021 23:46:37 -0800 (PST) Subject: [TUHS] FreeBSD behind the times? (was: Favorite unix design principles?) In-Reply-To: <20210131022500.GU4227@mcvoy.com> Message-ID: <202102040746.1147kb2Z095593@elf.torek.net> For what it's worth, you don't *have* to use compression on ZFS. But everything still goes through the ARC, which is ... messy. And eats memory for breakfast and then more memory for snacks and lunch and more snacks and so on. Fortunately memory is cheap, if you have a modern box. Unfortunately, I still don't, yet. ZFS has a ton of stuff in it. That, also, is messy, and not a great thing in terms of kernel size and security and alacrity and so on. But it has some really cool ideas in it.
I'm perfectly happy to use it, or will be once I build a new box (still haven't made the jump to an AMD system with ECC). Chris From toby at telegraphics.com.au Thu Feb 4 21:28:49 2021 From: toby at telegraphics.com.au (Toby Thain) Date: Thu, 4 Feb 2021 06:28:49 -0500 Subject: [TUHS] 68k prototypes & microcode In-Reply-To: <20210204072326.GZ4829@ancienthardware.org> References: <202102030759.1137x7C2013543@freefriends.org> <202102030858.1138wuqd011051@freefriends.org> <27567.1612399305@hop.toad.com> <20210204013356.GA16541@mcvoy.com> <20210204072326.GZ4829@ancienthardware.org> Message-ID: On 2021-02-04 2:23 a.m., Arno Griffioen wrote: > On Wed, Feb 03, 2021 at 05:33:56PM -0800, Larry McVoy wrote: >> I have to admit that I haven't looked at ARM assembler, the M1 is making >> me rethink that. Anyone have an opinion on where ARM lies in the pleasant >> to unpleasant scale? > > 'Different' is what I would call it.. > > Years ago I did a bunch of assembly hacking on the original ARM2 used in the > Archimedes A3000, which was an amazingly fast CPU for the time. > > The thing that stood out on these CPU's to me, which was wildly different > to what I was used to (M68K, 6502, Z80/8080, VAX), was the fact that > many instructions were (somewhat) composeable. > > Aka. you can/could add varuous logical operations like AND, OR, etc. 'into' an > instruction like a load or store and it would take the same number of clock > cycles to execute it all in 1 go. That is immediately reminiscent of DG Nova, PDP-8 (and to a tiny extent, PowerPC). > > That was great for doing data manipulation at very high rates for the time > compared to the common CISC CPU's as you did not need to go through multiple > fetch and modify cycles. 
> > Reminiscent of some VLIW setups, but still more 'human readable' :) > > The original ARM1/2/3 design did have some oddities like status bits being > encoded in the top of the (23) address bits, which meant that later versions of > the original design had to do some memory tricks to use a bigger address > space and keep compatilibity to the original code. > > I suspect the current common ARM revisions since the move to the StrongARM > (ARM4) architecture, when DEC got involved and ARM became a standalone chip > design firm, have long fixed those oddities. > > Probably still retains the way in which it encodes it's instructions to make > a lot of common logic operations while shuffling data more efficient though.. ARM MCUs also have the "bit manipulation engine" for a similar goal, I think. --Toby > > Having said that.. (and bringing it more back to TUHS instead of COFF ;) ) > > The ARM2 and ARM3 based machines could already run UNIX with Acorn selling > RISC iX for a short time, which was a 4.3BSD port done in the late 80's > and early 90's. > > Very few of those were ever used/sold though as the Acorn Archimedes series > of machines were quite a bit more expensive than more widespread CISC machines. > Most were found in the UK and often in universities and the like. > > Bye, Arno. > From cowan at ccil.org Fri Feb 5 00:56:49 2021 From: cowan at ccil.org (John Cowan) Date: Thu, 4 Feb 2021 09:56:49 -0500 Subject: [TUHS] 68k prototypes & microcode In-Reply-To: <27567.1612399305@hop.toad.com> References: <202102030759.1137x7C2013543@freefriends.org> <202102030858.1138wuqd011051@freefriends.org> <27567.1612399305@hop.toad.com> Message-ID: On Wed, Feb 3, 2021 at 7:42 PM John Gilmore wrote: > It seems like the designers of > the other chips (e.g. the 8088) had never actually worked with real > computers (mainframes and minicomputers) and kept not-learning from > computing history. 
> Hence the description of Windows 95 as "a 32-bit extension to a 16-bit patch to an 8 bit OS originally for a 4-bit chip written by a 2-bit company that doesn't care 1 bit about its users." On Wed, Feb 3, 2021 at 8:34 PM Larry McVoy wrote: The NS320XX always reminded me more of the PDP-11 (which is by *far* > my favorite assembler, so uniform, I slightly prefer the MIPS-32. > The x86 stuff is about as far away from PDP-11 as you can get. Required > to know it, but so unpleasant. > Required? Ghu forbid. After doing a bunch of PDP-11 assembler work, I found out that the Vax had 256 opcodes and foreswore assembly thereafter. Still, that was nothing compared to the 1500+ opcodes of x86*. I think I dodged a bullet. On Wed, Feb 3, 2021 at 9:18 PM Dave Horsfall wrote: > -- Dave, wondering whether anyone has ever used every VAX instruction > AFAIU, some of them were significantly slower than their multi-instruction equivalents. -------------- next part -------------- An HTML attachment was scrubbed... URL: From will.senn at gmail.com Fri Feb 5 01:45:48 2021 From: will.senn at gmail.com (Will Senn) Date: Thu, 4 Feb 2021 09:45:48 -0600 Subject: [TUHS] FreeBSD behind the times? (was: Favorite unix design principles?) In-Reply-To: References: <202101301950.10UJoWeA456408@darkstar.fourwinds.com> <20210130222854.GN4227@mcvoy.com> <20210130231119.GA33905@eureka.lemis.com> <20210131022500.GU4227@mcvoy.com> Message-ID: <9504e27d-d976-9681-6b97-aa87d124fc43@gmail.com> On 2/3/21 11:43 PM, Dave Horsfall wrote: > On Sat, 30 Jan 2021, Larry McVoy wrote: > > [ Usual insightful...  insights ] > >> If you like ZFS you don't understand operating systems design.  I do. > ... > > There's no way that I'd use ZFS; lose a block in an ordinary file, > well, you now have a hole (but not in the file-system sense); lose a > block in a compressed system, well... 
> ZFS needn't be compressed, and I don't generally do compression or encryption unless required by law, so I can't speak from personal experience on those use cases (others, far more experienced, can). I do know that it's truly a pain to recover from issues with either. In response to the negative vibes around ZFS: I've never lost a file (or a piece of a file) in 10+ years of using ZFS. I get the feeling we may not be talking about the same ZFS. My experience is with the ZFS FreeBSD comes with, not the version that Oracle owns. Perhaps the info is a little out of date for the naysayers. In my experience, ZFS is fairly transparent and simple to use - no partitioning to deal with, no need to worry about generating filesystems, none of that - add your disks to a pool, choose your RAID levels, and it gets mounted, no fuss. I've lost plenty of disks along the way, but ZFS just keeps on chugging along nicely until I replace them, and then it rebuilds the arrays - again, no fuss other than replacing the hardware. In terms of massive system updates and such, I just snapshot the environment (a near-instantaneous operation) before making significant changes that might break things, and when they do break (and they do, more often than I'd like), I just roll back. man bectl. Painless (and I mean painless, hundreds of times, or more). I'm sure it all sounds like sci-fi, but it's my experience, along with plenty of other folks', and this "ZFS sucks" thread seems to be FUD to me - a la Microsoft vs Linux - or at best informed hypothetical speculation. It reminds me of an if-statement conversation I had online in the early 1990s, where one group of folks claimed that braces worked a certain way, based on the then-current standard, and another group of folks (I'd be on this side of things) tested the theory with a host of compilers, observed the functions' effects, shook their heads, wondered why it didn't match up with the theory, and said it worked another way. Who was right?
I'm still not entirely sure, from a philosophical perspective, but I have since coded my if statements according to my environment, not the standards. As I mentioned in the prior thread, I've lost my share of files and file systems (many, many times since 1993, when I started with Linux - 0.9 kernel, Slackware, then Red Hat, then Debian, now Mint) with ext3/4 and btrfs, though, and the only recovery was backup (a time-intensive process). I really don't see the logic behind the negative arguments. Don't like it? Fine, say it and live it. Claim it sucks? Then back it up with real-world, current experience and I'll cede the point - I'll keep using ZFS, though :). I want to be clear: I don't dislike Linux. I don't think FreeBSD is superior. I like both. I use both... daily. With enough prep and planning, my Linux environment is similarly recoverable, but with FreeBSD, the prep and planning require a lot less time and effort. Personally, I heart Linux Mint - it's based on Debian and Ubuntu, is a straightforward install, works well, has ZFS (not yet on boot), has Timeshift (a lovely piece of software), and can be quite pretty. Vive la difference. Will From krewat at kilonet.net Fri Feb 5 01:47:03 2021 From: krewat at kilonet.net (Arthur Krewat) Date: Thu, 4 Feb 2021 10:47:03 -0500 Subject: [TUHS] 68k prototypes & microcode In-Reply-To: <20210204013356.GA16541@mcvoy.com> References: <202102030759.1137x7C2013543@freefriends.org> <202102030858.1138wuqd011051@freefriends.org> <27567.1612399305@hop.toad.com> <20210204013356.GA16541@mcvoy.com> Message-ID: <09f134dd-ed70-5ff5-08d3-70c9a4ff97e3@kilonet.net> On 2/3/2021 8:33 PM, Larry McVoy wrote: >> The 68K always reminded me of the VAX. > I'm not sure if that is a compliment or not. Neither; more of an observation than anything. Post/pre decrement/increment, 32-bit everything - it was an easy move, mentally, from VAX to 68K.
I cut my teeth on a PDP-10, but also the VAX, and a sprinkling of microprocessors such as the 8080, Z80, 6502, and of course the 8088/86. Back around the mid-80s, a friend and I built a 68020 prototype computer from spare parts, all wire-wrapped, with fast static RAM (it was free), and I wrote a cross-assembler in C (on an 80386 PC) and we went to town. And then, as they say, life happened. It was much easier to get access to, or take home, powerful enough computers that we didn't need to build our own. My friend still has the thing. I still have the cross-assembler too, but sadly no actual code I wrote at the time. It would have been worth a laugh or two ;) I still have a Sun 3/280 lying around here somewhere... art k. -------------- next part -------------- An HTML attachment was scrubbed... URL: From will.senn at gmail.com Fri Feb 5 01:47:42 2021 From: will.senn at gmail.com (Will Senn) Date: Thu, 4 Feb 2021 09:47:42 -0600 Subject: [TUHS] FreeBSD behind the times? (was: Favorite unix design principles?) In-Reply-To: <202102040746.1147kb2Z095593@elf.torek.net> References: <202102040746.1147kb2Z095593@elf.torek.net> Message-ID: <6e0c3aac-bc46-340e-4d1c-8d30d046aaae@gmail.com> On 2/4/21 1:46 AM, Chris Torek wrote: > For what it's worth, you don't *have* to use compression on ZFS. > But everything still goes through the ARC, which is ... messy. > And eats memory for breakfast and then more memory for snacks and > lunch and more snacks and so on. Fortunately memory is cheap, if > you have a modern box. Unfortunately, I still don't, yet. > > ZFS has a ton of stuff in it. That, also, is messy, and not a > great thing in terms of kernel size and security and alacrity and > so on. But it has some really cool ideas in it. I'm perfectly > happy to use it, or will be once I build a new box (still haven't > made the jump to an AMD system with ECC). > > Chris OMG!
It's a memory hog, for sure, and messy :) From krewat at kilonet.net Fri Feb 5 01:53:55 2021 From: krewat at kilonet.net (Arthur Krewat) Date: Thu, 4 Feb 2021 10:53:55 -0500 Subject: [TUHS] 68k prototypes & microcode In-Reply-To: References: <202102030759.1137x7C2013543@freefriends.org> <202102030858.1138wuqd011051@freefriends.org> <27567.1612399305@hop.toad.com> Message-ID: <4db6108e-73f0-6498-fe45-3fd422d1f389@kilonet.net> On 2/3/2021 9:18 PM, Dave Horsfall wrote: > -- Dave, wondering whether anyone has ever used every VAX instruction Or every VMS call, for that matter. ;) art k. From henry.r.bent at gmail.com Fri Feb 5 02:03:14 2021 From: henry.r.bent at gmail.com (Henry Bent) Date: Thu, 4 Feb 2021 11:03:14 -0500 Subject: [TUHS] FreeBSD behind the times? (was: Favorite unix design principles?) In-Reply-To: <9504e27d-d976-9681-6b97-aa87d124fc43@gmail.com> References: <202101301950.10UJoWeA456408@darkstar.fourwinds.com> <20210130222854.GN4227@mcvoy.com> <20210130231119.GA33905@eureka.lemis.com> <20210131022500.GU4227@mcvoy.com> <9504e27d-d976-9681-6b97-aa87d124fc43@gmail.com> Message-ID: I don't really know enough about filesystem internals to comment on the design one way or another. I can say that Oberlin College Computer Science switched to using ZFS with snapshots in 2008 or so (on Solaris 10) and it greatly simplified restoring from backups. Home directories had weekly snapshots taken, rolling over a month or two, and it was a godsend when a student (or a professor...) accidentally deleted something important. No more trawling through backup tapes by the admin to restore the file you wanted, it could easily be taken from an on-disk snapshot. Obviously it required a certain greater amount of disk space, but I contend that it was worth it. -Henry On Thu, 4 Feb 2021 at 10:47, Will Senn wrote: > On 2/3/21 11:43 PM, Dave Horsfall wrote: > > On Sat, 30 Jan 2021, Larry McVoy wrote: > > > > [ Usual insightful... 
insights ] > > > >> If you like ZFS you don't understand operating systems design. I do. > > ... > > > > There's no way that I'd use ZFS; lose a block in an ordinary file, > > well, you now have a hole (but not in the file-system sense); lose a > > block in a compressed system, well... > > > ZFS needn't be compressed, and I don't generally do compression or > encryption unless required by law, so I can't speak from personal > experience on those use cases (others, far more experienced can). I do > know that it's truly a pain to recover from issues with either. > > In response to the negative vibes around ZFS. I've never lost a file (or > a piece of a file) in 10+ years of using ZFS. I get the feeling we may > not be talking about the same ZFS. My experience is with the ZFS FreeBSD > comes with, not the version that Oracle owns. Perhaps the info is a > little out of date for the naysayers. In my experience, using ZFS is > fairly transparent and simple to use - no partitioning to deal with, no > need to worry about generating filesystems, none of that - add your > disks to a pool, choose your RAID levels and it gets mounted, no fuss. > I've lost plenty of disks along the way, but ZFS just keeps on chugging > along nicely until I replace them and then rebuilds the arrays, again, > no fuss other than replacing the hardware. In terms of massive system > updates and such, I just snapshot the environment (a near instantaneous > operation) before making significant changes to my system, that might > break things and when they do break (and they do, more often than I'd > like), I just rollback. man bectl. Painless (and I mean painless, > hundreds of times, or mor). 
I'm sure it all sounds scifi, but it's my > experience along with plenty of other folks, and this ZFS sucks thread > seems to be FUD to me - ala Microsoft vs Linux, or at best informed > hypothetical speculations - reminds me of an if statement conversation I > had online in the early 1990's where one group of folks claimed that > braces worked a certain way, based on the then current standard, and > another group of folks (I'd be on this side of things), tested the > theory with a host of compilers, observed the functions effects, shook > their heads and wondered why it didn't match up with the theory, and > said it worked another. Who was right? I'm still not entirely sure, from > a philosophical perspective, but I have since coded my if statements > according to my environment, not the standards. > > As I mentioned in the prior thread, I've lost my share of files and file > systems (many, many times since 1993 when I started with linux - 0.9 > kernel, slackware, then redhat, then debian, now mint) with ext3/4, and > btrfs, though, and the only recovery was backup (a time intensive > process). I really don't see the logic behind the negative arguments. > Don't like it, fine, say it and live it. Claim it sucks? Then, back it > up with a real-world, current experience and I'll cede the point - I'll > keep using ZFS though :). > > I want to be clear, I don't dislike Linux. I don't think FreeBSD is > superior. I like both. I use both... daily. With enough prep and > planning, my linux environment is similarly recoverable, but with > freebsd, the prep and planning requires a lot less time and effort. > Personally, I heart linux Mint - it's based on Debian and Ubuntu - is a > straightforward install, works well, has zfs (not yet on boot), has > timeshift (lovely piece of software), and can be quite pretty. > > Vive la difference. > > Will > > > > > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From crossd at gmail.com Fri Feb 5 02:32:31 2021 From: crossd at gmail.com (Dan Cross) Date: Thu, 4 Feb 2021 11:32:31 -0500 Subject: [TUHS] FreeBSD behind the times? (was: Favorite unix design principles?) In-Reply-To: <9504e27d-d976-9681-6b97-aa87d124fc43@gmail.com> References: <202101301950.10UJoWeA456408@darkstar.fourwinds.com> <20210130222854.GN4227@mcvoy.com> <20210130231119.GA33905@eureka.lemis.com> <20210131022500.GU4227@mcvoy.com> <9504e27d-d976-9681-6b97-aa87d124fc43@gmail.com> Message-ID: On Thu, Feb 4, 2021 at 10:47 AM Will Senn wrote: > [snip] > > In response to the negative vibes around ZFS. [snip] > I think the discordance is around the semantics ZFS's implementation implies. Larry's point about mmap() vs a buffer cache is entirely valid; it took lots of people heroic amounts of work worthy of Greek sagas to bridge the difference between the original buffer and VM page caches, but ZFS says, "meh. too much work; not worth it." The practical implication of that is that memory mapped IO (via `mmap`) is no longer coherent with file IO (via `open`/`close`/`read`/`write`) without lots of work that both degrades performance and add complexity. The question that a lot of folks who use ZFS regularly ask is, "does that matter?" And perhaps it doesn't: if I've got a file server sitting there serving NFS, do I care what it's kernel is doing? As long as it's saturating the network and disks, and it's reliable...not really. (Incidentally, that was kind of the philosophy behind the original plan9 file server kernel...as I heard the story, the rate of change of the plan9 kernel proper was too high, so Ken split off the file server portion into its own, special-purpose kernel, and it stayed like that for ~20 years). Similarly, if I'm on the local machine and the required coherence code is there and largely works, then again, perhaps as a consumer of the filesystem, I just don't care. 
After all, one can still get work done, and ZFS has a bunch of other features that make it very attractive, right? In particular, it's very good at NOT losing my data, kernel purity be damned. On the other hand, if we're discussing OS design and implementation, (re)splitting the VM and buffer caches is a poor decision. One might well ask, "why?" and the answer may be, "because it adds significant complexity to the kernel." This to me seems like the crux of the disagreement. Satisfied users of ZFS might legitimately ask, "who cares?" and one might respond, "kernel maintainers." If the kernel is mostly transparent as far as a particular use case goes, though, then I can see why one would balk at the suggestion that this matters. If one is concerned with the design and implementation of kernels, I could see why one would care very much. Like many things, it's a matter of perspective. - Dan C. -------------- next part -------------- An HTML attachment was scrubbed... URL: From emu at e-bbes.com Fri Feb 5 02:03:08 2021 From: emu at e-bbes.com (emanuel stiebler) Date: Thu, 4 Feb 2021 11:03:08 -0500 Subject: [TUHS] 68k prototypes & microcode In-Reply-To: <09f134dd-ed70-5ff5-08d3-70c9a4ff97e3@kilonet.net> References: <202102030759.1137x7C2013543@freefriends.org> <202102030858.1138wuqd011051@freefriends.org> <27567.1612399305@hop.toad.com> <20210204013356.GA16541@mcvoy.com> <09f134dd-ed70-5ff5-08d3-70c9a4ff97e3@kilonet.net> Message-ID: <49bb36e7-8fca-baf0-6640-c5c510ecd9a3@e-bbes.com> On 2021-02-04 10:47, Arthur Krewat wrote: > Post/pre decrement/increment, 32-bit everything, it was an easy move, > mentally, from VAX to 68K. I worked with 11s and VAXen around the time the 68k came out, and as a student, the 68k was the only way to get a decent machine for less. And it really felt like the real thing ... From will.senn at gmail.com Fri Feb 5 02:49:21 2021 From: will.senn at gmail.com (Will Senn) Date: Thu, 4 Feb 2021 10:49:21 -0600 Subject: [TUHS] FreeBSD behind the times?
(was: Favorite unix design principles?) In-Reply-To: References: <202101301950.10UJoWeA456408@darkstar.fourwinds.com> <20210130222854.GN4227@mcvoy.com> <20210130231119.GA33905@eureka.lemis.com> <20210131022500.GU4227@mcvoy.com> <9504e27d-d976-9681-6b97-aa87d124fc43@gmail.com> Message-ID: <48c18873-a536-d2e5-04cd-f5c581901726@gmail.com> On 2/4/21 10:32 AM, Dan Cross wrote: > On Thu, Feb 4, 2021 at 10:47 AM Will Senn > wrote: > > [snip] > > In response to the negative vibes around ZFS. [snip] > > > I think the discordance is around the semantics ZFS's implementation > implies. Larry's point about mmap() vs a buffer cache is entirely > valid; it took lots of people heroic amounts of work worthy of Greek > sagas to bridge the difference between the original buffer and VM page > caches, but ZFS says, "meh. too much work; not worth it." The > practical implication of that is that memory mapped IO (via `mmap`) is > no longer coherent with file IO (via `open`/`close`/`read`/`write`) > without lots of work that both degrades performance and add complexity. > > The question that a lot of folks who use ZFS regularly ask is, "does > that matter?" And perhaps it doesn't: if I've got a file server > sitting there serving NFS, do I care what it's kernel is doing? As > long as it's saturating the network and disks, and it's reliable...not > really. (Incidentally, that was kind of the philosophy behind the > original plan9 file server kernel...as I heard the story, the rate of > change of the plan9 kernel proper was too high, so Ken split off the > file server portion into its own, special-purpose kernel, and it > stayed like that for ~20 years). Similarly, if I'm on the local > machine and the required coherence code is there and largely works, > then again, perhaps as a consumer of the filesystem, I just don't > care. After all, one can still get work done, and ZFS has a bunch of > other features that make it very attractive, right? 
In particular, > it's very good at NOT losing my data, kernel purity be damned. > > On the other hand, if we're discussing OS design and implementation, > (re)splitting the VM and buffer caches is a poor decision. One might > well ask, "why?" and the answer may be, "because it adds significant > complexity to the kernel." This to me seems like the crux of the > disagreement. Satisfied users of ZFS might legitimately ask, "who > cares?" and one might respond, "kernel maintainers." If the kernel is > mostly transparent as far as a particular use case goes, though, then > I can see why one would bulk at the suggestion that this matters. If > one is concerned with the design and implementation of kernels, I > could see why one would care very much. > > Like many things, it's a matter of perspective. > >         - Dan C. > Thanks for the comments, Dan. I see your point. I was thinking as a user/admin. In this light, I'll admit that I'm not an expert on the internals and say that I can only imagine the breadth of design tradeoffs that were contemplated and the many decisions that were made when coming up with ZFS. I'm glad somebody thought through them, and worked through them, though. So I could consume their work, Frankensteinian though it may be. Now, if somebody would only get it working properly in Linux (boot environments included), or even get BTRFS to be more reliable, I'd be a happy camper, at least for a while :). -------------- next part -------------- An HTML attachment was scrubbed... URL: From lm at mcvoy.com Fri Feb 5 03:46:50 2021 From: lm at mcvoy.com (Larry McVoy) Date: Thu, 4 Feb 2021 09:46:50 -0800 Subject: [TUHS] FreeBSD behind the times? (was: Favorite unix design principles?)
In-Reply-To: References: <202101301950.10UJoWeA456408@darkstar.fourwinds.com> <20210130222854.GN4227@mcvoy.com> <20210130231119.GA33905@eureka.lemis.com> <20210131022500.GU4227@mcvoy.com> <9504e27d-d976-9681-6b97-aa87d124fc43@gmail.com> Message-ID: <20210204174650.GC13701@mcvoy.com> +1 to everything Dan said, including the "who cares if it works?". I was definitely coming at it from the kernel perspective, I joined Sun just after SunOS 4.0 came out; that was the release that made mmap/page cache be a coherent thing, the buffer cache was almost completely gone, I think it was still used for inodes and directories but no user data was in there. The amount of work that went into that was huge but the resulting kernel was actually smaller (I think) and much easier to understand and maintain (I know). And the kernel team was quite proud of that work, with good reason. Which is why it blows my mind that the same organization that saw the value of having one way to do things, did a 180 and said the page cache isn't that important. When I talked to the ZFS guys about it, their plan was that ZFS was going to be the only local file system so it sort of didn't matter. But that's nonsense, you are going to have tmpfs, nfs, vfat, ntfs, whatever apple uses, etc. So you are going to have a page cache for all of those, which means the page cache is never going away, which means that mmap() wants pages (remember, you can mmap stuff that isn't in memory, you need the hardware to fault for you so you can go get it). The ZFS guys said making that work is too hard, we did it in BitKeeper, it works fine. So the combination of going back to something anyone with OS smarts knows is a bad idea plus the fact that I know from personal experience that they could have done it in the page cache and still have compression and XOR, yeah, I lost a lot of respect for those guys. ZFS is very useful, it could have been that and played nice with the page cache.
On Thu, Feb 04, 2021 at 11:32:31AM -0500, Dan Cross wrote: > On Thu, Feb 4, 2021 at 10:47 AM Will Senn wrote: > > > [snip] > > > > In response to the negative vibes around ZFS. [snip] > > > > I think the discordance is around the semantics ZFS's implementation > implies. Larry's point about mmap() vs a buffer cache is entirely valid; it > took lots of people heroic amounts of work worthy of Greek sagas to bridge > the difference between the original buffer and VM page caches, but ZFS > says, "meh. too much work; not worth it." The practical implication of that > is that memory mapped IO (via `mmap`) is no longer coherent with file IO > (via `open`/`close`/`read`/`write`) without lots of work that both degrades > performance and add complexity. > > The question that a lot of folks who use ZFS regularly ask is, "does that > matter?" And perhaps it doesn't: if I've got a file server sitting there > serving NFS, do I care what it's kernel is doing? As long as it's > saturating the network and disks, and it's reliable...not really. > (Incidentally, that was kind of the philosophy behind the original plan9 > file server kernel...as I heard the story, the rate of change of the plan9 > kernel proper was too high, so Ken split off the file server portion into > its own, special-purpose kernel, and it stayed like that for ~20 years). > Similarly, if I'm on the local machine and the required coherence code is > there and largely works, then again, perhaps as a consumer of the > filesystem, I just don't care. After all, one can still get work done, and > ZFS has a bunch of other features that make it very attractive, right? In > particular, it's very good at NOT losing my data, kernel purity be damned. > > On the other hand, if we're discussing OS design and implementation, > (re)splitting the VM and buffer caches is a poor decision. One might well > ask, "why?" and the answer may be, "because it adds significant complexity > to the kernel." 
This to me seems like the crux of the disagreement. > Satisfied users of ZFS might legitimately ask, "who cares?" and one might > respond, "kernel maintainers." If the kernel is mostly transparent as far > as a particular use case goes, though, then I can see why one would bulk at > the suggestion that this matters. If one is concerned with the design and > implementation of kernels, I could see why one would care very much. > > Like many things, it's a matter of perspective. > > - Dan C. -- --- Larry McVoy lm at mcvoy.com http://www.mcvoy.com/lm From bakul at iitbombay.org Fri Feb 5 04:41:22 2021 From: bakul at iitbombay.org (Bakul Shah) Date: Thu, 4 Feb 2021 10:41:22 -0800 Subject: [TUHS] FreeBSD behind the times? (was: Favorite unix design principles?) In-Reply-To: References: Message-ID: <46DC5B33-1F08-4DAE-ACAC-4318DB1498DA@iitbombay.org> On Feb 4, 2021, at 8:34 AM, Dan Cross wrote: > > On the other hand, if we're discussing OS design and implementation, (re)splitting the VM and buffer caches is a poor decision. One might well ask, "why?" and the answer may be, "because it adds significant complexity to the kernel." This to me seems like the crux of the disagreement. Satisfied users of ZFS might legitimately ask, "who cares?" and one might respond, "kernel maintainers." If the kernel is mostly transparent as far as a particular use case goes, though, then I can see why one would bulk at the suggestion that this matters. If one is concerned with the design and implementation of kernels, I could see why one would care very much. Largely agree; though the complexity battle has long been lost. On multiple fronts. Many of us are happy to use such complex systems for their ease of use or their feature set but wouldn’t want to maintain these systems! I have used ZFS since 2005 and largely happy with it. Replaced all the disks twice. Moved the same set of disks to a new machine. etc. 
Features: cheap and fast snapshots, send/receive, clone, adding disks, checksummed blocks, redundancy etc. The dedup impl. is suboptimal so I don't use it. No idea if they considered using a bloom filter and a cache to reduce memory use. If a new FS came along with a similar set of features and a simpler, better integrated implementation, I'd switch. From woods at robohack.ca Fri Feb 5 07:29:12 2021 From: woods at robohack.ca (Greg A. Woods) Date: Thu, 04 Feb 2021 13:29:12 -0800 Subject: [TUHS] Origins of globbing In-Reply-To: References: <20201006154420.2C93C18C099@mercury.lcs.mit.edu> Message-ID: At Tue, 6 Oct 2020 23:14:57 -0400, John Cowan wrote: Subject: Re: [TUHS] Origins of globbing > > Multics had support for * and ?, but I don't know when that was added or if > it was there from the beginning. Multics filenames, unlike DEC ones, allow > multiple dots, which are treated specially by these characters: neither ? > nor * can match a dot, but ** can. So perhaps they got into Unix from > Multics after all. Stratus VOS is another direct descendant of Multics, > but I don't know if it has globs. Multics called them "starnames". I don't know when they were added, but mention of "starname" support appears in some of the very early technical bulletins, e.g. MTP-042 from February 1974. Interestingly in Multics starname expansion was/is always the responsibility of the application, not the shell. This has its advantages, such as in the "help" command where the user can find help without knowing exactly how to spell what they're looking for. In fact when I first learned long ago that the Unix shell did the expansion I remember immediately thinking of some of these issues, as by that time I had already learned something of Multics. It is also used to good advantage in Multics Emacs without having to have Emacs re-implement some part of the shell, or indeed without having to call the shell, or some part of the shell. 
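Application-side expansion of the kind described above survives in Unix as the glob(3) library routine, which lets a command expand patterns itself — the way Multics commands expanded starnames — instead of relying on the shell. A minimal sketch, not from the original message:

```c
/* Expand a pattern inside the application, Multics-style, rather
 * than leaving it to the shell.  Returns the number of existing
 * pathnames the pattern matched. */
#include <glob.h>
#include <stddef.h>

size_t count_matches(const char *pattern)
{
    glob_t g;
    size_t n;

    if (glob(pattern, 0, NULL, &g) != 0)
        return 0;               /* no matches, or an error */
    n = g.gl_pathc;
    globfree(&g);
    return n;
}
```

A command built this way can, like the Multics help command, loosen the pattern and retry on its own when nothing matches — something a shell-expanded argument list cannot do.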
(Honeywell Bull notes about differences between Multics and Unix also indicate that '$' was not used for variable expansion, but there were other ways to do it on Multics.) On the other hand the Multics shell could and was/is modifiable or replaceable by the user, so it wouldn't have been too hard to do starname expansion in a shell implementation as well. Also, given the way the Multics shell did command substitution (pretty much right from the beginning, IIUC, and for sure by 1968), it would also be trivial to use something like V6 "glob" to do starname expansion for commands that didn't do it themselves. -- Greg A. Woods Kelowna, BC +1 250 762-7675 RoboHack Planix, Inc. Avoncote Farms -------------- next part -------------- A non-text attachment was scrubbed... Name: not available Type: application/pgp-signature Size: 195 bytes Desc: OpenPGP Digital Signature URL: From dave at horsfall.org Fri Feb 5 07:55:19 2021 From: dave at horsfall.org (Dave Horsfall) Date: Fri, 5 Feb 2021 08:55:19 +1100 (EST) Subject: [TUHS] 68k prototypes & microcode In-Reply-To: <20210204013356.GA16541@mcvoy.com> References: <202102030759.1137x7C2013543@freefriends.org> <202102030858.1138wuqd011051@freefriends.org> <27567.1612399305@hop.toad.com> <20210204013356.GA16541@mcvoy.com> Message-ID: On Wed, 3 Feb 2021, Larry McVoy wrote: >> The 68K always reminded me of the VAX. > > I'm not sure if that is a compliment or not. The 68K was fairly clean; the VAX not so much... I got the impression that it was designed by a committee i.e. everybody wanted to have their own instruction/feature, and it showed. I do admit though that paging the page tables was a stroke of genius. > The NS320XX always reminded me more of the PDP-11 (which is by *far* > my favorite assembler, so uniform, I had a TA that could read the octal > dump of a PDP-11 like it was C). I wasn't that good but I could sort of > see what he was seeing and I never saw that in the VAX. 
68K was closer > but I felt like the NS320xx was closer yet. Pity they couldn't produce > bug free chips. I used to be a whiz on the 360 :-) As part of our final CompSci exams we had to hand-assemble and disassemble some code, and I hardly ever referred to the "green card". > Someone mentioned Z80000, I stopped at Z80 so I don't know if that was > also a pleasant ISA. The Z80 was quite nice; I wrote heaps of programs for it, and I even found an ANSI C Compiler for it (Hi-Tech as I recall; BDS-C was, well, you could barely call it "C")[*]. I compiled a number of Unix programs... > The x86 stuff is about as far away from PDP-11 as you can get. Required > to know it, but so unpleasant. The x86 architecture is utterly brain-dead; I mean, what's wrong with a linear address space? I think it was JohnG who said "segment registers are for worms". > I have to admit that I haven't looked at ARM assembler, the M1 is making > me rethink that. Anyone have an opinion on where ARM lies in the pleasant > to unpleasant scale? I've been looking at the ARM; it seems quite nice at first glance. > --lm who misses comp.arch back when CPU people hung out there Indeed. I gave up on USENET when the joint got flooded by spammers; I still have my "cancel" script somewhere. [*] I think it was Henry Spencer who said (in an unrelated matter): "Somehow to be called a C compiler, I think it ought at least be able to compile C". 
-- Dave, who ran aus.radio.amateur.* From usotsuki at buric.co Fri Feb 5 08:11:40 2021 From: usotsuki at buric.co (Steve Nickolas) Date: Thu, 4 Feb 2021 17:11:40 -0500 (EST) Subject: [TUHS] 68k prototypes & microcode In-Reply-To: References: <202102030759.1137x7C2013543@freefriends.org> <202102030858.1138wuqd011051@freefriends.org> <27567.1612399305@hop.toad.com> <20210204013356.GA16541@mcvoy.com> Message-ID: On Fri, 5 Feb 2021, Dave Horsfall wrote: > The Z80 was quite nice; I wrote heaps of programs for it, and I even found an > ANSI C Compiler for it (Hi-Tech as I recall; BDS-C was, well, you could > barely call it "C")[*]. I compiled a number of Unix programs... Well, it *was* "Braindead Software" C. > The x86 architecture is utterly brain-dead; I mean, what's wrong with a > linear address space? I think it was JohnG who said "segment registers > are for worms". The 65816 doesn't have the screwed-up bitshifted segment stuff but it's also a segmented architecture and is also braindead. And I'm a 65C02 fan. -uso. From dave at horsfall.org Fri Feb 5 08:25:59 2021 From: dave at horsfall.org (Dave Horsfall) Date: Fri, 5 Feb 2021 09:25:59 +1100 (EST) Subject: [TUHS] FreeBSD behind the times? (was: Favorite unix design principles?) In-Reply-To: References: <202101301950.10UJoWeA456408@darkstar.fourwinds.com> <20210130222854.GN4227@mcvoy.com> <20210130231119.GA33905@eureka.lemis.com> <20210131022500.GU4227@mcvoy.com> Message-ID: On Thu, 4 Feb 2021, Angus Robinson wrote: > Not entirely sure as it's been a while since iI have used it but last > time I remember TrueOS which used FreeBSD was easy to use for a newbie. > There are other FreeBSD distro's like GhostBSD,etc that have an > installer like Linux. My 2c is that FreeBSD is not trying to get > people in that are newbies, it's for the server environment and it works > extremely well.  Exactly; FreeBSD is not for newbies: I'll leave Penguin/OS to that market. 
Mind you, OpenBSD was a bit of a bugger; all services are off by default. It's perfect for a firewall :-) -- Dave From ggm at algebras.org Fri Feb 5 08:28:12 2021 From: ggm at algebras.org (George Michaelson) Date: Fri, 5 Feb 2021 08:28:12 +1000 Subject: [TUHS] FreeBSD behind the times? (was: Favorite unix design principles?) In-Reply-To: <46DC5B33-1F08-4DAE-ACAC-4318DB1498DA@iitbombay.org> References: <46DC5B33-1F08-4DAE-ACAC-4318DB1498DA@iitbombay.org> Message-ID: snapshots of FS state existed for years before ZFS/BTRFS did it, because FS journaling existed. The underlying model of COW forking of the inodes and a journal were all there. What ZFS did, and what Docker does, and snap does, and Flatpak does, is package things in a way which makes modalities for use for modern sysadmin "just work" I don't recall seeing a backup model built on UFS+journalling with an integrated command tooling to do what zfs snapshot; zfs send | zfs receive does as an incremental copy mechanism. I'm not trying to disrespect the prior art, if there was a good packaging, I probably missed it. ZFS made this "yea, I get it now" for a lot of people. I think ZFS is fine. I have adopted it wholesale in the work context for the last 15+ years and at home for about a year. The FUD around minimum memory for ARC is a misunderstanding of an Oracle document "back then" and you can run ZFS on rPi class systems fine, if you can accept some lumpy paging and VM behaviour, and if you don't enable compression which burns the ARC. (you need the extent of memory to deal with things a lot more) -And there are modern OpenZFS docs which patiently explain this stuff. Zsys is going to get better. This means rollback from systems update under Ubuntu snaps and things, will get better. It will be a lot less risky to do significant upgrade on systems.
-Think about Android and the two-root model of updating the passive root in the background, so the new OS is live across a reboot without having the spinning beachball of upgrade on your phone, or analogues of the "keep this setting or go back" model Windows does, and Windows has had OS snapshots for a long time now, and offers "undo that major update" models to users: people expect this. I also think ZFS was a godsend for getting over the smart RAID card mistakes of the past. I HAVE lost data with these puppies. I've moved multi TB fs between FreeBSD and Linux and back again (with care) under ZFS, and you can't do that with Apple's FS, or EXT3 with anything like the same confidence. Really? I like UUID. UUID are a godsend, for making things have unique, but asynchronously generated identity, so when you move them and mix them, you can stop worrying about device/bus order and simply re-create them as they were defined semantically. ZFS does this too: zfs import is leveraging what UUID does for you. Basically Larry, I think you are kindof wrong. These alumni of yours did what all kids should do: they ran ahead. Did they scrape their knees doing it? Sure. But if they don't try things their teachers say are bad, how do they advance the art? If we'd listened to Edsger Dijkstra, we'd never have got BGP: He said it couldn't scale, even though it was based on his own work. -G On Fri, Feb 5, 2021 at 4:41 AM Bakul Shah wrote: > > On Feb 4, 2021, at 8:34 AM, Dan Cross wrote: > > > > On the other hand, if we're discussing OS design and implementation, (re)splitting the VM and buffer caches is a poor decision. One might well ask, "why?" and the answer may be, "because it adds significant complexity to the kernel." This to me seems like the crux of the disagreement. Satisfied users of ZFS might legitimately ask, "who cares?" and one might respond, "kernel maintainers."
If the kernel is mostly transparent as far as a particular use case goes, though, then I can see why one would bulk at the suggestion that this matters. If one is concerned with the design and implementation of kernels, I could see why one would care very much. > > Largely agree; though the complexity battle has long been lost. On multiple fronts. Many of us are happy to use such complex systems for their ease of use or their feature set but wouldn’t want to maintain these systems! > > I have used ZFS since 2005 and largely happy with it. Replaced all the disks twice. Moved the same set of disks to a new machine. etc. Features: cheap and fast snapshots, send/receive, clone, adding disks, checksummed blocks, redundancy etc. The dedup impl. is suboptimal so I don't use it. No idea if they considered using a bloom filter and a cache to reduce memory use. If a new FS came along with a similar set of features and a simpler, better integrated implementation, I'd switch. > > From athornton at gmail.com Fri Feb 5 08:39:46 2021 From: athornton at gmail.com (Adam Thornton) Date: Thu, 4 Feb 2021 15:39:46 -0700 Subject: [TUHS] 68k prototypes & microcode In-Reply-To: References: <202102030759.1137x7C2013543@freefriends.org> <202102030858.1138wuqd011051@freefriends.org> <27567.1612399305@hop.toad.com> <20210204013356.GA16541@mcvoy.com> Message-ID: I'm probably Stockholm Syndrommed about 6502. It's what I grew up on, and I still like it a great deal. Admittedly register-starved (well, unless you consider the zero page a whole page of registers), but...simple, easy to fit in your head, kinda wonderful. I'd love a 64-bit 6502-alike (but I'd probably give it more than three registers). I mean given how little silicon (or how few FPGA gates) a reasonable version of that would take, might as well include 65C02 and 65816 cores in there too with some sort of mode-switching instruction. Wouldn't a 6502ish with 64-bit wordsize and a 64-bit address bus be fun? 
Throw in an onboard MMU and FPU too, I suppose, and then you could have a real system on it. 32-bit SPARC was kind of fun and felt kind of like 6502. The 6502 wasn't exactly RISCy...but when working with RISC architectures, understanding the 6502 seemed to be helpful. I really liked the 68000, but in a different way. It's a nice, regular, easy-to-understand instruction set without many surprises, and felt to me like it had plenty of registers. Once the 68030 brought the MMU onboard it was glorious. Post-370 (which is to say 390/z IBM mainframe architectures) went wild with microprogrammed crazy baroque very, very special purpose instructions. Which, I mean, OK, cool, I guess, but not elegant. I don't really know enough about the DEC architectures. It is my strong impression that the PDP-11 is regular, simple to understand, and rather delightful (like I find the 68000), while VAX gets super-baroque like later IBM mainframe instruction sets. Although I've worked with emulated 10s, 11s, and VAXen, I've never really done anything in assembly (sure, you can argue that C is the best PDP-11 preprocessor there is) on them. On Thu, Feb 4, 2021 at 3:12 PM Steve Nickolas wrote: > On Fri, 5 Feb 2021, Dave Horsfall wrote: > > > The Z80 was quite nice; I wrote heaps of programs for it, and I even > found an > > ANSI C Compiler for it (Hi-Tech as I recall; BDS-C was, well, you could > > barely call it "C")[*]. I compiled a number of Unix programs... > > Well, it *was* "Braindead Software" C. > > > > > The x86 architecture is utterly brain-dead; I mean, what's wrong with a > > linear address space? I think it was JohnG who said "segment registers > > are for worms". > > The 65816 doesn't have the screwed-up bitshifted segment stuff but it's > also a segmented architecture and is also braindead. > > And I'm a 65C02 fan. > > -uso. > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From bakul at iitbombay.org Fri Feb 5 08:41:40 2021 From: bakul at iitbombay.org (Bakul Shah) Date: Thu, 4 Feb 2021 14:41:40 -0800 Subject: [TUHS] FreeBSD behind the times? (was: Favorite unix design principles?) In-Reply-To: References: <46DC5B33-1F08-4DAE-ACAC-4318DB1498DA@iitbombay.org> Message-ID: <2855D44B-F428-49A1-8A7C-C64156385669@iitbombay.org> On Feb 4, 2021, at 2:28 PM, George Michaelson wrote: > > Basically Larry, I think you are kindof wrong. These alumni of yours > did what all kids should do: they ran ahead. Did they scrape their > knees doing it? Sure. But if they don't try things their teachers say > are bad, how do they advance the art? If we'd listened to Eddy > Dijkstra, we'd never have got BGP: He said it couldn't scale, even > though it was based on his own work. I think Larry is talking about ZFS *implementation* design choices; which we, as ZFS users, mostly don't care about! It was OSPF, not BGP, that used Dijkstra's SPF algorithm. The last I knew BGP was all about "policy" -- why random bad actors can sling half of the internet traffic through China or Vanuatu or whatever. From henry.r.bent at gmail.com Fri Feb 5 08:47:02 2021 From: henry.r.bent at gmail.com (Henry Bent) Date: Thu, 4 Feb 2021 17:47:02 -0500 Subject: [TUHS] 68k prototypes & microcode In-Reply-To: References: <202102030759.1137x7C2013543@freefriends.org> <202102030858.1138wuqd011051@freefriends.org> <27567.1612399305@hop.toad.com> <20210204013356.GA16541@mcvoy.com> Message-ID: On Thu, Feb 4, 2021, 17:40 Adam Thornton wrote: > I'm probably Stockholm Syndrommed about 6502. It's what I grew up on, and > I still like it a great deal. Admittedly register-starved (well, unless > you consider the zero page a whole page of registers), but...simple, easy > to fit in your head, kinda wonderful. > > I'd love a 64-bit 6502-alike (but I'd probably give it more than three > registers). 
I mean given how little silicon (or how few FPGA gates) a > reasonable version of that would take, might as well include 65C02 and > 65816 cores in there too with some sort of mode-switching instruction. > Wouldn't a 6502ish with 64-bit wordsize and a 64-bit address bus be fun? > Throw in an onboard MMU and FPU too, I suppose, and then you could have a > real system on it. > > Sounds like a perfect project for an FPGA. If there's already a 6502 implementation out there, converting to 64 bit should be fairly easy. -Henry -------------- next part -------------- An HTML attachment was scrubbed... URL: From rich.salz at gmail.com Fri Feb 5 08:56:32 2021 From: rich.salz at gmail.com (Richard Salz) Date: Thu, 4 Feb 2021 17:56:32 -0500 Subject: [TUHS] 68k prototypes & microcode In-Reply-To: References: <202102030759.1137x7C2013543@freefriends.org> <202102030858.1138wuqd011051@freefriends.org> <27567.1612399305@hop.toad.com> <20210204013356.GA16541@mcvoy.com> Message-ID: On Thu, Feb 4, 2021, 5:12 PM Steve Nickolas wrote: Well, it *was* "Braindead Software" C. > Braindamaged software. I knew Leor; he sold me his motorcycle. -------------- next part -------------- An HTML attachment was scrubbed... URL: From usotsuki at buric.co Fri Feb 5 09:14:26 2021 From: usotsuki at buric.co (Steve Nickolas) Date: Thu, 4 Feb 2021 18:14:26 -0500 (EST) Subject: [TUHS] 68k prototypes & microcode In-Reply-To: References: <202102030759.1137x7C2013543@freefriends.org> <202102030858.1138wuqd011051@freefriends.org> <27567.1612399305@hop.toad.com> <20210204013356.GA16541@mcvoy.com> Message-ID: On Thu, 4 Feb 2021, Richard Salz wrote: > On Thu, Feb 4, 2021, 5:12 PM Steve Nickolas wrote: > >> Well, it *was* "Braindead Software" C. > > Braindamaged software. I knew Leor; he sold me his motorcycle. Close enough that my point can stand - but I stand corrected. -uso. 
From lm at mcvoy.com Fri Feb 5 10:33:15 2021 From: lm at mcvoy.com (Larry McVoy) Date: Thu, 4 Feb 2021 16:33:15 -0800 Subject: [TUHS] FreeBSD behind the times? (was: Favorite unix design principles?) In-Reply-To: References: <46DC5B33-1F08-4DAE-ACAC-4318DB1498DA@iitbombay.org> Message-ID: <20210205003315.GK13701@mcvoy.com> On Fri, Feb 05, 2021 at 08:28:12AM +1000, George Michaelson wrote: > What ZFS did, and what Docker does, and snap does, and flatpack does, > is package things in a way which make modalities for use for modern > sysadmin "just work" No argument there. > Basically Larry, I think you are kindof wrong. These alumni of yours > did what all kids should do: they ran ahead. Did they scrape their > knees doing it? Sure. But if they don't try things their teachers say > are bad, how do they advance the art? Before I show you I'm not wrong, if you are saying (and I think you are) that you like ZFS and find it useful, I have no disagreement with that. I'm in no way arguing that ZFS isn't useful. If you are just an end user and you don't run into any of the coherency problems, it's great. I'm arguing from the point of view of how a kernel is supposed to work. What ZFS did is a gross violation of how the kernel is supposed to work and both Bonwick and Moore have admitted that, they just thought it was too hard to do it right. There is a body of code in BitKeeper that does the exact part that they thought was too hard, a layer that takes a page fault and fills in the page from a compressed and xor-ed data source. Works great, one guy did it in a few months or so. It's not that hard. So why is what ZFS did so wrong? Ignoring the page cache and making their own cache has big problems. You can mmap() ZFS files and doing so means that when a page is referenced it is copied from the ZFS cache to the page cache.
That creates a coherency problem, I can write via the mapping and I can write via write(2) and now you have two copies of the data that don't match, that's pretty much OS no-no #1. You can get around it, I know, because I've written the coherency code for SGI's IRIX when I did the bulk data server that went around the page cache and made NFS go at 60MB/sec for a single stream (and many times that for multiple streams). So I'm not talking out of my ass, I know what coherency means when you have the same data in two different places, I know it is possible to make it work, I've done that, and I don't think it is a good idea (it was OK in SGI's case, it was for O_DIRECT which exists to completely bypass the page cache; so a special case that wasn't so bad and wasn't general). It's messy. You could remove the data from the ZFS cache when you put it in the page cache but ZFS is compressed so it's not going to line up on page boundaries like you'd want it to. That means you're removing more than your page which sort of sucks. You could never map the pages writable, take a fault every time someone wants to write the page and then do the write back to the ZFS cache. That doesn't really work because you take the fault before the write completes, not after. You can make it work, the write fault has to get an exclusive lock on the data in the ZFS cache, then return, then the page gets modified, now someone has to wake up and copy that data from the page cache to ZFS. It's messy and it performs really poorly, nobody would do it this way. You could lock the data in the ZFS cache, making it read only. That doesn't work because you can write via mmap() and read via ZFS and you get old data. All of these sorts of problems, which are solvable, I've solved them, Sun solved them, are why you don't really want what ZFS did. It's a never-ending game of whack-a-mole as the code evolves and someone slips in something that makes the page cache and the ZFS cache incoherent again.
There isn't a pleasant way to make this stuff work, that's exactly why Sun made everything live in the page cache, there was only one copy of any chunk of data. Which makes it baffling to me that Sun would allow ZFS into the kernel but I guess the benefits were perceived to outweigh the ongoing work to make the caches coherent. Personally, I think just doing it right is way easier. --lm From dave at horsfall.org Fri Feb 5 12:16:08 2021 From: dave at horsfall.org (Dave Horsfall) Date: Fri, 5 Feb 2021 13:16:08 +1100 (EST) Subject: [TUHS] 68k prototypes & microcode In-Reply-To: <4db6108e-73f0-6498-fe45-3fd422d1f389@kilonet.net> References: <202102030759.1137x7C2013543@freefriends.org> <202102030858.1138wuqd011051@freefriends.org> <27567.1612399305@hop.toad.com> <4db6108e-73f0-6498-fe45-3fd422d1f389@kilonet.net> Message-ID: [ Directing to COFF, where it likely belongs ] On Thu, 4 Feb 2021, Arthur Krewat wrote: >> -- Dave, wondering whether anyone has ever used every VAX instruction > > Or every VMS call, for that matter. ;) Urk... I stayed away from VMS as much as possible (I had a network of PDP-11s to play with), although I did do a device driver course; dunno why. -- Dave From lm at mcvoy.com Fri Feb 5 12:53:49 2021 From: lm at mcvoy.com (Larry McVoy) Date: Thu, 4 Feb 2021 18:53:49 -0800 Subject: [TUHS] 68k prototypes & microcode In-Reply-To: References: <202102030759.1137x7C2013543@freefriends.org> <202102030858.1138wuqd011051@freefriends.org> <27567.1612399305@hop.toad.com> <4db6108e-73f0-6498-fe45-3fd422d1f389@kilonet.net> Message-ID: <20210205025349.GM13701@mcvoy.com> On Fri, Feb 05, 2021 at 01:16:08PM +1100, Dave Horsfall wrote: > [ Directing to COFF, where it likely belongs ] > > On Thu, 4 Feb 2021, Arthur Krewat wrote: > > >>-- Dave, wondering whether anyone has ever used every VAX instruction > > > >Or every VMS call, for that matter. ;) > > Urk... 
I stayed away from VMS as much as possible (I had a network of > PDP-11s to play with), although I did do a device driver course; dunno why. Me too, though I did use Eunice, it was a lonely place, it did not let me see who was on VMS. I was the only one. A far cry from BSD where wall went to everyone and talk got you a screen where you talked. From bakul at iitbombay.org Fri Feb 5 15:17:54 2021 From: bakul at iitbombay.org (Bakul Shah) Date: Thu, 4 Feb 2021 21:17:54 -0800 Subject: [TUHS] FreeBSD behind the times? (was: Favorite unix design principles?) In-Reply-To: <20210205003315.GK13701@mcvoy.com> References: <20210205003315.GK13701@mcvoy.com> Message-ID: <26D923CF-5319-4207-BA28-6EFA0E3BB1F8@iitbombay.org> On Feb 4, 2021, at 4:33 PM, Larry McVoy wrote: > > Ignoring the page cache and make their own cache has big problems. > You can mmap() ZFS files and doing so means that when a page is referenced > it is copied from the ZFS cache to the page cache. That creates a > coherency problem, I can write via the mapping and I can write via > write(2) and now you have two copies of the data that don't match, > that's pretty much OS no-no #1. Write(2)ing to a mapped page sounds pretty dodgy. Likely to get you in trouble in any case. Similarly read(2)ing. And you can keep track of mapped pages and read/write from them if necessary even if you have a separate cache for any compressed pages. I haven’t read zfs code but this doesn’t seem like a tricky problem. From spedraja at gmail.com Fri Feb 5 22:44:36 2021 From: spedraja at gmail.com (Sergio Pedraja) Date: Fri, 5 Feb 2021 13:44:36 +0100 Subject: [TUHS] AT&T 3B1 - Emulation available In-Reply-To: References: Message-ID: Hi everyone. I've built Freebee using Make and specifying win32 as architecture under Cygwin with libSDL2 plus Cygwin-X XWindows installed. The Freebee runs starting it from xterm. It's a bit faster than my own real 3B1. I have briefly tested the two startup hard drives and the second hard drive, empty. 
No problem as far as I have seen. Great work. On the other hand, may I
suggest improving the emulator's GUI to reduce the flickering of the 3B1's
screen refresh? It is quite noticeable.

Thanks and good luck, anyway.

Sergio

On Fri., Jan. 29, 2021, 11:50, Arnold Robbins wrote:

> Hello All.
>
> Many of you may remember the AT&T UNIX PC and 3B1. These systems
> were built by Convergent Technologies and sold by AT&T. They had an
> MC 68010 processor, up to 4 Meg Ram and up to 67 Meg disk. The OS
> was System V Release 2 vintage. There was a built-in 1200 baud modem,
> and a primitive windowing system with mouse.
>
> I had a 3B1 as my first personal system and spent many happy hours writing
> code and documentation on it.
>
> There is an emulator for it that recently became pretty stable. The
> original
> software floppy images are available as well. You can bring up a fairly
> functional system without much difficulty.
>
> The emulator is at https://github.com/philpem/freebee. You can install up
> to two 175 Meg hard drives - a lot of space for the time.
>
> The emulator's README.md there has links to lots of other interesting
> 3B1 bits, both installable software and Linux tools for exporting the
> file system from disk image so it can be mounted under Linux and
> importing it back. Included is an updated 'sysv' Linux kernel module
> that can handle the byte-swapped file system.
>
> I have made a pre-installed disk image available with a fair amount
> of software, see https://www.skeeve.com/3b1/.
>
> The emulator runs great under Linux; not so sure about MacOS or Windows.
> :-)
>
> So, anyone wishing to journey back to 1987, have fun!
>
> Arnold
>
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From lm at mcvoy.com Sat Feb 6 00:18:20 2021
From: lm at mcvoy.com (Larry McVoy)
Date: Fri, 5 Feb 2021 06:18:20 -0800
Subject: [TUHS] FreeBSD behind the times? (was: Favorite unix design principles?)
In-Reply-To: <26D923CF-5319-4207-BA28-6EFA0E3BB1F8@iitbombay.org> References: <20210205003315.GK13701@mcvoy.com> <26D923CF-5319-4207-BA28-6EFA0E3BB1F8@iitbombay.org> Message-ID: <20210205141820.GO13701@mcvoy.com> On Thu, Feb 04, 2021 at 09:17:54PM -0800, Bakul Shah wrote: > On Feb 4, 2021, at 4:33 PM, Larry McVoy wrote: > > > > Ignoring the page cache and make their own cache has big problems. > > You can mmap() ZFS files and doing so means that when a page is referenced > > it is copied from the ZFS cache to the page cache. That creates a > > coherency problem, I can write via the mapping and I can write via > > write(2) and now you have two copies of the data that don't match, > > that's pretty much OS no-no #1. > > Write(2)ing to a mapped page sounds pretty dodgy. Likely to get you > in trouble in any case. Similarly read(2)ing. The entire point of the SunOS 4.0 VM system was that the page you saw via mmap(2) is the exact same page you saw via read(2). It's the page cache, it has page sized chunks of memory that cache file,offset pairs. There is one, and only one, copy of the truth. Doesn't matter how you get at it, there is only one "it". ZFS broke that contract and that was a step backwards in terms of OS design. From mparson at bl.org Sat Feb 6 00:42:33 2021 From: mparson at bl.org (Michael Parson) Date: Fri, 05 Feb 2021 08:42:33 -0600 Subject: [TUHS] 68k prototypes & microcode In-Reply-To: References: <202102030759.1137x7C2013543@freefriends.org> <202102030858.1138wuqd011051@freefriends.org> <27567.1612399305@hop.toad.com> <20210204013356.GA16541@mcvoy.com> Message-ID: On 2021-02-04 16:47, Henry Bent wrote: > On Thu, Feb 4, 2021, 17:40 Adam Thornton wrote: > >> I'm probably Stockholm Syndrommed about 6502. It's what I grew up on, >> and >> I still like it a great deal. Admittedly register-starved (well, >> unless >> you consider the zero page a whole page of registers), but...simple, >> easy >> to fit in your head, kinda wonderful. 
>> >> I'd love a 64-bit 6502-alike (but I'd probably give it more than three >> registers). I mean given how little silicon (or how few FPGA gates) a >> reasonable version of that would take, might as well include 65C02 and >> 65816 cores in there too with some sort of mode-switching instruction. >> Wouldn't a 6502ish with 64-bit wordsize and a 64-bit address bus be >> fun? >> Throw in an onboard MMU and FPU too, I suppose, and then you could >> have a >> real system on it. >> >> > Sounds like a perfect project for an FPGA. If there's already a 6502 > implementation out there, converting to 64 bit should be fairly easy. There are FPGA implementations of the 6502 out there. If you've not seen it, check out the MiSTer[0] project, FPGA implementations of a LOT of computers, going back as far as the EDSAC, PDP-1, a LOT of 8, 16, and 32 bit systems from the 70s and 80s along with gaming consoles from the 70s and 80s. Keeping this semi-TUHS related, one guy[1] has even implemented a Sparc 32m[2] (I think maybe an SS10), which boots SunOS 4, 5, Linux, NetBSD, and even the Sparc version of NeXTSTEP, but it's not part of the "official" MiSTer bits (yet?). -- Michael Parson Pflugerville, TX KF5LGQ [0] https://github.com/MiSTer-devel/Main_MiSTer/wiki [1] https://temlib.org/site/ [2] https://temlib.org/pub/mister/SS/ From imp at bsdimp.com Sat Feb 6 04:16:26 2021 From: imp at bsdimp.com (Warner Losh) Date: Fri, 5 Feb 2021 11:16:26 -0700 Subject: [TUHS] FreeBSD behind the times? (was: Favorite unix design principles?) In-Reply-To: <20210205141820.GO13701@mcvoy.com> References: <20210205003315.GK13701@mcvoy.com> <26D923CF-5319-4207-BA28-6EFA0E3BB1F8@iitbombay.org> <20210205141820.GO13701@mcvoy.com> Message-ID: On Fri, Feb 5, 2021, 7:19 AM Larry McVoy wrote: > On Thu, Feb 04, 2021 at 09:17:54PM -0800, Bakul Shah wrote: > > On Feb 4, 2021, at 4:33 PM, Larry McVoy wrote: > > > > > > Ignoring the page cache and make their own cache has big problems. 
> > > You can mmap() ZFS files and doing so means that when a page is > referenced > > > it is copied from the ZFS cache to the page cache. That creates a > > > coherency problem, I can write via the mapping and I can write via > > > write(2) and now you have two copies of the data that don't match, > > > that's pretty much OS no-no #1. > > > > Write(2)ing to a mapped page sounds pretty dodgy. Likely to get you > > in trouble in any case. Similarly read(2)ing. > > The entire point of the SunOS 4.0 VM system was that the page you > saw via mmap(2) is the exact same page you saw via read(2). It's > the page cache, it has page sized chunks of memory that cache > file,offset pairs. > > There is one, and only one, copy of the truth. Doesn't matter how > you get at it, there is only one "it". > > ZFS broke that contract and that was a step backwards in terms of > OS design. > The double copy is the primary reason we don't use it to store videos we serve. It's a performance bottleneck as well. And fixing it is... rather involved... possible, but a lot of work to teach the ARC about the buffer cache or the buffer cache about the ARC... But for everything else I do, I accept the imperfect design because of all the other features it unlocks. Warner > -------------- next part -------------- An HTML attachment was scrubbed... URL: From rminnich at gmail.com Sat Feb 6 04:21:54 2021 From: rminnich at gmail.com (ron minnich) Date: Fri, 5 Feb 2021 10:21:54 -0800 Subject: [TUHS] FreeBSD behind the times? (was: Favorite unix design principles?) In-Reply-To: <20210205141820.GO13701@mcvoy.com> References: <20210205003315.GK13701@mcvoy.com> <26D923CF-5319-4207-BA28-6EFA0E3BB1F8@iitbombay.org> <20210205141820.GO13701@mcvoy.com> Message-ID: I think hearing *why* they did that would be interesting. 
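The "one copy of the truth" contract being debated here is easy to demonstrate from user space. A minimal sketch (Python for brevity; it assumes a Unix whose kernel keeps a unified page cache, e.g. Linux or a SunOS 4 descendant — on a split-cache design the kernel has to do extra copying behind the scenes to keep these assertions true):

```python
import mmap
import os
import tempfile

# One page of scratch file to play with.
fd, path = tempfile.mkstemp()
try:
    os.write(fd, b"A" * 4096)

    # Map the file MAP_SHARED (the default on Unix): the mapping and
    # read(2)/write(2) are both supposed to hit the same cached page.
    m = mmap.mmap(fd, 4096)

    # Write through the file descriptor...
    os.lseek(fd, 0, os.SEEK_SET)
    os.write(fd, b"hello")
    # ...and the store is immediately visible through the mapping.
    assert m[:5] == b"hello"

    # Write through the mapping...
    m[5:10] = b"world"
    # ...and read(2) sees it too: one page, one copy of the truth.
    os.lseek(fd, 0, os.SEEK_SET)
    assert os.read(fd, 10) == b"helloworld"

    m.close()
finally:
    os.close(fd)
    os.unlink(path)
```

When a filesystem keeps its own cache apart from the page cache (as ZFS's ARC does), the kernel must copy and synchronize between the two for this program to keep working, which is the double-copy cost discussed above.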
On Fri, Feb 5, 2021 at 6:19 AM Larry McVoy wrote:
> [...]
> ZFS broke that contract and that was a step backwards in terms of
> OS design.

From dave at horsfall.org Sat Feb 6 06:50:17 2021
From: dave at horsfall.org (Dave Horsfall)
Date: Sat, 6 Feb 2021 07:50:17 +1100 (EST)
Subject: [TUHS] FreeBSD behind the times? (was: Favorite unix design principles?)
In-Reply-To: <9504e27d-d976-9681-6b97-aa87d124fc43@gmail.com>
References: <202101301950.10UJoWeA456408@darkstar.fourwinds.com> <20210130222854.GN4227@mcvoy.com> <20210130231119.GA33905@eureka.lemis.com> <20210131022500.GU4227@mcvoy.com> <9504e27d-d976-9681-6b97-aa87d124fc43@gmail.com>
Message-ID: 

On Thu, 4 Feb 2021, Will Senn wrote:

> ZFS needn't be compressed, and I don't generally do compression or
> encryption unless required by law, so I can't speak from personal
> experience on those use cases (others, far more experienced can). I do
> know that it's truly a pain to recover from issues with either.

[...]
Thanks; I'd heard that ZFS was a compressed file system, so I stopped right there (I had lots of experience in recovering from corrupted RK05s, and didn't need any more trouble). Ah, the RK05 - evil incarnate. I mean, a disk drive exposed to the air? Out There Somewhere [tm] is a picture of a human hair compared with the head clearance; yikes! Once a month, our DEC ginger-beer[*] would PM our 40s, and I was most perturbed to learn that the sole job of the internal NiCd battery was to drag those heads back in the event of a power failure; what, you were writing at the time? Too bad... [*] His surname was "Roth", so naturally his nickname was "Portnoy". -- Dave From bakul at iitbombay.org Sat Feb 6 10:03:21 2021 From: bakul at iitbombay.org (Bakul Shah) Date: Fri, 5 Feb 2021 16:03:21 -0800 Subject: [TUHS] FreeBSD behind the times? (was: Favorite unix design principles?) In-Reply-To: <20210205141820.GO13701@mcvoy.com> References: <20210205003315.GK13701@mcvoy.com> <26D923CF-5319-4207-BA28-6EFA0E3BB1F8@iitbombay.org> <20210205141820.GO13701@mcvoy.com> Message-ID: <0253BE0F-94CB-41BB-921D-6BD09A188601@iitbombay.org> On Feb 5, 2021, at 6:18 AM, Larry McVoy wrote: > > On Thu, Feb 04, 2021 at 09:17:54PM -0800, Bakul Shah wrote: >> On Feb 4, 2021, at 4:33 PM, Larry McVoy wrote: >>> >>> Ignoring the page cache and make their own cache has big problems. >>> You can mmap() ZFS files and doing so means that when a page is referenced >>> it is copied from the ZFS cache to the page cache. That creates a >>> coherency problem, I can write via the mapping and I can write via >>> write(2) and now you have two copies of the data that don't match, >>> that's pretty much OS no-no #1. >> >> Write(2)ing to a mapped page sounds pretty dodgy. Likely to get you >> in trouble in any case. Similarly read(2)ing. > > The entire point of the SunOS 4.0 VM system was that the page you > saw via mmap(2) is the exact same page you saw via read(2). 
It's
> the page cache, it has page sized chunks of memory that cache
> file,offset pairs.
>
> There is one, and only one, copy of the truth. Doesn't matter how
> you get at it, there is only one "it".
>
> ZFS broke that contract and that was a step backwards in terms of
> OS design.

Let me repeat a part of my response you cut out:

And you can keep track of mapped pages and read/write from them if
necessary even if you have a separate cache for any compressed pages.

In essence you pass the ownership of a page's data from a compressed
page cache to the mapped page. Just like in processor cache coherence
algorithms there is one source of truth: the current owner of a cached
unit (line or page or whatever). In other words, the page you see via
mmap(2) will be the exact same page you will see via read(2). Not having
actually tried this I may have missed corner cases + any practical
considerations complicating things but *conceptually* this doesn't seem
hard.

Warner mentions not using ZFS for its double copying. Maybe something
like the above can be a step in the direction of integrating the caches?

As Ron says, I too would like to hear what the authors of ZFS have to
say....

From brad at anduin.eldar.org Sat Feb 6 10:21:08 2021
From: brad at anduin.eldar.org (Brad Spencer)
Date: Fri, 05 Feb 2021 19:21:08 -0500
Subject: [TUHS] FreeBSD behind the times? (was: Favorite unix design principles?)
In-Reply-To: (message from Dave Horsfall on Sat, 6 Feb 2021 07:50:17 +1100 (EST))
Message-ID: 

Dave Horsfall writes:

> On Thu, 4 Feb 2021, Will Senn wrote:
>
>> ZFS needn't be compressed, and I don't generally do compression or
>> encryption unless required by law, so I can't speak from personal
>> experience on those use cases (others, far more experienced can). I do
>> know that it's truly a pain to recover from issues with either.
>
> [...]
>
> Thanks; I'd heard that ZFS was a compressed file system, so I stopped
> right there (I had lots of experience in recovering from corrupted RK05s,
> and didn't need any more trouble).

[snip]

I have some real world experience with ZFS and bad blocks. As mentioned
by others, compression is not required, but it would not matter too much.
ZFS is somewhat odd in that you can mount and use a damaged filesystem
without too much trouble and recover anything possible.

A real example (and to keep it a TUHS topic): we had an older Solaris 10
Sparc at the $DAYJOB about two years ago that developed bad blocks on the
drive (ZFS detected this for us). Not a surprise given the age of the
system, being 6 to 8 years old at the time. We replaced the drive without
issue and started a re-silver (ZFS's version of a RAID rebuild), and
during that re-silver another drive developed block errors. This was a
RAIDZ1 set up, so having two drives go out at the same time would cause
something to be lost. If it were another RAID variant, it would probably
have been fatal. What ZFS did was tell us exactly which file was affected
by the bad block, and the system kept going: no unmounts, reboots or down
time. We replaced the second drive and the only problem we had was a
single lost file. Compression would not have changed this situation much,
except perhaps making the problem a bit bigger; we may have lost two
files or something.

I have another example with a Sun NFS appliance that runs under the
covers Solaris 11 that developed multiple drive fails. In the end, we
estimated that around 40% or so of the multi terabyte filesystem was
damaged. It was possible to still use the appliance and we were able to
get a lot off of it as it continued to function as best as it could; you
just could not access the files that had bad blocks in them.

ZFS has honestly been amazing in the face of the problems I have seen.
-- Brad Spencer - brad at anduin.eldar.org - KC8VKS - http://anduin.eldar.org From gnu at toad.com Sat Feb 6 11:18:34 2021 From: gnu at toad.com (John Gilmore) Date: Fri, 05 Feb 2021 17:18:34 -0800 Subject: [TUHS] FreeBSD behind the times? (was: Favorite unix design principles?) In-Reply-To: <20210205141820.GO13701@mcvoy.com> References: <20210205003315.GK13701@mcvoy.com> <26D923CF-5319-4207-BA28-6EFA0E3BB1F8@iitbombay.org> <20210205141820.GO13701@mcvoy.com> Message-ID: <31039.1612574314@hop.toad.com> On Thu, Feb 04, 2021 at 09:17:54PM -0800, Bakul Shah wrote: > Write(2)ing to a mapped page sounds pretty dodgy. Likely to get you > in trouble in any case. Similarly read(2)ing. Uh, no. You misunderstand completely. The purpose of the kernel is to provide a reliable interface to system facilities, that lets processes NOT DEPEND on what other processes are doing. The decision about whether Tool X uses mmap() versus read() to access a file, or mmap() versus write() to change one, is a decision that DOES NOT DEPEND on what Tool Y is doing. Tools X and Y may have been written by different groups in different decades. Tool X may have been written to use stdio, which used read(). Three years later, stdio got rewritten to use mmap() for speed, but that's invisible to the author of Tool X. And maybe an end user in 2025 decides to use both Tool X and Tool Y on the same file. So only much later will any malign interactions between read/write and mmap actually be noticed by end users. And the fix is not to create new dependencies between Tool X, stdio, and Tool Y. It is to fix the kernel so they do not depend on each other! Here is a real-life example from my own experience. There is a long-standing bug in the Linux kernel, in which the inotify() system call simply didn't work on nested file systems. 
This caused a long-standing bug in Ubuntu, which I reported in 2012 here: https://bugs.launchpad.net/ubuntu/+source/rpcbind/+bug/977847 The symptom was that after booting from a LiveCD image, "apt-get install" for system services (in my case an NFS client package) wouldn't work. Turned out the system startup scripts used inotify() to notice and start newly installed system services. The root cause was that inotify failed because the root file system was an "overlayfs" that overlaid a RAMdisk on top of the read-only LiveCD file system. The people who implemented "overlayfs" didn't think inotify() was important, or they thought it would be too much work to make it actually meet its specs, so they just made it ignore changes to the files in the overlaid file system. So the startup daemon's inotify() would never report the creation of new files about the new services, because those files were in the overlaying RAM disk, and so it would not start them and the user would notice the error. The underlying overlayfs bug was reported in 2011 here: https://bugs.launchpad.net/ubuntu/+source/linux/+bug/882147 As far as I know it has never been fixed. (The bug report was closed in 2019 for one of the usual bogus reasons.) The problem came because real tools (like systemd, or the tail command) actually started using inotify, assuming that as a well documented kernel interface, it would actually meet its specs. And because a completely unrelated other real tool (like the LiveCD installer) actually started using overlayfs, assuming that as a well documented kernel interface, it too would actually meet its specs. And then one day somebody tried to use both those tools together and they failed. That's why telling people "Don't use mmap() on the same file that you use read() on" is an invalid attitude for a Real Kernel Maintainer. Props to Larry McVoy for caring about this. Boos to the Linux maintainers of overlayfs who didn't give a shit. 
John From bakul at iitbombay.org Sat Feb 6 11:55:38 2021 From: bakul at iitbombay.org (Bakul Shah) Date: Fri, 5 Feb 2021 17:55:38 -0800 Subject: [TUHS] FreeBSD behind the times? (was: Favorite unix design principles?) In-Reply-To: <31039.1612574314@hop.toad.com> References: <20210205003315.GK13701@mcvoy.com> <26D923CF-5319-4207-BA28-6EFA0E3BB1F8@iitbombay.org> <20210205141820.GO13701@mcvoy.com> <31039.1612574314@hop.toad.com> Message-ID: Please see my followup message. My fault for mixing two separate things (what a user should not do vs how the kernel can still provide coherence). > On Feb 5, 2021, at 5:18 PM, John Gilmore wrote: > > On Thu, Feb 04, 2021 at 09:17:54PM -0800, Bakul Shah wrote: >> Write(2)ing to a mapped page sounds pretty dodgy. Likely to get you >> in trouble in any case. Similarly read(2)ing. > > Uh, no. You misunderstand completely. > > The purpose of the kernel is to provide a reliable interface to system > facilities, that lets processes NOT DEPEND on what other processes are > doing. > > The decision about whether Tool X uses mmap() versus read() to access a > file, or mmap() versus write() to change one, is a decision that DOES > NOT DEPEND on what Tool Y is doing. Tools X and Y may have been written > by different groups in different decades. Tool X may have been written > to use stdio, which used read(). Three years later, stdio got rewritten > to use mmap() for speed, but that's invisible to the author of Tool X. > And maybe an end user in 2025 decides to use both Tool X and Tool Y on > the same file. So only much later will any malign interactions between > read/write and mmap actually be noticed by end users. And the fix is > not to create new dependencies between Tool X, stdio, and Tool Y. It is > to fix the kernel so they do not depend on each other! > > Here is a real-life example from my own experience. 
> [...]
>
> John

From crossd at gmail.com Sat Feb 6 12:06:45 2021
From: crossd at gmail.com (Dan Cross)
Date: Fri, 5 Feb 2021 21:06:45 -0500
Subject: [TUHS] FreeBSD behind the times? (was: Favorite unix design principles?)
In-Reply-To: <0253BE0F-94CB-41BB-921D-6BD09A188601@iitbombay.org>
References: <20210205003315.GK13701@mcvoy.com> <26D923CF-5319-4207-BA28-6EFA0E3BB1F8@iitbombay.org> <20210205141820.GO13701@mcvoy.com> <0253BE0F-94CB-41BB-921D-6BD09A188601@iitbombay.org>
Message-ID: 

On Fri, Feb 5, 2021 at 7:04 PM Bakul Shah wrote:

> On Feb 5, 2021, at 6:18 AM, Larry McVoy wrote:
> >
> > On Thu, Feb 04, 2021 at 09:17:54PM -0800, Bakul Shah wrote:
> >> On Feb 4, 2021, at 4:33 PM, Larry McVoy wrote:
> >>>
> >>> Ignoring the page cache and make their own cache has big problems.
> > Let me repeat a part of my response you cut out: > > And you can keep track of mapped pages and read/write from them if > necessary even if you have a separate cache for any compressed pages. > > In essence you pass the ownership of a page's data from a compressed > page cache to the mapped page. Just like in processor cache coherence > algorithms there is one source of truth: the current owner of a cached > unit (line or page or whatever). In other words, the you see via mmap(2) > will be the exact same page you will see via read(2). Not having actually > tried this I may have missed corner cases + any practical considerations > complicating things but *conceptually* this doesn't seem hard. In essence, that's what the merged page/buffer cache is all about: file IO (read/write) operations are satisfied from the same memory cache that backs up VM objects. I agree that conceptually it's not that complex; but that's not what ZFS does. Of course the original Unix buffer cache didn't do that either, because no one was mmap'ing files on the PDP-11, let alone the PDP-7. A RAM-based buffer cache for blocks as the nexus around which the system serialized access to the disc-resident filesystem sufficed. When virtual address spaces got bigger (starting on the VAX) and folks wanted to start being more clever with marrying virtual memory and IO, you had an impedance mismatch with a fairly large extant kernel that had developed not taking into consideration memory-mapped IO to/from files. Sun fixed this, at what I take to be great expense (I followed keenly the same path of development in *BSD and Linux at the time and saw how long that took, so I believe this). But then the same Sun broke it again. Warner mentions not using ZFS for its double copying. May be omething > like the above can a step in the direction of integrating the caches? > But the cache was integrated! Until it wasn't again.... As Ron says, I too would like to hear what the authors of ZFS have to > say.... 
>
> Let me repeat a part of my response you cut out:
>
> And you can keep track of mapped pages and read/write from them if
> necessary even if you have a separate cache for any compressed pages.
>
> In essence you pass the ownership of a page's data from a compressed
> page cache to the mapped page. Just like in processor cache coherence
> algorithms there is one source of truth: the current owner of a cached
> unit (line or page or whatever). In other words, the page you see via
> mmap(2) will be the exact same page you will see via read(2). Not having
> actually tried this I may have missed corner cases + any practical
> considerations complicating things but *conceptually* this doesn't seem
> hard.

In essence, that's what the merged page/buffer cache is all about: file IO
(read/write) operations are satisfied from the same memory cache that
backs up VM objects. I agree that conceptually it's not that complex; but
that's not what ZFS does.

Of course the original Unix buffer cache didn't do that either, because no
one was mmap'ing files on the PDP-11, let alone the PDP-7. A RAM-based
buffer cache for blocks as the nexus around which the system serialized
access to the disc-resident filesystem sufficed. When virtual address
spaces got bigger (starting on the VAX) and folks wanted to start being
more clever with marrying virtual memory and IO, you had an impedance
mismatch with a fairly large extant kernel that had developed not taking
into consideration memory-mapped IO to/from files. Sun fixed this, at what
I take to be great expense (I followed keenly the same path of development
in *BSD and Linux at the time and saw how long that took, so I believe
this). But then the same Sun broke it again.

Warner mentions not using ZFS for its double copying. Maybe something
> like the above can be a step in the direction of integrating the caches?
>

But the cache was integrated! Until it wasn't again....

As Ron says, I too would like to hear what the authors of ZFS have to
> say....
>

Sounds like they thought it was too hard because compression means the
place on storage where an offset in a file lands is no longer a linear
function of the file's contents. Presumably the compressed contents are
not kept in RAM in any of the caches (aside from a temporary buffer to
which the compressed contents are read or something).

 - Dan C.
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From joe at via.net Sat Feb 6 11:43:31 2021
From: joe at via.net (joe mcguckin)
Date: Fri, 5 Feb 2021 17:43:31 -0800
Subject: [TUHS] FreeBSD behind the times? (was: Favorite unix design principles?)
In-Reply-To: <31039.1612574314@hop.toad.com>
References: <20210205003315.GK13701@mcvoy.com> <26D923CF-5319-4207-BA28-6EFA0E3BB1F8@iitbombay.org> <20210205141820.GO13701@mcvoy.com> <31039.1612574314@hop.toad.com>
Message-ID: 

When I was working for a big chip testing company, a STDIO vs MMAP problem
came up. The tester was at its heart a SPARC VME system running Solaris.
The tester read in ‘patterns’ from disk; it literally took hours to read
in all the test patterns. At the scale of the large chip vendors, every
minute you can’t test because the tester is booting, etc means dollars are
lost.

We wrote a bunch of macros that replaced the STDIO file system I/O calls
with the equivalent mmap calls. It turns out STDIO does a lot of
prefetching and has some assumptions that you’re going to read a file
linearly from beginning to end, whereas we wanted to jump around a lot in
the pattern files.

Pattern loading went from 4 hours to 30 minutes. Our customer was
ecstatic.

Joe McGuckin
ViaNet Communications

joe at via.net
650-207-0372 cell
650-213-1302 office
650-969-2124 fax

> On Feb 5, 2021, at 5:18 PM, John Gilmore wrote:
>
> On Thu, Feb 04, 2021 at 09:17:54PM -0800, Bakul Shah wrote:
>> Write(2)ing to a mapped page sounds pretty dodgy. Likely to get you
>> in trouble in any case. Similarly read(2)ing.
>
> Uh, no. You misunderstand completely.
> [...]
>
> John
>
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From rp at servium.ch Sat Feb 6 12:22:32 2021
From: rp at servium.ch (Rico Pajarola)
Date: Fri, 5 Feb 2021 18:22:32 -0800
Subject: [TUHS] FreeBSD behind the times? (was: Favorite unix design principles?)
In-Reply-To: 
References: <202101301950.10UJoWeA456408@darkstar.fourwinds.com> <20210130222854.GN4227@mcvoy.com> <20210130231119.GA33905@eureka.lemis.com> <20210131022500.GU4227@mcvoy.com> <9504e27d-d976-9681-6b97-aa87d124fc43@gmail.com>
Message-ID: 

On Fri, Feb 5, 2021 at 12:51 PM Dave Horsfall wrote:

>
> [...]
> > Thanks; I'd heard that ZFS was a compressed file system, so I stopped > right there (I had lots of experience in recovering from corrupted RK05s, > and didn't need any more trouble). > That's funny, for me this is the main reason to use ZFS... What really sets ZFS apart from everything else is the lack of trouble and its resilience to failures. We used to have lots and lots of ZFS filesystems at work, and I've been using ZFS exclusively at home ever since. I have run into a non-importable ZFS file system (all drives are there, but it's corrupt and won't import) only once, and was able to fix it with zdb. ZFS compression is completely optional, and not even on by default. I've only tried it once and found it cost too much performance on something that's not very fast to begin with, but I don't think it affects data recovery much (the way ZFS stripes data makes traditional data recovery tools pretty useless anyway). I personally don't care about purity of implementation, because everything is a trade-off. The argument really reminds me of Tanenbaum's criticism of Linux's monolithic kernel (was Tanenbaum right? Maybe, but who cares, because Linux took over the world, and Minix didn't, so from a practical point of view, Linus was right). The other one it reminds me of is the criticism of TCP's "blatant layering violation" (vs OSI). But IMHO the critics were just jealous of the cool things they couldn't do because they needed to respect the division of labor along those pesky layers. I remember reading on one of the Sun engineers' blogs (remember when Sun allowed their engineers to keep blogs about Solaris development? Good times!) about the heated discussions they had over the ARC and bypassing the page cache. I don't remember the actual arguments for it, but it was certainly not a decision that was made out of laziness. Performance-wise, ZFS is not the best, and if that's all you care about, there are better options.
It needs a lot of tuning to just reach "acceptable" and it definitely does not play well with doing other stuff on the same machine (it pretty much assumes that your storage appliance is dedicated). It has particularly abysmal performance when you do lots of small random writes and then try to read that back in order, but if you care about not losing your data, it's second to none. In $JOB-1 (almost 15 years ago), we spent a few weeks stress testing ZFS. The setup was 24x4TB SATA drives, divided into two 12-drive raidz2 vdevs or something like that. All tests were done while it was busy reading/writing checksummed test files at full speed, 1GB/s or so (see? Performance was not impressive. We definitely got a lot more out of that with UFS). What was absolutely stunning was the fact that in all our tests it never served one bit of corrupted data. It either had it, or it returned an error. We tortured the storage in any way we could imagine. Wiggled the cables, yanked out drives, used dd to overwrite random parts or entire drives, smashed a drive with a hammer and put it back in, put in drives of the wrong size, put in known bad drives, yanked out drives while it was resilvering, put drives back into a different slot, overwrote stuff while it was resilvering. Unplugged the entire storage, plugged the storage into another machine and imported it, plugged the drives back into the first machine in a different order. We even did things like "copy a drive onto a spare with dd, remove 3 drives, and then substitute the spare drive for the removed one" (this led to some data loss because making the copy was not atomic, but most of the data was recoverable). And no matter what we did, it just kept going unless the data was simply not there, and even then, it kept serving the files (or parts of files) that were available, and indicated exactly which files were affected by data loss.
And when you put the drives back (or restored the overwritten parts), it would continue as if nothing had ever happened. If you've ever wrestled a hardware RAID controller, or VxFS/JFS/HPFS, or mdadm, you know that none of that can be taken for granted, and that doing any of the stupid things mentioned above would most likely lead to complete data loss and/or serving lots of random corrupted data and no way to tell what had been corrupted. I remember some performance issues with mmap, but I don't remember how we fixed it. Probably just sucked it up. Using ZFS was not for maximum performance. -------------- next part -------------- An HTML attachment was scrubbed... URL: From lm at mcvoy.com Sat Feb 6 12:55:53 2021 From: lm at mcvoy.com (Larry McVoy) Date: Fri, 5 Feb 2021 18:55:53 -0800 Subject: [TUHS] FreeBSD behind the times? (was: Favorite unix design principles?) In-Reply-To: References: <20210130222854.GN4227@mcvoy.com> <20210130231119.GA33905@eureka.lemis.com> <20210131022500.GU4227@mcvoy.com> <9504e27d-d976-9681-6b97-aa87d124fc43@gmail.com> Message-ID: <20210206025553.GY13701@mcvoy.com> On Fri, Feb 05, 2021 at 06:22:32PM -0800, Rico Pajarola wrote: > On Fri, Feb 5, 2021 at 12:51 PM Dave Horsfall wrote: > > Thanks; I'd heard that ZFS was a compressed file system, so I stopped > > right there (I had lots of experience in recovering from corrupted RK05s, > > and didn't need any more trouble). > > > That's funny, for me this is the main reason to use ZFS... What really sets > ZFS apart from everything else is the lack of trouble and its resilience to > failures. I'm gonna call Bill tomorrow and get his take again, that's Bill Moore one of the two main guys who did ZFS. This whole thread is sort of silly. There are the users of ZFS who love it for what it does for them. I have no argument with them. Then there are the much smaller, depressingly so, group of people who care about OS design that think ZFS took a step backwards. 
I think Dennis might have stepped in here, if he was still with us, and had some words. I think Dennis would have brought us back to lets talk about the kernel and what is right. ZFS is useful, no doubt, but it is not right from a kernel guy's point of view. I miss Dennis. From will.senn at gmail.com Sat Feb 6 12:57:19 2021 From: will.senn at gmail.com (Will Senn) Date: Fri, 5 Feb 2021 20:57:19 -0600 Subject: [TUHS] Typing tutors Message-ID: <5cb7edc8-7d43-aa3a-334f-18e17aa2fa16@gmail.com> Hi all, On a completely different note... I’ve been delving into typing tutor programs of late. Quite a mishmash of approaches out there. Not at all like what I remember from junior high - The quick brown fox jumps over the lazy dog, kinda stuff. Best of breed may be Mavis Beacon Teaches Typing on the gui front, and I hate to admit it, gnu typist, on the console front. I’m wondering if there are some well considered unix programs, historically, for learning typing? Or did everyone spring into the unix world accomplished typists straight outta school? I did see mention a while back about a TOPS-10 typing tutor, not unix, but in the spirit - surely there's some unix history around typing tutors. Thanks, Will From bakul at iitbombay.org Sat Feb 6 13:01:48 2021 From: bakul at iitbombay.org (Bakul Shah) Date: Fri, 5 Feb 2021 19:01:48 -0800 Subject: [TUHS] FreeBSD behind the times? (was: Favorite unix design principles?) In-Reply-To: References: <20210205003315.GK13701@mcvoy.com> <26D923CF-5319-4207-BA28-6EFA0E3BB1F8@iitbombay.org> <20210205141820.GO13701@mcvoy.com> <0253BE0F-94CB-41BB-921D-6BD09A188601@iitbombay.org> Message-ID: <0076DB2B-48BC-45D3-8278-3D8528431E87@iitbombay.org> > On Feb 5, 2021, at 6:06 PM, Dan Cross wrote: > > On Fri, Feb 5, 2021 at 7:04 PM Bakul Shah > wrote: > > And you can keep track of mapped pages and read/write from them if > necessary even if you have a separate cache for any compressed pages. 
> > In essence you pass the ownership of a page's data from a compressed > page cache to the mapped page. Just like in processor cache coherence > algorithms there is one source of truth: the current owner of a cached > unit (line or page or whatever). In other words, the page you see via mmap(2) > will be the exact same page you will see via read(2). Not having actually > tried this I may have missed corner cases + any practical considerations > complicating things but *conceptually* this doesn't seem hard. > > In essence, that's what the merged page/buffer cache is all about: file IO (read/write) operations are satisfied from the same memory cache that backs up VM objects. I agree that conceptually it's not that complex; but that's not what ZFS does. Good that we agree on that at least! This is why it would be good to hear from Bonwick/Moore. -------------- next part -------------- An HTML attachment was scrubbed... URL: From will.senn at gmail.com Sat Feb 6 13:07:18 2021 From: will.senn at gmail.com (Will Senn) Date: Fri, 5 Feb 2021 21:07:18 -0600 Subject: [TUHS] FreeBSD behind the times? (was: Favorite unix design principles?) In-Reply-To: <20210206025553.GY13701@mcvoy.com> References: <20210130222854.GN4227@mcvoy.com> <20210130231119.GA33905@eureka.lemis.com> <20210131022500.GU4227@mcvoy.com> <9504e27d-d976-9681-6b97-aa87d124fc43@gmail.com> <20210206025553.GY13701@mcvoy.com> Message-ID: On 2/5/21 8:55 PM, Larry McVoy wrote: > I'm gonna call Bill tomorrow and get his take again, that's Bill Moore > one of the two main guys who did ZFS. > > This whole thread is sort of silly. There are the users of ZFS who love > it for what it does for them. I have no argument with them. Then there > are the much smaller, depressingly so, group of people who care about OS > design that think ZFS took a step backwards. > > I think Dennis might have stepped in here, if he was still with us, and > had some words.
> > I think Dennis would have brought us back to lets talk about the kernel > and what is right. ZFS is useful, no doubt, but it is not right from > a kernel guy's point of view. > > I miss Dennis. Larry, Now, after the last 50 emails or so on this topic, I get it :). At least, I understand that technical decisions were made in creating ZFS that were likely ill-considered, the impact of those changes dubbed insignificant, or even possibly sound design principles ignored. It's debatable whether or not these decisions were deliberately contrary to good OS design, but I appreciate you and the other experts hanging in there and explaining your perspectives. I'm a systems guy, so a lot of the detailed OS discussions go over my head, but after enough of them, it kinda makes sense. At least, now, I'll pay closer attention to the ZFS developer list discussions :). Will From cowan at ccil.org Sat Feb 6 14:55:38 2021 From: cowan at ccil.org (John Cowan) Date: Fri, 5 Feb 2021 23:55:38 -0500 Subject: [TUHS] FreeBSD behind the times? (was: Favorite unix design principles?) In-Reply-To: References: <202101301950.10UJoWeA456408@darkstar.fourwinds.com> <20210130222854.GN4227@mcvoy.com> <20210130231119.GA33905@eureka.lemis.com> <20210131022500.GU4227@mcvoy.com> <9504e27d-d976-9681-6b97-aa87d124fc43@gmail.com> Message-ID: On Fri, Feb 5, 2021 at 3:50 PM Dave Horsfall wrote: > Ah, the RK05 - evil incarnate. I mean, a disk drive exposed to the air? > Out There Somewhere [tm] is a picture of a human hair compared with the > head clearance; yikes! > IIRC, the picture showed that even a smoke particle or a fingerprint is bigger than the gap, never mind the hair. Once I had to take a PDP-11 accounting application from $EMPLOYER's NJ office to a customer in KC. It fit on a single RK, so I took two identical disks just in case. This was in the late 70s. They were already X-raying carry-on luggage then, and I had a Bad Feeling about what might happen, so I said no. "No problem.
Does it open up?" "Well, yes, but dust will ruin it." "Well, what should we do?" "Trust me?" "Well -- okay. Go ahead." Of course when I got there both RKs were scrambled completely, so I took the next plane home. The next weekend, a colleague took the application on the same flight. On floppies. Floppy floppies. Lots and lots of them. It installed fine. Of course it took him all day and much of the following day. John Cowan http://vrici.lojban.org/~cowan cowan at ccil.org You cannot enter here. Go back to the abyss prepared for you! Go back! Fall into the nothingness that awaits you and your Master. Go! --Gandalf -------------- next part -------------- An HTML attachment was scrubbed... URL: From clemc at ccc.com Sun Feb 7 02:55:08 2021 From: clemc at ccc.com (Clem Cole) Date: Sat, 6 Feb 2021 11:55:08 -0500 Subject: [TUHS] Typing tutors In-Reply-To: <5cb7edc8-7d43-aa3a-334f-18e17aa2fa16@gmail.com> References: <5cb7edc8-7d43-aa3a-334f-18e17aa2fa16@gmail.com> Message-ID: On Fri, Feb 5, 2021 at 9:57 PM Will Senn wrote: > I did see mention a while back about a TOPS-10 typing tutor, not unix, but > in the spirit - surely there's some unix history around typing tutors. > Nah I just learned to push harder on the ASR-33 keys ;-) Funny, back-in-the-day, the local public HS had a typing class for the girls, which two of my sisters took. The all-male prep-school where my dad taught and my brothers and I all went, had nothing. But I had access to an ASR-33 and just migrated to it. To this day, my wife (who is a concert pianist/organist) can touch type but she is amazed at watching me with my 2, 3 or 4 finger style. Clem ᐧ -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From ron at ronnatalie.com Sun Feb 7 03:22:01 2021 From: ron at ronnatalie.com (Ron Natalie) Date: Sat, 06 Feb 2021 17:22:01 +0000 Subject: [TUHS] Typing tutors In-Reply-To: References: <5cb7edc8-7d43-aa3a-334f-18e17aa2fa16@gmail.com> Message-ID: One of the smartest things my mother did was make me take typing in summer school one year. Little did she or I know that being able to type 60WPM was going to become a very important asset in my eventual career. ------ Original Message ------ From: "Clem Cole" To: "Will Senn" Cc: "TUHS main list" Sent: 2/6/2021 11:55:08 AM Subject: Re: [TUHS] Typing tutors > > >On Fri, Feb 5, 2021 at 9:57 PM Will Senn wrote: >>I did see mention a while back about a TOPS-10 typing tutor, not unix, >>but in the spirit - surely there's some unix history around typing >>tutors. >Nah I just learned to push harder on the ASR-33 keys ;-) > >Funny, back-in-the-day, the local public HS had a typing class for the >girls, which two of my sisters took. The all-male prep-school where >my dad taught and my brothers and I all went, had nothing. But I had >access to an ASR-33 and just migrated to it. > >To this day, my wife (who is a concert pianist/organist) can touch type >but she is amazed at watching me with my 2, 3 or 4 finger style. > >Clem >ᐧ -------------- next part -------------- An HTML attachment was scrubbed...
> > > ------ Original Message ------ > From: "Clem Cole" > To: "Will Senn" > Cc: "TUHS main list" > Sent: 2/6/2021 11:55:08 AM > Subject: Re: [TUHS] Typing tutors > > > > On Fri, Feb 5, 2021 at 9:57 PM Will Senn wrote: > >> I did see mention a while back about a TOPS-10 typing tutor, not unix, >> but in the spirit - surely there's some unix history around typing tutors. >> > Nah I just learned to push harder on the ASR-33 keys ;-) > > Funny, back-in-the-day, the local public HS had a typing class for the > girls, which two of my sisters took. The all-male prep-school where my > dad taught and my brothers and I all went, had nothing. But I had access to > an ASR-33 and just migrated to it. > > To this day, my wife (who is a concert pianist/organist) can touch type > but she is amazed at watching me with my 2, 3 or 4 finger style. > > Clem > ᐧ > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From mah at mhorton.net Sun Feb 7 03:33:53 2021 From: mah at mhorton.net (Mary Ann Horton) Date: Sat, 6 Feb 2021 09:33:53 -0800 Subject: [TUHS] Typing tutors In-Reply-To: References: <5cb7edc8-7d43-aa3a-334f-18e17aa2fa16@gmail.com> Message-ID: <2ffbcab8-b651-c3f9-d4ed-e9ff792cfde6@mhorton.net> I learned on a manual typewriter in 7th grade, but I got fast on a keypunch. To this day i don't use the right shift key, because it didn't work on a keypunch. At Berkeley, everybody was already a touch typist. That's why vi commands emphasize lower case letters, especially hjkl which are right under the home position. The original reason for hjkl was the ADM3A, but when I added arrow key support to vi and disabled the hardcoded hjkl, a line of grad students made me put it back.     Mary Ann On 2/6/21 9:22 AM, Ron Natalie wrote: > One of the smartest things my mother did was make me take typing in > summer school one year.    Little did she or I knew that being able to > type 60WPM was going to become a very important asset in my eventual > career. 
> > > ------ Original Message ------ > From: "Clem Cole" > > To: "Will Senn" > > Cc: "TUHS main list" > > Sent: 2/6/2021 11:55:08 AM > Subject: Re: [TUHS] Typing tutors > >> >> >> On Fri, Feb 5, 2021 at 9:57 PM Will Senn > > wrote: >> >> I did see mention a while back about a TOPS-10 typing tutor, not >> unix, but in the spirit - surely there's some unix history around >> typing tutors. >> >> Nah   I just learned to push harder on the ASR-33 keys ;-) >> Funny, back-in-the-day, the local public HS had a typing class for >> the girls, which two of my sisters took.  The all-male prep-school >> where my dad taught and my brothers and I all went, had nothing. But >> I had access to an ASR-33 and just migrated to it. >> >> To this day, my wife (who is a concert pianist/organist) can touch >> type but she is amazed at watching me with my 2, 3 or 4 finger style. >> >> Clem >> ᐧ -------------- next part -------------- An HTML attachment was scrubbed... URL: From ron at ronnatalie.com Sun Feb 7 03:47:04 2021 From: ron at ronnatalie.com (Ron Natalie) Date: Sat, 06 Feb 2021 17:47:04 +0000 Subject: [TUHS] Typing tutors In-Reply-To: <2ffbcab8-b651-c3f9-d4ed-e9ff792cfde6@mhorton.net> References: <5cb7edc8-7d43-aa3a-334f-18e17aa2fa16@gmail.com> <2ffbcab8-b651-c3f9-d4ed-e9ff792cfde6@mhorton.net> Message-ID: Yep. The problem with keypunches and teletypes is that they had a limit to how fast you could type on them and you could easily outtype them. The key to being efficient on them was to get into the rhythm of the maximum speed the machine could accept. My first terminal I got to use was actually an ADM1. It had the same arrow keys printed on HJKL as the ADM3. The H and J made sense (backspace and linefeed for left and down). The others were just convenient as they were physically adjacent. To this day, it galls me that emacs uses ^H for help. It's the first thing I change when I install it. 
By the time vi rolled around I had already learned one of the emacs variants (after a brief stint with a Rand-editor flavored thing called INed). To this day I don't really have much facility in vi. It used to freak out my coworkers no end that if there was no emacs on the machine, I'd just blast through everything using ed. Nice thing about doing a lot of work in ed: you get very good at regular expressions. ------ Original Message ------ From: "Mary Ann Horton" To: tuhs at minnie.tuhs.org Sent: 2/6/2021 12:33:53 PM Subject: Re: [TUHS] Typing tutors >At Berkeley, everybody was already a touch typist. That's why vi >commands emphasize lower case letters, especially hjkl which are right >under the home position. The original reason for hjkl was the ADM3A, >but when I added arrow key support to vi and disabled the hardcoded >hjkl, a line of grad students made me put it back. > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From clemc at ccc.com Sun Feb 7 04:06:36 2021 From: clemc at ccc.com (Clem Cole) Date: Sat, 6 Feb 2021 13:06:36 -0500 Subject: [TUHS] Typing tutors In-Reply-To: References: <5cb7edc8-7d43-aa3a-334f-18e17aa2fa16@gmail.com> <2ffbcab8-b651-c3f9-d4ed-e9ff792cfde6@mhorton.net> Message-ID: On Sat, Feb 6, 2021 at 12:47 PM Ron Natalie wrote: > It used to freak out my coworkers no end that if there was no emacs on the > machine, I'd just blast through everything using ed. Nice thing about > doing a lot of work in ed: you get very good at regular expressions. > Yep - although there was usually a vi, which is why I stuck with it. uemacs just sucked. WRT ^H yeah - it also galled me that if you type ^H to ITS, they knew what you wanted to do and would torment you by telling you to use DEL. Sigh... Clem ᐧ -------------- next part -------------- An HTML attachment was scrubbed...
URL: From david at kdbarto.org Sun Feb 7 04:56:35 2021 From: david at kdbarto.org (David Barto) Date: Sat, 6 Feb 2021 10:56:35 -0800 Subject: [TUHS] Typing tutors In-Reply-To: References: <5cb7edc8-7d43-aa3a-334f-18e17aa2fa16@gmail.com> Message-ID: <668A9720-D414-4E90-ACD4-0E0A35D74F08@kdbarto.org> > On Feb 6, 2021, at 9:22 AM, Ron Natalie wrote: > > One of the smartest things my mother did was make me take typing in summer school one year. Little did she or I knew that being able to type 60WPM was going to become a very important asset in my eventual career. > > My mother also insisted that I take typing as a summer school class because “In college you will have to type papers for your professors”. She had been a professor of English at Purdue University. I took the class and failed because I would backspace and overstrike with the correct letter. After I got to college I used UCSD Pascal and so typing a backspace to erase the previous letter was fine. I returned to HS and told the typing instructor that I now type 40 WPM flawlessly because I use a computer, not something with paper. He was not impressed. The HS replaced the IBM Selectrics with cheap PC clones the next year. David -------------- next part -------------- An HTML attachment was scrubbed... URL: From dave at horsfall.org Sun Feb 7 08:38:30 2021 From: dave at horsfall.org (Dave Horsfall) Date: Sun, 7 Feb 2021 09:38:30 +1100 (EST) Subject: [TUHS] Typing tutors In-Reply-To: <2ffbcab8-b651-c3f9-d4ed-e9ff792cfde6@mhorton.net> References: <5cb7edc8-7d43-aa3a-334f-18e17aa2fa16@gmail.com> <2ffbcab8-b651-c3f9-d4ed-e9ff792cfde6@mhorton.net> Message-ID: On Sat, 6 Feb 2021, Mary Ann Horton wrote: > I learned on a manual typewriter in 7th grade, but I got fast on a > keypunch. To this day i don't use the right shift key, because it didn't > work on a keypunch. The 026 (ugh!), or the 029? > At Berkeley, everybody was already a touch typist.
That's why vi > commands emphasize lower case letters, especially hjkl which are right > under the home position. The original reason for hjkl was the ADM3A, but > when I added arrow key support to vi and disabled the hardcoded hjkl, a > line of grad students made me put it back. I'm not surprised :-) We were all playing "rogue" back then. And my favourite terminal was indeed the ADM-3A; it just seemed to be designed for Unix, with the ESC key in the right place etc. I still loathe the VT-220... -- Dave, a fast two-finger typist (but with pinkie on RETURN) From nikke.karlsson at gmail.com Sun Feb 7 08:47:13 2021 From: nikke.karlsson at gmail.com (Niklas Karlsson) Date: Sat, 6 Feb 2021 23:47:13 +0100 Subject: [TUHS] Typing tutors In-Reply-To: References: <5cb7edc8-7d43-aa3a-334f-18e17aa2fa16@gmail.com> <2ffbcab8-b651-c3f9-d4ed-e9ff792cfde6@mhorton.net> Message-ID: Den lör 6 feb. 2021 kl 23:39 skrev Dave Horsfall : > On Sat, 6 Feb 2021, Mary Ann Horton wrote: > > > At Berkeley, everybody was already a touch typist. That's why vi > > commands emphasize lower case letters, especially hjkl which are right > > under the home position. The original reason for hjkl was the ADM3A, but > > when I added arrow key support to vi and disabled the hardcoded hjkl, a > > line of grad students made me put it back. > > I'm not surprised :-) We were all playing "rogue" back then. And my > favourite terminal was indeed the ADM-3A; it just seemed to be designed > for Unix, with the ESC key in the right place etc. > I'm probably a youngster in this crowd (no, I'm not calling you old farts, more like people with a long history I respect and am willing to learn from). Born in 1980. But I had similar reasons for feeling at home with hjkl. In the 1980s (I think before I even started school) I got my hands on what was then called HACK for MS-DOS, which of course later became NetHack. 
So by the time I started playing with Linux and other *nixes in 2000, I didn't have any real learning curve with basic vi usage. Niklas -------------- next part -------------- An HTML attachment was scrubbed... URL: From m.douglas.mcilroy at dartmouth.edu Sun Feb 7 09:28:19 2021 From: m.douglas.mcilroy at dartmouth.edu (M Douglas McIlroy) Date: Sat, 6 Feb 2021 18:28:19 -0500 Subject: [TUHS] [TUHS} Typing Tutor [and tuhs archive] Message-ID: This topic is evocative, even though I really have nothing to say about it. Mike Lesk started, and I believe Brian contributed to, "learn", a program for interactive tutorials about Unix. It was never pushed very far--almost certainly not into typing. But the mention of typing brings to mind the inimitable Fred Grampp--he who pioneered massive white-hat computer cracking. Fred's exploits justified the opening sentence I wrote for Bell Labs' first computer-security task force report, "It is easy and not very risky to pilfer data from Bell Laboratories computers." Among Fred's many distinctive and endearing quirks was the fact that he was a confirmed two-finger typist--proof that typing technique is an insignificant factor in programmer productivity. I thought this would be an excuse to tell another ftg story, but I don't want to repeat myself and a search for "Grampp" in the tuhs archives misses many that have already been told. Have the entries been lost or is the index defective? Doug -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From wkt at tuhs.org Sun Feb 7 10:05:58 2021 From: wkt at tuhs.org (Warren Toomey) Date: Sun, 7 Feb 2021 10:05:58 +1000 Subject: [TUHS] Typing Tutor [and tuhs archive] In-Reply-To: References: Message-ID: <20210207000558.GA11474@minnie.tuhs.org> On Sat, Feb 06, 2021 at 06:28:19PM -0500, M Douglas McIlroy wrote: > I thought this would be an excuse to tell another ftg story, but > I don't want to repeat myself and a search for "Grampp" in the tuhs > archives misses many that have already been told. Have the entries been > lost or is the index defective? > Doug I'm not sure. I'm using swish to do the indexing. I've just destroyed the existing index and rebuilt it. Fingers crossed! Thanks, Warren From cowan at ccil.org Sun Feb 7 10:25:14 2021 From: cowan at ccil.org (John Cowan) Date: Sat, 6 Feb 2021 19:25:14 -0500 Subject: [TUHS] Typing tutors In-Reply-To: References: <5cb7edc8-7d43-aa3a-334f-18e17aa2fa16@gmail.com> <2ffbcab8-b651-c3f9-d4ed-e9ff792cfde6@mhorton.net> Message-ID: On Sat, Feb 6, 2021 at 5:47 PM Niklas Karlsson wrote: > I'm probably a youngster in this crowd (no, I'm not calling you old farts, >> more like people with a long history I respect and am willing to learn >> from). > > In computer circles, that is what "old fart" means. John Cowan http://vrici.lojban.org/~cowan cowan at ccil.org If I have not seen as far as others, it is because giants were standing on my shoulders. --Hal Abelson -------------- next part -------------- An HTML attachment was scrubbed... URL: From will.senn at gmail.com Sun Feb 7 10:49:00 2021 From: will.senn at gmail.com (Will Senn) Date: Sat, 6 Feb 2021 18:49:00 -0600 Subject: [TUHS] [TUHS} Typing Tutor [and tuhs archive] In-Reply-To: References: Message-ID: Doug said: > Among Fred's many distinctive and endearing > quirks was the fact that he was a confirmed two-finger typist--proof that > typing technique is an insignificant factor in programmer productivity. 
> I wrote my most lasting programs before I learned to type - many of which are still in production some 20+ years later. Tokheim and Gilbarco credit processors, SMPTE and IRIG-B GPS satellite time signal processing hardware drivers, and others more ancient and specialized. Ah, those were the days... not the good old days mind you, but memorable. Am I alone in seeing the utter irony in my sitting here, my two thumbs searching frantically for the right letters on a keyboard the size of a 1/3rd of a credit card, designed to slow typists down (qwerty), missing as many letters as I get right? Ugh... what lunacy progress hath wrought! > I thought this would be an excuse to tell another ftg story, but I > don't want to repeat myself and a search for "Grampp" in the tuhs archives > misses many that have already been told. Have the entries been lost or > is the index defective? What’s an ftg? Grampp, I’ve heard about. Will From arnold at skeeve.com Sun Feb 7 17:32:39 2021 From: arnold at skeeve.com (arnold at skeeve.com) Date: Sun, 07 Feb 2021 00:32:39 -0700 Subject: [TUHS] AT&T 3B1 - Emulation available In-Reply-To: References: Message-ID: <202102070732.1177Wd3r014240@freefriends.org> Hi. Thanks for the update. The speed comparison is interesting. With respect to screen flickering, please open an issue on the Github repo. I don't really see that under Linux. Thanks, Arnold Sergio Pedraja wrote: > Hi everyone. I've built Freebee using Make and specifying win32 as > architecture under Cygwin with libSDL2 plus Cygwin-X XWindows installed. > The Freebee runs starting it from xterm. It's a bit faster than my own > real 3B1. I have briefly tested the two startup hard drives and the second > hard drive, empty. No problem as far as I have seen. Great work. On the > other hand I dare to suggest improving the GUI of the emulator to > reduce the flickering of the 3B1's screen refresh. It is too visible. > Thanks and good luck, anyway. > > Sergio > > On Fri., 29 Jan.
2021 11:50, Arnold Robbins wrote: > > Hello All. > > Many of you may remember the AT&T UNIX PC and 3B1. These systems > > were built by Convergent Technologies and sold by AT&T. They had an > > MC 68010 processor, up to 4 Meg RAM and up to 67 Meg disk. The OS > > was System V Release 2 vintage. There was a built-in 1200 baud modem, > > and a primitive windowing system with mouse. > > > > I had a 3B1 as my first personal system and spent many happy hours writing > > code and documentation on it. > > > > There is an emulator for it that recently became pretty stable. The > > original > > software floppy images are available as well. You can bring up a fairly > > functional system without much difficulty. > > > > The emulator is at https://github.com/philpem/freebee. You can install up > > to two 175 Meg hard drives - a lot of space for the time. > > > > The emulator's README.md there has links to lots of other interesting > > 3B1 bits, both installable software and Linux tools for exporting the > > file system from disk image so it can be mounted under Linux and > > importing it back. Included is an updated 'sysv' Linux kernel module > > that can handle the byte-swapped file system. > > > > I have made a pre-installed disk image available with a fair amount > > of software, see https://www.skeeve.com/3b1/. > > > > The emulator runs great under Linux; not so sure about MacOS or Windows. > > :-) > > > > So, anyone wishing to journey back to 1987, have fun!
> > > > Arnold > > From mah at mhorton.net Mon Feb 8 03:43:38 2021 From: mah at mhorton.net (Mary Ann Horton) Date: Sun, 7 Feb 2021 09:43:38 -0800 Subject: [TUHS] Typing tutors In-Reply-To: References: <5cb7edc8-7d43-aa3a-334f-18e17aa2fa16@gmail.com> <2ffbcab8-b651-c3f9-d4ed-e9ff792cfde6@mhorton.net> Message-ID: <1a7a35b7-4105-20b7-65ed-1eabb663d0ca@mhorton.net> On 2/6/21 2:38 PM, Dave Horsfall wrote: > On Sat, 6 Feb 2021, Mary Ann Horton wrote: > >> I learned on a manual typewriter in 7th grade, but I got fast on a >> keypunch. To this day i don't use the right shift key, because it >> didn't work on a keypunch. > > The 026 (ugh!), or the 029? > I had to Google for an image of the 026 - yuck!  The image of an 029 matches what I recall. >> At Berkeley, everybody was already a touch typist. That's why vi >> commands emphasize lower case letters, especially hjkl which are >> right under the home position. The original reason for hjkl was the >> ADM3A, but when I added arrow key support to vi and disabled the >> hardcoded hjkl, a line of grad students made me put it back. > > I'm not surprised :-)  We were all playing "rogue" back then.  And my > favourite terminal was indeed the ADM-3A; it just seemed to be > designed for Unix, with the ESC key in the right place etc. > I hated it when the PC-AT came along and moved Ctrl down and Esc up! I depend on Ctrl being to the left of A and Esc left of 1, where God intended them to be! I used a Sun keyboard with a DIN adapter for years, until I came to SDG&E in 2007 and discovered a cache of USB Sun keyboards, half with the UNIX layout (yay!) and half with the PC layout (boo!) Word got around quickly that I liked them, and I wound up with several UNIX layout Sun keyboards. For good measure, I bought a 10-pack on eBay, so I'll have spares until the day they peel my cold dead fingers away from my UNIX layout keyboard.     
Mary Ann From crossd at gmail.com Mon Feb 8 05:28:06 2021 From: crossd at gmail.com (Dan Cross) Date: Sun, 7 Feb 2021 14:28:06 -0500 Subject: [TUHS] Typing tutors In-Reply-To: <1a7a35b7-4105-20b7-65ed-1eabb663d0ca@mhorton.net> References: <5cb7edc8-7d43-aa3a-334f-18e17aa2fa16@gmail.com> <2ffbcab8-b651-c3f9-d4ed-e9ff792cfde6@mhorton.net> <1a7a35b7-4105-20b7-65ed-1eabb663d0ca@mhorton.net> Message-ID: On Sun, Feb 7, 2021 at 12:44 PM Mary Ann Horton wrote: > > I'm not surprised :-) We were all playing "rogue" back then. And my > > favourite terminal was indeed the ADM-3A; it just seemed to be > > designed for Unix, with the ESC key in the right place etc. > > I hated it when the PC-AT came along and moved Ctrl down and Esc up! I > depend on Ctrl being to the left of A and Esc left of 1, where God > intended them to be! I used a Sun keyboard with a DIN adapter for years, > until I came to SDG&E in 2007 and discovered a cache of USB Sun > keyboards, half with the UNIX layout (yay!) and half with the PC layout > (boo!) Word got around quickly that I liked them, and I wound up with > several UNIX layout Sun keyboards. For good measure, I bought a 10-pack > on eBay, so I'll have spares until the day they peel my cold dead > fingers away from my UNIX layout keyboard. > A few years ago I got to the point where my wrists just wouldn't take it anymore. I invested in an Evoluent vertical mouse (3 buttons! Well, really more than that, but the three in the "correct" positions were the ones I cared about) and a Kinesis Advantage keyboard. It took me about a week to learn how to type on the Kinesis, but I can't imagine going back now. I remapped the 'Caps Lock' key to control so that I've got a Control key where one is supposed to be, but the Esc key is a bit far away. It's not excessively so, but it is mildly annoying. Still, RSI no longer wakes me up at night, so on balance the tradeoff has been worth it. 
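[Editor's note: the Caps-Lock-to-Control remap Dan describes can be reproduced in software on an X11 system; a minimal sketch, assuming a stock xmodmap. The file name ~/.Xmodmap is only the conventional location, not anything from this thread.]

```
! ~/.Xmodmap: turn Caps Lock into an extra Control key
remove Lock = Caps_Lock
keysym Caps_Lock = Control_L
add Control = Control_L
```

Loaded with `xmodmap ~/.Xmodmap`; on current systems `setxkbmap -option ctrl:nocaps` achieves the same in one step.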
Some colleagues have suggested learning the Dvorak layout; I splurged for the Kinesis with the double QWERTY/Dvorak keycaps and a mode key and (relevant to the question here) I found some typing tutor program that would ostensibly teach me typing again. But like Morse code, it's never stuck. - Dan C. -------------- next part -------------- An HTML attachment was scrubbed... URL: From cym224 at gmail.com Mon Feb 8 07:32:56 2021 From: cym224 at gmail.com (Nemo Nusquam) Date: Sun, 07 Feb 2021 16:32:56 -0500 Subject: [TUHS] Typing tutors In-Reply-To: <1a7a35b7-4105-20b7-65ed-1eabb663d0ca@mhorton.net> References: <5cb7edc8-7d43-aa3a-334f-18e17aa2fa16@gmail.com> <2ffbcab8-b651-c3f9-d4ed-e9ff792cfde6@mhorton.net> <1a7a35b7-4105-20b7-65ed-1eabb663d0ca@mhorton.net> Message-ID: <60205C88.6070808@gmail.com> On 07/02/2021 12:43, Mary Ann Horton wrote (in part): > I hated it when the PC-AT came along and moved Ctrl down and Esc up! I > depend on Ctrl being to the left of A and Esc left of 1, where God > intended them to be! I used a Sun keyboard with a DIN adapter for > years, until I came to SDG&E in 2007 and discovered a cache of USB Sun > keyboards, half with the UNIX layout (yay!) and half with the PC > layout (boo!) Word got around quickly that I liked them, and I wound > up with several UNIX layout Sun keyboards. For good measure, I bought > a 10-pack on eBay, so I'll have spares until the day they peel my cold > dead fingers away from my UNIX layout keyboard. My Sun UNIX layout keyboards (and mice) work quite well with my Macs. I share your sentiments. N. From ggm at algebras.org Mon Feb 8 08:33:55 2021 From: ggm at algebras.org (George Michaelson) Date: Mon, 8 Feb 2021 08:33:55 +1000 Subject: [TUHS] [TUHS} Typing Tutor [and tuhs archive] In-Reply-To: References: Message-ID: The "learn" about shell or editing required you to demonstrate you could type with 'pack my box with six dozen liquor jugs' input gating the lesson as I remember it. 
something else around that time, I think the TOPS-10 typing tutor got me the home keys. Took another 10 years for me to wake up to being able to type mostly eyes off the keyboard but it sure seems to work (most of the time) now. -G On Sun, Feb 7, 2021 at 9:28 AM M Douglas McIlroy wrote: > > This topic is evocative, even though I really have nothing to say about it. > > Mike Lesk started, and I believe Brian contributed to, "learn", a program > for interactive tutorials about Unix. It was never pushed very far--almost > certainly not into typing. > > But the mention of typing brings to mind the inimitable Fred Grampp--he > who pioneered massive white-hat computer cracking. Fred's exploits justified > the opening sentence I wrote for Bell Labs' first computer-security task > force report, "It is easy and not very risky to pilfer data from Bell > Laboratories computers." Among Fred's many distinctive and endearing > quirks was the fact that he was a confirmed two-finger typist--proof that > typing technique is an insignificant factor in programmer productivity. > > I thought this would be an excuse to tell another ftg story, but I > don't want to repeat myself and a search for "Grampp" in the tuhs archives > misses many that have already been told. Have the entries been lost or > is the index defective? > > Doug > From henry.r.bent at gmail.com Mon Feb 8 09:17:27 2021 From: henry.r.bent at gmail.com (Henry Bent) Date: Sun, 7 Feb 2021 18:17:27 -0500 Subject: [TUHS] Typing tutors In-Reply-To: <60205C88.6070808@gmail.com> References: <5cb7edc8-7d43-aa3a-334f-18e17aa2fa16@gmail.com> <2ffbcab8-b651-c3f9-d4ed-e9ff792cfde6@mhorton.net> <1a7a35b7-4105-20b7-65ed-1eabb663d0ca@mhorton.net> <60205C88.6070808@gmail.com> Message-ID: On Sun, Feb 7, 2021, 16:34 Nemo Nusquam wrote: > On 07/02/2021 12:43, Mary Ann Horton wrote (in part): > > I hated it when the PC-AT came along and moved Ctrl down and Esc up!
I > > depend on Ctrl being to the left of A and Esc left of 1, where God > > intended them to be! I used a Sun keyboard with a DIN adapter for > > years, until I came to SDG&E in 2007 and discovered a cache of USB Sun > > keyboards, half with the UNIX layout (yay!) and half with the PC > > layout (boo!) Word got around quickly that I liked them, and I wound > > up with several UNIX layout Sun keyboards. For good measure, I bought > > a 10-pack on eBay, so I'll have spares until the day they peel my cold > > dead fingers away from my UNIX layout keyboard. > My Sun UNIX layout keyboards (and mice) work quite well with my Macs. I > share your sentiments. > There was an early Apple ADB keyboard with control in the "right" place. I had two and I used them for many years with ADB to USB adapters until the keyboards became unreliable. drakaware.com makes a variety of keyboard adapters, including a Sun to USB if you just can't give up your Type 5 (or type 4!) -Henry > -------------- next part -------------- An HTML attachment was scrubbed... URL: From usotsuki at buric.co Mon Feb 8 09:55:04 2021 From: usotsuki at buric.co (Steve Nickolas) Date: Sun, 7 Feb 2021 18:55:04 -0500 (EST) Subject: [TUHS] Typing tutors In-Reply-To: References: <5cb7edc8-7d43-aa3a-334f-18e17aa2fa16@gmail.com> <2ffbcab8-b651-c3f9-d4ed-e9ff792cfde6@mhorton.net> <1a7a35b7-4105-20b7-65ed-1eabb663d0ca@mhorton.net> <60205C88.6070808@gmail.com> Message-ID: On Sun, 7 Feb 2021, Henry Bent wrote: > There was an early Apple ADB keyboard with control in the "right" place. I > had two and I used them for many years with ADB to USB adapters until the > keyboards became unreliable. The one from the Apple IIgs? -uso. 
From henry.r.bent at gmail.com Mon Feb 8 10:56:23 2021 From: henry.r.bent at gmail.com (Henry Bent) Date: Sun, 7 Feb 2021 19:56:23 -0500 Subject: [TUHS] Typing tutors In-Reply-To: References: <5cb7edc8-7d43-aa3a-334f-18e17aa2fa16@gmail.com> <2ffbcab8-b651-c3f9-d4ed-e9ff792cfde6@mhorton.net> <1a7a35b7-4105-20b7-65ed-1eabb663d0ca@mhorton.net> <60205C88.6070808@gmail.com> Message-ID: On Sun, Feb 7, 2021, 18:55 Steve Nickolas wrote: > On Sun, 7 Feb 2021, Henry Bent wrote: > > > There was an early Apple ADB keyboard with control in the "right" place. > I > > had two and I used them for many years with ADB to USB adapters until the > > keyboards became unreliable. > > The one from the Apple IIgs? > > -uso. > Actually an M0116, same layout but intended for a Mac. I can't remember where they came from originally - maybe a IIci or something of that vintage? -Henry > -------------- next part -------------- An HTML attachment was scrubbed... URL: From fair-tuhs at netbsd.org Mon Feb 8 15:15:47 2021 From: fair-tuhs at netbsd.org (Erik E. Fair) Date: Sun, 07 Feb 2021 21:15:47 -0800 Subject: [TUHS] Typing tutors In-Reply-To: References: , <10619.1491461840@cesium.clock.org> Message-ID: <17353.1612761347@cesium.clock.org> The first Apple Desktop Bus (ADB) keyboard was for the Apple IIgs: https://deskthority.net/wiki/Apple_Desktop_Bus_Keyboard and that led to the same keyboard layout for the Mac II ADB keyboards (the Mac 128K, Fat Mac, and Mac Plus did not use ADB for their keyboards): https://deskthority.net/wiki/Apple_M0116 That was the last Apple keyboard with the Control and Escape keys in the correct positions, particularly for those of us using Macs as terminals to Unix systems. https://en.wikipedia.org/wiki/Apple_Keyboard (a reasonably full history).
I took a typing class in 7th grade (early 1970s) on heavy, manual Smith-Corona typewriters, more or less contemporaneously with learning to program in BASIC on a DG Nova batch system, using non-punched Hollerith cards - we marked them with #2 pencils, and woe is you if you didn't fill in the dots well enough for your cards to be read by the card reader: correct your cards, and back of the batch queue for you! After that, it was years pounding TTY ASR-33s (interactive BASIC, rather than batch), Hazeltine h1500, LSI ADM-3a, Heathkit h19, HP 2621, the occasional DEC VT100 or VT102 ... after being hired in July 1988 by Apple, I've typed on basically nothing but Apple keyboards, with very occasional flirtations with third-party ADB or USB keyboards. I had a bad bout of repetitive strain injury (specifically, ulnar nerve syndrome - a cousin of carpal tunnel syndrome) in my left hand in the early 1990s, partly from pounding keyboards too hard for too long, and partly (I think) from wearing a Casio digital watch with the watchband cinched too tight (I hate floppy watch). I made three changes, after a month of PT and silly amounts of ibuprofen: velcro watch band/strap for near-infinite fine adjustment of fit (so as not to constrict my wrist), neoprene wrist rests in front of my keyboards, and training myself not to pound the keys so hard. I still have a small stock of Apple M0116 keyboards, though I've capitulated to the IBM PC "typist" keyboard layout with Control in the incorrect position; I've been using the Apple A1243 (US) Aluminum USB extended keyboard (with some replacement stock) since its introduction in 2007, and I'm moderately happy with it: thin keyboard, no wrist rest required, light touch - no key pounding required. The A1843 (optionally wireless USB keyboard with "lightning" port and no USB hub) is an OK replacement, but I use it strictly wired. Laptop keyboards also want a lighter touch these days. 
I'm glad I took the typing course, but I'm hardly a full touch typist. However, I'm fast enough that I prefer vi to emacs, as I've previously described. I'm not perfect, but that's what the backspace or DEL key is for (and, with a properly programmed tty line discipline: ^W (word erase)). Very glad I was already conversant with computers when it came time to write essays for UCB freshman English classes. That was also impetus to learn nroff. Erik From usotsuki at buric.co Mon Feb 8 15:33:16 2021 From: usotsuki at buric.co (Steve Nickolas) Date: Mon, 8 Feb 2021 00:33:16 -0500 (EST) Subject: [TUHS] Typing tutors In-Reply-To: <17353.1612761347@cesium.clock.org> References: , <10619.1491461840@cesium.clock.org> <17353.1612761347@cesium.clock.org> Message-ID: On Sun, 7 Feb 2021, Erik E. Fair wrote: > The first Apple Desktop Bus (ADB) keyboard was for the Apple IIgs: > > https://deskthority.net/wiki/Apple_Desktop_Bus_Keyboard > > and that lead to the same keyboard layout for the Mac II ADB keyboards (the > Mac 128K, Fat Mac, and Mac Plus did not use ADB for their keyboards): > > https://deskthority.net/wiki/Apple_M0116 > > That was the last Apple keyboard with the Control and Escape keys in the > correct positions, particularly for those of us using Macs as terminals to > Unix systems. > > https://en.wikipedia.org/wiki/Apple_Keyboard (a reasonably full history). I have an Apple Keyboard II, an M0487. The layout is kinda braindead, though good enough for its purpose, given that I only have the Mac it's connected to because I need a host for the //e card. I prefer the actual //e layout. That's pretty close to the M layout, but ~` is on the bottom about where Left Windows is on modern keyboards, and ESC, Ctrl and Caps Lock are where you guys would probably expect them. ;) But I don't really mind Ctrl in the corners. -uso. 
From merlyn at geeks.org Mon Feb 8 15:29:09 2021 From: merlyn at geeks.org (Doug McIntyre) Date: Sun, 7 Feb 2021 23:29:09 -0600 Subject: [TUHS] Typing tutors In-Reply-To: <60205C88.6070808@gmail.com> References: <5cb7edc8-7d43-aa3a-334f-18e17aa2fa16@gmail.com> <2ffbcab8-b651-c3f9-d4ed-e9ff792cfde6@mhorton.net> <1a7a35b7-4105-20b7-65ed-1eabb663d0ca@mhorton.net> <60205C88.6070808@gmail.com> Message-ID: On Sun, Feb 07, 2021 at 04:32:56PM -0500, Nemo Nusquam wrote: > My Sun UNIX layout keyboards (and mice) work quite well with my Macs. I > share your sentiments. Most of the bespoke mechanical keyboard makers will offer a dipswitch for what happens to the left of the A, and with an option to print the right value there, my keyboards work quite well the right way. I did use the Sun Type5 USB Unix layout for quite some years, but I always found it a bit mushy, and liked it better switching back to mechanical keyboards with the proper layout. From will.senn at gmail.com Tue Feb 9 01:17:54 2021 From: will.senn at gmail.com (Will Senn) Date: Mon, 8 Feb 2021 09:17:54 -0600 Subject: [TUHS] [TUHS} Typing Tutor [and tuhs archive] In-Reply-To: References: Message-ID: On 2/7/21 4:33 PM, George Michaelson wrote: > The "learn" about shell or editing required you to demonstrate you > could type with 'pack my box with six dozen liquor jugs' input gating > the lesson as I remember it. something else around that time, I think > the TOPS-10 typing tutor got me the home keys. Took another 10 years > for me to wake up to being able to type mostly eyes off the keyboard > but it sure seems to work (most of the time) now. > > -G > > OK. I was hoping somebody somewhere had used a unix typing tutor, but if the TOPS-10 tutor was the only thing out there, was it any good? Surely, somebody somewhen knows of others?
Will From will.senn at gmail.com Tue Feb 9 01:54:38 2021 From: will.senn at gmail.com (Will Senn) Date: Mon, 8 Feb 2021 09:54:38 -0600 Subject: [TUHS] Typing tutors In-Reply-To: References: <10619.1491461840@cesium.clock.org> <17353.1612761347@cesium.clock.org> Message-ID: <6d96d0e6-8276-0e27-aa8e-e2b4ef5cdc04@gmail.com> On 2/7/21 11:33 PM, Steve Nickolas wrote: > I have an Apple Keyboard II, an M0487. The layout is kinda braindead, > though good enough for its purpose, given that I only have the Mac > it's connected to because I need a host for the //e card. > > I prefer the actual //e layout. That's pretty close to the M layout, > but ~` is on the bottom about where Left Windows is on modern > keyboards, and ESC, Ctrl and Caps Lock are where you guys would > probably expect them. ;) > > But I don't really mind Ctrl in the corners. > > -uso. I have a //e, I can't stand the layout :) - foreign to my PC & Modern Mac experience. I keep hitting Caps Lock instead of control and delete doesn't work - it's left arrow?! But I definitely like the way they feel. Very satisfying to get feedback on the key presses, reminds me of the IBM PC. I'm no fan of my macs' chicklets, but I'm used to them and they're worth the tradeoff of having macness over non-macness. My favorite keyboard though, amongst my current hardware, is my Lenovo T430 Thinkpad, clean and clear and not mushy at all - too bad it's not a mac.
If the typing tutor is hiding anywhere, it would be straightforward to run it today. John Cowan http://vrici.lojban.org/~cowan cowan at ccil.org Does anybody want any flotsam? / I've gotsam. Does anybody want any jetsam? / I can getsam. --Ogden Nash, No Doctors Today, Thank You -------------- next part -------------- An HTML attachment was scrubbed... URL: From jon at fourwinds.com Tue Feb 9 04:05:29 2021 From: jon at fourwinds.com (Jon Steinhart) Date: Mon, 08 Feb 2021 10:05:29 -0800 Subject: [TUHS] [TUHS} Typing Tutor [and tuhs archive] In-Reply-To: References: Message-ID: <202102081805.118I5TvM4164561@darkstar.fourwinds.com> Will Senn writes: > > OK. I was hoping somebody somewhere had used a unix typing tutor, but if > the TOPS-10 tutor was the only thing out there, was it any good? Surely, > somebody somewhen knows of others? Never used a UNIX typing tutor since I took a typing class before UNIX was a thing, but when my kid was young I had her use tuxtype (and tuxmath, ...) which is pretty cool. From will.senn at gmail.com Tue Feb 9 04:11:08 2021 From: will.senn at gmail.com (Will Senn) Date: Mon, 8 Feb 2021 12:11:08 -0600 Subject: [TUHS] Macs and future unix derivatives Message-ID: All, I was introduced to Unix in the mid 1990's through my wife's VMS account at UT Arlington, where they had a portal to the WWW. I was able to download Slackware with the 0.9 kernel on 11 floppies including X11. I installed this on my system at the time - either a DEC Rainbow 100B? or a handme down generic PC. A few years later at Western Illinois University - they had some Sun Workstations there and I loved working with them. It would be several years later, though, that I would actually use unix in a work setting - 1998. I don't even remember what brand of unix, but I think it was again, sun, though no gui, so not as much love. 
Still, I was able to use rcs, and when my Windows-bound buddies lost a week's work because of some snafu with their backups, I didn't lose anything - jackflash was the name of the server - good memories :). However, after this it was all DOS and Windows until 2005. I'd been eyeing Macs for some time. I like the visual aesthetics and obvious design considerations. But, in 2005, I finally had a bonus big enough to actually buy one. I bought a G5 24" iMac and fell in love with Mac. Next, it was a 15" G4 Powerbook. I loved those Macs until Intel came around and then it was game over, no more PC's in my life (not really, but emotionally, this was how I felt). With Mac going intel, I could dual boot into Windows, triple boot into Linux, and quadruple boot into FreeBSD, and I could ditch Fink and finally manage my unix tools properly (arguable, I know) with Homebrew or MacPorts (lately, I've gone back to MacPorts due to Homebrew's lack of support for older OS versions, and for MacPorts' seeming rationality). Anyhow, I have thoroughly enjoyed the Mac ride, but with Catalina, the ride got really bumpy (too much phone home, no more 32 bit programs and since Adobe Acrobat X, which I own outright, isn't 64 bit, among other apps, this just is not an option for me), and with Big Sur, it's gotten worse, potholes, sinkholes, and suchlike, and the interface is downright patronizing (remember Microsoft Bob?). So, here I am, Mr. Run-Any-Cutting-Edge-OS anytime guy, hanging on tooth and nail to Mac OS Mojave where I still have a modicum of control over my environment. My thought for the day and question for the group is...
It seems that the options for a free operating system (free as in freedom) are becoming ever more limited - Microsoft, this week, announced that their Edge update will remove Edge Legacy and IE while doing the update - nuts; Mac's desktop is turning into IOS - ew, ick; and Linux is wild west meets dictatorship and major corporations are moving in to set their direction (Microsoft, Oracle, IBM, etc.). FreeBSD we've beat to death over the last couple of weeks, so I'll leave it out of the mix for now. What in our unix past speaks to the current circumstance and what do those of you who lived those events see as possibilities for the next revolution - and, will unix be part of it? And a bonus question, why, oh why, can't we have a contained kernel that provides minimal functionality (dare I say microkernel), that is securable, and layers above it that other stuff (everything else) can run on with auditing and suchlike for traceability? -------------- next part -------------- An HTML attachment was scrubbed... URL: From lm at mcvoy.com Tue Feb 9 04:14:29 2021 From: lm at mcvoy.com (Larry McVoy) Date: Mon, 8 Feb 2021 10:14:29 -0800 Subject: [TUHS] [TUHS} Typing Tutor [and tuhs archive] In-Reply-To: <202102081805.118I5TvM4164561@darkstar.fourwinds.com> References: <202102081805.118I5TvM4164561@darkstar.fourwinds.com> Message-ID: <20210208181429.GH13701@mcvoy.com> On Mon, Feb 08, 2021 at 10:05:29AM -0800, Jon Steinhart wrote: > Will Senn writes: > > > > OK. I was hoping somebody somewhere had used a unix typing tutor, but if > > the TOPS-10 tutor was the only thing out there, was it any good? Surely, > > somebody somewhen knows of others? > > Never used a UNIX typing tutor since I took a typing class before UNIX was > a thing, but when my kid was young I had her use tuxtype (and tuxmath, ...) > which is pretty cool. Wandering a little far afield but ... I'm a self taught touch typist, if you can call it that. I've just typed enough that it works. 
But it's a thing that works when I don't think about it, as soon as I go "oh, I'm typing without looking" it stops working. So I try not to think about it. Funny thing is that I bought an excavator maybe 5 years ago or so. Using the joysticks to control that thing is just like my touch typing. I've done it enough that if I'm not thinking about it, everything just works. And that's after about 300 hours of experience on it, which isn't a lot. It's some, not a lot. I'm good enough that I set this skidsteer upright with it, which was something, that excavator is way too small for that 8000 pound skid steer but I got her up. And didn't break anything, or anyone, in the process. http://mcvoy.com/lm/skidsteer-rescue.jpg From lm at mcvoy.com Tue Feb 9 04:21:23 2021 From: lm at mcvoy.com (Larry McVoy) Date: Mon, 8 Feb 2021 10:21:23 -0800 Subject: [TUHS] Macs and future unix derivatives In-Reply-To: References: Message-ID: <20210208182123.GI13701@mcvoy.com> On Mon, Feb 08, 2021 at 12:11:08PM -0600, Will Senn wrote: > And a bonus question, why, oh why, can't we have a contained kernel that > provides minimal functionality (dare I say microkernel), that is securable, > and layers above it that other stuff (everything else) can run on with > auditing and suchlike for traceability? I can answer the microkernel question I think. It's discipline. The only microkernel I ever liked was QNX and I liked it because it was a MICROkernel. The entire kernel easily fit in a 4K instruction cache. The only way that worked was discipline. There were 3 guys who could touch the kernel, one of them, Dan Hildebrandt, was sort of a friend of mine, we could, and did, have conversations about the benefits of a monokernel vs a microkernel. He agreed with me that QNX only worked because those 3 guys were really careful about what went into the kernel. There was none of this "Oh, I measured performance and it is only 1.5% slower now" nonsense, that's death by a hundred paper cuts. 
Instead, every change came with before and after cache miss counts under a benchmark. Stuff that increased the cache misses was heavily frowned upon. Most teams don't have that sort of discipline. They say they do, they think they do, but when marketing says we have to do $WHATEVER, it goes in. From jqcoffey at gmail.com Tue Feb 9 04:32:03 2021 From: jqcoffey at gmail.com (Justin Coffey) Date: Mon, 8 Feb 2021 10:32:03 -0800 Subject: [TUHS] Macs and future unix derivatives In-Reply-To: <20210208182123.GI13701@mcvoy.com> References: <20210208182123.GI13701@mcvoy.com> Message-ID: On Mon, Feb 8, 2021 at 10:22 AM Larry McVoy wrote: > On Mon, Feb 08, 2021 at 12:11:08PM -0600, Will Senn wrote: > > And a bonus question, why, oh why, can't we have a contained kernel that > > provides minimal functionality (dare I say microkernel), that is > securable, > > and layers above it that other stuff (everything else) can run on with > > auditing and suchlike for traceability? > > I can answer the microkernel question I think. It's discipline. > The only microkernel I ever liked was QNX and I liked it because it was > a MICROkernel. The entire kernel easily fit in a 4K instruction cache. > > The only way that worked was discipline. There were 3 guys who could > touch the kernel, one of them, Dan Hildebrandt, was sort of a friend > of mine, we could, and did, have conversations about the benefits of a > monokernel vs a microkernel. He agreed with me that QNX only worked > because those 3 guys were really careful about what went into the > kernel. There was none of this "Oh, I measured performance and it is > only 1.5% slower now" nonsense, that's death by a hundred paper cuts. > Instead, every change came with before and after cache miss counts > under a benchmark. Stuff that increased the cache misses was heavily > frowned upon. > > Most teams don't have that sort of discipline. They say they do, > they think they do, but when marketing says we have to do $WHATEVER, > it goes in. 
> This describes pretty much every project I've ever worked on. It starts small, with a manageable feature set and a clean and performant codebase and then succumbs to external pressure for features and slowly bloats. If the features prove useful then the project will live on of course (and those features may well be the reason the project lives on), but at some point the bloat and techdebt become the dominant development story. My question then is, are there any examples of projects that maintained discipline, focus and relevance over years/decades that serve as counter examples to the above statement(s)? OpenBSD? Go? Is there anything to learn here? -Justin -- +1 (858) 230-1436 jqcoffey at gmail.com ----- -------------- next part -------------- An HTML attachment was scrubbed... URL: From lm at mcvoy.com Tue Feb 9 04:39:45 2021 From: lm at mcvoy.com (Larry McVoy) Date: Mon, 8 Feb 2021 10:39:45 -0800 Subject: [TUHS] Macs and future unix derivatives In-Reply-To: References: <20210208182123.GI13701@mcvoy.com> Message-ID: <20210208183945.GJ13701@mcvoy.com> On Mon, Feb 08, 2021 at 10:32:03AM -0800, Justin Coffey wrote: > My question then is, are there any examples of projects that maintained > discipline, focus and relevance over years/decades that serve as counter > examples to the above statement(s)? OpenBSD? Go? Is there anything to > learn here? I also think it is team size. We never had more than 8 engineers on BitKeeper, and the core was really 4, and we kept adding features and the main code topped out at 128K lines. It got to 120K or so pretty quickly and then we just kept pushing for changesets that removed as much code as they added, bonus points for when they removed more than they added. I think if we had more people it would have been harder to keep things small and clean. 
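[Editor's note: the before-and-after cache-miss gating Larry describes can be approximated today with Linux perf counters; a hedged sketch in which the benchmark path and the acceptance rule are illustrative assumptions, not QNX's actual process.]

```shell
#!/bin/sh
# Sketch of a cache-miss gate in the spirit of the QNX kernel rule:
# a change is only considered if it does not raise the miss count
# under a fixed benchmark. "./benchmark" is a stand-in workload.
#
# Measuring one run (shown as a comment; requires perf and a benchmark):
#   perf stat -e cache-misses -x, ./benchmark 2>&1 | cut -d, -f1
#
# The gate itself is a plain comparison of the two counts:
compare_misses() {
    before=$1
    after=$2
    [ "$after" -le "$before" ]
}

# Example with made-up counts for a candidate change:
if compare_misses 120000 118500; then
    echo "change accepted: no new cache misses"
else
    echo "change rejected"
fi
```

The point is less the tooling than the policy: the measurement happens on every change, and the default answer to a regression is no.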
From henry.r.bent at gmail.com Tue Feb 9 04:42:00 2021 From: henry.r.bent at gmail.com (Henry Bent) Date: Mon, 8 Feb 2021 13:42:00 -0500 Subject: [TUHS] Macs and future unix derivatives In-Reply-To: References: Message-ID: On Mon, 8 Feb 2021 at 13:12, Will Senn wrote: > > Anyhow, I have thoroughly enjoyed the Mac ride, but with Catalina, the > ride got really bumpy (too much phone home, no more 32 bit programs and > since Adobe Acrobat X, which I own, outright, isn't 64 bit, among other > apps, this just in not an option for me), and with Big Sur, it's gotten > worse, potholes, sinkholes, and suchlike, and the interface is downright > patronizing (remember Microsoft Bob?). So, here I am, Mr. > Run-Any-Cutting-Edge-OS anytime guy, hanging on tooth and nail to Mac OS > Mojave where I still have a modicum of control over my environment. > I hear you on this one. I'm sticking with Mojave as well on my Mac laptop, but part of that is also because I refuse to give up on what is now an almost eight year old machine that has no real problems and has all of the hardware and ports I want. Apple loves to move quickly and abandon compatibility, and in that respect it's an interesting counterpoint to Linux or a *BSD where you can have decades old binaries that still run. > > And a bonus question, why, oh why, can't we have a contained kernel that > provides minimal functionality (dare I say microkernel), that is securable, > and layers above it that other stuff (everything else) can run on with > auditing and suchlike for traceability? > Oh no, not this can of worms... I bet Clem has quite a bit to say about this, but I'll boil it down to this: Mach bombed spectacularly (check out the Wikipedia article, it's pretty decent) and set the idea in people's heads that microkernels were not the way to go. If you wanted to write a microkernel OS today IMHO you'd need to be fully UNIX compatible, and you'd need to natively write EVERY syscall so that performance isn't horrible. 
This has turned out to be much harder than one might think at first glance. Just ask the GNU Hurd folks... All said, this is probably a space where the time and effort required to squeeze the last 10%, or 5%, or 1% of performance out of the hardware just isn't worth the time investment. -Henry -------------- next part -------------- An HTML attachment was scrubbed... URL: From drsalists at gmail.com Tue Feb 9 04:43:54 2021 From: drsalists at gmail.com (Dan Stromberg) Date: Mon, 8 Feb 2021 10:43:54 -0800 Subject: [TUHS] Macs and future unix derivatives In-Reply-To: References: Message-ID: On Mon, Feb 8, 2021 at 10:12 AM Will Senn wrote: > All, > > My thought for the day and question for the group is... It seems that the > options for a free operating system (free as in freedom) are becoming ever > more limited - Microsoft, this week, announced that their Edge update will > remove Edge Legacy and IE while doing the update - nuts; Mac's desktop is > turning into IOS - ew, ick; and Linux is wild west meets dictatorship and > major corporations are moving in to set their direction (Microsoft, Oracle, > IBM, etc.). FreeBSD we've beat to death over the last couple of weeks, so > I'll leave it out of the mix for now. What in our unix past speaks to the > current circumstance and what do those of you who lived those events see as > possibilities for the next revolution - and, will unix be part of it? > > And a bonus question, why, oh why, can't we have a contained kernel that > provides minimal functionality (dare I say microkernel), that is securable, > and layers above it that other stuff (everything else) can run on with > auditing and suchlike for traceability? > I love Linux, especially Debian lately. 
But I also have high hopes for Redox OS, and may switch someday: https://en.wikipedia.org/wiki/Redox_(operating_system) https://www.theregister.com/2019/11/29/after_four_years_rusty_os_nearly_selfhosting/ -------------- next part -------------- An HTML attachment was scrubbed... URL: From thomas.paulsen at firemail.de Tue Feb 9 04:45:54 2021 From: thomas.paulsen at firemail.de (Thomas Paulsen) Date: Mon, 08 Feb 2021 19:45:54 +0100 Subject: [TUHS] Macs and future unix derivatives In-Reply-To: References: Message-ID: <2d8643d8ab50f65d14db0fb53933a148@firemail.de> An HTML attachment was scrubbed... URL: From aek at bitsavers.org Tue Feb 9 06:07:04 2021 From: aek at bitsavers.org (Al Kossow) Date: Mon, 8 Feb 2021 12:07:04 -0800 Subject: [TUHS] Macs and future unix derivatives In-Reply-To: References: Message-ID: <8462ed90-ae4d-6223-d08f-d30edb63e013@bitsavers.org> On 2/8/21 10:11 AM, Will Senn wrote: > Anyhow, I have thoroughly enjoyed the Mac ride, but with Catalina, the ride got really bumpy The problem now is they have broken enough of the APIs that developers aren't willing to support old useful releases. Even MAME has recently been forced to abandon support for perfectly functioning machines before 10.15 and I have no desire to purchase any Apple product capable of running that release. From atrn at optusnet.com.au Tue Feb 9 06:41:02 2021 From: atrn at optusnet.com.au (Andrew Newman) Date: Tue, 9 Feb 2021 07:41:02 +1100 Subject: [TUHS] Typing tutors In-Reply-To: References: <5cb7edc8-7d43-aa3a-334f-18e17aa2fa16@gmail.com> <2ffbcab8-b651-c3f9-d4ed-e9ff792cfde6@mhorton.net> <1a7a35b7-4105-20b7-65ed-1eabb663d0ca@mhorton.net> <60205C88.6070808@gmail.com> Message-ID: <82B6B693-2C65-4CC6-B165-81AEA4D678E6@optusnet.com.au> On 8 Feb 2021, at 4:43 am, Mary Ann Horton wrote: > > I hated it when the PC-AT came along and moved Ctrl down and Esc up! I depend on Ctrl being to the left of A and Esc left of 1, where God intended them to be! 
I used a Sun keyboard with a DIN adapter for years, until I came to SDG&E in 2007 and discovered a cache of USB Sun keyboards, half with the UNIX layout (yay!) and half with the PC layout (boo!) Word got around quickly that I liked them, and I wound up with several UNIX layout Sun keyboards. For good measure, I bought a 10-pack on eBay, so I'll have spares until the day they peel my cold dead fingers away from my UNIX layout keyboard. As a long time emacs user that change drives me nuts but thankfully re-mapping makes things usable - cap-lock -> ctrl, swap left-alt and Windows-key (meta). BTW if you like buckling springs Unicomp make a Sun layout keyboard that looks okay - https://www.pckeyboard.com/page/product/40PSA (larger image at https://www.pckeyboard.com/mm5/graphics/00000001/Sun%20Keyboard1000x1000_800x800.png ) I’ve been using their IBM layout keyboards for years (mostly on Macs) and found them reliable and great to type on. But YMMV of course. If I were in the USA I’d get one of these Sun layout boards in an instant but shipping one to Australia costs more than the keyboard, which irks me enough to say no (at least for the time being). From dave at horsfall.org Tue Feb 9 07:50:39 2021 From: dave at horsfall.org (Dave Horsfall) Date: Tue, 9 Feb 2021 08:50:39 +1100 (EST) Subject: [TUHS] Typing tutors In-Reply-To: <668A9720-D414-4E90-ACD4-0E0A35D74F08@kdbarto.org> References: <5cb7edc8-7d43-aa3a-334f-18e17aa2fa16@gmail.com> <668A9720-D414-4E90-ACD4-0E0A35D74F08@kdbarto.org> Message-ID: On Sat, 6 Feb 2021, David Barto wrote: > The HS replaced the IBM Selectrics with cheap PC clones the next year. The Selectric was the best typewriter ever; it just felt "natural". I was about as in love with our secretary as she was with her typewriter :-) No; I happened to meet some other bird...
-- Dave From dave at horsfall.org Tue Feb 9 08:20:42 2021 From: dave at horsfall.org (Dave Horsfall) Date: Tue, 9 Feb 2021 09:20:42 +1100 (EST) Subject: [TUHS] Typing tutors In-Reply-To: References: <5cb7edc8-7d43-aa3a-334f-18e17aa2fa16@gmail.com> <2ffbcab8-b651-c3f9-d4ed-e9ff792cfde6@mhorton.net> Message-ID: On Sat, 6 Feb 2021, John Cowan wrote: > > I'm probably a youngster in this crowd (no, I'm not calling you old > > farts, more like people with a long history I respect and am willing > > to learn from). > In computer circles, that is what "old fart" means. I know that I'm gonna be outclassed here, but I taught myself BASIC, ALGOL, and FORTRAN (ugh! well, it was WATFOR then WATFIV) from my school days in the late 60s onwards. COBOL tried to be drilled into me, but I firmly rejected it (but for some odd reason I still know it, but deny all knowledge of it). -- Dave From clemc at ccc.com Tue Feb 9 09:01:23 2021 From: clemc at ccc.com (Clem Cole) Date: Mon, 8 Feb 2021 18:01:23 -0500 Subject: [TUHS] Typing tutors In-Reply-To: References: <5cb7edc8-7d43-aa3a-334f-18e17aa2fa16@gmail.com> <2ffbcab8-b651-c3f9-d4ed-e9ff792cfde6@mhorton.net> Message-ID: On Mon, Feb 8, 2021 at 5:21 PM Dave Horsfall wrote: > I know that I'm gonna be outclassed here, but I taught myself BASIC, > ALGOL, and FORTRAN (ugh! well, it was WATFOR then WATFIV) from my school > days in the late 60s onwards. > Many much older and more experienced than I on this list. I'm a relative youngster that started in the late 1960s. So Dave, I have to say, ditto, but I will add a couple of assemblers to the early list (360 BAL, HP2000, and PDP-8 and 10). My father showed me the GE-635 assembler in probably 1968, but I never managed to write anything meaningful in it. > > COBOL tried to be drilled into me, but I firmly rejected it (but for some > odd reason I still know it, but deny all knowledge of it). > Funny, I dodged COBOL, but not PL/1 and APL. 
With the latter, I maintained the York/APL interpreter on TSS for a bit. I also saw a number of languages on the 10's like SAIL, SNOBOL, and of course BLISS. All before I saw C on the Fifth Edition of UNIX. As I've said before, when I first saw it, I was not impressed. Little did I know Dennis and Ken would rot my brain - (and I'm thankful that they did). Clem -------------- next part -------------- An HTML attachment was scrubbed... URL: From david at kdbarto.org Tue Feb 9 08:58:19 2021 From: david at kdbarto.org (David Barto) Date: Mon, 8 Feb 2021 14:58:19 -0800 Subject: [TUHS] Typing tutors In-Reply-To: References: <5cb7edc8-7d43-aa3a-334f-18e17aa2fa16@gmail.com> <2ffbcab8-b651-c3f9-d4ed-e9ff792cfde6@mhorton.net> Message-ID: <06167F4D-5726-41C2-9A1D-906E8EA146A0@kdbarto.org> In HS COBOL was the only programming class offered. We punched the cards and got overnight service from the data center for the district. When I was a senior we got an ASR-33 that talked to a Honeywell at a local uni. With it we could login and run Basic programs. Real programming (APL, FORTRAN, UCSD Pascal, IBM 360 Assembly) awaited me at UCSD. David > On Feb 8, 2021, at 2:20 PM, Dave Horsfall wrote: > > On Sat, 6 Feb 2021, John Cowan wrote: > >> > I'm probably a youngster in this crowd (no, I'm not calling you old > farts, more like people with a long history I respect and am willing > to learn from). > >> In computer circles, that is what "old fart" means. > > I know that I'm gonna be outclassed here, but I taught myself BASIC, ALGOL, and FORTRAN (ugh! well, it was WATFOR then WATFIV) from my school days in the late 60s onwards. > > COBOL tried to be drilled into me, but I firmly rejected it (but for some odd reason I still know it, but deny all knowledge of it).
> > -- Dave From ggm at algebras.org Tue Feb 9 09:46:21 2021 From: ggm at algebras.org (George Michaelson) Date: Tue, 9 Feb 2021 09:46:21 +1000 Subject: [TUHS] [TUHS} Typing Tutor [and tuhs archive] In-Reply-To: References: Message-ID: It was very probably a precursor to this product on a VMS DECUS tape: https://www.digiater.nl/openvms/decus/vax83c/harris/aaareadme.txt This is from '83 but the one I was using was pre 79. Funny story: the account to run the typing tutor had an open login. The login had unlimited credit on JANET, the uk pre-internet X25 network. JANET was driving cost recovery models, so data was "budgeted" with real world money. I used the typing tutor login to make X29 PAD calls over JANET from York to Edinburgh to "talk" to my dad who was on EMAS at edinburgh uni. It was nice. I spent GBP200 of JANET "credits" and got caught at the end-of-month by the accounts team in the computer centre, hauled up before the professor for "hacking" and was formally reprimanded and required not to do it again (tm). Hacker career over before end of first term of first year university. On Tue, Feb 9, 2021 at 1:18 AM Will Senn wrote: > > On 2/7/21 4:33 PM, George Michaelson wrote: > > The "learn" about shell or editing required you to demonstrate you > > could type with 'pack my box with six dozen liquor jugs' input gating > > the lesson as I remember it. something else around that time, I think > > the TOPS-10 typing tutor got me the home keys. Took another 10 years > > for me to wake up to being able to type mostly eyes off the keyboard > > but it sure seems to work (most of the the time) now. > > > > -G > > > > > > OK. I was hoping somebody somewhere had used a unix typing tutor, but if > the TOPS-10 tutor was the only thing out there, was it any good? Surely, > somebody somewhen knows of others? 
> > Will From tytso at mit.edu Tue Feb 9 11:59:39 2021 From: tytso at mit.edu (Theodore Ts'o) Date: Mon, 8 Feb 2021 20:59:39 -0500 Subject: [TUHS] Macs and future unix derivatives In-Reply-To: References: <20210208182123.GI13701@mcvoy.com> Message-ID: On Mon, Feb 08, 2021 at 10:32:03AM -0800, Justin Coffey wrote: > This describes pretty much every project I've ever worked on. It starts > small, with a manageable feature set and a clean and performant codebase > and then succumbs to external pressure for features and slowly bloats. If > the features prove useful then the project will live on of course (and > those features may well be the reason the project lives on), but at some > point the bloat and techdebt become the dominant development story. The problem is users all want a different set of features. One person's "small and clean" is another person's "missing critical feature". This is one of the problems which the Linux enterprise distros see. Everyone wants everything to stay the same --- except for their own pet feature. Or because they want their new hardware (say, NVMe, which wasn't present in the version of the Linux kernel that was frozen for the enterprise distro three years ago). Ultimately, the reason why you can't have what you want boils down to sheer economics. If you want to pay a small team to give you exactly what you want, but nothing else, then sure, you can have what you want. If you and two dozen others want _exactly_ the same thing, then it will be a lot cheaper. But if you want something which is free as in beer, or even the cost of iOS, then it needs to have all of the features and hardware support for a much larger set of customers. It's interesting that some folks are complaining about "elitism"; but they don't seem to recognize that asking for something super small and clean, that is inherently elitist. I also suspect they don't *actually* want something _that_ small, in terms of feature set.
Do they *really* want something which is just V7 Unix, with nothing else? No TCP/IP, no hot-plug USB support? No web browsing? If so, it shouldn't be that hard to squeeze a PDP-11 into a laptop form factor, and they can just run V7 Unix. Easy-peasy! Oh, you wanted more than that? Feature bloat! Feature bloat! Feature bloat! Shame! Shame! Shame! - Ted From will.senn at gmail.com Tue Feb 9 12:15:00 2021 From: will.senn at gmail.com (Will Senn) Date: Mon, 8 Feb 2021 20:15:00 -0600 Subject: [TUHS] FreeBSD behind the times? (was: Favorite unix design principles?) In-Reply-To: References: <202101301950.10UJoWeA456408@darkstar.fourwinds.com> <20210130222854.GN4227@mcvoy.com> <20210130231119.GA33905@eureka.lemis.com> <20210130231750.GQ4227@mcvoy.com> Message-ID: <56acd577-dce1-1f46-619e-27bdd4b3d60c@gmail.com> On 1/30/21 5:22 PM, Warner Losh wrote some pretty good stuff and I needed a hook back into the thread... Today, I found out that on Jan 12, distrowatch transitioned to FreeBSD. I guess it's not so far behind the times to scare off the folks that track the bleeding edge :) -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: alcoppkfalhecjmi.png Type: image/png Size: 55860 bytes Desc: not available URL: From will.senn at gmail.com Tue Feb 9 12:16:50 2021 From: will.senn at gmail.com (Will Senn) Date: Mon, 8 Feb 2021 20:16:50 -0600 Subject: [TUHS] FreeBSD behind the times? (was: Favorite unix design principles?) 
In-Reply-To: <56acd577-dce1-1f46-619e-27bdd4b3d60c@gmail.com> References: <202101301950.10UJoWeA456408@darkstar.fourwinds.com> <20210130222854.GN4227@mcvoy.com> <20210130231119.GA33905@eureka.lemis.com> <20210130231750.GQ4227@mcvoy.com> <56acd577-dce1-1f46-619e-27bdd4b3d60c@gmail.com> Message-ID: <62228f40-f972-950b-11e5-b756b0ac1db6@gmail.com> On 2/8/21 8:15 PM, Will Senn wrote: > On 1/30/21 5:22 PM, Warner Losh wrote some pretty good stuff and I > needed a hook back into the thread... > > Today, I found out that on Jan 12, distrowatch transitioned to > FreeBSD. I guess it's not so far behind the times to scare off the > folks that track the bleeding edge :) > OK. So it was Jan 12, of 2020 - sue me :) -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: alcoppkfalhecjmi.png Type: image/png Size: 55860 bytes Desc: not available URL: From grog at lemis.com Tue Feb 9 12:30:45 2021 From: grog at lemis.com (Greg 'groggy' Lehey) Date: Tue, 9 Feb 2021 13:30:45 +1100 Subject: [TUHS] FreeBSD behind the times? (was: Favorite unix design principles?) In-Reply-To: <62228f40-f972-950b-11e5-b756b0ac1db6@gmail.com> References: <202101301950.10UJoWeA456408@darkstar.fourwinds.com> <20210130222854.GN4227@mcvoy.com> <20210130231119.GA33905@eureka.lemis.com> <20210130231750.GQ4227@mcvoy.com> <56acd577-dce1-1f46-619e-27bdd4b3d60c@gmail.com> <62228f40-f972-950b-11e5-b756b0ac1db6@gmail.com> Message-ID: <20210209023045.GC6168@eureka.lemis.com> On Monday, 8 February 2021 at 20:16:50 -0600, Will Senn wrote: > On 2/8/21 8:15 PM, Will Senn wrote: >> On 1/30/21 5:22 PM, Warner Losh wrote some pretty good stuff and I >> needed a hook back into the thread... >> >> Today, I found out that on Jan 12, distrowatch transitioned to >> FreeBSD. I guess it's not so far behind the times to scare off the >> folks that track the bleeding edge :) >> > OK. 
So it was Jan 12, of 2020 - sue me :) No, just proof that we're behind the times :-) Greg -- Sent from my desktop computer. Finger grog at lemis.com for PGP public key. See complete headers for address and phone numbers. This message is digitally signed. If your Microsoft mail program reports problems, please read http://lemis.com/broken-MUA -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 163 bytes Desc: not available URL: From m.douglas.mcilroy at dartmouth.edu Tue Feb 9 13:58:08 2021 From: m.douglas.mcilroy at dartmouth.edu (M Douglas McIlroy) Date: Mon, 8 Feb 2021 22:58:08 -0500 Subject: [TUHS] Macs and future unix derivatives Message-ID: > Do they *really* want something which is just V7 Unix, with nothing else? > No TCP/IP, no hot-plug USB support? No web browsing? > Oh, you wanted more than that? Feature bloat! Feature bloat! > Feature bloat! Shame! Shame! Shame! % ls /usr/share/man/man2|wc 495 495 7230 % ls /bin|wc 2809 2809 30468 How many of roughly 500 system calls (to say nothing of uncounted ioctl's) do you think are necessary for writing those few crucial capabilities that distinguish Linux from v7? There is undeniably bloat, but only a sliver of it contributes to the distinctive utility of today's systems. Or consider this. Unix grew by about 39 system calls in its first decade, but an average of 40 per decade ever since. Is this accelerated growth more symptomatic of maturity or of cancer? Doug From athornton at gmail.com Tue Feb 9 14:07:32 2021 From: athornton at gmail.com (Adam Thornton) Date: Mon, 8 Feb 2021 21:07:32 -0700 Subject: [TUHS] Macs and future unix derivatives In-Reply-To: References: Message-ID: Once you add the "s" editor (so you have a screen editor) and UUCP, v7 is an adequate daily driver, in my recent experience. 
No, it didn't actually replace my Mac, but with a way to get data on and off it and a decent editor (which I personally do not feel "ed" is)...it's totally OK. I can edit files and move them around, which honestly is most of my job. Adam On Mon, Feb 8, 2021 at 8:59 PM M Douglas McIlroy < m.douglas.mcilroy at dartmouth.edu> wrote: > > Do they *really* want something which is just V7 Unix, with nothing else? > > No TCP/IP, no hot-plug USB support? No web browsing? > > > Oh, you wanted more than that? Feature bloat! Feature bloat! > > Feature bloat! Shame! Shame! Shame! > > % ls /usr/share/man/man2|wc > 495 495 7230 > % ls /bin|wc > 2809 2809 30468 > > How many of roughly 500 system calls (to say nothing of uncounted > ioctl's) do you think are necessary for writing those few crucial > capabilities that distinguish Linux from v7? There is > undeniably bloat, but only a sliver of it contributes to the > distinctive utility of today's systems. > > Or consider this. Unix grew by about 39 system calls in its first > decade, but an average of 40 > per decade ever since. Is this accelerated growth more symptomatic of > maturity or of cancer? > > Doug > -------------- next part -------------- An HTML attachment was scrubbed... URL: From will.senn at gmail.com Tue Feb 9 14:13:14 2021 From: will.senn at gmail.com (Will Senn) Date: Mon, 8 Feb 2021 22:13:14 -0600 Subject: [TUHS] Macs and future unix derivatives In-Reply-To: References: Message-ID: <36a5b8f0-bfc0-120e-6226-bc107680e31e@gmail.com> On 2/8/21 9:58 PM, M Douglas McIlroy wrote: >> Do they *really* want something which is just V7 Unix, with nothing else? >> No TCP/IP, no hot-plug USB support? No web browsing? >> Oh, you wanted more than that? Feature bloat! Feature bloat! >> Feature bloat! Shame! Shame! Shame! 
> % ls /usr/share/man/man2|wc > 495 495 7230 > % ls /bin|wc > 2809 2809 30468 > > How many of roughly 500 system calls (to say nothing of uncounted > ioctl's) do you think are necessary for writing those few crucial > capabilities that distinguish Linux from v7? There is > undeniably bloat, but only a sliver of it contributes to the > distinctive utility of today's systems. > > Or consider this. Unix grew by about 39 system calls in its first > decade, but an average of 40 > per decade ever since. Is this accelerated growth more symptomatic of > maturity or of cancer? > > Doug I couldn't have said it better. I was looking at 'man syscalls' and shaking my head just a couple of days ago. I also just thought, hey, wouldn't it be nice to get a printed copy of the latest manpages - uh, that ain't gonna happen - 3000+ manpages, sheesh. Not sure I was thinking 'man widget_toolbar' would provide me with any deep insights anyway, maybe that's why it's in mann. Will -------------- next part -------------- An HTML attachment was scrubbed... URL: From andreww591 at gmail.com Tue Feb 9 15:10:05 2021 From: andreww591 at gmail.com (Andrew Warkentin) Date: Mon, 8 Feb 2021 22:10:05 -0700 Subject: [TUHS] Macs and future unix derivatives In-Reply-To: References: Message-ID: On 2/8/21, Will Senn wrote: > > My thought for the day and question for the group is... It seems that > the options for a free operating system (free as in freedom) are > becoming ever more limited - Microsoft, this week, announced that their > Edge update will remove Edge Legacy and IE while doing the update - > nuts; Mac's desktop is turning into IOS - ew, ick; and Linux is wild > west meets dictatorship and > major corporations are moving in to set > their direction (Microsoft, Oracle, > IBM, etc.). FreeBSD we've beat to death over the last couple of weeks, so > I'll leave it out of the mix for > now.
What in our unix past speaks to the current circumstance and what > do those of you who lived those events see as possibilities for the next > revolution - and, will unix be part of it? > Yes, those are almost exactly my thoughts on the current state of OSes. I'm hoping that I can change it by writing my own OS, because I don't see anyone else making anything that I consider to be a particularly good effort to solve those issues. However, it's still anyone's guess as to whether I will succeed. I'm trying to give it every chance of success that I can though by using existing code wherever possible and focusing on compatibility with Linux (or at least I will be once I get to that point). BTW, I welcome any contributions to UX/RT. I currently am still only working on the allocation subsystem (it's a bit trickier under seL4 than it would be on bare metal due to having to manage capabilities as well as memory), but I think I am pretty close to being able to move on. > > And a bonus question, why, oh why, can't we have a contained kernel that > provides minimal functionality (dare I say microkernel), that is > securable, and layers above it that other stuff (everything else) can > run on with auditing and suchlike for traceability? > A lot of people still seem to believe that microkernels are inherently slow, even though fast microkernels (specifically QNX) predate the slow ones by several years. Also, it seems as if much of the academic microkernel community thinks that there's a need to abandon Unix-like architecture entirely and relegate Unix programs to some kind of "penalty box" compatibility layer, but I don't see any reason why that is the case.
Certainly there are a lot of old Unix features that could be either reimplemented in terms of something more modern or just dropped entirely where doing so wouldn't break compatibility, but I still think it's possible to write a modern microkernel-native Unix-like OS that does most of what the various proposed or existing incompatible OSes do. > > I can answer the microkernel question I think. It's discipline. > The only microkernel I ever liked was QNX and I liked it because it was > a MICROkernel. The entire kernel easily fit in a 4K instruction cache. > > The only way that worked was discipline. There were 3 guys who could > touch the kernel, one of them, Dan Hildebrandt, was sort of a friend > of mine, we could, and did, have conversations about the benefits of a > monokernel vs a microkernel. He agreed with me that QNX only worked > because those 3 guys were really careful about what went into the > kernel. There was none of this "Oh, I measured performance and it is > only 1.5% slower now" nonsense, that's death by a hundred paper cuts. > Instead, every change came with before and after cache miss counts > under a benchmark. Stuff that increased the cache misses was heavily > frowned upon. > > Most teams don't have that sort of discipline. They say they do, > they think they do, but when marketing says we have to do $WHATEVER, > it goes in. > I still don't get why QNX hasn't had more influence on anything else even though it's been fairly successful commercially. If I am successful, UX/RT will only be the second usable non-QNX OS with significant QNX influence that I am aware of (after VSTa; there have been a couple other attempts at free QNX-like systems that I am aware of but they haven't really produced anything that could be considered complete). seL4 is fairly similar to QNX's kernel both in terms of architecture and design philosophy. That's why I'm using it in UX/RT. I may end up having to fork it at some point, but I am still going to keep to a strict policy of not adding something to the kernel unless there is no other way to reasonably implement it.
I may end up having to fork it at some point, but I am still going to keep to a strict policy of not adding something to the kernel unless there is no other way to reasonably implement it. For the sake of extensibility and security I'm also going to have a strict policy against adding non-file-based primitives of any kind, which is something that QNX hasn't done (no other OS has AFAICT, not even Plan 9, since it has anonymous memory, whereas UX/RT will use memory-mapped files in a tmpfs instead). On 2/8/21, Dan Stromberg wrote: > > But I also have high hopes for Redox OS, and may switch someday: > https://en.wikipedia.org/wiki/Redox_(operating_system) > https://www.theregister.com/2019/11/29/after_four_years_rusty_os_nearly_selfhosting/ > The biggest problem I see with Redox is that they insist on writing everything from scratch, whereas with UX/RT I am going to use existing code wherever it is reasonable (this includes using the LKL project to get access to basically the full range of Linux device drivers, filesystems, and network protocols). Also, their VFS architecture is a bit questionable IMO. Otherwise I might have been inclined to just contribute to Redox (UX/RT will use quite a bit of Rust, but it won't try to be a "Rust OS" like Redox is). I may end up incorporating more code from Redox though (I'm already going to use their heap allocator but will enhance it with dynamic resizing of the heap). In addition they claim it to be a microkernel when it is actually a Plan 9-style hybrid, since the kernel includes a bunch of Unix system calls as primitives, disqualifying it from being a microkernel. 
This is unlike QNX where the process server that implements the core of the Unix API is built into the kernel but accessed entirely through IPC and not through its own system calls (the UX/RT process server will be similar in scope to that of QNX and will similarly lack any kind of multi-personality infrastructure and be built alongside the kernel, but will be a separate binary). From andreww591 at gmail.com Tue Feb 9 15:21:36 2021 From: andreww591 at gmail.com (Andrew Warkentin) Date: Mon, 8 Feb 2021 22:21:36 -0700 Subject: [TUHS] Macs and future unix derivatives In-Reply-To: References: Message-ID: On 2/8/21, M Douglas McIlroy wrote: > > How many of roughly 500 system calls (to say nothing of uncounted > ioctl's) do you think are necessary for writing those few crucial > capabilities that distinguish Linux from v7? There is > undeniably bloat, but only a sliver of it contributes to the > distinctive utility of today's systems. > > Or consider this. Unix grew by about 39 system calls in its first > decade, but an average of 40 > per decade ever since. Is this accelerated growth more symptomatic of > maturity or of cancer? > > Doug > I'd say probably the latter. There's no good reason I can see for the proliferation of system calls. It should be possible to write a system where everything truly is a file, and reduce the actual primitives to basically just (variants of) read() and write(), with absolutely everything else (even other file APIs such as open()/close()) implemented on top of those. That's my plan for UX/RT. Extreme minimalism of primitives should make things more manageable, especially as far as access control and extensibility go. 
From tytso at mit.edu Tue Feb 9 15:29:58 2021 From: tytso at mit.edu (Theodore Ts'o) Date: Tue, 9 Feb 2021 00:29:58 -0500 Subject: [TUHS] Macs and future unix derivatives In-Reply-To: References: Message-ID: On Mon, Feb 08, 2021 at 10:58:08PM -0500, M Douglas McIlroy wrote: > > Do they *really* want something which is just V7 Unix, with nothing else? > > No TCP/IP, no hot-plug USB support? No web browsing? > > > Oh, you wanted more than that? Feature bloat! Feature bloat! > > Feature bloat! Shame! Shame! Shame! > > % ls /usr/share/man/man2|wc > 495 495 7230 > % ls /bin|wc > 2809 2809 30468 > > How many of roughly 500 system calls (to say nothing of uncounted > ioctl's) do you think are necessary for writing those few crucial > capabilities that distinguish Linux from v7? There is > undeniably bloat, but only a sliver of it contributes to the > distinctive utility of today's systems. Well, let's take a look at those system calls. They fall into a number of major categories: *) BSD innovations *) BSD socket interfaces (so if you want TCP/IP... is it bloat?) *) BSD job control *) BSD effective id and its extensions *) BSD groups *) New versions to maintain stable ABI's, e.g., (dup vs dup2 vs dup3, wait vs wait3 vs wait4 vs waitpid, stat vs stat64, lstat vs lstat64, chown vs chown32, etc.) *) System V IPC support (is support for enterprise databases like Oracle "bloat"?) *) Posix real-time extensions *) Posix extended attributes *) Windows file streams support (the original reason for the *at(2) system calls -- openat, linkat, renameat, and a dozen more) Ok, that last I'd agree was *pure* bloat, and an amazingly bad idea. But there are plenty of people who have bugged/begged me to add windows file streams because they were *convinced* it was a critical feature. 
And I dare say bug-for-bugs Windows compatibility was worth millions of $$$ of potential sales, which is why they agreed to add it --- and why I kept on getting nagged to add that feature to ext4 (and I pushed back where the Solaris developers caved, so there. :-) As for things like System V IPC support, that was only added to Linux because it was worth $$$, because enterprise databases like DB2 and Oracle demanded it. Is that evidence of "cancer"? You might not want it, but that's a great example of "one person's bloat is another person's critical feature". Or consider the dozen plus BSD sockets interface, which if removed would mean no TCP/IP support, and no graphical windowing systems. Critical feature, or bloat? But hey, if you only want V7 Unix, why are you complaining? Just go and use it, and give up on all of this cancerous new features. And I promise to get off of your lawn. :-) Cheers, - Ted From andreww591 at gmail.com Tue Feb 9 16:37:38 2021 From: andreww591 at gmail.com (Andrew Warkentin) Date: Mon, 8 Feb 2021 23:37:38 -0700 Subject: [TUHS] Macs and future unix derivatives In-Reply-To: References: Message-ID: On 2/8/21, Theodore Ts'o wrote: > On Mon, Feb 08, 2021 at 10:58:08PM -0500, M Douglas McIlroy wrote: >> > Do they *really* want something which is just V7 Unix, with nothing >> > else? >> > No TCP/IP, no hot-plug USB support? No web browsing? >> >> > Oh, you wanted more than that? Feature bloat! Feature bloat! >> > Feature bloat! Shame! Shame! Shame! >> >> % ls /usr/share/man/man2|wc >> 495 495 7230 >> % ls /bin|wc >> 2809 2809 30468 >> >> How many of roughly 500 system calls (to say nothing of uncounted >> ioctl's) do you think are necessary for writing those few crucial >> capabilities that distinguish Linux from v7? There is >> undeniably bloat, but only a sliver of it contributes to the >> distinctive utility of today's systems. > > Well, let's take a look at those system calls. 
They fall into a > number of major categories: > > *) BSD innovations > *) BSD socket interfaces (so if you want TCP/IP... is it bloat?) > *) BSD job control > *) BSD effective id and its extensions > *) BSD groups > *) New versions to maintain stable ABI's, e.g., (dup vs dup2 vs dup3, > wait vs wait3 vs wait4 vs waitpid, stat vs stat64, lstat vs lstat64, > chown vs chown32, etc.) > *) System V IPC support (is support for enterprise databases like > Oracle "bloat"?) > *) Posix real-time extensions > *) Posix extended attributes > *) Windows file streams support (the original reason for the *at(2) > system calls -- openat, linkat, renameat, and a dozen more) > > Ok, that last I'd agree was *pure* bloat, and an amazingly bad idea. > But there are plenty of people who have bugged/begged me to add > windows file streams because they were *convinced* it was a critical > feature. And I dare say bug-for-bugs Windows compatibility was worth > millions of $$$ of potential sales, which is why they agreed to add it > --- and why I kept on getting nagged to add that feature to ext4 (and > I pushed back where the Solaris developers caved, so there. :-) > > As for things like System V IPC support, that was only added to Linux > because it was worth $$$, because enterprise databases like DB2 and > Oracle demanded it. Is that evidence of "cancer"? You might not want > it, but that's a great example of "one person's bloat is another > person's critical feature". > > Or consider the dozen plus BSD sockets interface, which if removed > would mean no TCP/IP support, and no graphical windowing systems. > Critical feature, or bloat? > > But hey, if you only want V7 Unix, why are you complaining? Just go > and use it, and give up on all of this cancerous new features. And I > promise to get off of your lawn. :-) > There's no reason any of that has to be implemented with primitives though. All of that could be implemented on top of normal file APIs fairly easily. 
Also, sockets are not the ideal interface for a window server IMO. The only reason they are used is because conventional Unix didn't provide user-mode file server support until fairly recently (and the support that's been added to more recent conventional Unices is a hack that has poor performance and isn't used all that much). From gnu at toad.com Tue Feb 9 16:55:50 2021 From: gnu at toad.com (John Gilmore) Date: Mon, 08 Feb 2021 22:55:50 -0800 Subject: [TUHS] Macs and future unix derivatives In-Reply-To: References: Message-ID: <5372.1612853750@hop.toad.com> Henry Bent wrote: > Apple loves to move quickly and abandon > compatibility, and in that respect it's an interesting counterpoint to > Linux or a *BSD where you can have decades old binaries that still run. That was true decades ago, but no longer. In the intervening time, all the major Linux distributions have stopped releasing OS's that support 32-bit machines. Even those that support 32-bit CPUs have often desupported the earlier CPUs (like, what was wrong with the 80386?). Essentially NO applications require 64-bit address spaces, so arguably if they wanted to lessen their workload, they should have desupported the 64-bit architectures (or made kernels and OS's that would run on both from a single release). But that wouldn't give them the gee-whiz-look-at-all-the-new-features feeling. I ran 32-bit OS releases on all my 64-bit x86 hardware for years. They ran faster and smaller than the amd64 versions, and also ran old binaries for more than a decade. But their vendors and support teams decided that doing the release-engineering to keep them running was more work than pulling the plug. Even Fedora has desupported the One Laptop Per Child hardware now -- no new releases for millions of kids! And desupported all the other cheap Intel mobile CPUs, let alone your typical desktop 80386, 80486, or Pentium. Have you tried running Linux on a machine without a GPU these days? 
It's truly sad that to gain stupid animated window tricks, they broke compatibility with millions of existing systems. Here's one overview of the niche distros that still have x86 support: https://fossbytes.com/best-lightweight-linux-distros/ Even those are dropping like flies, e.g. Ubuntu MATE now says "For older hardware based on i386. Supported until April 2021", i.e. only til next month! The PuppyLinux.com web site is now a 404. Etc. (I'm not up on what the BSD releases are doing.) John From mphuff at gmail.com Tue Feb 9 17:05:57 2021 From: mphuff at gmail.com (Michael Huff) Date: Mon, 8 Feb 2021 22:05:57 -0900 Subject: [TUHS] Macs and future unix derivatives In-Reply-To: <5372.1612853750@hop.toad.com> References: <5372.1612853750@hop.toad.com> Message-ID: <13ded1a4-d717-c57c-5168-0f1f44ca4b5b@gmail.com> On 2/8/2021 9:55 PM, John Gilmore wrote: > Henry Bent wrote: >> Apple loves to move quickly and abandon >> compatibility, and in that respect it's an interesting counterpoint to >> Linux or a *BSD where you can have decades old binaries that still run. > That was true decades ago, but no longer. In the intervening time, all > the major Linux distributions have stopped releasing OS's that support > 32-bit machines. Even those that support 32-bit CPUs have often > desupported the earlier CPUs (like, what was wrong with the 80386?). > Essentially NO applications require 64-bit address spaces, so arguably > if they wanted to lessen their workload, they should have desupported > the 64-bit architectures (or made kernels and OS's that would run on > both from a single release). But that wouldn't give them the > gee-whiz-look-at-all-the-new-features feeling. > > I ran 32-bit OS releases on all my 64-bit x86 hardware for years. They > ran faster and smaller than the amd64 versions, and also ran old > binaries for more than a decade.
But their vendors and support teams > decided that doing the release-engineering to keep them running was more > work than pulling the plug. > > Even Fedora has desupported the One Laptop Per Child hardware now -- no > new releases for millions of kids! And desupported all the other cheap > Intel mobile CPUs, let alone your typical desktop 80386, 80486, or > Pentium. Have you tried running Linux on a machine without a GPU > these days? It's truly sad that to gain stupid animated window tricks, > they broke compatibility with millions of existing systems. > > Here's one overview of the niche distros that still have x86 support: > > https://fossbytes.com/best-lightweight-linux-distros/ > > Even those are dropping like flies, e.g. Ubuntu MATE now says "For older > hardware based on i386. Supported until April 2021", i.e. only til next > month! The PuppyLinux.com web site is now a 404. Etc. > > (I'm not up on what the BSD releases are doing.) > > John i386 has been demoted on FreeBSD: https://lists.freebsd.org/pipermail/freebsd-announce/2021-January/002006.html I don't think there's any change on NetBSD, no idea about OpenBSD but I assume they're the same. In all honesty, I don't think that backwards compatibility has ever been that great on Linux - at least not for the last twenty or so years, in my (limited) experience. It's not like Solaris where you could build on 2.4 and there's a good chance it will run on 11 or at least 10. From will.senn at gmail.com Tue Feb 9 17:17:21 2021 From: will.senn at gmail.com (Will Senn) Date: Tue, 9 Feb 2021 01:17:21 -0600 Subject: [TUHS] Macs and future unix derivatives In-Reply-To: <5372.1612853750@hop.toad.com> References: <5372.1612853750@hop.toad.com> Message-ID: On 2/9/21 12:55 AM, John Gilmore wrote: > Henry Bent wrote: >> Apple loves to move quickly and abandon >> compatibility, and in that respect it's an interesting counterpoint to >> Linux or a *BSD where you can have decades old binaries that still run.
> That was true decades ago, but no longer. In the intervening time, all > the major Linux distributions have stopped releasing OS's that support > 32-bit machines. Even those that support 32-bit CPUs have often > desupported the earlier CPUs (like, what was wrong with the 80386?). > Essentially NO applications require 64-bit address spaces, so arguably > if they wanted to lessen their workload, they should have desupported > the 64-bit architectures (or made kernels and OS's that would run on > both from a single release). But that wouldn't give them the > gee-whiz-look-at-all-the-new-features feeling. > > I ran 32-bit OS releases on all my 64-bit x86 hardware for years. They > ran faster and smaller than the amd64 versions, and also ran old > binaries for more than a decade. But their vendors and support teams > decided that doing the release-engineering to keep them running was more > work than pulling the plug. > > Even Fedora has desupported the One Laptop Per Child hardware now -- no > new releases for millions of kids! And desupported all the other cheap > Intel mobile CPUs, let alone your typical desktop 80386, 80486, or > Pentium. Have you tried running Linux on a machine without a GPU > these days? It's truly sad that to gain stupid animated window tricks, > they broke compatability with millions of existing systems. > > Here's one overview of the niche distros that still have x86 support: > > https://fossbytes.com/best-lightweight-linux-distros/ > > Even those are dropping like flies, e.g. Ubuntu MATE now says "For older > hardware based on i386. Supported until April 2021", i.e. only til next > month! The PuppyLinux.com web site is now a 404. Etc. > > (I'm not up on what the BSD releases are doing.) > > John > Sigh... 
32bit will be 2nd tier in FreeBSD 13 :) From gnu at toad.com Tue Feb 9 17:42:34 2021 From: gnu at toad.com (John Gilmore) Date: Mon, 08 Feb 2021 23:42:34 -0800 Subject: [TUHS] QNX In-Reply-To: References: Message-ID: <8092.1612856554@hop.toad.com> Andrew Warkentin wrote: > A lot of people still seem to believe that microkernels are inherently > slow, even though fast microkernels (specifically QNX) predate the > slow ones by several years. Wait, are we talking about the same operating system called QNX? We had a customer at Cygnus in the 1990s (perhaps QNX itself) who wanted us to port the GNU compilers to it. It was the slowest, buggiest system we ever tried to run our code on. I think they claimed POSIX compatibility; hollow laugh! It was more like 1970's compatibility. It had 14-character file names, the shell and utilities regularly core-dumped when doing ordinary work, everything had built-in random undocumented line length limits and file size limits and such (which was also true in V7 -- that's one thing Richard Stallman insisted on fixing in every GNU utility; see the GNU Coding Standards). Our GNU compiler tools ran everywhere, they hosted and bootstrapped on everything. Everything except QNX. Shell scripts and makefiles that worked on a hundred other UNIX systems were impossible to get working on QNX. I think we reported dozens of QNX bugs to the vendor, most of which never got fixed. Perhaps somewhere under all that crud there was some kind of "fast microkernel", but you couldn't prove it by me. By the time it got to user code, the only thing it was fast at was failing. We were trying to do real work on it, and gave up after some engineers turned the air blue with incredulous exclamations. I think we ended up cross-compiling the GNU compilers for it, from some sane system. They still had to fix a bunch of bugs in their libraries that we had to link with. I realize this flame is not about microkernels. 
But perhaps if they had spent less time optimizing cache hits in the microkernel, the rest of their system wouldn't have been shot full of obvious holes. John From bakul at iitbombay.org Tue Feb 9 18:30:29 2021 From: bakul at iitbombay.org (Bakul Shah) Date: Tue, 9 Feb 2021 00:30:29 -0800 Subject: [TUHS] Macs and future unix derivatives Message-ID: <75B88CD3-8E9E-416C-B494-458958620D2D@iitbombay.org> $ k-2.9t K 2.9t 2001-02-14 Copyright (C) 1993-2001 Kx Systems Evaluation. Not for commercial use. \ for help. \\ to exit. This is a *linux* x86 binary from almost exactly 20 years ago running on FreeBSD built from last Wednesday’s sources. $ uname -rom FreeBSD 13.0-ALPHA3 amd64 Generally compatibility support for previous versions of FreeBSDs has been decent when I have tried. Though the future for x86 support doesn’t look bright. > On Feb 8, 2021, at 10:56 PM, John Gilmore wrote: > > (I'm not up on what the BSD releases are doing.) From robert at timetraveller.org Tue Feb 9 21:03:43 2021 From: robert at timetraveller.org (Robert Brockway) Date: Tue, 9 Feb 2021 21:03:43 +1000 (AEST) Subject: [TUHS] QNX In-Reply-To: <8092.1612856554@hop.toad.com> References: <8092.1612856554@hop.toad.com> Message-ID: On Mon, 8 Feb 2021, John Gilmore wrote: > Wait, are we talking about the same operating system called QNX? Hi John. I found your comment really interesting. I have less experience with QNX than you do but I do have a long-standing interest in OS design and found it interesting from that perspective. QNX is a high performance real-time OS that has run critical infrastructure for decades. Notably it's been used to run nuclear power stations in many countries. Usage may have declined in recent years but in the vintage you're discussing (90s) QNX would have been widely used in critical infrastructure. I contracted to an electricity provider about 8 or 9 years ago and QNX was definitely still there. 
Looks like BlackBerry now own QNX and the OS is still out there keeping critical stuff running. For the record I am aware that microkernels are not intrinsically slow and I think they can offer significant advantages. MINIX 3 is an interesting design, pity about the name. Cheers, Rob From thomas.paulsen at firemail.de Tue Feb 9 21:34:54 2021 From: thomas.paulsen at firemail.de (Thomas Paulsen) Date: Tue, 09 Feb 2021 12:34:54 +0100 Subject: [TUHS] Macs and future unix derivatives In-Reply-To: References: Message-ID: <99de8071e86aa207ec97b7e2892706fb@firemail.de> >Or consider this. Unix grew by about 39 system calls in its first >decade, but an average of 40 >per decade ever since. Is this accelerated growth more symptomatic of >maturity or of cancer? 3rd option: competition. Linux competes against the very modern and certainly 'bloated' Windows and macOS operating systems, which define what an OS today is, and we must be better (we are already, for sure). From m.douglas.mcilroy at dartmouth.edu Tue Feb 9 22:22:01 2021 From: m.douglas.mcilroy at dartmouth.edu (M Douglas McIlroy) Date: Tue, 9 Feb 2021 07:22:01 -0500 Subject: [TUHS] Macs and future unix derivatives Message-ID: > Or consider this. Unix grew by about 39 system calls in its first > decade, but an average of 40 > > per decade ever since. Is this accelerated growth more symptomatic of > maturity or of cancer? Looks like I need a typing tutor. 39 should be 30. And a math tutor, too. 40 should be 100.
Doug From lm at mcvoy.com Wed Feb 10 00:05:16 2021 From: lm at mcvoy.com (Larry McVoy) Date: Tue, 9 Feb 2021 06:05:16 -0800 Subject: [TUHS] QNX In-Reply-To: <8092.1612856554@hop.toad.com> References: <8092.1612856554@hop.toad.com> Message-ID: <20210209140516.GN13701@mcvoy.com> On Mon, Feb 08, 2021 at 11:42:34PM -0800, John Gilmore wrote: > Andrew Warkentin wrote: > > A lot of people still seem to believe that microkernels are inherently > > slow, even though fast microkernels (specifically QNX) predate the > > slow ones by several years. > > Wait, are we talking about the same operating system called QNX? > > We had a customer at Cygnus in the 1990s (perhaps QNX itself) who wanted > us to port the GNU compilers to it. It was the slowest, buggiest system > we ever tried to run our code on. I think they claimed POSIX > compatibility; hollow laugh! Has to be a different system. The QNX I ran on could handle a bunch of users on terminals on a 286. But that was pre-POSIX, maybe the POSIX stuff was crap, I wasn't using it then. From tytso at mit.edu Wed Feb 10 02:13:30 2021 From: tytso at mit.edu (Theodore Ts'o) Date: Tue, 9 Feb 2021 11:13:30 -0500 Subject: [TUHS] Macs and future unix derivatives In-Reply-To: References: Message-ID: On Mon, Feb 08, 2021 at 11:37:38PM -0700, Andrew Warkentin wrote: > > But hey, if you only want V7 Unix, why are you complaining? Just go > > and use it, and give up on all of this cancerous new features. And I > > promise to get off of your lawn. :-) > > > There's no reason any of that has to be implemented with primitives > though. All of that could be implemented on top of normal file APIs > fairly easily. Everything can be implemented in terms of a turing machine tape, so I'm sure that's true. Whether or not it would be *performant* and *secure* in the face of application level bugs might be a different story, though. 
In fact, some of the terrible semantics of the Posix interfaces exist only because there were traditional Unix vendors on the standards committee insisting on semantics that *could* be implemented using a user-mode library on top of normal file API's (I'm looking at you, fcntl locking semantics, where a close of *any* file descriptor, even an fd cloned via dup(2) or fork(2), will release the lock). So yes, Posix fcntl(2) locking *can* be implemented in terms of normal file API's.... AND IT WAS A TERRIBLE IDEA. (For more details, see [1].) [1] https://www.samba.org/samba/news/articles/low_point/tale_two_stds_os2.html I could go on about other spectacularly bad ideas enshrined in POSIX, such as telldir(2) and seekdir(2), which date all the way back to the assumption that directories should only be implemented in terms of linear linked lists with O(n) lookup performance, but I don't whine about that as feature bloat imposed by external standards, but just the cost of doing business. (Or at least, of relevance.) > Also, sockets are not the ideal interface for a window server IMO. The > only reason they are used is because conventional Unix didn't provide > user-mode file server support until fairly recently (and the support > that's been added to more recent conventional Unices is a hack that > has poor performance and isn't used all that much). I'm not sure what you're referring to; if you mean the *at(2) system calls, which is why they exist in Linux (not for !@#!? Windows file streams support); they are needed to provide secure and performant user-mode file servers for things like Samba. Trying to implement a user-space file server using only the V7 Unix primitives will cause you to have some really horrible Time of Check vs Time of Use (TOCTOU) security gaps; you can narrow the TOCTOU races with some terrible performance-sucking impacts, but removing them entirely is almost impossible.
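To make the contrast concrete, here is a minimal editorial sketch of the *at(2) idiom (not code from the thread; Python's os.open exposes openat(2) through its dir_fd argument, and the file names are invented for illustration). A V7-style server doing open(base + "/" + name) re-walks the whole path on every request, so an attacker who swaps a path component mid-walk wins the race; pinning the directory to a descriptor once removes that window:

```python
import os
import tempfile

# Build a throwaway "export root" with one file in it (scaffolding only).
base = tempfile.mkdtemp()
with open(os.path.join(base, "motd"), "w") as f:
    f.write("hello\n")

# Resolve the directory ONCE and hold it as a descriptor.
dirfd = os.open(base, os.O_RDONLY | os.O_DIRECTORY)
try:
    # openat(dirfd, "motd", O_RDONLY): only the final component is
    # resolved, relative to dirfd -- no rename of the parent path can
    # redirect us between the check and the use.
    fd = os.open("motd", os.O_RDONLY, dir_fd=dirfd)
    try:
        data = os.read(fd, 64)
    finally:
        os.close(fd)
finally:
    os.close(dirfd)

print(data)  # b'hello\n'
```

The same shape carries over to fstatat(2), unlinkat(2), and the rest of the family: one authoritative directory handle, per-name operations relative to it.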
The reason why it's not used that much is because a lot of programmers want to be compatible with OS's that don't support those new interfaces --- and so they don't use it. And that's the final thing for folks to remember. There's an old saying, "without software, it's just a paperweight." This is just as true for an OS; if you don't have application software, who cares how clean it is? And it had better be performant, and not just a Posix-compliant layer afterthought demanded by the Product Manager as a checklist feature item. Rob Pike talked about this over two decades ago in his talk, Systems Software Research is Irrelevant[2]. The slide talking about Standards (page #13) is especially relevant here: To be a viable computer system, one must honor a huge list of large, and often changing, standards: TCP/IP, HTTP, HTML, XML, CORBA, Unicode, POSIX, NFS, SMB, MIME, POP, IMAP, X, ... A huge amount of work, but if you don’t honor the standards you’re marginalized. [2] http://herpolhode.com/rob/utah2000.pdf That's because most people aren't going to port or rewrite application software for some random OS, whether it is a research OS or someone's new "simple, clean, reimplementation". And most users do expect to have a working web browser.... and text editor..., and their favorite games, whether it's nethack or spacewars, etc., etc., etc. If you want to call it feature bloat, so be it. But that seems to me like it's an excuse made by people who are bitter that people aren't using (or paying for, or contributing to) their pet operating system.
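Going back to the fcntl(2) locking complaint above, the misfeature is easy to demonstrate in a few lines. This is an editorial sketch, not code from the thread: Python's fcntl.lockf issues the same F_SETLK fcntl, and the fork/pipe plumbing is only scaffolding to sequence the two processes:

```python
import fcntl
import os
import tempfile

# Shows the POSIX misfeature: closing ANY descriptor for a file drops
# every fcntl/lockf lock the process holds on that file, even when the
# lock was taken on a different, still-open descriptor.

def try_lock(fd):
    """True if a non-blocking exclusive lock on fd succeeds."""
    try:
        fcntl.lockf(fd, fcntl.LOCK_EX | fcntl.LOCK_NB)
        return True
    except OSError:
        return False

path = tempfile.NamedTemporaryFile(delete=False).name
go_r, go_w = os.pipe()        # parent -> child: "probe now"
res_r, res_w = os.pipe()      # child -> parent: probe result

pid = os.fork()
if pid == 0:                  # child: probe the lock at two points
    cfd = os.open(path, os.O_RDWR)
    os.read(go_r, 1)
    os.write(res_w, b"Y" if try_lock(cfd) else b"N")
    os.read(go_r, 1)
    os.write(res_w, b"Y" if try_lock(cfd) else b"N")
    os._exit(0)

fd = os.open(path, os.O_RDWR)
fcntl.lockf(fd, fcntl.LOCK_EX)     # take the lock on fd ...
dup = os.dup(fd)                   # ... and clone a second descriptor

os.write(go_w, b"x")
while_held = os.read(res_r, 1)     # child is locked out: b'N'

os.close(dup)                      # close only the DUPLICATE ...
os.write(go_w, b"x")
after_close = os.read(res_r, 1)    # ... yet the child now succeeds: b'Y'

os.waitpid(pid, 0)
print(while_held, after_close)     # prints b'N' b'Y'
```

The original descriptor fd is still open the whole time; the lock evaporated anyway, which is exactly why a library that opens and closes files behind your back (getpwnam(3) is the classic offender) can silently destroy your locks.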
Cheers, - Ted From mah at mhorton.net Wed Feb 10 02:29:46 2021 From: mah at mhorton.net (Mary Ann Horton) Date: Tue, 9 Feb 2021 08:29:46 -0800 Subject: [TUHS] Typing tutors In-Reply-To: References: <5cb7edc8-7d43-aa3a-334f-18e17aa2fa16@gmail.com> <668A9720-D414-4E90-ACD4-0E0A35D74F08@kdbarto.org> Message-ID: On 2/8/21 1:50 PM, Dave Horsfall wrote: > On Sat, 6 Feb 2021, David Barto wrote: > >> The HS replaced the IBM Selectrics with cheap PC clones the next year. > > The Selectric was the best typewriter ever; it just felt "natural". > -- Dave I totally agree about the Selectric keyboard. As a grad student, I looked for keyboards that felt like a Selectric, and considered it a requirement to have tactile feedback when I hit a key. Sadly now, "chiclet" keyboards are considered the gold standard, and two thumbs on a phone is the new Mavis Beacon. From cowan at ccil.org Wed Feb 10 03:31:52 2021 From: cowan at ccil.org (John Cowan) Date: Tue, 9 Feb 2021 12:31:52 -0500 Subject: [TUHS] Macs and future unix derivatives In-Reply-To: References: Message-ID: On Tue, Feb 9, 2021 at 11:14 AM Theodore Ts'o wrote: I'm looking at you, > fcntl locking semantics, where a close of *any* file descriptor, even > an fd cloned via dup(2) or fork(2) will release the lock. > >From BTSJ 57:6: > The file system maintains no locks visible to the user, nor is there any > restriction on the number of users who may have a file open for reading or > writing. Although it is possible for the contents of a file to become > scrambled when two users write on it simultaneously, in practice > difficulties do not arise. We take the view that locks are neither > necessary nor sufficient, in our environment, to prevent interference > between users of the same file. They are unnecessary because we are not > faced with large, single-file databases maintained by independent > processes.
They are insufficient because locks in the ordinary sense, > whereby one user is prevented from writing on a file that another user is > reading, cannot prevent confusion when, for example, both users are editing > a file with an editor that makes a copy of the file being edited. > There are, however, sufficient internal interlocks to maintain the logical > consistency of the file system when two users engage simultaneously in > activities such as writing on the same file, creating files in the same > directory, or deleting each other’s open files. (end) John Cowan http://vrici.lojban.org/~cowan cowan at ccil.org How they ever reached any conclusion at all is starkly unknowable to the human mind. --"Backstage Lensman", Randall Garrett From cym224 at gmail.com Wed Feb 10 04:24:05 2021 From: cym224 at gmail.com (Nemo Nusquam) Date: Tue, 09 Feb 2021 13:24:05 -0500 Subject: [TUHS] QNX In-Reply-To: References: <8092.1612856554@hop.toad.com> Message-ID: <6022D345.9080001@gmail.com> On 09/02/2021 06:03, Robert Brockway wrote (in part): > Looks like BlackBerry now own QNX and the OS is still out there > keeping critical stuff running. According to their website, they are in 175 million vehicles (and a bunch of other things). N. From cym224 at gmail.com Wed Feb 10 04:29:50 2021 From: cym224 at gmail.com (Nemo Nusquam) Date: Tue, 09 Feb 2021 13:29:50 -0500 Subject: [TUHS] Macs and future unix derivatives In-Reply-To: References: Message-ID: <6022D49E.4000209@gmail.com> On 08/02/2021 22:58, M Douglas McIlroy wrote: > % ls /usr/share/man/man2|wc > 495 495 7230 > % ls /bin|wc > 2809 2809 30468 Whoa! Is this Linux? On my Solaris 10 boxen, I find: [~]=> ls /bin |wc 1113 1113 10256 [~]=> ls /usr/share/man/man2|wc 219 219 2299 N.
> How many of roughly 500 system calls (to say nothing of uncounted > ioctl's) do you think are necessary for writing those few crucial > capabilities that distinguish Linux from v7? There is > undeniably bloat, but only a sliver of it contributes to the > distinctive utility of today's systems. > > Or consider this. Unix grew by about 39 system calls in its first > decade, but an average of 40 > per decade ever since. Is this accelerated growth more symptomatic of > maturity or of cancer? > > Doug From jon at fourwinds.com Wed Feb 10 05:00:15 2021 From: jon at fourwinds.com (Jon Steinhart) Date: Tue, 09 Feb 2021 11:00:15 -0800 Subject: [TUHS] Macs and future unix derivatives In-Reply-To: References: Message-ID: <202102091900.119J0Gv9850825@darkstar.fourwinds.com> Theodore Ts'o writes: > On Mon, Feb 08, 2021 at 10:58:08PM -0500, M Douglas McIlroy wrote: > > > Do they *really* want something which is just V7 Unix, with nothing else? > > > No TCP/IP, no hot-plug USB support? No web browsing? > > > > > Oh, you wanted more than that? Feature bloat! Feature bloat! > > > Feature bloat! Shame! Shame! Shame! > > > > % ls /usr/share/man/man2|wc > > 495 495 7230 > > % ls /bin|wc > > 2809 2809 30468 > > > > How many of roughly 500 system calls (to say nothing of uncounted > > ioctl's) do you think are necessary for writing those few crucial > > capabilities that distinguish Linux from v7? There is > > undeniably bloat, but only a sliver of it contributes to the > > distinctive utility of today's systems. > > Well, let's take a look at those system calls. They fall into a > number of major categories: > > *) BSD innovations > *) BSD socket interfaces (so if you want TCP/IP... is it bloat?) > *) BSD job control > *) BSD effective id and its extensions > *) BSD groups > *) New versions to maintain stable ABI's, e.g., (dup vs dup2 vs dup3, > wait vs wait3 vs wait4 vs waitpid, stat vs stat64, lstat vs lstat64, > chown vs chown32, etc.) 
> *) System V IPC support (is support for enterprise databases like > Oracle "bloat"?) > *) Posix real-time extensions > *) Posix extended attributes > *) Windows file streams support (the original reason for the *at(2) > system calls -- openat, linkat, renameat, and a dozen more) > > Ok, that last I'd agree was *pure* bloat, and an amazingly bad idea. > But there are plenty of people who have bugged/begged me to add > windows file streams because they were *convinced* it was a critical > feature. And I dare say bug-for-bugs Windows compatibility was worth > millions of $$$ of potential sales, which is why they agreed to add it > --- and why I kept on getting nagged to add that feature to ext4 (and > I pushed back where the Solaris developers caved, so there. :-) > > As for things like System V IPC support, that was only added to Linux > because it was worth $$$, because enterprise databases like DB2 and > Oracle demanded it. Is that evidence of "cancer"? You might not want > it, but that's a great example of "one person's bloat is another > person's critical feature". > > Or consider the dozen plus BSD sockets interface, which if removed > would mean no TCP/IP support, and no graphical windowing systems. > Critical feature, or bloat? > > But hey, if you only want V7 Unix, why are you complaining? Just go > and use it, and give up on all of this cancerous new features. And I > promise to get off of your lawn. :-) I'm with Doug here. Some time ago, I was asked to give a talk at OSU based on my life with UNIX. Mostly figured out what to say while driving down there. I started the talk by posing the question of what makes UNIX great. I said that while many people had different answers, to me it was the good abstractions and the composability of programs that it supported. I then asked why so many people had forgotten that, which started a very lively discussion.
When I look at linux (and pretty much anything else today), the loss of good abstractions is quite evident and in my opinion is responsible for much of the bloat and mess. This didn't begin with linux. To me, it started when UNIX moved out of research. Changes were made without any artistry or elegance. And it happened in BSD-land too. I never liked the socket interface, would have preferred to open("/dev/ip"). But, as Clem and I have discussed, the socket API was the result of the original code coming from elsewhere (BBN I think), and the political desire to keep the networking code separable from the rest of the kernel. At this point in time, I feel that linux is suffering from the tragedy of the commons. I'm not interested in getting into an argument over this. My opinion is that more direction would have helped. I think that there was a golden era for contribution that has passed. There was a time when, because of the cost of machines and education, open-source contributors were somewhat of the same caliber. But now that machines are essentially free and every java programmer thinks that they know everything, the quality of contributions and especially design has dropped. While I use linux, it's no longer a system that I trust in any way. It's huge, bloated, rats-nest undocumented code. Yes, I know that a small percentage of people like Ted spend a lot of time on the code and therefore know it well. But I find it an impenetrable mess. When I first had a need to look at the UNIX kernel, it was easy to figure out what was going on. Not so with linux. Of course, I'm getting old and maybe not as good at this stuff as I used to be. You might claim that the code is stable and debugged, but if that was the case then I wouldn't be getting a new kernel on practically every daily update. To me, it also seems like linux is focused on two ends of the spectrum with the middle mostly ignored. Lots of focus on embedded and data centers.
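For concreteness, a sketch of the contrast Jon is drawing. The open("/dev/ip") call is hypothetical (Plan 9's /net later made a file-style network interface real), while the BSD sequence below, shown here through Python's thin wrapper over the C API, is what we actually got: a dedicated socket() primitive plus bind/listen/accept before the descriptor behaves like a file:

```python
import socket

# The BSD shape: special-purpose calls to create and wire up endpoints,
# rather than open("/dev/ip") followed by plain reads and writes.
srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)  # not open("/dev/ip")
srv.bind(("127.0.0.1", 0))          # port 0: let the kernel pick one
srv.listen(1)
port = srv.getsockname()[1]

cli = socket.create_connection(("127.0.0.1", port))      # connect()
conn, _ = srv.accept()

cli.sendall(b"ping")                # once set up, it IS just write ...
echoed = conn.recv(4)               # ... and read, which is rather the point

for s in (cli, conn, srv):
    s.close()
print(echoed)                       # b'ping'
```

Only the setup differs between the two designs; the steady-state I/O is ordinary descriptor traffic either way, which is why the file-style alternative always looked so tempting.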
As a desktop system, it's tolerable but generally annoying. Again, somewhat a function of the tragedy of the commons; the environment consists of programs with little consistency. I think that if one compared the user interfaces to, let's say, gimp, audacity, and blender, the common features would be close to the null set. And those are just a few "major" programs. So a more interesting question to me is, when is linux going to die under its own weight, and what if anything will replace it. With the exception of Windows, most everything today has some roots in UNIX. If one was going to reimagine a system going back to the UNIX philosophy, what would it look like? Is there likely to ever be an opportunity for a replacement, designed system or are we stuck with the status quo forever? Jon From tytso at mit.edu Wed Feb 10 05:02:34 2021 From: tytso at mit.edu (Theodore Ts'o) Date: Tue, 9 Feb 2021 14:02:34 -0500 Subject: [TUHS] Macs and future unix derivatives In-Reply-To: <5372.1612853750@hop.toad.com> References: <5372.1612853750@hop.toad.com> Message-ID: On Mon, Feb 08, 2021 at 10:55:50PM -0800, John Gilmore wrote: > > That was true decades ago, but no longer. In the intervening time, all > the major Linux distributions have stopped releasing OS's that support > 32-bit machines. Even those that support 32-bit CPUs have often > desupported the earlier CPUs (like, what was wrong with the 80386?). > Essentially NO applications require 64-bit address spaces, so arguably > if they wanted to lessen their workload, they should have desupported > the 64-bit architectures (or made kernels and OS's that would run on > both from a single release). But that wouldn't give them the > gee-whiz-look-at-all-the-new-features feeling. 
So there is currently a *single* volunteer supporting the 32-bit i386 platform for Debian, and in December 2020 there was an e-mail thread asking whether there were volunteer resources to be able to provide the necessary support (testing installers, building and testing packages for security updates, etc.) for the 3.5 years of stable support. I don't believe the final decision has been made, but if more people were willing to volunteer to make it happen, or to pay $$$ to provide that support, I'm sure Debian would be very happy to keep i386 on life support for the next stable release. Ultimately, it's all about what people are willing to support by providing direct volunteer support, or by putting the money where their mouth is. "We have met the enemy, and he is us." - Ted From chet.ramey at case.edu Wed Feb 10 05:06:25 2021 From: chet.ramey at case.edu (Chet Ramey) Date: Tue, 9 Feb 2021 14:06:25 -0500 Subject: [TUHS] Macs and future unix derivatives In-Reply-To: References: Message-ID: On 2/9/21 12:31 PM, John Cowan wrote: > From BTSJ 57:6: > > The file system maintains no locks visible to the user, nor is there > any restriction on the number of users who may have a file open for > reading or writing. Although it is possible for the contents of a file > to become scrambled when two users write on it simultaneously, in > practice difficulties do not arise.We take the view that locks are > neither necessary nor sufficient, in our environment, to prevent > interference between users of the same file. They are unnecessary > because we are not faced with large, single-file databases maintained > by independent processes. They are insufficient because locks in the > ordinary sense, whereby one user is prevented from writing on a file > that another user is reading, cannot prevent confusion when, for > example, both users are editing a file with an editor that makes a copy > of the file being edited. "In our environment" is doing some pretty heavy lifting there. 
-- ``The lyf so short, the craft so long to lerne.'' - Chaucer ``Ars longa, vita brevis'' - Hippocrates Chet Ramey, UTech, CWRU chet at case.edu http://tiswww.cwru.edu/~chet/ From txomsy at yahoo.es Wed Feb 10 06:18:29 2021 From: txomsy at yahoo.es (Jose R Valverde) Date: Tue, 9 Feb 2021 20:18:29 +0000 (UTC) Subject: [TUHS] QNX In-Reply-To: <6022D345.9080001@gmail.com> References: <8092.1612856554@hop.toad.com> <6022D345.9080001@gmail.com> Message-ID: <1707674178.2144305.1612901909580@mail.yahoo.com> My experience was that old QNX (the one that was distributed on a floppy) was certainly wanting. However, the newer one, whose source was open to the community during the 90s and part of the 2000s (if I remember well), was pretty solid and standard. I ported a (significant) number of complex packages to it and made distribution packages without any problem. BTW, it was far advanced, making heavy use of Union file systems (sorta like the modern Snaps of Ubuntu) and the development tools were pretty powerful and comfy to use. Then it switched hands and the source was closed again. Which was a pity. I still cherish my copies of the source. On Tuesday, 9 February 2021 19:25:29 CET, Nemo Nusquam wrote: On 09/02/2021 06:03, Robert Brockway wrote (in part): > Looks like BlackBerry now own QNX and the OS is still out there > keeping critical stuff running. According to their website, they are in 175 million vehicles (and a bunch of other things). N. From wobblygong at gmail.com Wed Feb 10 08:59:17 2021 From: wobblygong at gmail.com (Wesley Parish) Date: Wed, 10 Feb 2021 11:59:17 +1300 Subject: [TUHS] Macs and future unix derivatives In-Reply-To: <5372.1612853750@hop.toad.com> References: <5372.1612853750@hop.toad.com> Message-ID: Many of those mentioned in the fossbytes article have become 64-bit only. But I can recommend Anti-X (pronounced Antics) as a suitable OS for an old-but-good i386 box or laptop.
Wesley Parish On 2/9/21, John Gilmore wrote: > Henry Bent wrote: >> Apple loves to move quickly and abandon >> compatibility, and in that respect it's an interesting counterpoint to >> Linux or a *BSD where you can have decades old binaries that still run. > > That was true decades ago, but no longer. In the intervening time, all > the major Linux distributions have stopped releasing OS's that support > 32-bit machines. Even those that support 32-bit CPUs have often > desupported the earlier CPUs (like, what was wrong with the 80386?). > Essentially NO applications require 64-bit address spaces, so arguably > if they wanted to lessen their workload, they should have desupported > the 64-bit architectures (or made kernels and OS's that would run on > both from a single release). But that wouldn't give them the > gee-whiz-look-at-all-the-new-features feeling. > > I ran 32-bit OS releases on all my 64-bit x86 hardware for years. They > ran faster and smaller than the amd64 versions, and also ran old > binaries for more than a decade. But their vendors and support teams > decided that doing the release-engineering to keep them running was more > work than pulling the plug. > > Even Fedora has desupported the One Laptop Per Child hardware now -- no > new releases for millions of kids! And desupported all the other cheap > Intel mobile CPUs, let alone your typical desktop 80386, 80486, or > Pentium. Have you tried running Linux on a machine without a GPU > these days? It's truly sad that to gain stupid animated window tricks, > they broke compatability with millions of existing systems. > > Here's one overview of the niche distros that still have x86 support: > > https://fossbytes.com/best-lightweight-linux-distros/ > > Even those are dropping like flies, e.g. Ubuntu MATE now says "For older > hardware based on i386. Supported until April 2021", i.e. only til next > month! The PuppyLinux.com web site is now a 404. Etc. > > (I'm not up on what the BSD releases are doing.) 
> > John > > From lm at mcvoy.com Wed Feb 10 11:34:13 2021 From: lm at mcvoy.com (Larry McVoy) Date: Tue, 9 Feb 2021 17:34:13 -0800 Subject: [TUHS] Macs and future unix derivatives In-Reply-To: References: <5372.1612853750@hop.toad.com> Message-ID: <20210210013413.GR13701@mcvoy.com> On Tue, Feb 09, 2021 at 02:02:34PM -0500, Theodore Ts'o wrote: > On Mon, Feb 08, 2021 at 10:55:50PM -0800, John Gilmore wrote: > > That was true decades ago, but no longer. In the intervening time, all > > the major Linux distributions have stopped releasing OS's that support > > 32-bit machines. Even those that support 32-bit CPUs have often > > desupported the earlier CPUs (like, what was wrong with the 80386?). Um, John, it's a 33MHz part. Who wants that? > So there is currently a *single* volunteer supporting the 32-bit i386 > platform for Debian, and in December 2020 there was an e-mail thread > asking whether there were volunteer resources to be able to provide > the necessary support (testing installers, building and testing > packages for security updates, etc.) for the 3.5 years of stable > support. John is one damn good programmer, maybe he'll offer. --lm From lm at mcvoy.com Wed Feb 10 11:41:23 2021 From: lm at mcvoy.com (Larry McVoy) Date: Tue, 9 Feb 2021 17:41:23 -0800 Subject: [TUHS] Macs and future unix derivatives In-Reply-To: <202102091900.119J0Gv9850825@darkstar.fourwinds.com> References: <202102091900.119J0Gv9850825@darkstar.fourwinds.com> Message-ID: <20210210014123.GS13701@mcvoy.com> On Tue, Feb 09, 2021 at 11:00:15AM -0800, Jon Steinhart wrote: > While I use linux, it's no longer a system that I trust in any way. It's > huge, bloated, rats-nest undocumented code. Yes, I know that a small > percentage of people like Ted spend a lot of time on the code and therefore > know it well. But I find it an impenetrable mess. When I first had a need > to look at the UNIX kernel, it was easy to figure out what was going on. Not > so with linux.
Of course, I'm getting old and maybe not as good at this stuff > as I used to be. Jon, I think you might be old enough to have run v7, if not, like me, you have read the Lions book and loved it. That kernel didn't do much, it was uniprocessor, block interrupts design, no networking, it was very basic. Amazingly nice to read, but it didn't do a lot. Which is part of the reason we liked it. I loved SunOS 4.x but it was also a uniprocessor kernel (Greg Limes tried to make it SMP but it was not a kernel that wanted to be SMP). I loved it because I could understand it and it was a bit more complex than v7. The problem space that kernels address these days include SMP, NUMA, and all sorts of other stuff. I'm not sure I could understand the Linux kernel even if I were in my prime. It's a harder space, you need to know a lot more, be skilled at a lot more. My take is we're old dudes yearning for the days when everything was simple. Remember when out of order wasn't a thing? Yeah, me too, I gave up on trying to debug kernels when kadb couldn't tell me what I was looking at. --lm From ggm at algebras.org Wed Feb 10 11:52:21 2021 From: ggm at algebras.org (George Michaelson) Date: Wed, 10 Feb 2021 11:52:21 +1000 Subject: [TUHS] Macs and future unix derivatives In-Reply-To: <20210210014123.GS13701@mcvoy.com> References: <202102091900.119J0Gv9850825@darkstar.fourwinds.com> <20210210014123.GS13701@mcvoy.com> Message-ID: I won't dispute your age, or how many layers of pearl are on the seed Larry, but MP unix was a thing long long ago. I am pretty sure it was written up in BSTJ, and there was Pyramid by 1984/5 and an MP unix system otherwise running at Melbourne University (Rob Elz) around 1988. You might be ancient, but you weren't THAT ancient in the 1980s. anyway, pearls before swine, and age before beauty. 
-G From lm at mcvoy.com Wed Feb 10 12:24:24 2021 From: lm at mcvoy.com (Larry McVoy) Date: Tue, 9 Feb 2021 18:24:24 -0800 Subject: [TUHS] Macs and future unix derivatives In-Reply-To: References: <202102091900.119J0Gv9850825@darkstar.fourwinds.com> <20210210014123.GS13701@mcvoy.com> Message-ID: <20210210022424.GT13701@mcvoy.com> I'm going to rant a little here, George, this is not to you, it's to the topic. You are correct but a lot of us learned from a uniprocessor kernel. I certainly did. So there is MP and then there is MP. SGI redid all of their kernel structs so that the most important stuff was on the same cache line. The 1980's and 1990's talked about SMP and then there was a whole series of papers that talked about cache affinity which was geek speak for the S in SMP wasn't. It's just layers and layers of more complexity. Which we can talk about and us older folks want to say isn't needed. We're wrong. I'm with Ted. [1] He's active in the Linux kernel, has been for a long time, he's someone that I know is better than me at kernel stuff. He has kept up, so far as I can tell, when there is yet another layer of something, he's a guy that goes and digs into that something and learns about it, reasons about it, weighs the tradeoffs. He is NOT a guy that just shoves stuff into the kernel on a whim. He thinks, he does the tradeoffs. And he is here saying well, what do you want? You want all the stuff you have or are you willing to go run in v7? Who among us is running v7, or some other kernel that we all love because we understand it? I'd venture a guess that it is no one. We like our X11, we like that we can do "make -j" and you can build a kernel in a minute or two, we like our web browsers, we like a lot of stuff that if you look at it from the lens "but it should be simple", but that lens doesn't give us what we want. I get it. I love the clean simple lines that were the original Unix but we live in a more complex world.
Ted is straddling those lines and he's doing the best he can and his best is pretty darn good. I'd argue listen to Ted. He's got the balance. --lm [1] Truth in advertising, Ted and I are friends, we used to hike together in Pacifica, we like each other. On Wed, Feb 10, 2021 at 11:52:21AM +1000, George Michaelson wrote: > I won't dispute your age, or how many layers of pearl are on the seed > Larry, but MP unix was a thing long long ago. > > I am pretty sure it was written up in BSTJ, and there was Pyramid by > 1984/5 and an MP unix system otherwise running at Melbourne University > (Rob Elz) around 1988. > > You might be ancient, but you weren't THAT ancient in the 1980s. > > anyway, pearls before swine, and age before beauty. > > -G -- --- Larry McVoy lm at mcvoy.com http://www.mcvoy.com/lm From andreww591 at gmail.com Wed Feb 10 12:31:44 2021 From: andreww591 at gmail.com (Andrew Warkentin) Date: Tue, 9 Feb 2021 19:31:44 -0700 Subject: [TUHS] Macs and future unix derivatives In-Reply-To: References: Message-ID: On 2/9/21, Theodore Ts'o wrote: > > Everything can be implemented in terms of a turing machine tape, so > I'm sure that's true. Whether or not it would be *performant* and > *secure* in the face of application level bugs might be a different > story, though. > seL4 basically provides no primitives other than send and receive, and UX/RT will just map read()/write()-family APIs onto send and receive (the IPC transport layer won't be trivial, but it will be simpler than those under most other microkernel OSes). Basically everything else will be implemented on top of the read()/write() APIs provided by the transport layer (memory mapping will sort of bypass it, but all user memory will be handled as memory-mapped files, even that which is anonymous on other systems). 
In order to better map onto seL4 IPC semantics, variants of read()/write() that operate on message registers and a shared buffer will be provided, but these will be compatible with each other and with the traditional versions (messages will be copied on read when a different variant was used to write them). Basically if there were something that couldn't be implemented efficiently and securely on top of a combination of read()/write() and shared memory, that would mean that it couldn't be securely implemented on top of IPC, and I'm not sure that there is anything like that. > > In fact, some of the terrible semantics of the Posix interfaces exist > only because there were traditional Unix vendors on the standards > committee insisting on semantics that *could* be implemented using a > user-mode library on top of normal file API's (I'm looking at you, > fcntl locking semantics, where a close of *any* file descriptor, even > a fd cloned via dup(2) or fork(2) will release the lock). So yes, > Posix fcntl(2) locking *can* be implemented in terms of normal file > API's.... AND IT WAS A TERRIBLE IDEA. (For more details, see [1].) > UX/RT's file locking will be implemented with RPCs to the process server just like open()/close() and the like (which will use read()/write()-family APIs underneath; the initial RPC connection to the process server will be permanently open but it will be possible to create new connections to manipulate the environment of child processes before starting them, so that fork() doesn't have to be a primitive anymore). AFAIK, little actually depends on those rather broken "close one FD and release all locks on that file" semantics, so UX/RT will implement more sane locking semantics by default. There will be a flag to revert to the traditional semantics (probably just implemented at the library level) in case anything actually depends on them. 
> > I could go on about other spectacularly bad ideas enshrined in POSIX, > such as telldir(2) and seekdir(2), which date all the way back to the > assumption that directories should only be implemented in terms of > linear linked lists with O(n) lookup performance, but I don't whine > about that as feature bloat imposed by external standards, but just > the cost of doing business. (Or at least, of relevance.) > The directory contents that normal user programs actually see on UX/RT will be in a standardized format managed by the VFS (since support for a limited form of union mounts will be built in). > > I'm not sure what you're referring to; if you mean the *at(2) system > calls, which is why they exist in Linux (not for !@#!? Windows file > streams support); they are needed to provide secure and performant > user-mode file servers for things like Samba. Trying to implement a > user-space file server using only the V7 Unix primitives will cause > you to have some really horrible Time of Use vs Time of Check (TOUTOC) > security gaps; you can narrow the TOUTOC races with some terrible > performance sucking impacts, but removing them entirely is almost > impossible. > I'm talking about implementing local filesystems in regular processes (rather than requiring them to be in the kernel) like in QNX or Plan 9, not about network filesystems (although of course network filesystem clients can be implemented on top of such an API). Linux has support for them through FUSE, but AFAIK it has performance issues and isn't very well integrated, so it isn't used all that much. When it comes to normal server processes, UX/RT will mostly depend on checking security on open() rather than on read()/write()-family APIs, which will limit the risk of TOCTTOU vulnerabilities. Where security does have to be checked on reads or writes (such as the ones underlying the RPC implementing open() itself), the data will be copied before checking.
Using the traditional read()/write() instead of the new zero-copy equivalents should usually be good enough AFAIK, since they copy to a caller-provided buffer. > > That's because most people aren't going to port or rewrite application > software for some random OS, whether it is a research OS or someone's > new "simple, clean, reimplementation". And most users do expect to > have a working web browser.... and text editor..., and their favorite > games, whether it's nethack or spacewars, etc., etc., etc. > I'm very well aware of that. UX/RT will implement most Linux APIs (either in libraries, servers, or combinations of the two) and will have a Linux binary compatibility layer. The only major incompatibilities are likely to be with stuff that manages sessions and logins (since UX/RT will natively have a mostly process-oriented security model, with no way to fully revert to traditional Unix security outside of running programs in fakeroot containers). From crossd at gmail.com Wed Feb 10 12:44:49 2021 From: crossd at gmail.com (Dan Cross) Date: Tue, 9 Feb 2021 21:44:49 -0500 Subject: [TUHS] Macs and future unix derivatives In-Reply-To: <20210210022424.GT13701@mcvoy.com> References: <202102091900.119J0Gv9850825@darkstar.fourwinds.com> <20210210014123.GS13701@mcvoy.com> <20210210022424.GT13701@mcvoy.com> Message-ID: On Tue, Feb 9, 2021 at 9:25 PM Larry McVoy wrote: > I'm going to rant a little here, George, this is not to you, it's to > the topic. > All in all, that was a pretty tame Rant, Larry. :-) Who among us is running v7, or some other kernel that we all love > because we understand it? I'd venture a guess that it is noone. > We like our X11, we like that we can do "make -j" and you can build > a kernel in a minute or two, we like our web browsers, we like a lot > of stuff that if you look at it from the lens "but it should be simple", > but that lens doesn't give us what we want. 
> I had a stint in life where my "primary" environment was a VT320 hooked up to a VAXstation running VMS, from which I'd telnet to a Unix machine. Subjectively, it was among the more productive times in my life professionally: I felt that I wrote good code and could concentrate on what I was working on. Fast forward 15 years (so a little over 10 years ago), I'm sitting in front of a Mac Pro desktop with two large displays and an infinite number of browser tabs open and I feel almost hopelessly unproductive. I just can't concentrate; I can't find anything; things are beeping at me all the time and I have no idea where the music is coming from. Ads are telling me I should buy all kinds of things I didn't even know I needed; the temptation to read the news, or email, or the plot of some movie I saw an ad for 20 years ago (but never saw) on wikipedia is too great and another 45 minutes are gone. So I go on ebay and find a VT420 in good condition and buy it; it arrives an unproductive week later, and I hook it up to the serial port on my Linux machine at work and configure getty and login and ... wow, this is terrible! It's just too dang limiting. And that hum from the flyback transformer is annoyingly distracting. The lesson is that we look back at our old environments through the rosy glasses of nostalgia, but we forget the pain points. Yeah, we might moan about the X protocol or the complexity of SMP or filesystems or mmap() or whatever, but hey, programs that I care about to get my work done are already written for those environments, and do I _really_ want to write another shell or terminal program or editor or email client? Actually...no. No, I do not. So I'm sympathetic to this. I get it. I love the clean simple lines that were the original Unix > but we live in a more complex world. But this I take some exception to. Yes, the world is more complex, but part of the complexity of our systems is, as Jon asserts, poor abstractions.
It's like the recent discussion of ZFS vs merged VM/Buffer caches: most people don't care. But as a system designer, I do. One _can_ build systems that support graphics and networking without X11 and sockets and with a small number of system calls. One _can_ provide some support for "legacy" systems by papering over the difference with a library (back in the day, someone even ported X11 to Plan 9), but it does get messy and you hit limitations at some point. Ted is straddling those lines > and he's doing the best he can and his best is pretty darn good. > I'd just like to stress I'm not trying to criticize Ted, or anyone else, really. We've got the systems we've got. But a lot of the complexity we've got in those systems comes from trying to retrofit a design that was fundamentally oriented towards a uniprocessor machine onto a multiprocessor system that looks approximately nothing like a PDP-11. I do agree with Jon that much of Linux's complexity is unjustified (functions called `foo` that call `__foo` that calls `__do_foo_for_bar`...I understand this is to limit nesting. But...dang), but much of it is forced by trying to accommodate a particular system model on systems that are no longer really amenable to that model. - Dan C. I'd argue listen to Ted. He's got the balance. > > --lm > > [1] Truth in advertising, Ted and I are friends, we used to hike together > in Pacifica, we like each other. > > On Wed, Feb 10, 2021 at 11:52:21AM +1000, George Michaelson wrote: > > I won't dispute your age, or how many layers of pearl are on the seed > > Larry, but MP unix was a thing long long ago. > > > > I am pretty sure it was written up in BSTJ, and there was Pyramid by > > 1984/5 and an MP unix system otherwise running at Melbourne University > > (Rob Elz) around 1988. > > > > You might be ancient, but you weren't THAT ancient in the 1980s. > > > > anyway, pearls before swine, and age before beauty. 
> > > > -G > > -- > --- > Larry McVoy lm at mcvoy.com > http://www.mcvoy.com/lm > -------------- next part -------------- An HTML attachment was scrubbed... URL: From imp at bsdimp.com Wed Feb 10 12:56:00 2021 From: imp at bsdimp.com (Warner Losh) Date: Tue, 9 Feb 2021 19:56:00 -0700 Subject: [TUHS] Macs and future unix derivatives In-Reply-To: <20210210014123.GS13701@mcvoy.com> References: <202102091900.119J0Gv9850825@darkstar.fourwinds.com> <20210210014123.GS13701@mcvoy.com> Message-ID: On Tue, Feb 9, 2021 at 6:41 PM Larry McVoy wrote: > Which is part of the reason we liked it. I loved SunOS 4.x but it was > also a uniprocessor kernel (Greg Limes tried to make it SMP but it was not > a kernel that wanted to be SMP). I loved it because I could understand > it and it was a bit more complex than v7. > David Barak and the OS group at Solbourne were able to do it for OS/MP. First as an ASMP kernel where CPU0 ran the unix kernel, but scheduled jobs for all the other CPUs, then as SMP where the kernel could run on any CPU with progressively finer locking on each release. Warner -------------- next part -------------- An HTML attachment was scrubbed... URL: From imp at bsdimp.com Wed Feb 10 12:57:10 2021 From: imp at bsdimp.com (Warner Losh) Date: Tue, 9 Feb 2021 19:57:10 -0700 Subject: [TUHS] Macs and future unix derivatives In-Reply-To: References: <202102091900.119J0Gv9850825@darkstar.fourwinds.com> <20210210014123.GS13701@mcvoy.com> Message-ID: On Tue, Feb 9, 2021 at 6:53 PM George Michaelson wrote: > I won't dispute your age, or how many layers of pearl are on the seed > Larry, but MP unix was a thing long long ago. > > I am pretty sure it was written up in BSTJ, and there was Pyramid by > 1984/5 and an MP unix system otherwise running at Melbourne University > (Rob Elz) around 1988. > > You might be ancient, but you weren't THAT ancient in the 1980s. > The first MP Unix was MUNIX dating to the 5th edition and 1975.
https://calhoun.nps.edu/handle/10945/20959 Warner -------------- next part -------------- An HTML attachment was scrubbed... URL: From lm at mcvoy.com Wed Feb 10 13:02:10 2021 From: lm at mcvoy.com (Larry McVoy) Date: Tue, 9 Feb 2021 19:02:10 -0800 Subject: [TUHS] Macs and future unix derivatives In-Reply-To: References: <202102091900.119J0Gv9850825@darkstar.fourwinds.com> <20210210014123.GS13701@mcvoy.com> Message-ID: <20210210030210.GV13916@mcvoy.com> For the record, I've never seen the Solbourne code but I wanted to. Seemed like they were Unix people. They took SunOS to a better place. I suspect Greg Limes would agree. On Tue, Feb 09, 2021 at 07:56:00PM -0700, Warner Losh wrote: > On Tue, Feb 9, 2021 at 6:41 PM Larry McVoy wrote: > > > Which is part of the reason we liked it. I loved SunOS 4.x but it was > > also a uniprocessor kernel (Greg Limes tried to make it SMP but it was not > > a kernel that wanted to be SMP). I loved it because I could understand > > it and it was a bit more complex than v7. > > > > David Barak and the OS group as Solbourne wwere able to do it for OS/MP. > First as a ASMP kernel where CPU0 ran the unix kernel, but scheduled jobs > for all the other CPUs, the as SMP where the kernel could run on any CPU > with progressively finer locking on each release. > > Warner -- --- Larry McVoy lm at mcvoy.com http://www.mcvoy.com/lm From lm at mcvoy.com Wed Feb 10 13:10:49 2021 From: lm at mcvoy.com (Larry McVoy) Date: Tue, 9 Feb 2021 19:10:49 -0800 Subject: [TUHS] Macs and future unix derivatives In-Reply-To: References: <202102091900.119J0Gv9850825@darkstar.fourwinds.com> <20210210014123.GS13701@mcvoy.com> <20210210022424.GT13701@mcvoy.com> Message-ID: <20210210031049.GU13701@mcvoy.com> Sounds good, I've been out on my boat and with my feet I'm a mess, love that boat but I need to go rest. I'll give this the reply it deserves in the morning.
On Tue, Feb 09, 2021 at 09:44:49PM -0500, Dan Cross wrote: > On Tue, Feb 9, 2021 at 9:25 PM Larry McVoy wrote: > > > I'm going to rant a little here, George, this is not to you, it's to > > the topic. > > > > All in all, that was a pretty tame Rant, Larry. :-) > > Who among us is running v7, or some other kernel that we all love > > because we understand it? I'd venture a guess that it is noone. > > We like our X11, we like that we can do "make -j" and you can build > > a kernel in a minute or two, we like our web browsers, we like a lot > > of stuff that if you look at it from the lens "but it should be simple", > > but that lens doesn't give us what we want. > > > > I had a stint in life where my "primary" environment was a VT320 hooked up > to a VAXstation running VMS, from which I'd telnet to a Unix machine. > Subjectively, it was among the more productive times in my life > professionally: I felt that I wrote good code and could concentrate on what > I was working on. > > Fast forward 15 years (so a little 10 years ago), I'm sitting in front of a > Mac Pro desktop with two large displays and an infinite number of browser > tabs open and I feel almost hopelessly productive. I just can't > concentrate; I can't find anything; things are beeping at me all the time > and I have no idea where the music is coming from. Ads are telling me I > should buy all kinds of things I didn't even know I needed; the temptation > to read the news, or email, or the plot of some movie I saw an ad for 20 > years ago (but never saw) on wikipedia is too great and another 45 minutes > are gone. > > So I go on ebay and find a VT420 in good condition and buy it; it arrives > an unproductive week later, and I hook it up to the serial port on my Linux > machine at work and configure getty and login and ... wow, this is > terrible! It's just too dang and limiting. And that hum from the flyback > transformer is annoyingly distracting. 
> > The lesson is that we look back at our old environments through the rosy > glasses of nostalgia, but we forget the pain points. Yeah, we might moan > about the X protocol or the complexity of SMP or filesystems or mmap() or > whatever, but hey, programs that I care about to get my work done are > already written for those environments, and do I _really_ want to write > another shell or terminal program or editor or email client? Actually...no. > No, I do not. > > So I'm sympathetic to this. > > I get it. I love the clean simple lines that were the original Unix > > but we live in a more complex world. > > > But this I take some exception to. Yes, the world is more complex, but part > of the complexity of our systems is, as Jon asserts, poor abstractions. > It's like the recent discussion of ZFS vs merged VM/Buffer caches: most > people don't care. But as a system designer, I do. One _can_ build systems > that support graphics and networking without X11 and sockets and with a > small number of system calls. One _can_ provide some support for "legacy" > systems by papering over the difference with a library (back in the day, > someone even ported X11 to Plan 9), but it does get messy and you hit > limitations at some point. > > Ted is straddling those lines > > and he's doing the best he can and his best is pretty darn good. > > > > I'd just like to stress I'm not trying to criticize Ted, or anyone else, > really. We've got the systems we've got. But a lot of the complexity we've > got in those systems comes from trying to retrofit a design that was > fundamentally oriented towards a uniprocessor machine onto a multiprocessor > system that looks approximately nothing like a PDP-11. I do agree with Jon > that much of Linux's complexity is unjustified (functions called `foo` that > call `__foo` that calls `__do_foo_for_bar`...I understand this is to limit > nesting. 
But...dang), but much of it is forced by trying to accommodate a > particular system model on systems that are no longer really amenable to > that model. > > - Dan C. > > I'd argue listen to Ted. He's got the balance. > > > > --lm > > > > [1] Truth in advertising, Ted and I are friends, we used to hike together > > in Pacifica, we like each other. > > > > On Wed, Feb 10, 2021 at 11:52:21AM +1000, George Michaelson wrote: > > > I won't dispute your age, or how many layers of pearl are on the seed > > > Larry, but MP unix was a thing long long ago. > > > > > > I am pretty sure it was written up in BSTJ, and there was Pyramid by > > > 1984/5 and an MP unix system otherwise running at Melbourne University > > > (Rob Elz) around 1988. > > > > > > You might be ancient, but you weren't THAT ancient in the 1980s. > > > > > > anyway, pearls before swine, and age before beauty. > > > > > > -G > > > > -- > > --- > > Larry McVoy lm at mcvoy.com > > http://www.mcvoy.com/lm > > -- --- Larry McVoy lm at mcvoy.com http://www.mcvoy.com/lm From andreww591 at gmail.com Wed Feb 10 13:53:47 2021 From: andreww591 at gmail.com (Andrew Warkentin) Date: Tue, 9 Feb 2021 20:53:47 -0700 Subject: [TUHS] Macs and future unix derivatives In-Reply-To: <20210210014123.GS13701@mcvoy.com> References: <202102091900.119J0Gv9850825@darkstar.fourwinds.com> <20210210014123.GS13701@mcvoy.com> Message-ID: On 2/9/21, Larry McVoy wrote: > > The problem space that kernels address these days include SMP, NUMA, > and all sorts of other stuff. I'm not sure I could understand the Linux > kernel even if I were in my prime. It's a harder space, you need to > know a lot more, be skilled at a lot more. > > My take is we're old dudes yearning for the days when everything > was simple. Remember when out of order wasn't a thing? Yeah, me too, > I gave up on trying to debug kernels when kadb couldn't tell me what I > was looking at. > > --lm > Pure microkernels with indirect message destinations (i.e. 
not thread IDs) can simplify things somewhat with regards to multiprocessing, since almost all OS subsystems are just regular processes that run in their own contexts and can structure their threads as they please, as opposed to being kernel subsystems that have to deal with the concurrency issues that arise from the possibility of being called from any process context. The microkernel still has to deal with being called in any context, but it can use simpler mechanisms for dealing with concurrency than a monolithic kernel would because processes don't stay in kernel mode for nearly as long as they can in a monolithic kernel. From rminnich at gmail.com Thu Feb 11 01:31:28 2021 From: rminnich at gmail.com (ron minnich) Date: Wed, 10 Feb 2021 07:31:28 -0800 Subject: [TUHS] nothing to do with unix, everything to do with history Message-ID: There's so much experience here, I thought someone might know: "Our goal is to develop an emulator for the Burroughs B6700 system. We need help to find a complete release of MCP software for the Burroughs B6700. If you have old magnetic tapes (magtapes) in any format, or computer printer listings of software or micro-fiche, micro-film, punched-card decks for any Burroughs B6000 or Burroughs B7000 systems we would like to hear from you. Email nw at retroComputingTasmania.com" From woods at robohack.ca Thu Feb 11 04:57:05 2021 From: woods at robohack.ca (Greg A. 
Woods) Date: Wed, 10 Feb 2021 10:57:05 -0800 Subject: [TUHS] Seeking wisdom from Unix Greybeards In-Reply-To: <20201126214825.bDDjr%steffen@sdaoden.eu> References: <9c1595cc-54a1-8af9-0c2d-083cb04dd97c@spamtrap.tnetconsulting.net> <20201125172255.83D252146F@orac.inputplus.co.uk> <20201126145134.GB394251@mit.edu> <20201126214825.bDDjr%steffen@sdaoden.eu> Message-ID: At Thu, 26 Nov 2020 22:48:25 +0100, Steffen Nurpmeso wrote: Subject: Re: [TUHS] Seeking wisdom from Unix Greybeards > > ANSI escape sequences aka ISO 6429 came via ECMA-48 i have > learned, and that appeared first in 1976 (that via Wikipedia). Wikipedia is a bit misleading here. This is one case where ANSI and ECMA worked together quite closely (and another example of where ISO took the result more or less directly, though on a different schedule). As it happens one can read about it much more directly from the original sources. First we can find that FIPS-86 is "in whole" ANSI-X3.64-1979 https://nvlpubs.nist.gov/nistpubs/Legacy/FIPS/fipspub86-1981.pdf Thus giving us "free" access to the original ANSI standard in a "new" digital (PDF) form. Here's the full copy of ANSI-X3.64-1979 verbatim (including cover pages): https://nvlpubs.nist.gov/nistpubs/Legacy/FIPS/fipspub86.pdf See in particular "Appendix H" in the latter. X3.64 also gives a good list of all the people and organisations which cooperated to create this standard (though interestingly only mentions ECMA-48 in that last appendix).
There is also corroborating evidence of this cooperation in the preface ("BRIEF HISTORY") to the 2nd Edition of ECMA-48: https://www.ecma-international.org/wp-content/uploads/ECMA-48_2nd_edition_august_1979.pdf Note though that the link to the 1st Edition of ECMA-48 here is wrong, so as yet I've not seen if there's any history given in that 1st edition): https://www.ecma-international.org/publications-and-standards/standards/ecma-48/ As an aside, the DEC VT100 terminal was an early (it came out a year before X3.64) and relatively complete (for a video terminal application) implementation of X3.64. BTW, I would in general agree with Steffen that implementing an application to output anything but X3.64/ECMA-48/ISO-6429 is rather pointless these days, _unless_ one wants to take advantage of any particular implementation's additional "private" features, and/or work around any annoying but inevitable bugs in various implementations. Also the API provided by, e.g. libcurses, often makes for much easier programming than direct use of escape sequences, or invention and maintenance of one's own API. -- Greg A. Woods Kelowna, BC +1 250 762-7675 RoboHack Planix, Inc. Avoncote Farms -------------- next part -------------- A non-text attachment was scrubbed... Name: not available Type: application/pgp-signature Size: 195 bytes Desc: OpenPGP Digital Signature URL: From kevin.bowling at kev009.com Thu Feb 11 06:03:39 2021 From: kevin.bowling at kev009.com (Kevin Bowling) Date: Wed, 10 Feb 2021 13:03:39 -0700 Subject: [TUHS] Macs and future unix derivatives In-Reply-To: References: <202102091900.119J0Gv9850825@darkstar.fourwinds.com> <20210210014123.GS13701@mcvoy.com> <20210210022424.GT13701@mcvoy.com> Message-ID: On Tue, Feb 9, 2021 at 7:46 PM Dan Cross wrote: > > On Tue, Feb 9, 2021 at 9:25 PM Larry McVoy wrote: >> >> I'm going to rant a little here, George, this is not to you, it's to >> the topic. > > > All in all, that was a pretty tame Rant, Larry.
:-) > >> Who among us is running v7, or some other kernel that we all love >> because we understand it? I'd venture a guess that it is noone. >> We like our X11, we like that we can do "make -j" and you can build >> a kernel in a minute or two, we like our web browsers, we like a lot >> of stuff that if you look at it from the lens "but it should be simple", >> but that lens doesn't give us what we want. > > > I had a stint in life where my "primary" environment was a VT320 hooked up to a VAXstation running VMS, from which I'd telnet to a Unix machine. Subjectively, it was among the more productive times in my life professionally: I felt that I wrote good code and could concentrate on what I was working on. > > Fast forward 15 years (so a little 10 years ago), I'm sitting in front of a Mac Pro desktop with two large displays and an infinite number of browser tabs open and I feel almost hopelessly productive. I just can't concentrate; I can't find anything; things are beeping at me all the time and I have no idea where the music is coming from. Ads are telling me I should buy all kinds of things I didn't even know I needed; the temptation to read the news, or email, or the plot of some movie I saw an ad for 20 years ago (but never saw) on wikipedia is too great and another 45 minutes are gone. This is the realest description of the modern predicament I have seen! Depending on mood and day, things can be great or awful with modern software for me. In particular, we live in a world of total abundance which is great but also overwhelming. There is no shortage of interesting kernels, libraries, and applications all for free which is particularly amazing having grown up in the brief era of the PC world where most interesting software was shrink-wrapped and cost prohibitive. I can't help but eventually tie the current status quo back to economics. There is a lot of (perceived?) power in flexing a large staff of programmers, whether they are productive or not.
It's like having a large standing army during peacetime. The mere existence gives you power in the current market. Look at the first 10 companies on this list https://companiesmarketcap.com/. It's safe to say Aramco has top quality HPC and development staff and the rest are all mostly pure tech plays with large staffs of developers and systems engineers. So in a way, we are sustaining Full Employment Theory for people with *nix skills. That isn't great for quality. Early OS work was done by small teams relative to current, people who really cared about what they were doing. The modern situation is great for people's livelihoods, so unless you are retired, make hay while the sun is shining. > So I go on ebay and find a VT420 in good condition and buy it; it arrives an unproductive week later, and I hook it up to the serial port on my Linux machine at work and configure getty and login and ... wow, this is terrible! It's just too dang limiting. And that hum from the flyback transformer is annoyingly distracting. > > The lesson is that we look back at our old environments through the rosy glasses of nostalgia, but we forget the pain points. Yeah, we might moan about the X protocol or the complexity of SMP or filesystems or mmap() or whatever, but hey, programs that I care about to get my work done are already written for those environments, and do I _really_ want to write another shell or terminal program or editor or email client? Actually...no. No, I do not. > > So I'm sympathetic to this. > >> I get it. I love the clean simple lines that were the original Unix >> but we live in a more complex world. > > But this I take some exception to. Yes, the world is more complex, but part of the complexity of our systems is, as Jon asserts, poor abstractions. It's like the recent discussion of ZFS vs merged VM/Buffer caches: most people don't care. But as a system designer, I do.
One _can_ build systems that support graphics and networking without X11 and sockets and with a small number of system calls. One _can_ provide some support for "legacy" systems by papering over the difference with a library (back in the day, someone even ported X11 to Plan 9), but it does get messy and you hit limitations at some point. > >> Ted is straddling those lines >> and he's doing the best he can and his best is pretty darn good. > > > I'd just like to stress I'm not trying to criticize Ted, or anyone else, really. We've got the systems we've got. But a lot of the complexity we've got in those systems comes from trying to retrofit a design that was fundamentally oriented towards a uniprocessor machine onto a multiprocessor system that looks approximately nothing like a PDP-11. I do agree with Jon that much of Linux's complexity is unjustified (functions called `foo` that call `__foo` that calls `__do_foo_for_bar`...I understand this is to limit nesting. But...dang), but much of it is forced by trying to accommodate a particular system model on systems that are no longer really amenable to that model. > > - Dan C. > >> I'd argue listen to Ted. He's got the balance. >> >> --lm >> >> [1] Truth in advertising, Ted and I are friends, we used to hike together >> in Pacifica, we like each other. >> >> On Wed, Feb 10, 2021 at 11:52:21AM +1000, George Michaelson wrote: >> > I won't dispute your age, or how many layers of pearl are on the seed >> > Larry, but MP unix was a thing long long ago. >> > >> > I am pretty sure it was written up in BSTJ, and there was Pyramid by >> > 1984/5 and an MP unix system otherwise running at Melbourne University >> > (Rob Elz) around 1988. >> > >> > You might be ancient, but you weren't THAT ancient in the 1980s. >> > >> > anyway, pearls before swine, and age before beauty. 
>> > >> > -G >> >> -- >> --- >> Larry McVoy lm at mcvoy.com http://www.mcvoy.com/lm From woods at robohack.ca Thu Feb 11 06:48:49 2021 From: woods at robohack.ca (Greg A. Woods) Date: Wed, 10 Feb 2021 12:48:49 -0800 Subject: [TUHS] troff was not so widely usable (was: The UNIX Command Language (1976)) In-Reply-To: References: <8b580c46-ecfb-9383-ed43-08108b3ee7bf@tllds.com> <20201130163753.GB18187@mcvoy.com> Message-ID: At Mon, 30 Nov 2020 11:54:37 -0500, Clem Cole wrote: Subject: Re: [TUHS] The UNIX Command Language (1976) > > yes ... but ... even UNIX binary folks had troff licenses and many/most at > ditroff licenses. I would like to try once again to dispel the apparent myth that troff was readily available to Unix users in wider circles. True, old troff might have been there in the distribution, but not necessarily, as many vendors didn't include it even though they had the license, since they knew most users didn't care about it, and of course the users didn't care about troff because _nobody_ had a C/A/T (and hardly anyone cared to use nroff to format things for line printers). People would install Wordstar long before they even thought about using nroff. Ditroff (or sqtroff) was also incredibly rare to non-existent for 99% of the Unix sites I worked at and visited; even some time after it became available. Even sites running native AT&T Unix, e.g. on 3B2s, and thus could easily obtain it, often didn't want the added expense of installing it. So, old troff was basically a totally useless waste of disk space until psroff came along. Psroff made troff useful, but IF And Only IF you had a C compiler _and_ the skill to install it. That combination was still incredibly rare. A C compiler was often the biggest impediment to many sites I worked at -- they didn't have programmers and they didn't want to shell out even more cash for any programming tools (even though they had often hired me as a consulting programmer to "fix their Unix system"!).
Then, as you said, Groff arrived, though still that required a C compiler and (effectively for some time) a PostScript printer (while psroff would drive the far more common laserjet and similar without gyrations through DVI!). In circles I travelled through, if one wanted true computer typesetting support it was _far_ easier and better (even after Groff came along) to install TeX, even if it meant hiring a consultant to do it, since that meant having far wider printer support (though realistically PostScript printers were the only viable solution at some point, e.g. especially after laser printers became available, i.e. outside Xerox and IBM shops). > I think the academics went LaTex and that had more to do with it. LaTex > was closer to Scribe for the PDP-10s and Vaxen, which had a short head lead > on all them until it went walled garden when CMU sold the rights (and even > its author - Brian Ried) could not use it at a Stanford. I worked with a group of guys who were extreme fans of the PlainTeX macros (and who absolutely hated LaTeX). They came from academic circles and commercial research groups. But I agree it was those other factors that have led to an ongoing prevalence for TeX, and in particular its LaTeX macros, over and above troff and anything else like either in the computer typesetting world. I was never a fan of anything TeX (nor of anything SGML-like). I was quite a fan of, and an extreme expert in using, troff and tbl. However once I discovered Lout I dropped troff like a hot potato. I continue to use Lout exclusively to this day for "fine" typesetting work (anything that needs/prefers physical printing or a PDF). -- Greg A. Woods Kelowna, BC +1 250 762-7675 RoboHack Planix, Inc. Avoncote Farms -------------- next part -------------- A non-text attachment was scrubbed...
Name: not available Type: application/pgp-signature Size: 195 bytes Desc: OpenPGP Digital Signature URL: From lm at mcvoy.com Thu Feb 11 07:44:36 2021 From: lm at mcvoy.com (Larry McVoy) Date: Wed, 10 Feb 2021 13:44:36 -0800 Subject: [TUHS] troff was not so widely usable (was: The UNIX Command Language (1976)) In-Reply-To: References: <8b580c46-ecfb-9383-ed43-08108b3ee7bf@tllds.com> <20201130163753.GB18187@mcvoy.com> Message-ID: <20210210214436.GV13701@mcvoy.com> On Wed, Feb 10, 2021 at 12:48:49PM -0800, Greg A. Woods wrote: > At Mon, 30 Nov 2020 11:54:37 -0500, Clem Cole wrote: > Subject: Re: [TUHS] The UNIX Command Language (1976) > > > > yes ... but ... even UNIX binary folks had troff licenses and many/most at > > ditroff licenses. > > I would like to try once again to dispell the apparent myth that troff > was readily available to Unix users in wider circles. I had n/troff on the BSD based vaxen that UW-Madison CS had. There was a standard binder of docs (that I still have 35+ years later) that had the troff, -ms, -man, -me, tbl, eqn, pic, refer docs (no grap, sadly, I wrote my own). The Masscomps I used had working roffs. I've been using troff since well before 1985 (I found that Masscomp restor.e doc, it was 1985 but I was well into troff by then, I started with -man and -ms but was experimenting with -me for that paper. I liked -me well enough but -ms just made more sense to me so I went back to that and have been there ever since). The 3B1 that my roommate and I shared had working roff. I don't remember how we got stuff printed, I was for sure using troff for years before the postscript one came about. So for once, I'm gonna side with Clem on this one. I've always had troff and been very happy with it. I know LaTex sort of won but I'm not a fan. --lm P.S. Groff is C++, not C. That made it dicey until g++ got stable. 
From clemc at ccc.com Thu Feb 11 08:05:57 2021 From: clemc at ccc.com (Clem Cole) Date: Wed, 10 Feb 2021 17:05:57 -0500 Subject: [TUHS] troff was not so widely usable (was: The UNIX Command Language (1976)) In-Reply-To: References: <8b580c46-ecfb-9383-ed43-08108b3ee7bf@tllds.com> <20201130163753.GB18187@mcvoy.com> Message-ID: On Wed, Feb 10, 2021 at 3:49 PM Greg A. Woods wrote: > At Mon, 30 Nov 2020 11:54:37 -0500, Clem Cole wrote: > Subject: Re: [TUHS] The UNIX Command Language (1976) > > > > yes ... but ... even UNIX binary folks had troff licenses and many/most > at > > ditroff licenses. > > I would like to try once again to dispel the apparent myth that troff > was readily available to Unix users in wider circles. > Hard to call it a myth - it was quite available. In fact, I never used a single mainstream UNIX system from DEC, IBM, HP, later Sun, Masscomp, Apollo that did not have it, and many if not all small systems did also. > > True, old troff might have been there in the distribution, but not > necessarily as many vendors didn't include it even though they had the > license since they knew most users didn't care about it, and of course > the users didn't care about troff because _nobody_ had a C/A/T, Yes, but that changed after Tom Ferrin created vcat(1) in the late 1970s ('77 I think, but I've forgotten). Many people did have access to a plotter, which cost about $1k in the late 1970s, or even later a 'wet' laser printer like the Imagen, which cost about $5K a few years later. (and hardly anyone cared to use nroff to format things for line printers). > No offense, but that's just not true. Line printers and nroff were used a great deal to prep things, and often UNIX folks had access to a daisy wheel printer for higher quality nroff output, much less using the line printer. > People would install Wordstar long before they even thought about using > nroff. > I did not know anyone that did that. But I'll take your word for it. Wordstar as I recall ran on 8-bit PCs.
The only two people I knew that had larger CP/M systems back in the day that might have supported that were Phil Karn (*a.k.a.* KA9Q of TCP/IP for CP/M fame) and Guy Soytomayer - both were sometimes lab partners. But all three of us had access to the XGP in CMU CS dept, using Scribe on the PDP-10s, but we all used nroff most of the time because we had more cycles available on the UNIX boxes and the 10s required going to the terminal room. FWIW: my non-techie CMU course professors used to let you turn in papers printed off the line printer and people used anything they had - which was Scribe on the 20s and nroff on the Unix boxes, and I've forgotten the name of the program that ran on the TSS, which the business majors like my roommate tended to use. > > Ditroff (or sqtroff) was also incredibly rare to non-existent for 99% of > the Unix sites I worked at and visited; even some time after it became > available. Even sites running native AT&T Unix, e.g. on 3B2s, and thus > could easily obtain it, often didn't want the added expense of > installing it. > The only time I dealt with pure AT&T systems was occasionally at AT&T itself, but even there most of the ones I worked on had BSD or Research based systems. The only 3B2 I ever saw was the one we were forced to buy by AT&T to get a System V source license at Stellar as the reference system, when PDP-11 and Vaxen stopped being the default. We did get ditroff from the toolchest for it, since we were including it in the base Stellar system. The Apple Laserwriter was fully in the wild with transcript by then, so vcat was not needed. > > So, old troff was basically a totally useless waste of disk space until > psroff came along. > Sounds like you never had access to a plotter. > > Psroff made troff useful, psroff was very, very late in UNIX development. vcat was nearly 10 years earlier, and psroff only showed up after the Apple Laserwriter which is what - early 1985 I think.
But Versatec and similar plotters were all over the place, much less daisy wheel printers. True, the Hershey fonts were not nearly as nice as PostScript and the resolution was only 200 dpi (and it was a wet process) but most UNIX sites, particularly if you had invested in Vaxen, had them. > but IF And Only IF you had a C compiler _and_ > the skill to install it. That combination was still incredibly rare. Excuse me... most end-user sites had them. Sun was the only one of the majors that did not ship a C compiler with the system by default. And as Larry has pointed out, even that was fixed by rms fairly soon afterward. But all of the suppliers of the major UNIX implementations knew that you got the C compiler with UNIX. Jacks to open -- just include it. It sounds like your early UNIX experiences were in a limited version, which is a little sad and I can see that might color your thinking. > A C compiler was often the biggest impediment to many sites I worked at > -- > they didn't have programmers and they didn't want to shell out even cash > more for any programming tools (even though they had often hired me as a > consulting programmer to "fix their Unix system"!). > Yeech... sorry to hear that. > In circles I travelled through if one wanted true computer typesetting > support it was _far_ easier and better (even after Groff came along) Hmmm ... you were complaining you need a C compiler for ditroff, yet groff needs C++ IIRC? > to install TeX, even if it meant hiring a consultant to do it, since that > Which means you need a Pascal compiler BTW .... and TeX was written using the PDP-10 Pascal extensions, so you had to have a Pascal that understood that. Which was often not easy, particularly on UNIX boxes. The UCB Pascal != PDP-10 Pascal.
> My guess is this observation is because HP was late to the PostScript world, and while the eventual hpcat(1) was done for the vaxen and made it to USENET, it was fairly late in time, and I'm not sure if anyone at HP or anyone else ever wrote a ditroff backend for HP's PCL. The key is that Apple Laserwriters were pretty cheap and since they already did PS, most sites I knew just bought PS based ones and did not go HP until later when PS was built-in. > I was quite a fan of, and an extreme expert in using, troff and tbl. > Good to hear. > However once I discovered Lout I dropped troff like a hot potato. > Never used it, but I'll take your word for it. I believe it is very Scribe like and looking at it you can see the influence. -------------- next part -------------- An HTML attachment was scrubbed... URL: From clemc at ccc.com Thu Feb 11 08:26:10 2021 From: clemc at ccc.com (Clem Cole) Date: Wed, 10 Feb 2021 17:26:10 -0500 Subject: [TUHS] Seeking wisdom from Unix Greybeards In-Reply-To: References: <9c1595cc-54a1-8af9-0c2d-083cb04dd97c@spamtrap.tnetconsulting.net> <20201125172255.83D252146F@orac.inputplus.co.uk> <20201126145134.GB394251@mit.edu> <20201126214825.bDDjr%steffen@sdaoden.eu> Message-ID: On Wed, Feb 10, 2021 at 1:57 PM Greg A. Woods wrote: > As an aside, the DEC VT100 terminal was an early (it came out a year before > X3.64) Yup, it was spec'ed at least a year ahead and its code was committed before the standard was even in a draft vote. > and relatively complete (for a video terminal application) implementation > of X3.64. > Although it was different in places, and the missing stuff was a bear. Originally, VT100 != X3.64 which has caused many issues over the years. The big issue was the lack of proper insert/delete; it used scrolling regions instead. Kudos to Mary Ann for working through that whole mess years ago!
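[For anyone who never fought that battle, the usual workaround was to fake an insert-line with a DECSTBM scrolling region: restrict scrolling to the rows from the target line down to the bottom, reverse-index at the top of that region so everything in it slides down one row, then reset the region. A minimal sketch that just builds the escape strings (in Python; the helper name and the 24-row default are illustrative assumptions, not from DEC documentation or any real curses implementation):]

```python
# Sketch: emulating "insert blank line" on a VT100-style terminal that
# lacks the X3.64 IL/DL controls, using only DECSTBM scrolling regions.
# The function name and the 24-row screen default are assumptions.

ESC = "\x1b"

def insert_blank_line(row, screen_rows=24):
    """Build the escape string that opens a blank line at `row`
    (1-based) by scrolling everything from `row` downward."""
    return (
        f"{ESC}[{row};{screen_rows}r"  # DECSTBM: scroll region = row..bottom
        f"{ESC}[{row};1H"              # cursor to the top line of the region
        f"{ESC}M"                      # RI (reverse index): the region scrolls
                                       # down, leaving a blank line at `row`
        f"{ESC}[r"                     # reset scroll region to the full screen
        f"{ESC}[{row};1H"              # leave the cursor on the new blank line
    )
```

[Delete-line is the mirror image: with the same region set, an index (ESC D) at the bottom line of the region scrolls it up, removing the top line.]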
That's why back in the day, I preferred the H19 and later the Ambassador ;-) My memory is that it was ECMA that got the DEC changes/differences added/put back/made legal. But originally, they were not. At one point, I had a copy of the internal DEC documentation (which I think I got from Tom Kent who wrote much of the rom code and had been on the committee for DEC at one point, but I don't remember). Clem -------------- next part -------------- An HTML attachment was scrubbed... URL: From jon at fourwinds.com Thu Feb 11 08:36:49 2021 From: jon at fourwinds.com (Jon Steinhart) Date: Wed, 10 Feb 2021 14:36:49 -0800 Subject: [TUHS] troff was not so widely usable (was: The UNIX Command Language (1976)) In-Reply-To: References: <8b580c46-ecfb-9383-ed43-08108b3ee7bf@tllds.com> <20201130163753.GB18187@mcvoy.com> Message-ID: <202102102236.11AMann01820861@darkstar.fourwinds.com> Greg A. Woods writes: > > Ditroff (or sqtroff) was also incredibly rare to non-existent for 99% of > the Unix sites I worked at and visited; even some time after it became > available. Even sites running native AT&T Unix, e.g. on 3B2s, and thus > could easily obtain it, often didn't want the added expense of > installing it. Maybe for you; I had it everywhere that I worked. From ggm at algebras.org Thu Feb 11 09:05:17 2021 From: ggm at algebras.org (George Michaelson) Date: Thu, 11 Feb 2021 09:05:17 +1000 Subject: [TUHS] troff was not so widely usable (was: The UNIX Command Language (1976)) In-Reply-To: <202102102236.11AMann01820861@darkstar.fourwinds.com> References: <8b580c46-ecfb-9383-ed43-08108b3ee7bf@tllds.com> <20201130163753.GB18187@mcvoy.com> <202102102236.11AMann01820861@darkstar.fourwinds.com> Message-ID: I wonder if this was a university BSD/Bell licence vs "everyone else" thing. I know we had ubiquitous use of nroff, troff and ditroff, in succession at Leeds and York across 82-84 and onward. That was with a benson-varian wet process printer from roll paper, cut marks thrown in free.
because I'd used Tops-10 Runoff at uni, nroff made sense. The guys who walked in other doors wound up tooled in TeX which I didn't {relax} get. On Thu, Feb 11, 2021 at 8:37 AM Jon Steinhart wrote: > > Greg A. Woods writes: > > > > Ditroff (or sqtroff) was also incredibly rare to non-existent for 99% of > > the Unix sites I worked at and visited; even some time after it became > > available. Even sites running native AT&T Unix, e.g. on 3B2s, and thus > > could easily obtain it, often didn't want the added expense of > > installing it. > > Maybe for you; I had it everywhere that I worked. From ron at ronnatalie.com Thu Feb 11 10:27:17 2021 From: ron at ronnatalie.com (Ron Natalie) Date: Thu, 11 Feb 2021 00:27:17 +0000 Subject: [TUHS] troff was not so widely usable (was: The UNIX Command Language (1976)) In-Reply-To: References: <8b580c46-ecfb-9383-ed43-08108b3ee7bf@tllds.com> <20201130163753.GB18187@mcvoy.com> <202102102236.11AMann01820861@darkstar.fourwinds.com> Message-ID: We used nroff quite a bit with both the Model37 teletype (for which it was designed, ours even had the greek box on it) and with output filters for the lineprinter and the Diablos. Later on we drove troff into cat emulators that used Versatec printers. I don’t know where Berkeley’s vcat got their fonts, but the JHU verset had an amusing history on that. George Toth went down to the NRL which had a real CAT and printed out the fonts in large point size on film. In the basement of the biophysics building was a scanning transmission electron microscope which used a PDP-11/20 as its controller and an older (512x512 or so) framebuffer. George took the scanning wires off the microscope and hooked them up to the X and Y of a Tektronix oscilloscope. Then he put a photomultiplier tube in a scope camera housing and hooked the sense wire from the microscope to that. He now had the world's most expensive flying spot scanner.
He’d tape one letter at a time to the scope and then bring up the microscope software (DOS/BATCH I think) and tell it to run the microscope. Then without powering down the memory in the framebuffer, he’d boot up miniunix and copy the stuff from the framebuffer to an RX05 pack. After months of laborious scanning he was able to write the CAT emulator. I had gone to work for Martin Marietta working on a classified project so I wrote hacks to the -mm macro package to handle security markings (automatically putting the highest on each page on the top and bottom). Later when ditroff became available I continued to use it with various laserprinters. I even wrote macro packages to emulate IBM’s doc style when we were contracting with them. This was all to the chagrin of my boss who wanted us to switch to Framemaker. From lm at mcvoy.com Thu Feb 11 10:36:40 2021 From: lm at mcvoy.com (Larry McVoy) Date: Wed, 10 Feb 2021 16:36:40 -0800 Subject: [TUHS] troff was not so widely usable (was: The UNIX Command Language (1976)) In-Reply-To: References: <8b580c46-ecfb-9383-ed43-08108b3ee7bf@tllds.com> <20201130163753.GB18187@mcvoy.com> <202102102236.11AMann01820861@darkstar.fourwinds.com> Message-ID: <20210211003640.GW13701@mcvoy.com> On Thu, Feb 11, 2021 at 12:27:17AM +0000, Ron Natalie wrote: > George Toth went down to the NRL which had a real CAT and printed out the > fonts in large point size on film. In the basement of the biophysics > bulding was a scanning transmission electron microscope which used a > PDP-11/20 as its controller and an older (512x512 or so) framebuffer. > George took the scanning wires off the microsope nad hooked them up to the X > and Y of a tektronics oscilliscope. Then he put a photomutlipler tube in > a scope camera housing and hoked the sense wire from the microscope to that. > > He now had the worlds most expensive flying spot scanner.
He???d tape one > letter at a time to the scope and then bring up the microscope sofware > (DOS/BATCH I think) and tell it to run the microscope. Then without > powering down the memory in the framebuffer, he???d boot up miniunix and > copy the stuff from the framebuffer to an RX05 pack. > After months of laboriously scanning he was able to write the CAT emulator. That's dedication, what else did George do? From clemc at ccc.com Thu Feb 11 11:53:17 2021 From: clemc at ccc.com (Clem Cole) Date: Wed, 10 Feb 2021 20:53:17 -0500 Subject: [TUHS] troff was not so widely usable (was: The UNIX Command Language (1976)) In-Reply-To: References: <8b580c46-ecfb-9383-ed43-08108b3ee7bf@tllds.com> <20201130163753.GB18187@mcvoy.com> <202102102236.11AMann01820861@darkstar.fourwinds.com> Message-ID: Ron. That’s awesome. Ferrin used the Same set of Hersey Font that the XGP used. He got them from Stanford as I recall but they were publically (aka open source) On Wed, Feb 10, 2021 at 7:28 PM Ron Natalie wrote: > We used nroff quite a bit with both the Model37 teletype (for which it > wsa designed, ours even had the greek box on it) and with output filters > for the lineprinter and the Diablos. > > Later on we drove troff into cat emulators that used Versatec printers. > I don’t knwo wher Berkely’s vcat got their fonts, but the JHU verset > had an amusing history on that. > > George Toth went down to the NRL which had a real CAT and printed out > the fonts in large point size on film. In the basement of the > biophysics bulding was a scanning transmission electron microscope which > used a PDP-11/20 as its controller and an older (512x512 or so) > framebuffer. George took the scanning wires off the microsope nad > hooked them up to the X and Y of a tektronics oscilliscope. Then he > put a photomutlipler tube in a scope camera housing and hoked the sense > wire from the microscope to that. > > He now had the worlds most expensive flying spot scanner. 
He’d tape > one letter at a time to the scope and then bring up the microscope > sofware (DOS/BATCH I think) and tell it to run the microscope. Then > without powering down the memory in the framebuffer, he’d boot up > miniunix and copy the stuff from the framebuffer to an RX05 pack. > After months of laboriously scanning he was able to write the CAT > emulator. > > I had gone to work for Martin Marietta wirking on a classified project > so I wrote hacks to the -mm macro package to handle security markings > (automatically putting the highest on each page on thte top and bottom). > Later when ditroff became available I continued to use it with > various laserprinters. I even wrote macropackages to emulate IBM’s > doc style when we were contracting with them. > > This was all to the chagrin of my boss who wanted us to switch to > Framemaker. > > > > -- Sent from a handheld expect more typos than usual -------------- next part -------------- An HTML attachment was scrubbed... URL: From rich.salz at gmail.com Thu Feb 11 11:59:56 2021 From: rich.salz at gmail.com (Richard Salz) Date: Wed, 10 Feb 2021 20:59:56 -0500 Subject: [TUHS] troff was not so widely usable (was: The UNIX Command Language (1976)) In-Reply-To: References: <8b580c46-ecfb-9383-ed43-08108b3ee7bf@tllds.com> <20201130163753.GB18187@mcvoy.com> <202102102236.11AMann01820861@darkstar.fourwinds.com> Message-ID: There used to be a great memo all about stealing fonts. Joel might remember it, as it circulated among CMU MIT Stanford etc. in the days of the XGP and Dover printers. -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From ggm at algebras.org Thu Feb 11 12:04:44 2021 From: ggm at algebras.org (George Michaelson) Date: Thu, 11 Feb 2021 12:04:44 +1000 Subject: [TUHS] troff was not so widely usable (was: The UNIX Command Language (1976)) In-Reply-To: References: <8b580c46-ecfb-9383-ed43-08108b3ee7bf@tllds.com> <20201130163753.GB18187@mcvoy.com> <202102102236.11AMann01820861@darkstar.fourwinds.com> Message-ID: As long as you do it in the USA, it isn't stealing. I was going to write about what a fantastic steal this was but recalled a quirk in US IPR around fonts, and their images. On Thu, Feb 11, 2021 at 12:00 PM Richard Salz wrote: > > There used to be a great memo all about stealing fonts. Joel might remember it, as it circulated among CMU MIT Stanford etc. in the days of the XGP and Dover printers. > From mah at mhorton.net Thu Feb 11 12:30:31 2021 From: mah at mhorton.net (Mary Ann Horton) Date: Wed, 10 Feb 2021 18:30:31 -0800 Subject: [TUHS] troff was not so widely usable In-Reply-To: References: <8b580c46-ecfb-9383-ed43-08108b3ee7bf@tllds.com> <20201130163753.GB18187@mcvoy.com> <202102102236.11AMann01820861@darkstar.fourwinds.com> Message-ID: <4be538bd-a4ee-4287-4d61-9cc6e18c061b@mhorton.net> We had vtroff at Berkeley around 1980, on the big Versatec wet plotter, 4 pages wide. We got really good at cutting up the pages on the output. It used the Hershey font. It was horrible. Mangled somehow, lots of parts of glyphs missing. I called it the "Horse Shit" font. I took it as my mission to clean it up. I wrote "fed" to edit it, dot by dot, on the graphical HP 2648 terminal at Berkeley. I got all the fonts reasonably cleaned up, but it was laborious. I still hated Hershey. It was my dream to get real C/A/T output at the largest 36 point size, and scan it in to create a decent set of Times fonts. I finally got the C/A/T output years later at Bell Labs, but there were no scanners available to me at the time. Then True Type came along and it was moot. 
I did stumble onto one nice rendition of Times Roman in one point size, from Stanford, I think. I used it to write banner(6). On 2/10/21 5:53 PM, Clem Cole wrote: > Ron. That’s awesome. Ferrin used the Same set of Hersey Font that the > XGP used. He got them from Stanford as I recall but they were > publically (aka open source) > > On Wed, Feb 10, 2021 at 7:28 PM Ron Natalie > wrote: > > We used nroff quite a bit with both the Model37 teletype (for > which it > wsa designed, ours even had the greek box on it) and with output > filters > for the lineprinter and the Diablos. > > Later on we drove troff into cat emulators that used Versatec > printers. > I don’t knwo wher Berkely’s vcat got their fonts, but the JHU > verset > had an amusing history on that. > > George Toth went down to the NRL which had a real CAT and printed out > the fonts in large point size on film. In the basement of the > biophysics bulding was a scanning transmission electron microscope > which > used a PDP-11/20 as its controller and an older (512x512 or so) > framebuffer. George took the scanning wires off the microsope nad > hooked them up to the X and Y of a tektronics oscilliscope. Then he > put a photomutlipler tube in a scope camera housing and hoked the > sense > wire from the microscope to that. > > He now had the worlds most expensive flying spot scanner. He’d tape > one letter at a time to the scope and then bring up the microscope > sofware (DOS/BATCH I think) and tell it to run the microscope. > Then > without powering down the memory in the framebuffer, he’d boot up > miniunix and copy the stuff from the framebuffer to an RX05 pack. > After months of laboriously scanning he was able to write the CAT > emulator. > > I had gone to work for Martin Marietta wirking on a classified > project > so I wrote hacks to the -mm macro package to handle security markings > (automatically putting the highest on each page on thte top and > bottom).
> Later when ditroff became available I continued to use it with > various laserprinters. I even wrote macropackages to emulate IBM’s > doc style when we were contracting with them. > > This was all to the chagrin of my boss who wanted us to switch to > Framemaker. > > > > -- > Sent from a handheld expect more typos than usual -------------- next part -------------- An HTML attachment was scrubbed... URL: From rich.salz at gmail.com Thu Feb 11 12:44:24 2021 From: rich.salz at gmail.com (Richard Salz) Date: Wed, 10 Feb 2021 21:44:24 -0500 Subject: [TUHS] troff was not so widely usable (was: The UNIX Command Language (1976)) In-Reply-To: References: <8b580c46-ecfb-9383-ed43-08108b3ee7bf@tllds.com> <20201130163753.GB18187@mcvoy.com> <202102102236.11AMann01820861@darkstar.fourwinds.com> Message-ID: On Wed, Feb 10, 2021, 9:04 PM George Michaelson wrote: > As long as you do it in the USA, it isn't stealing. > Not sure of that, but there are other techniques to protect it, like patent, trademark, and trade secret. Just like unpublished proprietary source code of AT&T, to coin a phrase. > -------------- next part -------------- An HTML attachment was scrubbed... URL: From lm at mcvoy.com Thu Feb 11 12:52:53 2021 From: lm at mcvoy.com (Larry McVoy) Date: Wed, 10 Feb 2021 18:52:53 -0800 Subject: [TUHS] troff was not so widely usable In-Reply-To: <4be538bd-a4ee-4287-4d61-9cc6e18c061b@mhorton.net> References: <8b580c46-ecfb-9383-ed43-08108b3ee7bf@tllds.com> <20201130163753.GB18187@mcvoy.com> <202102102236.11AMann01820861@darkstar.fourwinds.com> <4be538bd-a4ee-4287-4d61-9cc6e18c061b@mhorton.net> Message-ID: <20210211025253.GX13701@mcvoy.com> The Hershey fonts were what we had, they kinda sucked but you worked with them. I think it's a rite of passage, you know those fonts, you were there, it was not great. People who haven't been there have no idea how lucky they are.
On Wed, Feb 10, 2021 at 06:30:31PM -0800, Mary Ann Horton wrote:
> We had vtroff at Berkeley around 1980, on the big Versatec wet plotter, 4
> pages wide. We got really good at cutting up the pages on the output.
>
> It used the Hershey font. It was horrible. Mangled somehow, lots of parts of
> glyphs missing. I called it the "Horse Shit" font.
>
> I took it as my mission to clean it up. I wrote "fed" to edit it, dot by
> dot, on the graphical HP 2648 terminal at Berkeley. I got all the fonts
> reasonably cleaned up, but it was laborious.
>
> I still hated Hershey. It was my dream to get real C/A/T output at the
> largest 36 point size, and scan it in to create a decent set of Times fonts.
> I finally got the C/A/T output years later at Bell Labs, but there were no
> scanners available to me at the time. Then True Type came along and it was
> moot.
>
> I did stumble onto one nice rendition of Times Roman in one point size, from
> Stanford, I think. I used it to write banner(6).
>
> On 2/10/21 5:53 PM, Clem Cole wrote:
> >Ron. That's awesome.  Ferrin used the same set of Hershey fonts that the
> >XGP used.  He got them from Stanford as I recall but they were publicly
> >(aka open source)
> >
> >On Wed, Feb 10, 2021 at 7:28 PM Ron Natalie >> wrote:
> >
> > We used nroff quite a bit with both the Model37 teletype (for
> > which it was designed, ours even had the greek box on it) and with output
> > filters for the lineprinter and the Diablos.
> >
> > Later on we drove troff into cat emulators that used Versatec
> > printers.  I don't know where Berkeley's vcat got their fonts, but the JHU
> > verset had an amusing history on that.
> >
> > George Toth went down to the NRL which had a real CAT and printed out
> > the fonts in large point size on film.  In the basement of the
> > biophysics building was a scanning transmission electron microscope
> > which used a PDP-11/20 as its controller and an older (512x512 or so)
> > framebuffer.  George took the scanning wires off the microscope and
> > hooked them up to the X and Y of a Tektronix oscilloscope.  Then he
> > put a photomultiplier tube in a scope camera housing and hooked the
> > sense wire from the microscope to that.
> >
> > He now had the world's most expensive flying spot scanner.  He'd tape
> > one letter at a time to the scope and then bring up the microscope
> > software (DOS/BATCH I think) and tell it to run the microscope.  Then
> > without powering down the memory in the framebuffer, he'd boot up
> > miniunix and copy the stuff from the framebuffer to an RX05 pack.
> > After months of laboriously scanning he was able to write the CAT
> > emulator.
> >
> > I had gone to work for Martin Marietta working on a classified
> > project so I wrote hacks to the -mm macro package to handle security markings
> > (automatically putting the highest on each page on the top and
> > bottom).
> > Later when ditroff became available I continued to use it with
> > various laser printers.  I even wrote macro packages to emulate IBM's
> > doc style when we were contracting with them.
> >
> > This was all to the chagrin of my boss who wanted us to switch to
> > Framemaker.
> >
> >--
> >Sent from a handheld expect more typos than usual

--
--- Larry McVoy            lm at mcvoy.com             http://www.mcvoy.com/lm

From usotsuki at buric.co Thu Feb 11 13:02:27 2021
From: usotsuki at buric.co (Steve Nickolas)
Date: Wed, 10 Feb 2021 22:02:27 -0500 (EST)
Subject: [TUHS] troff was not so widely usable (was: The UNIX Command Language (1976))
In-Reply-To: 
References: <8b580c46-ecfb-9383-ed43-08108b3ee7bf@tllds.com> <20201130163753.GB18187@mcvoy.com> <202102102236.11AMann01820861@darkstar.fourwinds.com>
Message-ID: 

On Wed, 10 Feb 2021, Richard Salz wrote:
> On Wed, Feb 10, 2021, 9:04 PM George Michaelson wrote:
> >
> Not sure of that, but there are other techniques to protect it, like
> patent, trademark, and trade secret.
Just like unpublished proprietary > source code of AT&T, to coin a phrase. I think you can't copyright the shapes, but you can copyright the vectors that generate them because they're technically code. Something weird like that. But that's how you can have all those knockoff fonts Bitstream did, and why a font like Book Antiqua was possible under US law. -uso. From toby at telegraphics.com.au Thu Feb 11 14:07:33 2021 From: toby at telegraphics.com.au (Toby Thain) Date: Wed, 10 Feb 2021 23:07:33 -0500 Subject: [TUHS] troff was not so widely usable (was: The UNIX Command Language (1976)) In-Reply-To: References: <8b580c46-ecfb-9383-ed43-08108b3ee7bf@tllds.com> <20201130163753.GB18187@mcvoy.com> <202102102236.11AMann01820861@darkstar.fourwinds.com> Message-ID: <50db4b48-633b-216a-3799-40f9c924f337@telegraphics.com.au> On 2021-02-10 10:02 p.m., Steve Nickolas wrote: > On Wed, 10 Feb 2021, Richard Salz wrote: > >> On Wed, Feb 10, 2021, 9:04 PM George Michaelson wrote: >> >> Not sure of that, but there are other techniques to protect it, like >> patent, trademark, and trade secret.  Just like unpublished proprietary >> source code of AT&T, to coin a phrase. > > I think you can't copyright the shapes, but you can copyright the > vectors that generate them because they're technically code. > > Something weird like that. > > But that's how you can have all those knockoff fonts Bitstream did, and > why a font like Book Antiqua was possible under US law. Added irony and lesson: Bitstream's digitisations and originals were markedly better and more complete than Adobe's, who were able to use the licensed names. --T > > -uso. 
From andrew at humeweb.com Thu Feb 11 16:42:07 2021
From: andrew at humeweb.com (Andrew Hume)
Date: Wed, 10 Feb 2021 22:42:07 -0800
Subject: [TUHS] troff was not so widely usable
In-Reply-To: <20210211025253.GX13701@mcvoy.com>
References: <8b580c46-ecfb-9383-ed43-08108b3ee7bf@tllds.com> <20201130163753.GB18187@mcvoy.com> <202102102236.11AMann01820861@darkstar.fourwinds.com> <4be538bd-a4ee-4287-4d61-9cc6e18c061b@mhorton.net> <20210211025253.GX13701@mcvoy.com>
Message-ID: <8412062A-2360-4464-B834-2567AEA69C0D@humeweb.com>

there was actually a weird fuss about the Mergenthaler typesetter.

Ken figured out how the fonts were encoded and so we had the raw outline data for all the fonts (they were gorgeous!). this enabled us to add in special characters like the peter weinberger face.

ken wanted to do the right thing and tried to license the fonts from Mergenthaler, but we had endless discussions where Ken would say “we know how the fonts are encoded, can we license them?” and the sales person would say “no you can’t; they’re secret”, and so on. we’d even show them the peter face but to no avail. so we kinda flew under the radar for that (but we tried!).

the typesetter itself was entertaining; if i recall correctly, the software ran on an 8in floppy and Ken wrote a B compiler/run time system for the computer inside.
andrew From robpike at gmail.com Thu Feb 11 17:12:02 2021 From: robpike at gmail.com (Rob Pike) Date: Thu, 11 Feb 2021 18:12:02 +1100 Subject: [TUHS] troff was not so widely usable In-Reply-To: <8412062A-2360-4464-B834-2567AEA69C0D@humeweb.com> References: <8b580c46-ecfb-9383-ed43-08108b3ee7bf@tllds.com> <20201130163753.GB18187@mcvoy.com> <202102102236.11AMann01820861@darkstar.fourwinds.com> <4be538bd-a4ee-4287-4d61-9cc6e18c061b@mhorton.net> <20210211025253.GX13701@mcvoy.com> <8412062A-2360-4464-B834-2567AEA69C0D@humeweb.com> Message-ID: https://www.cs.princeton.edu/~bwk/202/index.html On Thu, Feb 11, 2021 at 6:05 PM Andrew Hume wrote: > there was actually a weird fuss about the Merganthaler typesetter. > > Ken figured out how the fonts were encoded and so we had the raw outline > data for > all the fonts (they were gorgeous!). this enabled us to add in special > characters like > the peter weinberger face. > > ken wanted to do the right thing and tried to license the fonts from > Merganthaler, > but we had endless discussions where Ken would say “we know how the fonts > are encoded, > can we license them?” and the sales person would say “no you can’t; > they’re secret”, > and so on. we’d even show them the peter face but to no avail. so we kinda > flew > under the radar for that (but we tried!). > > the typesetter itself was entertaining; if i recall correctly, the > software ran on a 8in floppy > and Ken wrote a B compiler/run time system for the computer inside. > > andrew -------------- next part -------------- An HTML attachment was scrubbed... URL: From beebe at math.utah.edu Thu Feb 11 22:52:53 2021 From: beebe at math.utah.edu (Nelson H. F. Beebe) Date: Thu, 11 Feb 2021 05:52:53 -0700 Subject: [TUHS] troff was not so widely usable Message-ID: Recent discussions on this list are about the problem getting fonts for typesetting before there was an industry to provide them. 
Noted font designer Chuck Bigelow has written about the subject here: Notes on typeface protection TUGboat 7(3) 146--151 October 1986 https://tug.org/TUGboat/tb07-3/tb16bigelow.pdf Other TUGboat papers by him and his design partner, Kris Holmes, might be of reader interest: Lucida and {\TeX}: lessons of logic and history https://tug.org/TUGboat/tb15-3/tb44bigelow.pdf About the DK versions of Lucida https://tug.org/TUGboat/tb36-3/tb114bigelow.pdf A short history of the Lucida math fonts https://tug.org/TUGboat/tb37-2/tb116bigelow-lucidamath.pdf Science and history behind the design of Lucida https://tug.org/TUGboat/tb39-3/tb123bigelow-lucida.pdf ------------------------------------------------------------------------------- - Nelson H. F. Beebe Tel: +1 801 581 5254 - - University of Utah FAX: +1 801 581 4148 - - Department of Mathematics, 110 LCB Internet e-mail: beebe at math.utah.edu - - 155 S 1400 E RM 233 beebe at acm.org beebe at computer.org - - Salt Lake City, UT 84112-0090, USA URL: http://www.math.utah.edu/~beebe/ - ------------------------------------------------------------------------------- From gnu at toad.com Thu Feb 11 23:06:23 2021 From: gnu at toad.com (John Gilmore) Date: Thu, 11 Feb 2021 05:06:23 -0800 Subject: [TUHS] troff was not so widely usable In-Reply-To: References: <8b580c46-ecfb-9383-ed43-08108b3ee7bf@tllds.com> <20201130163753.GB18187@mcvoy.com> <202102102236.11AMann01820861@darkstar.fourwinds.com> <4be538bd-a4ee-4287-4d61-9cc6e18c061b@mhorton.net> <20210211025253.GX13701@mcvoy.com> <8412062A-2360-4464-B834-2567AEA69C0D@humeweb.com> Message-ID: <31617.1613048783@hop.toad.com> Andrew Hume wrote: > ken wanted to do the right thing and tried to license the fonts from > Merganthaler, but we had endless discussions where Ken would say "we > know how the fonts are encoded, can we license them?" and the sales > person would say "no you can't; they're secret"... 
The 202 reconstruction paper also bemoans the bit-rot loss of the information about the Mergenthaler character representation. This reminded me of a project that I and a small team did in the 1980s. We were licensees of Sun's NeWS source code, and we wanted our software to be able to use the wide variety of fonts sold commercially by Adobe and font design companies. The problem was, they were encoded in Adobe Type 1 font definitions, which Adobe considered a proprietary trade secret. Due to the lack of copyright protection for fonts (whose whole raison d'etre is to be copied onto paper many thousands or millions of times by each user), font designers used security by obscurity to protect their work. Our team ended up pulling the ROMs out of an original LaserWriter, and writing and improving a 68000 disassembler. One of our team members read the code, figured out which parts handled these fonts, and how it decoded them. He wrote that down in his own words in a plain text document, not a program, following the prevailing court decisions about how to avoid copyright issues while reverse-engineering a trade secret. Ultimately, we released that document to some interested people, so that others could implement support for Type 1 fonts. Shortly afterward, Adobe magnanimously decided to "release the specification", as Wikipedia says. Later, I got a nice note from L. Peter Deutsch, maintainer of Ghostscript, who said: I just received my copy of the Adobe Type 1 Font Format book, and compared the contents against the message you sent me last July. You guys at Grasshopper really did a good job of cracking the format. I was amused to see that the book omits quite a few operators that you deciphered on your own. This wasn't too long after Adobe had also been threatening Sun and its NeWS licensees for re-implementing PostScript from its spec, rather than licensing it from them. 
I had noticed a paragraph in the prospectus for their IPO, which said something like, "Adobe has put PostScript into the public domain in order to encourage its wide use." Either the language was free for anyone to implement, or they were guilty of securities fraud. When hoist on that petard, so to speak, they backed down. John Appendix: The reverse-engineered font format. I was happy to be able to find a copy of this on my current machine -- it too could have bit-rotted in the last 30 years. File mod date is 1989. I am not the author. Description of Adobe PostScript FontType 1 PostScript FontType 1 is Adobe's proprietary font format. The internal fonts in Apple LaserWriters (the Times, Helvetica, and Courier families) are stored in this format, although the stroke descriptions are not normally available outside the LaserWriter. Other fonts from Adobe are also in this format, although an additional layer of encryption prevents the stroke descriptions from being directly visible. IMPORTANT NOTE: The shrink-wrap license agreement under which external Adobe fonts are distributed expressly forbids the decryption and decompilation of these fonts. It does not appear to forbid the using of a program to decrypt and display these fonts, (since this is what happens to them inside a PostScript printer) as long as they are not used on more than one PRINTER at a time. PostScript fonts are accessed through Font Dictionaries. See the Red Book, section 5.3, for a description of the standard entries in a Font Dictionary. The entries we will concern ourselves with here are the dictionaries CharStrings and Private. The character shape descriptions are stored in CharStrings in an encrypted format. The shape descriptions are reached via the Encoding vector, see section 5.4. When a character needs to be rendered, a pseudo-code interpreter is called, and handed the encrypted shape description, and the Font Dictionary. The pseudo-code interpreter checks that this is a valid Type 1 font. 
The Font Dictionary for a valid font must contain an entry called `Private', which is a dictionary. Private must contain the entry `password', an integer having the value 5839. The shape descriptions have the ability to call other encrypted routines, accessed by their index in the array `Subrs', defined in Private. The encrypted routines can also call PostScript code directly, executing an element from the array `OtherSubrs', defined in Private. Execution of the pseudo-code occurs in the same environment as the BuildChar routine for a user defined font (see section 5.7). A gsave precedes this execution, and the CTM is modified by the FontMatrix. Upon exiting, a grestore is performed, and the currentpoint is updated by the width of the character.

Encrypted routines from CharStrings and Subrs can be decrypted with the following program. Routines in OtherSubrs are ordinary, unencrypted PostScript code.

#include <stdio.h>

int magic1, magic2, magic3, magic4, mask;

main()
{
	int c, d, skip4;

#ifdef eexec
	magic1 = 0x9a36dda2 ;
	magic2 = 0x9a3704d3 ;
#else
	magic1 = 0x3f8927b5 ;
	magic2 = 0x3fea375f ;
#endif
	magic3 = 0x3fc5ce6d ;
	magic4 = 0x0d8658bf ;
	mask = magic1 ^ magic2 ;
	skip4 = 0 ;
	printf("<");
	while ((c = getchar()) != EOF) {
		d = ((mask >> 8) ^ c) & 0xff ;
		if (++skip4 > 4)
#ifdef eexec
			putchar(d);
#else
			printf(" %02x", d);
#endif
		mask = magic4 + magic3 * (mask + c) ;
	}
	printf(" >");
	exit(0);
}

Note that this is the same decryption algorithm used for eexec, except that the initial mask value, as determined by magic1 and magic2, is different. The result of decrypting is a font rendering pseudo code. Strings are stored encrypted in memory, and decrypted on the fly by the interpreter as it executes the pseudo-code.

Description of the pseudo-code.

The pseudo-code interpreter obtains bytes in sequence from the decryption routine. These bytes are grouped into a sequence of instructions.
Each instruction encodes either a number (which is pushed onto an internal stack), or an operation (which is performed). The initial byte of each pseudo-code instruction encodes its type and length. Initial bytes in the range 0x00-0x1f (inclusive) encode operations. Common operations are encoded completely by a single byte, and require no additional bytes. Less common operations are encoded in two bytes. The initial byte of a two byte operation is always 0x0c. The second byte has a value in the range 0x00-0x21 (inclusive), which specifies the operation.

Initial bytes in the range 0x20-0xf6 (inclusive) encode small numbers. No additional bytes are required for a small number. The value pushed on the stack is the initial byte minus 0x8b.

Initial bytes in the range 0xf7-0xfe (inclusive) encode medium numbers. Medium numbers are followed by one additional byte. Medium numbers come in two flavors: positive, and negative. Initial bytes in the range 0xf7-0xfa (inclusive) encode positive numbers, the remainder are negative. The magnitude of the number pushed is:

	(( ((initial_byte-0xf7)&3) << 8) | additional_byte) + 0x6c

If the initial byte indicates that the number is negative, the number calculated above is negated before being pushed onto the stack.

An initial byte of 0xff indicates a large number. A large number requires 4 additional bytes to specify its value. The large number's value is encoded directly in the additional bytes, most significant byte first, in descending order of significance.

When a number is encountered, it is pushed onto an internal stack (separate from the PostScript operand stack). Most operations take values from this stack, and a few return values on the stack. In general it is not important that this is not the real operand stack. The exceptions are OtherSubrs and StackShift. One of the arguments to OtherSubrs is an argument count.
It transfers that many arguments from the internal stack to the operand stack before calling the PostScript procedure. After the procedure returns, StackShift may be used to transfer individual values from the operand stack to the internal stack.

Another difference between the stacks is that some of the operations clear the stack after execution. These operations read their arguments from the bottom of the stack, not the top, so in this sense, it isn't really a stack, but it can still be thought of as behaving like one, at least locally. Operations which exhibit this behavior are marked below with an *.

Many of the operations perform functions similar to certain PostScript operators. In these cases, only the name of the appropriate PostScript operator is given. Many of the path extension commands have additional versions which restrict the motion associated with them to being horizontal or vertical, thus one fewer argument is needed for them. These are identified by an `h' or `v', generally as the second character of the name. Thus rhlineto is equivalent to: { 0 exch rlineto } Additionally, the operators max and min are available, which are, for some reason, missing from PostScript. Codes labeled `(variable)' look up the given name in the Private dictionary and push the associated value onto the internal stack.

Descriptions of short operations.

0x00 * VHintWidth ycenter ==> -
0x01 * VHint bottom vwidth ==> -
0x02 * HHintWidth xcenter ==> -
0x03 * HHint left hwidth ==> -
	These operations give information about the position of some of the features of a character to the non-linear scaling code. The dynamics of the non-linear scaling is not yet clear. The *Width operations take one argument, assuming StrokeWidth as the second arg (the first arg indicates the center of the line to be stroked).
The other operations take 2 arguments, the first being a (horizontal or vertical) position of the (left or bottom) side of the feature, and the second being the width of the feature being described. These, and similar hint routines, are called before any path construction operators.

0x04 * rvmoveto ydelta ==> -
0x05 * rlineto xdelta ydelta ==> -
0x06 * rhlineto xdelta ==> -
0x07 * rvlineto ydelta ==> -
0x08 * rspline dx1 dy1 dx2 dy2 dx3 dy3 ==> -
	similar to rcurveto, but each control point is specified relative to the previous one, instead of relative to the currentpoint.
0x09 * closepath - ==> -
0x0a Subrs sub ==> -
	takes one argument, the number of the subroutine to execute in Private/Subrs.
0x0b Retn - ==> -
	return from subroutine. All elements of Subrs end with this operation.
0x0c LongOp
	followed by another operation code, as described below.
0x0d * Metrics lsb_x width_x ==> -
	takes two arguments: the left side bearing, and width. This information is used (with the y values of each assumed zero) if it is not overridden by an element in the font dictionary's Metrics entry. This is usually the first operation executed by a CharString routine. The internal currentpoint is set to the left side bearing.
0x0e Finish - ==> (stack no longer exists)
	clean up and either fill or stroke the path (depending on PaintType), grestore, and exit. Apparently, font cache information is determined here for all PaintTypes except 3. Fonts of PaintType 3 would call setcachedevice before executing any rendering operations, and would exit with QuickFinish. Some of the operations labeled here with ?'s are likely to be fill and stroke, for use only by PaintType 3 fonts.
0x0f * moveto x y ==> -
0x10 * lineto x y ==> -
0x11 * curveto x1 y1 x2 y2 x3 y3 ==> -
0x12 min a b ==> min(a, b)
0x13 * ? 26c8c6(3fe0000000000000) ? - ==> -
0x14 * newpath - ==> -
0x15 * rmoveto dx dy ==> -
0x16 * rhmoveto dx ==> -
0x17 * ? set two bits of something in gstate ? ferd ==> -
0x18 mul a b ==> a*b
0x19 strokewidth (variable)
0x1a baseline (variable)
0x1b capheight (variable)
0x1c bover (variable)
0x1e * htovrspline dx1 dx2 dy2 dy3 ==> -
0x1f * vtohrspline dy1 dx2 dy2 dx3 ==> -
	these take 4 args instead of six, set the remaining two to 0, and call rspline. The curves are constrained to start and end either vertically or horizontally.

Long Operations

0x00 * ? toggle something and remember... strange. ? - ==> -
	this operation appears around some subrs. sometimes it is outside the call, and sometimes inside the subr itself. it always appears in pairs. the first call to it sets a value to zero. the second call sets it to some function of the currentpoint. the value is used to modify all of the coordinates somehow, apparently in conjunction with the non-linear scaling, but only when other conditions are met.
0x01 * HMHint x1 width1 x2 width2 x3 width3 ==> -
0x02 * VMHint y1 height1 y2 height2 y3 height3 ==> -
	These each take 6 args and appear to encode information for 3 position/value pairs. See HHint above.
0x03 * ? something about MinFeature ?
0x04 * arc x y r ang1 ang2 ==> -
0x05 * ? 1 arg to 26a9f4 ? ferd ==> -
0x06 * Accent ferd x y base accent ==> (no stack)
	Used to create composite characters. Executes the character routine corresponding to base, then adjusts the current position by x and y, and executes the character routine corresponding to accent. Base and accent are character codes (0-255). This terminates execution for this character. First arg is unclear, but modifies the x position of the accent character somehow.
0x07 * LongMetrics lsbx lsby widthx widthy ==> -
	same as Metrics, but specifies y values as well.
0x08 * setcachedevice llx lly urx ury ==> -
	character width is taken from the metric information, so only 4 args are required. the bounding box is expanded by strokewidth/2 on all sides before being passed to the actual routine. Presumably only used by PaintType 3 fonts.
0x09 * QuickFinish - ==> (no stack)
	exit without cleaning up or stroking or filling. just grestore.
0x0a add a b ==> a+b
0x0b sub a b ==> a-b
0x0c div a b ==> a/b
0x0d max a b ==> max(a, b)
0x0e neg a ==> -a
0x0f TestAdd a b c d ==> b > c ? a + d : a
0x10 OtherSubrs a1 a2 a3 ... an n index ==> -
	call directly to PostScript code. top of stack is index of code to call, next is number of args to transfer to operand stack, followed by the args to be transferred.
0x11 StackShift - ==> n
	move a number from the operand stack to the internal stack.
0x12 decend (variable)
0x13 ascend (variable)
0x14 overshoot (variable)
0x15 * ? set two bits of something in gstate ? ferd ==> -
0x16 xover (variable)
0x17 capover (variable)
0x18 aover (variable)
0x19 halfsw (variable)
0x1a PixelRound
	round a value to an even boundary in device space. this is just a guess, not verified.
0x1b * arcn x y r ang1 ang2 ==> -
0x1c exch a b ==> b a
0x1d index an ... a1 a0 n ==> an ... a1 a0 an
0x1e * VHintRound bottom vwidth ==> -
0x1f * HHintRound left hwidth ==> -
	Same as VHint and HHint, except the width is sent to the pixel round routine. Additionally, these will take negative widths and deal with them correctly, the others might not.
0x20 ** currentpoint - ==> x y
	after pushing the coordinates onto the top of the stack, the stack pointer is set to point at the bottom of the stack. presumably this should only be used when the stack is clear. in any case, no math can be performed on these values, as the stack is `empty' when they are there. this might be a bug, as I suspect these were meant to be passed to Qmoveto, but they can never get there. moveto, however will work correctly.
0x21 ** Qmoveto x y ==> -
	doesn't actually call moveto, just remembers new position internally. the arguments are popped from the top of the stack, and then the stack is cleared.
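As a cross-check on the notes above, the decryption and the number-token encoding can be restated as a small self-contained C sketch. This is not from the original decompilation notes: the function and constant names are illustrative, and the 16-bit constants are just the low halves of the magic values in the program above (0xd971 = magic1^magic2 for eexec, 0x10ea for CharStrings/Subrs, 0xce6d and 0x58bf from magic3 and magic4), which suffice because the output byte uses only bits 8-15 of the mask and the mask update is self-contained modulo 2^16.

```c
#include <stdint.h>
#include <stddef.h>
#include <string.h>   /* memcmp, for self-testing */
#include <assert.h>

/* Low 16 bits of the magic constants described above. */
enum {
    T1_C1 = 0xce6d,             /* multiplier (magic3 & 0xffff)      */
    T1_C2 = 0x58bf,             /* addend     (magic4 & 0xffff)      */
    T1_EEXEC_KEY = 0xd971,      /* eexec initial mask                */
    T1_CHARSTRING_KEY = 0x10ea  /* CharStrings/Subrs initial mask    */
};

/* Decrypt n bytes; the caller discards the first 4 plaintext bytes,
 * mirroring the skip4 logic in the program above. */
void t1_decrypt(uint16_t key, const unsigned char *in, size_t n,
                unsigned char *out)
{
    uint16_t r = key;
    for (size_t i = 0; i < n; i++) {
        unsigned char c = in[i];
        out[i] = (unsigned char)(c ^ (r >> 8));
        r = (uint16_t)(((unsigned)c + r) * T1_C1 + T1_C2);
    }
}

/* Exact inverse of t1_decrypt: the keystream depends only on the
 * ciphertext bytes, which are identical in both directions. */
void t1_encrypt(uint16_t key, const unsigned char *in, size_t n,
                unsigned char *out)
{
    uint16_t r = key;
    for (size_t i = 0; i < n; i++) {
        unsigned char c = (unsigned char)(in[i] ^ (r >> 8));
        out[i] = c;
        r = (uint16_t)(((unsigned)c + r) * T1_C1 + T1_C2);
    }
}

/* Decode one number token per the encoding rules above.  Returns the
 * byte count consumed, or 0 if *p begins an operation, not a number. */
int t1_decode_number(const unsigned char *p, long *value)
{
    int b0 = p[0];
    if (b0 >= 0x20 && b0 <= 0xf6) {          /* small number, 1 byte  */
        *value = b0 - 0x8b;
        return 1;
    }
    if (b0 >= 0xf7 && b0 <= 0xfe) {          /* medium number, 2 bytes */
        long mag = (((long)((b0 - 0xf7) & 3) << 8) | p[1]) + 0x6c;
        *value = (b0 <= 0xfa) ? mag : -mag;  /* 0xfb-0xfe are negative */
        return 2;
    }
    if (b0 == 0xff) {                        /* large number, 5 bytes  */
        uint32_t v = ((uint32_t)p[1] << 24) | ((uint32_t)p[2] << 16)
                   | ((uint32_t)p[3] << 8)  |  (uint32_t)p[4];
        *value = (long)(int32_t)v;           /* MSB first, signed      */
        return 5;
    }
    return 0;
}
```

A minimal charstring dump along these lines would decrypt with T1_CHARSTRING_KEY, discard the first four plaintext bytes, and then walk the buffer with t1_decode_number, falling back to the opcode tables above whenever it returns 0.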
From ron at ronnatalie.com Fri Feb 12 02:55:34 2021
From: ron at ronnatalie.com (Ron Natalie)
Date: Thu, 11 Feb 2021 16:55:34 +0000
Subject: [TUHS] troff was not so widely usable (was: The UNIX Command Language (1976))
In-Reply-To: 
References: <8b580c46-ecfb-9383-ed43-08108b3ee7bf@tllds.com> <20201130163753.GB18187@mcvoy.com> <202102102236.11AMann01820861@darkstar.fourwinds.com>
Message-ID: 

It's important to know the difference between a font and a typeface. A typeface isn't protectable. That's the representation of the actual letters on the printed page (or screen in our case). George was free to scan the output of the phototypesetter. The font is the process to make these (in modern days small programs that generate the letters). This is what can be protected by copyright. The name can be protected by trademark as well. HELVETICA is a trademark (now) of Mergenthaler Linotype. Arial is a similar typeface but that name is owned by Monotype.

Straying a little from the topic, a real Linotype machine is a joy to behold. They have one at the Baltimore Museum of Science and Industry that they still fire up weekly. What it does is integrate a keyboard with the actual fonts (molds for molten lead) and casts a line of type (hence the name) at a time. After it does so, the molds go back into sorted hoppers for further use.

To answer the other question about George Toth at JHU: he was our documentation guy and went off to work for Airinc or something. I've not heard from him in a long time. We continued to use his verset and a Versatec for a while with straight troff. I also hacked it to draw on the framebuffers in BRL's graphics labs. Later more fonts became available from the Berkeley vcat/vtroff. Ditroff allowed direct selection of multiple fonts as opposed to having to hack on the "railmag" file (remember that guys?). Standard troff voodoo, just put a power of two backslashes in front of it until it works and if you still have problems add a \c.
-Ron
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From nobozo at gmail.com Fri Feb 12 03:34:11 2021
From: nobozo at gmail.com (Jon Forrest)
Date: Thu, 11 Feb 2021 09:34:11 -0800
Subject: [TUHS] troff was not so widely usable
In-Reply-To: <31617.1613048783@hop.toad.com>
References: <8b580c46-ecfb-9383-ed43-08108b3ee7bf@tllds.com> <20201130163753.GB18187@mcvoy.com> <202102102236.11AMann01820861@darkstar.fourwinds.com> <4be538bd-a4ee-4287-4d61-9cc6e18c061b@mhorton.net> <20210211025253.GX13701@mcvoy.com> <8412062A-2360-4464-B834-2567AEA69C0D@humeweb.com> <31617.1613048783@hop.toad.com>
Message-ID: <34f3d83f-c33e-83de-cb59-12209a7ff729@gmail.com>

On 2/11/2021 5:06 AM, John Gilmore wrote:
> This reminded me of a project that I and a small team did in the 1980s.
> We were licensees of Sun's NeWS source code, and we wanted our software
> to be able to use the wide variety of fonts sold commercially by Adobe
> and font design companies. The problem was, they were encoded in Adobe
> Type 1 font definitions, which Adobe considered a proprietary trade
> secret.
>
> Our team ended up pulling the ROMs out of an original LaserWriter, and
> writing and improving a 68000 disassembler. One of our team members
> read the code, figured out which parts handled these fonts, and how it
> decoded them. He wrote that down in his own words in a plain text
> document, not a program, following the prevailing court decisions about
> how to avoid copyright issues while reverse-engineering a trade secret.
> Ultimately, we released that document to some interested people, so that
> others could implement support for Type 1 fonts. Shortly afterward,
> Adobe magnanimously decided to "release the specification", as Wikipedia
> says.

I always thought that Prof. Michael Harrison and his group in the CS Dept. at UC Berkeley were the first to do this. I found a reference to this in
I found a reference to this in https://books.google.com/books?id=IToEAAAAMBAJ&pg=PT7&lpg=PT7&dq=michael++harrison+berkeley+postscript+fonts#v=onepage&q=michael%20%20harrison%20berkeley%20postscript%20fonts&f=false Plus, Mike told me personally that this is what happened. Jon From cowan at ccil.org Fri Feb 12 04:09:26 2021 From: cowan at ccil.org (John Cowan) Date: Thu, 11 Feb 2021 13:09:26 -0500 Subject: [TUHS] troff was not so widely usable In-Reply-To: <34f3d83f-c33e-83de-cb59-12209a7ff729@gmail.com> References: <8b580c46-ecfb-9383-ed43-08108b3ee7bf@tllds.com> <20201130163753.GB18187@mcvoy.com> <202102102236.11AMann01820861@darkstar.fourwinds.com> <4be538bd-a4ee-4287-4d61-9cc6e18c061b@mhorton.net> <20210211025253.GX13701@mcvoy.com> <8412062A-2360-4464-B834-2567AEA69C0D@humeweb.com> <31617.1613048783@hop.toad.com> <34f3d83f-c33e-83de-cb59-12209a7ff729@gmail.com> Message-ID: On Thu, Feb 11, 2021 at 12:34 PM Jon Forrest wrote: > I always thought the Prof. Michael Harrison and his group in the > CS Dept. at UC Berkeley were the first to do this. I found a reference > to this in > > > https://books.google.com/books?id=IToEAAAAMBAJ&pg=PT7&lpg=PT7&dq=michael++harrison+berkeley+postscript+fonts#v=onepage&q=michael%20%20harrison%20berkeley%20postscript%20fonts&f=false "It steam-engines when it comes steam-engine time." John Cowan http://vrici.lojban.org/~cowan cowan at ccil.org How they ever reached any conclusion at all is starkly unknowable to the human mind. --"Backstage Lensman", Randall Garrett -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From rdm at cfcl.com Fri Feb 12 04:43:49 2021 From: rdm at cfcl.com (Rich Morin) Date: Thu, 11 Feb 2021 10:43:49 -0800 Subject: [TUHS] troff was not so widely usable In-Reply-To: References: <8b580c46-ecfb-9383-ed43-08108b3ee7bf@tllds.com> <20201130163753.GB18187@mcvoy.com> <202102102236.11AMann01820861@darkstar.fourwinds.com> <4be538bd-a4ee-4287-4d61-9cc6e18c061b@mhorton.net> <20210211025253.GX13701@mcvoy.com> <8412062A-2360-4464-B834-2567AEA69C0D@humeweb.com> <31617.1613048783@hop.toad.com> <34f3d83f-c33e-83de-cb59-12209a7ff729@gmail.com> Message-ID: <55104AA3-B497-4852-8D5F-29513C000C71@cfcl.com> I've heard stories about a very high speed dot matrix printer used (IIRC) at Lawrence Berkeley Laboratory. Apparently, it used a bank of Hydrogen Thyratrons to multiplex high voltage onto a set of metal needles. The resulting sparks burned tiny black holes into the paper at 24K LPM. It occurs to me that it could probably have been used for graphics, typesetting, and such, but I dunno. Might anyone here be able to talk about (or provide links about) this beast? -r From cowan at ccil.org Fri Feb 12 06:27:27 2021 From: cowan at ccil.org (John Cowan) Date: Thu, 11 Feb 2021 15:27:27 -0500 Subject: [TUHS] troff was not so widely usable (was: The UNIX Command Language (1976)) In-Reply-To: References: <8b580c46-ecfb-9383-ed43-08108b3ee7bf@tllds.com> <20201130163753.GB18187@mcvoy.com> <202102102236.11AMann01820861@darkstar.fourwinds.com> Message-ID: On Thu, Feb 11, 2021 at 11:56 AM Ron Natalie wrote: > It's important to know the difference between a font and a typeface. A > typeface isn't protectable. That's the representation of the actual > letters on the printed page (or screen in our case). George was free to > scan the output of the phototypesetter. > Or to make use of bitmap fonts, which are exact representations of the typeface, at least in the U.S. (In Europe there are design patents that make typefaces protectable.) 
See section 1.12 of <http://www.faqs.org/faqs/fonts-faq/part2/> for the relevant quotations from the Code of Federal Regulations. Some time ago, I was hacking on the program FIGlet, which is a bells-and-whistles banner program: you write FIGfonts as plain text files with N lines per big character, where N is the font height measured in small characters. It is capable of kerning big characters nicely by using a chosen small character to represent a "squishable space". My main two improvements were to extend the font format to represent up to 2^31 big characters and to accept mapping tables to convert input from a specific encoding to font indices. (I ended up writing a comprehensive decoder for arbitrary ISO 2022 text, possibly the only one that has ever existed.) In addition, I wrote a program to convert X BDF (bitmap) fonts to FIGfonts, and packaged the standard BDF fonts with the standard FIGlet library. That served me well later when I was working for the Associated Press. My boss told me that the New York Daily News, which was downstairs in the same building, was tired of paying Reuters for a program running on a PC that fetched the current headlines from Reuters and displayed them on a huge news ticker mounted on several sides of the newsroom, which was the size of a NY city block. Because the AP is a not-for-profit collective, once you buy in, all the services are free. The Reuters program was a Windows binary, and the manual they had for the ticker turned out to be for a different model altogether. I called the manufacturer and got the correct manual (it took several days). I wrote a program in Perl that would fetch an AP headline feed (in RSS) which I was responsible for, load up the FIGfont specified as a command-line argument, and send the text to the ticker, a matter of writing a single column of bits and then the next and then the next, as the hardware took care of the actual horizontal scrolling for me.
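The column-at-a-time idea can be sketched in a few lines. This is a hypothetical Python toy, not the actual Perl program: the 3x5 glyphs and the bits-per-column layout are invented for illustration, where the real program loaded glyphs from a FIGfont converted from an X bitmap font.

```python
# Sketch of the ticker-driving approach: render text in a tiny bitmap
# font, then hand the display one pixel column at a time and let the
# hardware do the horizontal scrolling.
# The 3x5 font and 5-bit column format below are invented for this toy.

FONT = {  # glyph -> 5 row-strings, '#' marks a lit pixel
    "H": ["# #", "# #", "###", "# #", "# #"],
    "I": ["###", " # ", " # ", " # ", "###"],
}

def columns(text):
    """Yield each pixel column as an integer (bit 0 = top row),
    with one blank column between glyphs."""
    for ch in text:
        glyph = FONT[ch]
        for x in range(len(glyph[0])):
            col = 0
            for y, row in enumerate(glyph):
                if row[x] == "#":
                    col |= 1 << y
            yield col
        yield 0  # inter-glyph gap

cols = list(columns("HI"))
print(cols)  # [31, 4, 31, 0, 17, 31, 17, 0]
```

In the real setup each yielded column would be written to the ticker's serial interface instead of collected in a list.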
Once I had it running, I walked around to various people in the newsroom who didn't look too busy, and asked them what they thought. The general response was a weak "LGTM", until I got to one guy who said, "No, it doesn't look quite right." "If you'll follow me," I said, "I'll see what I can do." He and the guy in the next cube followed me to the desk where the PC sat. I killed the program and started it up with the name of a different font. "Mmm, not quite." We went through a number of fonts until I got "Yes! Now _that_ looks like a real newsroom font!" He asked the other guy, "Do you agree?" "Yes, that's great!" So they were happy and I was happy and when I got back to AP my boss was happy. Only later did I find out that my two guys were the city editor and the managing editor of the Daily News! Standard troff voodoo, just put a power of two backslashes in front of it > until it works and if you still have problems add a \c. > See "The Telnet Song" at . -------------- next part -------------- An HTML attachment was scrubbed... URL: From ama at ugr.es Fri Feb 12 07:01:58 2021 From: ama at ugr.es (Angel M Alganza) Date: Thu, 11 Feb 2021 22:01:58 +0100 Subject: [TUHS] FreeBSD behind the times? (was: Favorite unix design principles?) In-Reply-To: <20210131022500.GU4227@mcvoy.com> References: <202101301950.10UJoWeA456408@darkstar.fourwinds.com> <20210130222854.GN4227@mcvoy.com> <20210130231119.GA33905@eureka.lemis.com> <20210131022500.GU4227@mcvoy.com> Message-ID: <20210211210158.GJ15023@zombi.ugr.es> On Sat, Jan 30, 2021 at 06:25:00PM -0800, Larry McVoy wrote: > BitKeeper has that code and proves that it can be done. I'm still using BitKeeper and enjoying it a lot. I like it much better than Git. :-)
What are your views on BTRFS, Larry? I'd like to know. :-) Regards. Ángel From woods at robohack.ca Fri Feb 12 07:58:52 2021 From: woods at robohack.ca (Greg A. Woods) Date: Thu, 11 Feb 2021 13:58:52 -0800 Subject: [TUHS] troff was not so widely usable (was: The UNIX Command Language (1976)) In-Reply-To: References: <8b580c46-ecfb-9383-ed43-08108b3ee7bf@tllds.com> <20201130163753.GB18187@mcvoy.com> Message-ID: At Wed, 10 Feb 2021 17:05:57 -0500, Clem Cole wrote: Subject: Re: [TUHS] troff was not so widely usable (was: The UNIX Command Language (1976)) > > On Wed, Feb 10, 2021 at 3:49 PM Greg A. Woods wrote: > > > I would like to try once again to dispell the apparent myth that troff > > was readily available to Unix users in wider circles. > > > Hard to call it a myth - it was quite available. In fact, I never used a > single mainstream UNIX system from DEC, IBM, HP later Sun, Masscomp, Apollo > that did not have it, and many if not all small systems did also. I was much deeper in the trenches! (and in Canada too) I'm talking about small old systems, usually at very small companies but sometimes at very small departments in larger companies. Things like early Motorola 68k, NCR-Tower, Convergent, Plexus, Spectrix (plus Tandy and Altos and other Xenix-based ports). Even with full licenses the available nroff/troff package was often not installed as all too often there wasn't enough spare disk space to install it. Also with newer AT&T Unix ports, i.e. Release 2 and newer, it depended very much on the vendor, and sometimes even the distributor, as to whether or not the Documenter's Workbench would be a separate purchase or not, but usually it was and most of the customers I worked with would never pay for software that they didn't have a pressing need for, even if it was just a few $100. > Yes, but after Tom Ferrin created vcat(1) in the late 1970s ('77 I think, > but I've forgotten). 
Many people did have access to a plotter which cost > about $1k in the late 1970s, or even later a 'wet' laser printer like the > Imagen which cost about $5K a few years later. I don't know if I've ever seen a Versatec plotter, though perhaps at a trade show. I seem to remember grad students at university getting typeset copy off some kind of "wet" process typesetter driven by troff -- maybe even a C/A/T, but that was so far out of reach of undergrads that it wasn't even funny -- we just had a dot-matrix line printer (for the Unix machine -- there were real line printers on the Multics machine). Later on most of the kinds of customers I worked with would have a daisy-wheel printer at best, or perhaps just a dot-matrix line printer. That is until the HP Laserjet came along, followed of course not much longer by the Apple LaserWriter. > No offense, but that's just not true. Line printers and nroff were used a > great deal to prep things, and often UNIX folks had access to a daisy shell > printer for higher quality nroff output, much less using the line printer. Yeah, sure, lots of us Unix fans used nroff, but I doubt I ever had any small-system customers who used it, or would even know how to use it, nor would they want to learn how to use it, especially if they already had paid for a "proper" word processor.... > > People would install Wordstar long before they even thought about using > > nroff. > > > I did not know anyone that did that. But I'll take your word for it. > Wordstar as I recall ran on 8-bit PCs. Sorry, probably it was WordPerfect or something work-alike. The most recent site I remember using such a word processor on Unix was on an NCR-Tower32, which by then would have been running a newer Unix System V, probably Release 2, though maybe I upgraded them to Release 3 or whatever was current from NCR in the day, but I don't remember the details. They printed to a laser, probably as PCL. That would have been in the very early 1990s. 
> FWIW: my non-techie CMU course > professors used to let you turning papers printed off the line printer and > people used anything they had - which was Scribe on the 20s and nroff on > the Unix, boxes and I've forgotten the name of the program that ran on the > TSS, which the business majors like my roommate tended to use. I remember learning the old "roff" on a PDP-11/60 (which was by then running V7) and submitting course work hand-cut from 132-column fan-fold paper. I also remember being disappointed when they "replaced" it with nroff (probably it had just been an old v6 binary) and I had to learn how to format everything all over again. :-) > > but IF And Only IF you had a C compiler _and_ > > the skill to install it. That combination was still incredibly rare. > > Excuse me... most end-users sites had them. > > It sounds like your early UNIX experiences were in a limited version, which > is a little sad and I can see that might color your thinking. There were a great number of small sites running various different ports of Unix -- many of which were purpose-built to run some application such as an accounting system or word processing system. Often the owners didn't even really know they were running Unix or some derivative. If the compiler was an add-on they certainly didn't have it, even if it was a free add-on. I think Microsoft Xenix for example always had an add-on compiler (though perhaps some of its many sub-licensees would bundle it), and of course by the time AT&T Unix System V came out the compiler (i.e. SGS) and DWB were both add-ons that took up disk space and were usually added $$$ too. > Hmmm ... you were complaining you need a C compiler for ditroff, yet groff > needs C++ IIRC.? Plain C for "psroff", but yes, indeed, C++ for groff. 
> My guess this observation is because HP was late to the Postscript world > and there while the eventual hpcat(1) was done for the vaxen and made it > the USENET, it was fairly late in time, and I don't think I ever heard of hpcat, and I can only find one reference to it online with google (in an old 1985 Los Alamos newsletter). > I'm not sure if anyone at HP or > anyone else ever wrote a ditroff backend for HP's PCL. Rick Richardson wrote Jetroff. The second release became commercial software, but I briefly used the original 1.0 "shareware" release quite successfully: https://www.tuhs.org/Usenet/comp.sources.misc/1988-September/thread.html > The key is that Apple Laserwriters were pretty cheap and since they already > did PS, most sites I knew just bought PS based ones and did not go HP until > later when PS was built-in. My best ever used equipment deal (where I paid actual $$$), was for an almost brand-new Apple LaserWriter 16/600 that had come off lease at a PC leasing agency in Toronto. They didn't know heads or tails about any kind of Apple gear and didn't know how to get the Ethernet adapter for it. I paid just $64.00. It was still on its first toner cartridge. I drove away like I'd robbed the place! Before that I had a cranky old monster of a PS printer of some kind -- it has a 10MB 5.25" hard drive in it that I think contained the fonts. -- Greg A. Woods Kelowna, BC +1 250 762-7675 RoboHack Planix, Inc. Avoncote Farms -------------- next part -------------- A non-text attachment was scrubbed... 
Name: not available Type: application/pgp-signature Size: 195 bytes Desc: OpenPGP Digital Signature URL: From ggm at algebras.org Fri Feb 12 15:22:37 2021 From: ggm at algebras.org (George Michaelson) Date: Fri, 12 Feb 2021 15:22:37 +1000 Subject: [TUHS] troff was not so widely usable (was: The UNIX Command Language (1976)) In-Reply-To: References: <8b580c46-ecfb-9383-ed43-08108b3ee7bf@tllds.com> <20201130163753.GB18187@mcvoy.com> Message-ID: the transition from wet to dry was amazing. we had the wet, and it worked but was fussy. I think it was some odd DPI like 120. Lowish res. Jaggies on the fonts. I printed a poem Dave Barron wrote about the birth of the ICL 2900 hundred in old english font, you can see the jaggies on it. When it first comes out, it is shiny like its just been given birth to (sorry) and some horrendous solvent I don't want to think about evaporated, and it was done. Sort-of shiny surface. Not very nice. The first decent dry laser we got was a canon unit, which IIRC was in that olivetti "we know how do do design" space-age orange shade. It was a giant brick on its end, about the height of a bar-fridge, and it was 200dpi I think, and we loved it. Insanely good to get things that clean, and not have to slice a roll of paper. I mean, for the entire time we've been talking about Xerox machines were coming down in size although a campus printer at the bindery for PhDs is an assembly line of giant car sized units, but even the small ones the engineers have a black diver case of wierd tools, gizmos to reach behind the thingamajig and tickle the whatsit with a hairy brush.. they are seriously complex machines (or were). Getting one of these russian-tanks in a jet-age orange italian designer bar-fridge was .. cool. But then.. we realised all the s/w tools we were using were designed for US letter. Inches. Not A4 friendly. 
I mean, the BSD licence I've referred to several times, It was printed on legal, it even had the honest-to-god crimped red seal of the regents of the university of california on it, we didn't have envelopes which fitted this stuff. We didn't have ring binders with holes in the right places for the printed manuals, and when we printed our own, if we didn't check and check and check, the US-Legal / A4 thing went bananas. People hoarded magic init sequences to make things have margins and page sizes which worked for them. I still remember trying to co-erce 2up printing to work nicely. We were so desperate for decent output there was a market for a sort of 'player piano' box which sat on an IBM golfball printer, and hammered the keys for you. Daisy wheel was good, but you can't beat an IBM selectric if you want to cut a roneostat for some agitprop (they were the invention of the devil too. the ink goes everywhere) Nowadays, a good photocopier from fuji has a 48MP camera with a giant lens, there is none of this scanning nonsense, its one photo and done. The smarts are in MIPS chips and do things like hide 'you leaked it' codes in the printed output. I used one to copy my passport for a re-issuance recently, I swear you could pass money from them (I am told they have image recognition for money and won't print it. Damn skynet. they're taking over already) On Fri, Feb 12, 2021 at 7:59 AM Greg A. Woods wrote: > > At Wed, 10 Feb 2021 17:05:57 -0500, Clem Cole wrote: > Subject: Re: [TUHS] troff was not so widely usable (was: The UNIX Command Language (1976)) > > > > On Wed, Feb 10, 2021 at 3:49 PM Greg A. Woods wrote: > > > > > I would like to try once again to dispell the apparent myth that troff > > > was readily available to Unix users in wider circles. > > > > > Hard to call it a myth - it was quite available. 
From ama at ugr.es Fri Feb 12 23:48:11 2021 From: ama at ugr.es (Angel M Alganza) Date: Fri, 12 Feb 2021 14:48:11 +0100 Subject: [TUHS] Macs and future unix derivatives In-Reply-To: References: <20210208182123.GI13701@mcvoy.com> Message-ID: <20210212134811.GF6275@zombi.ugr.es> On Mon, Feb 08, 2021 at 10:32:03AM -0800, Justin Coffey wrote: > My question then is, are there any examples of projects that maintained > discipline, focus and relevance over years/decades that serve as counter > examples to the above statement(s)? OpenBSD? Go? Is there anything to > learn here? I think OpenBSD, yes, and also Haiku. Every so often somebody argues that they need to do this or that, and release a stable (not beta) version, if they want to compete with other OSes. Their answer is always that they will release it when it's ready and that they aren't trying to compete for a "market" share or anything, but ship a good product when it's right.
I don't know if that will remain true after they ship R1 (binary compatible with BeOS R5), or if they will feel free to start adding things in for other reasons than making the best OS they can, and frack it up. I hope not! Regards. Ángel From ama at ugr.es Fri Feb 12 23:39:43 2021 From: ama at ugr.es (Angel M Alganza) Date: Fri, 12 Feb 2021 14:39:43 +0100 Subject: [TUHS] Macs and future unix derivatives In-Reply-To: References: Message-ID: <20210212133943.GE6275@zombi.ugr.es> On Mon, Feb 08, 2021 at 10:43:54AM -0800, Dan Stromberg wrote: > I love Linux, especially Debian lately. I used Debian everywhere (desktops, laptops, servers) both at work and at home. Until they decided to ship it with SystemD and don't give an alternative. I then switched to Devuan, and I can't be happier. It's the exact wonderful Debian experience, with the freedom of choice that Debian always gave me until it didn't anymore. I used to say I would go back in a heart beat if it gave an init alternative to SystemD, but I don't think I would anymore. > But I also have high hopes for Redox OS, and may switch someday: > https://en.wikipedia.org/wiki/Redox_(operating_system) > https://www.theregister.com/2019/11/29/after_four_years_rusty_os_nearly_selfhosting/ Me too, as well as for Haiku OS, even though it's not a UN!X derivative, it's POSIX compliant (or so they claim), and it works very nicely as a desktop OS. Cheers, Ángel From dave at horsfall.org Sat Feb 13 08:13:22 2021 From: dave at horsfall.org (Dave Horsfall) Date: Sat, 13 Feb 2021 09:13:22 +1100 (EST) Subject: [TUHS] troff was not so widely usable (was: The UNIX Command Language (1976)) In-Reply-To: References: <8b580c46-ecfb-9383-ed43-08108b3ee7bf@tllds.com> <20201130163753.GB18187@mcvoy.com> Message-ID: On Thu, 11 Feb 2021, Greg A. Woods wrote: > I don't know if I've ever seen a Versatec plotter, though perhaps at a > trade show. The LV-11? 
We had one at Uni of NSW; it spent most of its time printing biorhythm charts which were all the rage back then. -- Dave From ron at ronnatalie.com Sat Feb 13 08:18:15 2021 From: ron at ronnatalie.com (Ron Natalie) Date: Fri, 12 Feb 2021 22:18:15 +0000 Subject: [TUHS] troff was not so widely usable (was: The UNIX Command Language (1976)) In-Reply-To: References: <8b580c46-ecfb-9383-ed43-08108b3ee7bf@tllds.com> <20201130163753.GB18187@mcvoy.com> Message-ID: I hated those things. They were what we used for regular listings at BRL and the chemicals made my skin break out. The chemicals also reacted with the ink in the standard government issue felt tip pens (which ironically were made by blind people) and caused your notes to fade quickly (and the pens to stop working). ------ Original Message ------ From: "Dave Horsfall" To: "The Eunuchs Hysterical Society" Sent: 2/12/2021 5:13:22 PM Subject: Re: [TUHS] troff was not so widely usable (was: The UNIX Command Language (1976)) >On Thu, 11 Feb 2021, Greg A. Woods wrote: > >>I don't know if I've ever seen a Versatec plotter, though perhaps at a trade show. > >The LV-11? From jsteve at superglobalmegacorp.com Sat Feb 13 11:06:36 2021 From: jsteve at superglobalmegacorp.com (Jason Stevens) Date: Sat, 13 Feb 2021 09:06:36 +0800 Subject: [TUHS] 68k prototypes & microcode Message-ID: <0F0B9BFC06289346B88512B91E55670D300E@EXCHANGE> You might find this interesting https://twitter.com/i/status/1320767372853190659 It's a Pi (ARM) running Musashi, a 68000 core, but using voltage buffers it's plugged into the 68000 socket of an Amiga! You can find more info on their github: https://github.com/captain-amygdala/pistorm Maybe we are at the point where numerous cheap CPUs can eliminate FPGAs?
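For anyone curious what "interpreting 68k on an ARM" means at the bottom, the dispatch loop is conceptually tiny. Here is a toy sketch with a three-opcode encoding invented purely for illustration; Musashi's real 68000 decoder is, of course, vastly more complete:

```python
# Toy interpreter sketch (not Musashi's actual code): the host CPU
# fetches each 16-bit opcode word and dispatches to a handler, which
# is how a fast modern core can outrun the original silicon even
# before any JIT translation is applied.

def make_cpu(mem):
    return {"pc": 0, "d0": 0, "mem": mem, "halted": False}

def op_moveq(cpu, word):   # load 8-bit immediate into D0
    cpu["d0"] = word & 0xFF

def op_addq(cpu, word):    # add low byte of the word to D0
    cpu["d0"] = (cpu["d0"] + (word & 0xFF)) & 0xFFFFFFFF

def op_halt(cpu, word):
    cpu["halted"] = True

# Dispatch on the top nibble of the opcode word (invented encoding;
# the real 68000 instruction map is far richer).
DISPATCH = {0x7: op_moveq, 0x5: op_addq, 0x4: op_halt}

def run(cpu):
    while not cpu["halted"]:
        word = cpu["mem"][cpu["pc"]]      # fetch
        cpu["pc"] += 1
        DISPATCH[word >> 12](cpu, word)   # decode + execute

prog = [0x7005, 0x5003, 0x4000]   # D0 = 5; D0 += 3; halt
cpu = make_cpu(prog)
run(cpu)
print(cpu["d0"])   # 8
```

A JIT replaces that inner loop by translating runs of 68k instructions into native code once and re-executing them, which is where the much bigger speedups come from.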
-----Original Message----- From: Michael Parson [SMTP:mparson at bl.org] Sent: Friday, February 05, 2021 10:43 PM To: The Eunuchs Hysterical Society Subject: Re: [TUHS] 68k prototypes & microcode On 2021-02-04 16:47, Henry Bent wrote: > On Thu, Feb 4, 2021, 17:40 Adam Thornton wrote: > >> I'm probably Stockholm Syndrommed about 6502. It's what I grew up on, >> and >> I still like it a great deal. Admittedly register-starved (well, >> unless >> you consider the zero page a whole page of registers), but...simple, >> easy >> to fit in your head, kinda wonderful. >> >> I'd love a 64-bit 6502-alike (but I'd probably give it more than three >> registers). I mean given how little silicon (or how few FPGA gates) a >> reasonable version of that would take, might as well include 65C02 and >> 65816 cores in there too with some sort of mode-switching instruction. >> Wouldn't a 6502ish with 64-bit wordsize and a 64-bit address bus be >> fun? >> Throw in an onboard MMU and FPU too, I suppose, and then you could >> have a >> real system on it. >> >> > Sounds like a perfect project for an FPGA. If there's already a 6502 > implementation out there, converting to 64 bit should be fairly easy. There are FPGA implementations of the 6502 out there. If you've not seen it, check out the MiSTer[0] project, FPGA implementations of a LOT of computers, going back as far as the EDSAC, PDP-1, a LOT of 8, 16, and 32 bit systems from the 70s and 80s along with gaming consoles from the 70s and 80s. Keeping this semi-TUHS related, one guy[1] has even implemented a Sparc 32m[2] (I think maybe an SS10), which boots SunOS 4, 5, Linux, NetBSD, and even the Sparc version of NeXTSTEP, but it's not part of the "official" MiSTer bits (yet?). 
-- Michael Parson Pflugerville, TX KF5LGQ [0] https://github.com/MiSTer-devel/Main_MiSTer/wiki [1] https://temlib.org/site/ [2] https://temlib.org/pub/mister/SS/ From gregg.drwho8 at gmail.com Sat Feb 13 12:30:21 2021 From: gregg.drwho8 at gmail.com (Gregg Levine) Date: Fri, 12 Feb 2021 21:30:21 -0500 Subject: [TUHS] 68k prototypes & microcode In-Reply-To: <0F0B9BFC06289346B88512B91E55670D300E@EXCHANGE> References: <0F0B9BFC06289346B88512B91E55670D300E@EXCHANGE> Message-ID: An amazing idea. ----- Gregg C Levine gregg.drwho8 at gmail.com "This signature fought the Time Wars, time and again." On Fri, Feb 12, 2021 at 7:51 PM Jason Stevens wrote: > > You might find this interesting > > https://twitter.com/i/status/1320767372853190659 From jsteve at superglobalmegacorp.com Sat Feb 13 14:34:08 2021 From: jsteve at superglobalmegacorp.com (Jason Stevens) Date: Sat, 13 Feb 2021 12:34:08 +0800 Subject: [TUHS] 68k prototypes & microcode Message-ID: <0F0B9BFC06289346B88512B91E55670D300F@EXCHANGE> Apparently they are getting 68040 levels of performance with a Pi... and that interpreted. Going with JIT it's way higher. -----Original Message----- From: Gregg Levine [SMTP:gregg.drwho8 at gmail.com] Sent: Saturday, February 13, 2021 10:30 AM To: Jason Stevens; The Eunuchs Hysterical Society Subject: Re: [TUHS] 68k prototypes & microcode An amazing idea. ----- Gregg C Levine gregg.drwho8 at gmail.com "This signature fought the Time Wars, time and again."
On Fri, Feb 12, 2021 at 7:51 PM Jason Stevens wrote: > > You might find this interesting > > https://twitter.com/i/status/1320767372853190659 From toby at telegraphics.com.au Sat Feb 13 16:05:07 2021 From: toby at telegraphics.com.au (Toby Thain) Date: Sat, 13 Feb 2021 01:05:07 -0500 Subject: [TUHS] 68k prototypes & microcode In-Reply-To: <0F0B9BFC06289346B88512B91E55670D300F@EXCHANGE> References: <0F0B9BFC06289346B88512B91E55670D300F@EXCHANGE> Message-ID: <94b9ad5c-dd0b-9195-d391-787bafdf510f@telegraphics.com.au> On 2021-02-12 11:34 p.m., Jason Stevens wrote: > Apparently they are getting 68040 levels of performance with a Pi... and > that interpreted. Going with JIT it's way higher. Before we get too breathless, this is roughly what was achieved with a PowerPC 601 emulating 68K ...approximately 28 years ago. --T
If > you've not > seen > it, check out the MiSTer[0] project, FPGA implementations > of a LOT > of > computers, going back as far as the EDSAC, PDP-1, a LOT of > 8, 16, > and 32 > bit systems from the 70s and 80s along with gaming > consoles from the > 70s > and 80s. > > Keeping this semi-TUHS related, one guy[1] has even > implemented a > Sparc 32m[2] (I think maybe an SS10), which boots SunOS 4, > 5, Linux, > NetBSD, and even the Sparc version of NeXTSTEP, but it's > not part of > the > "official" MiSTer bits (yet?). > > -- > Michael Parson > Pflugerville, TX > KF5LGQ > > [0] https://github.com/MiSTer-devel/Main_MiSTer/wiki > [1] https://temlib.org/site/ > [2] https://temlib.org/pub/mister/SS/ From tuhs at cuzuco.com Sat Feb 13 19:00:03 2021 From: tuhs at cuzuco.com (Brian Walden) Date: Sat, 13 Feb 2021 04:00:03 -0500 (EST) Subject: [TUHS] banner (was troff was not so widely usable) Message-ID: <202102130900.11D903MT021054@cuzuco.com> Thank you for banner! I used the data, albeit modified, 40 years ago in 1981, for a banner program as well, on an IBM 1130 (manufactured 1972) so it could print on an 1132 line printer. The floor would vibrate when it printed those banners. I used "X" as the printed char as the 1132 did not have the # char. But those banners looked great! I wrote it in FORTRAN IV. On punched cards. I did this because from 1980-1982 I only had access to UNIX on Monday evenings from 7PM-9PM, using a DEC LA120 terminal; it was slow and never had enough ink on the ribbon. I had only 8K of core memory with only EBCDIC uppercase so there were lots of compromises and cleverness needed - - read in a 16-bit integer as two packed 8-bit numbers - limit the banner output to only A-Za-z0-9 !?#@'*+,-.= - unpack the char data into a buffer and then process it.
- fix the "U" character data - find the run-length encodings that could be consolidated to save space (seeing those made me think it had to have been generated data) The program still survives here - http://ibm1130.cuzuco.com/ (with sample output runs) Also since I had to type all those numbers onto punch cards with a 029 keypunch, to speed things up I coded my own free-form atoi() equivalent in FORTRAN, reading cards, then packed two numbers into an integer, then punched out those numbers along with card ID numbers in columns 73-80 on the 1442. This was many weeks of keypunching, checking, fixing and re-keypunching. That code is here http://ibm1130.cuzuco.com/ipack.html When done the deck was around 8" or so. It took well over a minute to read in the data cards, after compilation. Again thanks! Many hundreds of banners for many people were printed by this, around 2 to 3 a week, until July 1982, when that IBM was replaced by a Prime system. I still have many fond memories of that 1130. -Brian Mary Ann Horton (mah at mhorton.net) wrote: > We had vtroff at Berkeley around 1980, on the big Versatec wet plotter, > 4 pages wide. We got really good at cutting up the pages on the output. > > It used the Hershey font. It was horrible. Mangled somehow, lots of > parts of glyphs missing. I called it the "Horse Shit" font. > > I took it as my mission to clean it up. I wrote "fed" to edit it, dot by > dot, on the graphical HP 2648 terminal at Berkeley. I got all the fonts > reasonably cleaned up, but it was laborious. > > I still hated Hershey. It was my dream to get real C/A/T output at the > largest 36 point size, and scan it in to create a decent set of Times > fonts. I finally got the C/A/T output years later at Bell Labs, but > there were no scanners available to me at the time. Then TrueType came > along and it was moot. > > I did stumble onto one nice rendition of Times Roman in one point size, > from Stanford, I think. I used it to write banner(6).
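[An illustrative aside: the packing scheme Brian describes -- two 8-bit font values carried in one 16-bit word so the punched-card data deck stayed manageable -- can be sketched in modern Python. This is not the original FORTRAN IV; the function names here are invented for illustration.]

```python
# Illustrative sketch only -- not the original FORTRAN IV program.
# Two 8-bit font values are packed into a single 16-bit word (roughly
# halving the card deck for byte-sized data on a 16-bit-word machine
# like the 1130) and unpacked again at run time.

def pack(hi, lo):
    """Combine two 8-bit values (0-255) into one 16-bit word."""
    if not (0 <= hi <= 255 and 0 <= lo <= 255):
        raise ValueError("values must fit in 8 bits")
    return (hi << 8) | lo

def unpack(word):
    """Split a 16-bit word back into its two 8-bit halves."""
    return (word >> 8) & 0xFF, word & 0xFF

# Round trip: every packed pair comes back unchanged.
for pair in [(0, 0), (65, 90), (255, 1)]:
    assert unpack(pack(*pair)) == pair
```

On a word-addressed machine with no byte operations, this kind of shift-and-mask packing is the usual way to keep 8-bit data within a small core memory.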
From will.senn at gmail.com Sun Feb 14 01:20:14 2021 From: will.senn at gmail.com (Will Senn) Date: Sat, 13 Feb 2021 09:20:14 -0600 Subject: [TUHS] banner (was troff was not so widely usable) In-Reply-To: <202102130900.11D903MT021054@cuzuco.com> References: <202102130900.11D903MT021054@cuzuco.com> Message-ID: <22d1ac5d-caaa-5dd1-0a30-263b041b3a08@gmail.com> On 2/13/21 3:00 AM, Brian Walden wrote: > Thank you for banner! I used the data, albeit modified, 40 years ago > in 1981, for a banner program as well, on an IBM 1130 (manufactured 1972) > so it could print on an 1132 line printer. The floor would vibrate > when it printed those banners. I used "X" as the printed char as the > 1132 did not have the # char. But those banners looked great! > I wrote it in FORTRAN IV. On punched cards. I did this because > from 1980-1982 I only had access to UNIX on Monday evenings from > 7PM-9PM, using a DEC LA120 terminal, it was slow and never had > enough ink on the ribbon. > > I had only 8K of core memory with only EBCDIC uppercase so there > were lots of compromises and cleverness needed - > - read in a 16-bit integer as two packed 8-bit numbers > - limit the banner output to only A-Za-z0-9 !?#@'*+,-.= > - unpack the char data into buffer and then process it. > - fix the "U" character data > - find the run-length encodings that could be consolidated to save space > (seeing those made me think it had to have been generated data) > > The program still survives here - http://ibm1130.cuzuco.com/ > (with sample output runs) > > Also since I had to type all those numbers onto punch cards > with a 029 keypunch, to speed things up I coded my own free-form > atoi() equivalent in FORTRAN, reading cards, then packed two numbers into > an integer, then punch out those numbers along with card ID numbers in columns > 73-80 on the 1442. This was many weeks of keypunching, checking, > fixing and re-keypunching.
> That code is here http://ibm1130.cuzuco.com/ipack.html > > When done the deck was around 8" or so. It took well over a > minute to read in the data cards, after compilation. > > Again thanks! Many hundreds of banners for many people were printed > by this, around 2 to 3 a week, until July 1982, when that IBM > was replaced by a Prime system. I still have many fond memories of > that 1130. > > -Brian > > Mary Ann Horton (mah at mhorton.net) wrote: >> We had vtroff at Berkeley around 1980, on the big Versatec wet plotter, >> 4 pages wide. We got really good at cutting up the pages on the output. >> >> It used the Hershey font. It was horrible. Mangled somehow, lots of >> parts of glyphs missing. I called it the "Horse Shit" font. >> >> I took it as my mission to clean it up. I wrote "fed" to edit it, dot by >> dot, on the graphical HP 2648 terminal at Berkeley. I got all the fonts >> reasonably cleaned up, but it was laborious. >> >> I still hated Hershey. It was my dream to get real C/A/T output at the >> largest 36 point size, and scan it in to create a decent set of Times >> fonts. I finally got the C/A/T output years later at Bell Labs, but >> there were no scanners available to me at the time. Then TrueType came >> along and it was moot. >> >> I did stumble onto one nice rendition of Times Roman in one point size, >> from Stanford, I think. I used it to write banner(6). Nice. I wrote a banner program in 1984, as a freshman in college for the TRS-80 Model 100 laptop (with an 8x40 LCD), in BASIC, which if I recall was the OS of the thing? It would peek the character ROM and use the encodings (characters were stored in ROM as a 2d binary array bitmap) to determine what to print to the printer and did some form of vertical and horizontal expansion to reasonably fill up the sheets.
I don't remember if I took the horizontal and vertical expansion as input from the user or what (it's been a while, and the code is long gone), or if I just figured out what looked good on the ol' dot matrix we had access to and set them... but it was, at the time, my crowning achievement in programming. Everyone else in the class took pages and pages of code to print their banners without reference to the character ROM, whereas mine did it in very few lines of easy to understand, if somewhat complex (not complicated, mind you), code (the story of my much later career). Wow, that brings back memories :).. snip! after writing what turned into my life story, I decided to spare y'all. Suffice it to say my early experiences with computation (Commodore PET, TRS-80 Model 100, DEC Rainbow 100) and later more formal educational experiences (my first real maths and upper division cs professors) changed my life's trajectory and gave me the tools to help me rise out of decades of extremely harsh circumstances. Thank you Dennis, especially for C. I wish I could have known you and thanked you personally. C was my vehicle out of the depths of poverty and hardship. Sigh, sniff, and smile. Now, I explore Unix, both historic and modern for fun, pester y'all with questions, opinions, and sundry, teach CS and IS for fun and pay, and hope that I can share 1/10th of the joy I experience every day with my students and inspire them to pursue careers in the field. Banner on! Will -------------- next part -------------- An HTML attachment was scrubbed... URL: From imp at bsdimp.com Sun Feb 14 02:57:30 2021 From: imp at bsdimp.com (Warner Losh) Date: Sat, 13 Feb 2021 09:57:30 -0700 Subject: [TUHS] banner (was troff was not so widely usable) In-Reply-To: <22d1ac5d-caaa-5dd1-0a30-263b041b3a08@gmail.com> References: <202102130900.11D903MT021054@cuzuco.com> <22d1ac5d-caaa-5dd1-0a30-263b041b3a08@gmail.com> Message-ID: On Sat, Feb 13, 2021 at 8:21 AM Will Senn wrote: > Nice. 
I wrote a banner program in 1984 > I wrote one in 83. And several of my fellow students at college did this as well. It seemed to be a common thing back in the day. Warner -------------- next part -------------- An HTML attachment was scrubbed... URL: From mah at mhorton.net Sun Feb 14 03:13:05 2021 From: mah at mhorton.net (Mary Ann Horton) Date: Sat, 13 Feb 2021 09:13:05 -0800 Subject: [TUHS] banner (was troff was not so widely usable) In-Reply-To: <202102130900.11D903MT021054@cuzuco.com> References: <202102130900.11D903MT021054@cuzuco.com> Message-ID: Thank you for the kind words, and the inspiring story of your port to FORTRAN! I was surprised to find there is a Wikipedia page for the banner program. This brings back earlier memories for me. In High School in 1972, our school had an ASR33 and dial-up access to an HP BASIC system. We were also lucky enough to be part of a scouting program that gave us access to a UNIVAC 1108 mainframe at nearby Gulf General Atomic, where we could keypunch and run FORTRAN programs and print onto a fast line printer. One of my programs was a simpler banner program, printing large sideways banners with the 5x7 dot matrix I'd seen on DECwriters and CRT terminals. I drew and typed in the data by hand, a far simpler job since it was only 5x7, and the output was blocky. I supported upper and lower case, but like the terminals, there was no room below the baseline for descenders, and characters like "g" wound up elevated. I printed our high school catch phrase, "Debug Off Line!", and posted it above the ASR33 at school. I got lots of crap about how the g looked like a 9. One friend signed my senior high school yearbook with the tag line "Debu9 Off Line!" On 2/13/21 1:00 AM, Brian Walden wrote: > Thank you for banner! I used the data, albeit modified, 40 years ago > in 1981, for a banner program as well, on an IBM 1130 (manufactured 1972) > so it could print on an 1132 line printer. The floor would vibrate > when it printed those banners.
I used "X" as the printed char as the > 1132 did not have the # char. But those banners looked great! > I wrote it in FORTRAN IV. On punched cards. I did this because > from 1980-1982 I only had access to UNIX on Monday evenings from > 7PM-9PM, using a DEC LA120 terminal, it was slow and never had > enough ink on the ribbon. > From dave at horsfall.org Sun Feb 14 06:09:31 2021 From: dave at horsfall.org (Dave Horsfall) Date: Sun, 14 Feb 2021 07:09:31 +1100 (EST) Subject: [TUHS] banner (was troff was not so widely usable) In-Reply-To: References: <202102130900.11D903MT021054@cuzuco.com> <22d1ac5d-caaa-5dd1-0a30-263b041b3a08@gmail.com> Message-ID: On Sat, 13 Feb 2021, Warner Losh wrote: > I wrote one in 83. And several of my fellow students at college did this > as well. It seemed to be a common thing back in the day. I've used lots of different banner programs on various systems; I think even OS/360 had one (well, ours did anyway). -- Dave From jcapp at anteil.com Sun Feb 14 06:28:31 2021 From: jcapp at anteil.com (Jim Capp) Date: Sat, 13 Feb 2021 15:28:31 -0500 (EST) Subject: [TUHS] Any interest in a Dec Alpha or a Sun Sparc 4 In-Reply-To: Message-ID: <2809928.975.1613248111703.JavaMail.root@zimbraanteil> Hey folks, Is anyone interested in a Dec Alpha or a Sun Sparc 4? I haven't touched these devices in 10+ years, and they were working before they were put on the shelf. I'd like to send them to a good home, rather than the local recycling center. Cheers, Jim -------------- next part -------------- An HTML attachment was scrubbed... URL: From earl.baugh at gmail.com Sun Feb 14 06:36:39 2021 From: earl.baugh at gmail.com (Earl Baugh) Date: Sat, 13 Feb 2021 15:36:39 -0500 Subject: [TUHS] Any interest in a Dec Alpha or a Sun Sparc 4 In-Reply-To: <2809928.975.1613248111703.JavaMail.root@zimbraanteil> References: <2809928.975.1613248111703.JavaMail.root@zimbraanteil> Message-ID: <0B622B5D-E214-4A7D-9A75-8E7C8AE3A397@gmail.com> What size? 
I’d be interested in both, depending on models. Earl Sent from my iPhone > On Feb 13, 2021, at 3:29 PM, Jim Capp wrote: > > > Hey folks, > > Is anyone interested in a Dec Alpha or a Sun Sparc 4? I haven't touched these devices in 10+ years, and they were working before they were put on the shelf. I'd like to send them to a good home, rather than the local recycling center. > > Cheers, > > Jim > -------------- next part -------------- An HTML attachment was scrubbed... URL: From jcapp at anteil.com Sun Feb 14 06:45:01 2021 From: jcapp at anteil.com (Jim Capp) Date: Sat, 13 Feb 2021 15:45:01 -0500 (EST) Subject: [TUHS] Any interest in a Dec Alpha or a Sun Sparc 4 In-Reply-To: <0B622B5D-E214-4A7D-9A75-8E7C8AE3A397@gmail.com> Message-ID: <27171908.982.1613249101738.JavaMail.root@zimbraanteil> The DEC Alpha is about the size of a typical PC. I'll take some pictures tomorrow and send them to you. From: "Earl Baugh" To: "Jim Capp" Cc: "The Eunuchs Hysterical Society" Sent: Saturday, February 13, 2021 3:36:39 PM Subject: Re: [TUHS] Any interest in a Dec Alpha or a Sun Sparc 4 What size? I’d be interested in both, depending on models. Earl Sent from my iPhone On Feb 13, 2021, at 3:29 PM, Jim Capp wrote: Hey folks, Is anyone interested in a Dec Alpha or a Sun Sparc 4? I haven't touched these devices in 10+ years, and they were working before they were put on the shelf. I'd like to send them to a good home, rather than the local recycling center. Cheers, Jim -------------- next part -------------- An HTML attachment was scrubbed... URL: From earl.baugh at gmail.com Sun Feb 14 07:24:36 2021 From: earl.baugh at gmail.com (Earl Baugh) Date: Sat, 13 Feb 2021 16:24:36 -0500 Subject: [TUHS] Any interest in a Dec Alpha or a Sun Sparc 4 In-Reply-To: <27171908.982.1613249101738.JavaMail.root@zimbraanteil> References: <0B622B5D-E214-4A7D-9A75-8E7C8AE3A397@gmail.com> <27171908.982.1613249101738.JavaMail.root@zimbraanteil> Message-ID: That would be great, thanks!
You can reply directly to me at earl at baugh.org Earl On Sat, Feb 13, 2021 at 3:45 PM Jim Capp wrote: > The DEC Alpha is about the size of a typical PC. I'll take some pictures > tomorrow and send them to you. > > ------------------------------ > *From: *"Earl Baugh" > *To: *"Jim Capp" > *Cc: *"The Eunuchs Hysterical Society" > *Sent: *Saturday, February 13, 2021 3:36:39 PM > *Subject: *Re: [TUHS] Any interest in a Dec Alpha or a Sun Sparc 4 > > What size? I’d be interested in both, depending on models. > > Earl > > Sent from my iPhone > > On Feb 13, 2021, at 3:29 PM, Jim Capp wrote: > > > Hey folks, > > Is anyone interested in a Dec Alpha or a Sun Sparc 4? I haven't touched > these devices in 10+ years, and they were working before they were put on > the shelf. I'd like to send them to a good home, rather than the local > recycling center. > > Cheers, > > Jim > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From gtaylor at tnetconsulting.net Sun Feb 14 08:13:15 2021 From: gtaylor at tnetconsulting.net (Grant Taylor) Date: Sat, 13 Feb 2021 15:13:15 -0700 Subject: [TUHS] Any interest in a Dec Alpha or a Sun Sparc 4 In-Reply-To: <2809928.975.1613248111703.JavaMail.root@zimbraanteil> References: <2809928.975.1613248111703.JavaMail.root@zimbraanteil> Message-ID: <7191e03a-8ef1-cdee-4e75-c65fd3830585@spamtrap.tnetconsulting.net> On 2/13/21 1:28 PM, Jim Capp wrote: > Hey folks, Hi, > Is anyone interested in a Dec Alpha or a Sun Sparc 4? I haven't touched > these devices in 10+ years, and they were working before they were put > on the shelf. I'd like to send them to a good home, rather than the > local recycling center. Where are they located? -- Grant. . . . unix || die -------------- next part -------------- A non-text attachment was scrubbed...
Name: smime.p7s Type: application/pkcs7-signature Size: 4013 bytes Desc: S/MIME Cryptographic Signature URL: From mike.ab3ap at gmail.com Sun Feb 14 08:21:42 2021 From: mike.ab3ap at gmail.com (Mike Markowski) Date: Sat, 13 Feb 2021 17:21:42 -0500 Subject: [TUHS] banner (was troff was not so widely usable) In-Reply-To: References: <202102130900.11D903MT021054@cuzuco.com> <22d1ac5d-caaa-5dd1-0a30-263b041b3a08@gmail.com> Message-ID: <4b5f0feb-c473-4087-7b0b-9706bced886f@gmail.com> On 2/13/21 3:09 PM, Dave Horsfall wrote: > On Sat, 13 Feb 2021, Warner Losh wrote: > >> I wrote one in 83. And several of my fellow students at college did >> this as well. It seemed to be a common thing back in the day. > > I've used lots of different banner programs on various systems; I think > even OS/360 had one (well, ours did anyway). > > -- Dave As an undergrad in the early 1980s, posters made from line printer strips were popular. Character overstrikes were used as pixels and could be discerned as photos from a few feet away. These filled a wall in our student office / study area. Given the times & 100% male occupancy, let's just say the posters wouldn't fly today... Each poster was multiple strips wide. Does such a program ring a bell? ASCII art was popular, but I don't recall details on making them. Mike Markowski From mah at mhorton.net Sun Feb 14 10:27:30 2021 From: mah at mhorton.net (Mary Ann Horton) Date: Sat, 13 Feb 2021 16:27:30 -0800 Subject: [TUHS] banner (was troff was not so widely usable) In-Reply-To: <4b5f0feb-c473-4087-7b0b-9706bced886f@gmail.com> References: <202102130900.11D903MT021054@cuzuco.com> <22d1ac5d-caaa-5dd1-0a30-263b041b3a08@gmail.com> <4b5f0feb-c473-4087-7b0b-9706bced886f@gmail.com> Message-ID: <08771638-9900-aea8-0015-93e2fcf25932@mhorton.net> Picture tapes. I had a collection of 20 or so. A few of them were girly pictures, but there were several excellent ones. Nimoy as Spock holding a model of the Enterprise. Neil Armstrong on the moon.
My favorite was the PSA grinning bird over the San Francisco Bay - it was 8 strips wide. FORTRAN carriage control to cause overstriking. I recently got my collection read off the magtape. My understanding was that a photo was scanned at 256 grayscale levels, and the program let you tune the contrast with 16 gray levels of different overstrikes, ranging from 4 blanks to M, W, X, @ overstruck. There's a tool called asa2pdf that can turn the carriage control files into PDF, but printing on a laser printer leads to a chore with an office paper cutter and lots of staples and scotch tape. I put one together of SAN FRAN as a parting gift to a coworker at my retirement luncheon.    Mary Ann On 2/13/21 2:21 PM, Mike Markowski wrote: > On 2/13/21 3:09 PM, Dave Horsfall wrote: >> On Sat, 13 Feb 2021, Warner Losh wrote: >> >>> I wrote one in 83. And several of my fellow students at college did >>> this as well. It seemed to be a common thing back in the day. >> >> I've used lots of different banner programs on various systems; I >> think even OS/360 had one (well, ours did anyway). >> >> -- Dave > > As an undergrad in the early 1980s, posters made from line printer > strips were popular. Character overstrikes were used as pixels and > could be discerned as photos from a few feet away. These filled a > wall in our student office / study area. Given the times & 100% male > occupancy, let's just say the posters wouldn't fly today... Each > poster was multiple strips wide. Does such a program ring a bell? > ASCII art was popular, but I don't recall details on making them. > > Mike Markowski > From woods at robohack.ca Sun Feb 14 12:04:32 2021 From: woods at robohack.ca (Greg A. Woods) Date: Sat, 13 Feb 2021 18:04:32 -0800 Subject: [TUHS] tangential unix question: whatever happened to NeWS?
In-Reply-To: <202101242045.10OKjDvA964774@darkstar.fourwinds.com> References: <20210124183653.GD21030@mcvoy.com> <202101242045.10OKjDvA964774@darkstar.fourwinds.com> Message-ID: At Sun, 24 Jan 2021 12:45:13 -0800, Jon Steinhart wrote: Subject: Re: [TUHS] tangential unix question: whatever happened to NeWS? > > To the best of my knowledge, NeWS was the first window system to provide > device-independent graphics. You could just do things without having > to mess around with counting pixels and figuring out what sort of color > system was behind things. I'm not so sure about that. There was Project JADE from University of Calgary: http://hdl.handle.net/1880/46070 I rarely see it mentioned, yet it was in my experience quite far ahead of its time in all aspects of distributed computing, complete with a nice GUI able to run on generic bit-mapped display workstations and using Unix servers. The lack of knowledge about it dismays me somewhat because I knew the guys who created it -- they were grad students at the time I was an undergrad at UofC. Now interestingly enough James Gosling would likely have known all about this, since he kept ties with UofC for quite some time, and in the same timeframe. I remember sitting beside him in a terminal room at UofC near xmas time in about 1980 or 1981 while he upgraded the version of Gosmacs we used on the main undergrad 11/780. That was about the time that Project JADE was beginning too. I don't know too much about the history of NeWS, except I didn't see even a hint of it until long after JADE was already long in the tooth. -- Greg A. Woods Kelowna, BC +1 250 762-7675 RoboHack Planix, Inc. Avoncote Farms -------------- next part -------------- A non-text attachment was scrubbed... 
Name: not available Type: application/pgp-signature Size: 195 bytes Desc: OpenPGP Digital Signature URL: From jon at fourwinds.com Sun Feb 14 12:49:47 2021 From: jon at fourwinds.com (Jon Steinhart) Date: Sat, 13 Feb 2021 18:49:47 -0800 Subject: [TUHS] tangential unix question: whatever happened to NeWS? In-Reply-To: References: <20210124183653.GD21030@mcvoy.com> <202101242045.10OKjDvA964774@darkstar.fourwinds.com> Message-ID: <202102140249.11E2nq3d2519142@darkstar.fourwinds.com> Greg A. Woods writes: > > At Sun, 24 Jan 2021 12:45:13 -0800, Jon Steinhart wrote: > > > > To the best of my knowledge, NeWS was the first window system to provide > > device-independent graphics. You could just do things without having > > to mess around with counting pixels and figuring out what sort of color > > system was behind things. > > I'm not so sure about that. > > There was Project JADE from University of Calgary: > > http://hdl.handle.net/1880/46070 > > I rarely see it mentioned, yet it was in my experience quite far ahead > of its time in all aspects of distributed computing, complete with a > nice GUI able to run on generic bit-mapped display workstations and > using Unix servers. The lack of knowledge about it dismays me somewhat > because I knew the guys who created it -- they were grad students at the > time I was an undergrad at UofC. > > Now interestingly enough James Gosling would likely have known all about > this, since he kept ties with UofC for quite some time, and in the same > timeframe. I remember sitting beside him in a terminal room at UofC > near xmas time in about 1980 or 1981 while he upgraded the version of > Gosmacs we used on the main undergrad 11/780. That was about the time > that Project JADE was beginning too. > > I don't know too much about the history of NeWS, except I didn't see > even a hint of it until long after JADE was already long in the tooth. Thanks, I had forgotten about that. The question of device independent graphics is a hard one. 
Device independent graphics had been around for a long time in terms of various display list processors that got mangled into things like CORE, GKS, and PHIGS. But just because, for example, Sun provided a GKS package on top of SunView didn't make SunView device independent. You're probably correct that NeWS was not the first window system to support device independent graphics. I do believe that it was the first one to be "ubiquitous" in that the window system itself used the same graphics as was available to the user. Doesn't that document just scream "troff" at you? Jon From will.senn at gmail.com Sun Feb 14 13:33:31 2021 From: will.senn at gmail.com (Will Senn) Date: Sat, 13 Feb 2021 21:33:31 -0600 Subject: [TUHS] banner (was troff was not so widely usable) In-Reply-To: <08771638-9900-aea8-0015-93e2fcf25932@mhorton.net> References: <202102130900.11D903MT021054@cuzuco.com> <22d1ac5d-caaa-5dd1-0a30-263b041b3a08@gmail.com> <4b5f0feb-c473-4087-7b0b-9706bced886f@gmail.com> <08771638-9900-aea8-0015-93e2fcf25932@mhorton.net> Message-ID: On 2/13/21 6:27 PM, Mary Ann Horton wrote: > Picture tapes. I had a collection of 20 or so. A few of them were > girly pictures, but there were several excellent ones. Nemoy as Spock > holding a model of the Enterprise. I remember this one from back in the day: https://www.atariarchives.org/bcc1/showpage.php?page=cover1 Detail from image (small enough to include here): Will -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: khkcnhekmomkbdkf.jpg Type: image/jpeg Size: 69550 bytes Desc: not available URL: From rdm at cfcl.com Sun Feb 14 14:53:37 2021 From: rdm at cfcl.com (Rich Morin) Date: Sat, 13 Feb 2021 20:53:37 -0800 Subject: [TUHS] tangential unix question: whatever happened to NeWS? 
In-Reply-To: <202102140249.11E2nq3d2519142@darkstar.fourwinds.com> References: <20210124183653.GD21030@mcvoy.com> <202101242045.10OKjDvA964774@darkstar.fourwinds.com> <202102140249.11E2nq3d2519142@darkstar.fourwinds.com> Message-ID: small, possibly relevant anecdote... > On Feb 13, 2021, at 18:49, Jon Steinhart wrote: > > Greg A. Woods writes: >> >> At Sun, 24 Jan 2021 12:45:13 -0800, Jon Steinhart wrote: >>> >>> To the best of my knowledge, NeWS was the first window system to provide >>> device-independent graphics. You could just do things without having >>> to mess around with counting pixels and figuring out what sort of color >>> system was behind things. >> > ... > The question of device independent graphics is a hard one. Device > independent graphics had been around for a long time in terms of > various display list processors that got mangled into things like > CORE, GKS, and PHIGS. But just because, for example, Sun provided > a GKS package on top of SunView didn't make SunView device independent. Dunno if anyone will find this interesting, but I hacked up a text-based front end for SunCORE, back in 1983 or so. IIRC, it was called iC, for interpreted Core. It read a line-oriented stream of ASCII commands and argument lists. After parsing these lines, it used the SunCORE library to render the result. I also wrote a utility to grab screen images and dump them to a dot matrix printer. The only "production" user for these hacks was my spouse, Vicki Brown. She used them to generate graphics (e.g., dendograms) for her Master's thesis (M.S. Microbiology, University of Maryland). The source data for the graphics was line printer plot output from a pair of UMD (IBM and Univac) mainframes. The text of the thesis was formatted using nroff and ms macros, then printed on a Datel 30 (IBM I/O Selectric clone), using still more hacky software. I had to translate the ASCII to BCDIC, add shift and timing characters, etc. (But it all worked and got her thesis printed... 
:-) Because the mainframe analysis programs used very different data formats, Vicki created a third format for text entry, proofreading, etc. She then transcoded the data using sed(1) and pushed it (at 300 baud) to UMD. She then captured and downloaded the line printer files, transcoded back to ASCII, and used awk(1) to boil down the line printer plots (which ran on for MANY sheets of paper) so they would fit on single letter-size pages. Dr. Rita R. Colwell (https://en.wikipedia.org/wiki/Rita_R._Colwell) was her thesis advisor. After accepting the thesis, she asked Vicki to translate the AWK scripts into Fortran, so her team could render the plots on a Calcomp plotter. The translated code, predictably, was a great deal larger (and took longer to run :-) than the AWK version. -r P.S. Vicki and I learned awk(1) and sed(1) with the kind help of Jim Joyce, who got me interested in Unix all those years ago... P.P.S. Vicki has since moved through Perl to Python and such and would be happy to find remote work as a data massager. Please respond off-list... From crossd at gmail.com Mon Feb 15 05:08:48 2021 From: crossd at gmail.com (Dan Cross) Date: Sun, 14 Feb 2021 13:08:48 -0600 Subject: [TUHS] Fwd: [multicians] History of C (with Multics reference) In-Reply-To: <30368.1613327707837544705@groups.io> References: <1607711516.31417164@apps.rackspace.com> <30368.1613327707837544705@groups.io> Message-ID: FYI, interesting. ---------- Forwarded message --------- From: Tom Van Vleck Date: Sun, Feb 14, 2021, 12:35 PM Subject: Re: [multicians] History of C (with Multics reference) To: Remember the story that Ken Thompson had written a language called "Bon" which was one of the forerunners of "B" which then led to "new B" and then to "C"? I just found Ken Thompson's "Bon Users Manual" dated Feb 1, 1969, as told to M. D. McIlroy and R. Morris in Jerry Saltzer's files online at MIT.
http://people.csail.mit.edu/saltzer/Multics/MHP-Saltzer-060508/filedrawers/180.btl-misc/Scan%204.PDF -- sent via multicians at groups.io -- more Multics info at https://multicians.org/ -------------- next part -------------- An HTML attachment was scrubbed... URL: From cmhanson at eschatologist.net Mon Feb 15 07:05:55 2021 From: cmhanson at eschatologist.net (Chris Hanson) Date: Sun, 14 Feb 2021 13:05:55 -0800 Subject: [TUHS] Prime Time Freeware Message-ID: Has anyone written down the story of Prime Time Freeware or archived the various distributions? Is there even a complete listing of what they distributed? I’ve imaged my own stuff (PTF AI 1-1, PTF SDK for UnixWare 1-1, PTF Tools & Toys for UnixWare 1-1) but I’d really like to find the original PTF 1-1 and things like it. — Chris From cym224 at gmail.com Mon Feb 15 08:04:01 2021 From: cym224 at gmail.com (Nemo Nusquam) Date: Sun, 14 Feb 2021 17:04:01 -0500 Subject: [TUHS] Prime Time Freeware In-Reply-To: References: Message-ID: <60299E51.90205@gmail.com> On 14/02/2021 16:05, Chris Hanson wrote: > Has anyone written down the story of Prime Time Freeware or archived the various distributions? Is there even a complete listing of what they distributed? Rich Morin is on this list. N. From kennethgoodwin56 at gmail.com Mon Feb 15 10:17:18 2021 From: kennethgoodwin56 at gmail.com (Kenneth Goodwin) Date: Sun, 14 Feb 2021 19:17:18 -0500 Subject: [TUHS] Prime Time Freeware In-Reply-To: <60299E51.90205@gmail.com> References: <60299E51.90205@gmail.com> Message-ID: I believe I may still have the cdroms somewhere around. I will try to track them down. Still unpacking from a move.
On Sun, Feb 14, 2021, 5:04 PM Nemo Nusquam wrote: > On 14/02/2021 16:05, Chris Hanson wrote: > > Has anyone written down the story of Prime Time Freeware or archived the > various distributions? Is there even a complete listing of what they > distributed? > Rich Morin is on this list. > > N. > -------------- next part -------------- An HTML attachment was scrubbed... URL: From pugs at ieee.org Tue Feb 16 03:32:26 2021 From: pugs at ieee.org (Tom Lyon) Date: Mon, 15 Feb 2021 09:32:26 -0800 Subject: [TUHS] banner (was troff was not so widely usable) In-Reply-To: References: <202102130900.11D903MT021054@cuzuco.com> <22d1ac5d-caaa-5dd1-0a30-263b041b3a08@gmail.com> <4b5f0feb-c473-4087-7b0b-9706bced886f@gmail.com> <08771638-9900-aea8-0015-93e2fcf25932@mhorton.net> Message-ID: I believe many of these images, especially Spock, came from Sam Harbison (RIP) at Princeton. They were EBCDIC art, not ASCII! Made on the IBM/360 with the 1403 printer. See http://q7.neurotica.com/Oldtech/ASCII/ On Sat, Feb 13, 2021 at 7:34 PM Will Senn wrote: > On 2/13/21 6:27 PM, Mary Ann Horton wrote: > > Picture tapes. I had a collection of 20 or so. A few of them were girly > pictures, but there were several excellent ones. Nemoy as Spock holding a > model of the Enterprise. > > > I remember this one from back in the day: > > https://www.atariarchives.org/bcc1/showpage.php?page=cover1 > > Detail from image (small enough to include here): > > > > Will > -- - Tom -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... 
Name: khkcnhekmomkbdkf.jpg Type: image/jpeg Size: 69550 bytes Desc: not available URL: From jon at fourwinds.com Tue Feb 16 05:56:27 2021 From: jon at fourwinds.com (Jon Steinhart) Date: Mon, 15 Feb 2021 11:56:27 -0800 Subject: [TUHS] Abstractions Message-ID: <202102151956.11FJuRIh3079869@darkstar.fourwinds.com> Was thinking about our recent discussion about system call bloat and such. Seemed to me that there was some argument that it was needed in order to support modern needs. As I tried to say, I think that a good part of the bloat stemmed from we-need-to-add-this-to-support-that thinking instead of what's-the-best-way-to-extend-the-system-to-support-this-need thinking. So if y'all are up for it, I'd like to have a discussion on what abstractions would be appropriate in order to meet modern needs. Any takers? Jon From dave at horsfall.org Tue Feb 16 07:52:24 2021 From: dave at horsfall.org (Dave Horsfall) Date: Tue, 16 Feb 2021 08:52:24 +1100 (EST) Subject: [TUHS] Abstractions In-Reply-To: <202102151956.11FJuRIh3079869@darkstar.fourwinds.com> References: <202102151956.11FJuRIh3079869@darkstar.fourwinds.com> Message-ID: On Mon, 15 Feb 2021, Jon Steinhart wrote: [...] > So if y'all are up for it, I'd like to have a discussion on what > abstractions would be appropriate in order to meet modern needs. Any > takers? Somebody once suggested a filesystem interface (it certainly fits the Unix philosophy); I don't recall the exact details. -- Dave From cmhanson at eschatologist.net Tue Feb 16 09:32:56 2021 From: cmhanson at eschatologist.net (Chris Hanson) Date: Mon, 15 Feb 2021 15:32:56 -0800 Subject: [TUHS] Prime Time Freeware In-Reply-To: <60299E51.90205@gmail.com> References: <60299E51.90205@gmail.com> Message-ID: <6D7514A5-41E9-40E6-A0B8-E62BFE805415@eschatologist.net> On Feb 14, 2021, at 2:04 PM, Nemo Nusquam wrote: > > On 14/02/2021 16:05, Chris Hanson wrote: >> Has anyone written down the story of Prime Time Freeware or archived the various distributions? 
Is there even a complete listing of what they distributed? > Rich Morin is on this list. Indeed, that's one of the reasons I thought to ask here. :) -- Chris From dwalker at doomd.net Tue Feb 16 12:28:27 2021 From: dwalker at doomd.net (Derrik Walker v2.0) Date: Mon, 15 Feb 2021 21:28:27 -0500 Subject: [TUHS] Prime Time Freeware In-Reply-To: References: Message-ID: I have the PTF 4-2 book and the two CDs that came with it. I found it at a local Microcenter in the early to mid ’90s. I had ported a bunch of it to MachTen. But that work, unfortunately, has been lost to time. - Derrik > On Feb 14, 2021, at 4:05 PM, Chris Hanson wrote: > > Has anyone written down the story of Prime Time Freeware or archived the various distributions? Is there even a complete listing of what they distributed? > > I’ve imaged my own stuff (PTF AI 1-1, PTF SDK for UnixWare 1-1, PTF Tools & Toys for UnixWare 1-1) but I’d really like to find the original PTF 1-1 and things like it. > > — Chris > From gnu at toad.com Tue Feb 16 16:31:23 2021 From: gnu at toad.com (John Gilmore) Date: Mon, 15 Feb 2021 22:31:23 -0800 Subject: [TUHS] Fwd: [multicians] History of C (with Multics reference) In-Reply-To: References: <1607711516.31417164@apps.rackspace.com> <30368.1613327707837544705@groups.io> Message-ID: <3803.1613457083@hop.toad.com> > Remember the story that Ken Thompson had written a language called "Bon" > which was one of the forerunners of "B" which then led to "new B" and then > to "C"? > > I just found Ken Thompson's "Bon Users Manual" dated Feb 1, 1969, as told > to M. D. McIlroy and R. Morris > in Jerry Saltzer's files online at MIT. > http://people.csail.mit.edu/saltzer/Multics/MHP-Saltzer-060508/filedrawers/180.btl-misc/Scan%204.PDF There was clearly a lot of cross-fertilization between early APL systems and Bon. (APL was the first computer language I dug deeply into.)
Some of the common elements are the interactive execution environment, untyped variables, and automatic application of builtin functions (like +) across all elements of arrays. John From arnold at skeeve.com Tue Feb 16 17:13:13 2021 From: arnold at skeeve.com (arnold at skeeve.com) Date: Tue, 16 Feb 2021 00:13:13 -0700 Subject: [TUHS] Abstractions In-Reply-To: References: <202102151956.11FJuRIh3079869@darkstar.fourwinds.com> Message-ID: <202102160713.11G7DDqN014326@freefriends.org> Dave Horsfall wrote: > On Mon, 15 Feb 2021, Jon Steinhart wrote: > > [...] > > > So if y'all are up for it, I'd like to have a discussion on what > > abstractions would be appropriate in order to meet modern needs. Any > > takers? > > Somebody once suggested a filesystem interface (it certainly fits the Unix > philosophy); I don't recall the exact details. > > -- Dave And it was done, over 30 years ago; see Plan 9 from Bell Labs.... Arnold From tih at hamartun.priv.no Tue Feb 16 18:15:37 2021 From: tih at hamartun.priv.no (Tom Ivar Helbekkmo) Date: Tue, 16 Feb 2021 09:15:37 +0100 Subject: [TUHS] Abstractions In-Reply-To: <202102151956.11FJuRIh3079869@darkstar.fourwinds.com> (Jon Steinhart's message of "Mon, 15 Feb 2021 11:56:27 -0800") References: <202102151956.11FJuRIh3079869@darkstar.fourwinds.com> Message-ID: Jon Steinhart writes: > So if y'all are up for it, I'd like to have a discussion on what > abstractions would be appropriate in order to meet modern needs. Any > takers? A late friend of mine felt strongly that Unix needed an SQL interface to the kernel. With all information and configuration in a well designed schema, system administration could be greatly enhanced, he felt, and could have standard interaction patterns across components -- instead of all the quirky command line interfaces we have today, and their user oriented output formats that you need to parse to use the data. sysctl done right, so to speak. 
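Tom's "SQL interface to the kernel" idea can be sketched with stock tools. The "process table" below is a hand-written stand-in for live kernel state, invented purely for illustration, and the awk one-liner is only a rough analogue of a real SELECT:

```shell
# Mock "process table" -- invented data standing in for kernel state.
table=$(mktemp)
cat > "$table" <<'EOF'
pid comm state
1 init S
42 nfsd D
107 sh R
EOF

# Rough analogue of: SELECT pid FROM proc WHERE state = 'R';
awk 'NR > 1 && $3 == "R" { print $1 }' "$table"   # prints: 107

rm -f "$table"
```

A real implementation would expose live, consistent kernel snapshots rather than flat files; that is where the hard problems (locking, schema stability across releases) would live.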
-tih -- Most people who graduate with CS degrees don't understand the significance of Lisp. Lisp is the most important idea in computer science. --Alan Kay From wobblygong at gmail.com Tue Feb 16 20:04:07 2021 From: wobblygong at gmail.com (Wesley Parish) Date: Tue, 16 Feb 2021 23:04:07 +1300 Subject: [TUHS] Abstractions In-Reply-To: References: <202102151956.11FJuRIh3079869@darkstar.fourwinds.com> Message-ID: Now that is an interesting idea. Did he ever get around to developing it? Any documents? Any experimental results? (Mind you, he'd've run into CJ Date's reservations on the incompleteness of SQL as a language stuck between relational algebra and relational calculus ... :) ) Wesley Parish On 16/02/21 9:15 pm, Tom Ivar Helbekkmo via TUHS wrote: > Jon Steinhart writes: > >> So if y'all are up for it, I'd like to have a discussion on what >> abstractions would be appropriate in order to meet modern needs. Any >> takers? > A late friend of mine felt strongly that Unix needed an SQL interface to > the kernel. With all information and configuration in a well designed > schema, system administration could be greatly enhanced, he felt, and > could have standard interaction patterns across components -- instead of > all the quirky command line interfaces we have today, and their user > oriented output formats that you need to parse to use the data. > > sysctl done right, so to speak. > > -tih From rdm at cfcl.com Tue Feb 16 22:26:09 2021 From: rdm at cfcl.com (Rich Morin) Date: Tue, 16 Feb 2021 04:26:09 -0800 Subject: [TUHS] Abstractions In-Reply-To: <202102151956.11FJuRIh3079869@darkstar.fourwinds.com> References: <202102151956.11FJuRIh3079869@darkstar.fourwinds.com> Message-ID: > On Feb 15, 2021, at 11:56, Jon Steinhart wrote: > > Was thinking about our recent discussion about system call bloat and such. > Seemed to me that there was some argument that it was needed in order to > support modern needs. 
As I tried to say, I think that a good part of the > bloat stemmed from we-need-to-add-this-to-support-that thinking instead > of what's-the-best-way-to-extend-the-system-to-support-this-need thinking. > > So if y'all are up for it, I'd like to have a discussion on what abstractions > would be appropriate in order to meet modern needs. Any takers? The folks behind the Nerves Project (https://www.nerves-project.org) have done some serious thinking about this question, albeit mostly confined to the IoT space. They have also written (and distribute) some nifty implementation code. I won't try to cover all of their work here, but some high points include: - automated build and cross-compilation of entire Linux-based systems - automated distribution of (and fallbacks for) updated system code - separation of code and data using read-only and read/write file systems - support for multiple target platforms (e.g., processors, boards) - Erlang-style supervision trees (via Elixir) for critical services, etc. - extremely rapid boot times for the resulting (Linux-based) systems For more information, check out their web site, watch some presentations, and/or (gasp!) try out the code... -r From tuhs at cuzuco.com Wed Feb 17 05:29:50 2021 From: tuhs at cuzuco.com (Brian Walden) Date: Tue, 16 Feb 2021 14:29:50 -0500 (EST) Subject: [TUHS] banner (was troff was not so widely usable) Message-ID: <202102161930.11GJToaA000273@cuzuco.com> BTW that is the same Sam Harbison that co-authored "C: A Reference Manual" - https://www.amazon.com/Reference-Manual-Samuel-P-Harbison/dp/013089592X His memorial page is here - https://paw.princeton.edu/memorial/samuel-p-harbison-74 Those in the Pittsburgh area will recognize that family name. His father (also Samuel P. Harbison) obituary is here - https://www.nytimes.com/1976/07/20/archives/samuel-harbison-dies-in-pittsburgh.html Some information on his grandfather (and yes, also Samuel P. 
Harbison) is here - https://sites.google.com/site/1009davisavenue/history who ran this - https://en.wikipedia.org/wiki/Harbison-Walker_Refractories_Company Tom Lyon wrote: > I believe many of these images, especially Spock, came from Sam Harbison > (RIP) at Princeton. > They were EBCDIC art, not ASCII! Made on the IBM/360 with the 1403 printer. > See http://q7.neurotica.com/Oldtech/ASCII/ From gnu at toad.com Wed Feb 17 05:39:09 2021 From: gnu at toad.com (John Gilmore) Date: Tue, 16 Feb 2021 11:39:09 -0800 Subject: [TUHS] Prime Time Freeware <- Sun User Group tapes In-Reply-To: <7193CD22-B517-41AA-BE0F-3BAFBAD52A62@cfcl.com> References: <7193CD22-B517-41AA-BE0F-3BAFBAD52A62@cfcl.com> Message-ID: <30266.1613504349@hop.toad.com> Rich Morin wrote: > PTF was inspired, in large part, by the volunteer work that produced the > Sun User Group (SUG) tapes. Because most of the original volunteers had > other fish to fry, I decided to broaden the focus and attempt a > (somewhat) commercial venture. PTF, for better or worse, was the > result. > > So, I should also relate some stories about running for and serving on > the SUG board, hassling with AT&T and Sun's lawyers, assembling > SUGtapes, etc. My copies of the SUGtapes are (probably) long gone, but > John Gilmore (if nobody else :-) probably has the tapes and/or their > included bits. While I was involved, the Sun User Group made three tapes of freely available software, in 1985, 1987, and 1989. The 1989 tape includes both of the earlier ones, as well as new material. Copies of the 1987 and 1989 tapes are here: http://www.toad.com/SunUserGroupTape-Rel-1987.1.0.tar.gz http://www.toad.com/SunUserGroupTape-Rel-1989.tar http://www.toad.com/ I'll have to do a bit more digging to turn up more than vague memories about our dealings with the lawyers...
John From jon at fourwinds.com Wed Feb 17 05:59:32 2021 From: jon at fourwinds.com (Jon Steinhart) Date: Tue, 16 Feb 2021 11:59:32 -0800 Subject: [TUHS] Abstractions In-Reply-To: References: <202102151956.11FJuRIh3079869@darkstar.fourwinds.com> Message-ID: <202102161959.11GJxWC5676454@darkstar.fourwinds.com> Tom Ivar Helbekkmo writes: > Jon Steinhart writes: > > > So if y'all are up for it, I'd like to have a discussion on what > > abstractions would be appropriate in order to meet modern needs. Any > > takers? > > A late friend of mine felt strongly that Unix needed an SQL interface to > the kernel. With all information and configuration in a well designed > schema, system administration could be greatly enhanced, he felt, and > could have standard interaction patterns across components -- instead of > all the quirky command line interfaces we have today, and their user > oriented output formats that you need to parse to use the data. > > sysctl done right, so to speak. OK, that's interesting and makes my brain a bit crazy. Are we talking select file_descriptor from file_table where file_name='foo' && flags='O_EXCL'; delete from process_table where process_id=pid; and so on? Lots of possibilities for weird joins. But, this wasn't exactly what I was looking for in my original post which was maybe too terse. There have been heated discussions on this list about kernel API bloat. In my opinion, these discussions have mainly been people grumbling about what they don't like. I'd like to flip the discussion around to what we would like. Ken and Dennis did a great job with initial abstractions. Some on this list have claimed that these abstractions weren't sufficient for modern times. Now that we have new information from modern use cases, how would we rethink the basic abstractions? Quoting from something that I wrote a few years ago: The original Apple Macintosh API was published in 1985 in a three-volume set of books called Inside Macintosh (Addison-Wesley).
The set was over 1,200 pages long. It’s completely obsolete; modern (UNIX-based) Macs don’t use any of it. Why didn’t this API design last? By contrast, version 6 of the UNIX operating system was released 10 years earlier in 1975, with a 321-page manual. It embodied a completely different approach that sported a narrow and deep API. Both the UNIX API and a large number of the original applications are still in widespread use today, more than 40 years later, which is a testament to the quality of the design. Not only that, but a large number of the libraries are still in use and essentially unchanged, though their functionality has been copied into many other systems. While I don't have a count of the number of entries in the original Mac API, I'm guessing that the number of Linux system calls is getting closer to that number. Is there any way that the abstractions can be rethought to get us back to an API that is more concise and flexible? By flexible I mean the ability to support new functionality without adding more system calls. While the SQL interface notion is interesting, to me it's more in line with using a different language to access the API. But it would be interesting to see it fleshed out because maybe the abstractions provided by various tables would be different. Because it's easy pickings, I would claim that the socket system call is out of line with the UNIX abstractions; it exists because of practical political considerations, not because it's needed. I think that it would have fit better folded into the open system call. Something else added along with the networking was readv/writev. In this case, I would claim that those are the correct modern abstraction and that read/write are a subset. Hope that clarifies the discussion that I'm trying to kick off. Jon From will.senn at gmail.com Wed Feb 17 06:33:05 2021 From: will.senn at gmail.com (Will Senn) Date: Tue, 16 Feb 2021 14:33:05 -0600 Subject: [TUHS] cut, paste, join, etc.
Message-ID: <3987726c-db35-79fc-00cb-5d979cfaf53a@gmail.com> All, I'm tooling along during our newfangled rolling blackouts and frigid temperatures (in Texas!) and reading some good old unix books. I keep coming across the commands cut and paste and join and suchlike. I use cut all the time for stuff like: ls -l | tr -s ' '| cut -f1,4,9 -d \ ... -rw-r--r-- staff main.rs and who | grep wsenn | cut -c 1-8,10-17 wsenn   console wsenn   ttys000 but that's just cuz it's convenient and useful. To my knowledge, I've never used paste or join outside of initially coming across them. But, they seem to 'fit' with cut. My question for y'all is, was there a subset of related utilities that these were part of that served some common purpose? On a related note, join seems like part of an aborted (aka never fully realized) attempt at a text based rdb to me... What say you? Will -------------- next part -------------- An HTML attachment was scrubbed... URL: From dave at horsfall.org Wed Feb 17 07:02:47 2021 From: dave at horsfall.org (Dave Horsfall) Date: Wed, 17 Feb 2021 08:02:47 +1100 (EST) Subject: [TUHS] cut, paste, join, etc. In-Reply-To: <3987726c-db35-79fc-00cb-5d979cfaf53a@gmail.com> References: <3987726c-db35-79fc-00cb-5d979cfaf53a@gmail.com> Message-ID: On Tue, 16 Feb 2021, Will Senn wrote: > To my knowledge, I've never used paste or join outside of initially > coming across them. But, they seem to 'fit' with cut. My question for > y'all is, was there a subset of related utilities that these were part > of that served some common purpose? On a related note, join seems like > part of an aborted (aka never fully realized) attempt at a text based > rdb to me... I use "cut" a fair bit, rarely use "paste", but as for "join" and RDBs, just look at the man page: "join — relational database operator". As for future use, who knows? Could be a fun project for someone with time on their hands (not me!). 
-- Dave, who once implemented a "join" operation with BDB From rdm at cfcl.com Wed Feb 17 07:12:35 2021 From: rdm at cfcl.com (Rich Morin) Date: Tue, 16 Feb 2021 13:12:35 -0800 Subject: [TUHS] Prime Time Freeware <- Sun User Group tapes In-Reply-To: <30266.1613504349@hop.toad.com> References: <7193CD22-B517-41AA-BE0F-3BAFBAD52A62@cfcl.com> <30266.1613504349@hop.toad.com> Message-ID: > On Feb 16, 2021, at 11:39, John Gilmore wrote: > > ... A copy of both the 1987 tape and the 1989 tape are here: > > http://www.toad.com/SunUserGroupTape-Rel-1987.1.0.tar.gz > http://www.toad.com/SunUserGroupTape-Rel-1989.tar > http://www.toad.com/ As always, John Gilmore rocks... (John, can you tell folks about the history, rationale, and text of the Sun-1's PROM identification message?) > I'll have to do a bit more digging to turn up more than vague memories > about our dealings with the lawyers... I have two legal war stories to relate, offhand... # AT&T I tried to find a way to get permission to include copies of text files (both vanilla and modified, IIRC) that were part of the "binary" release. (I thought it might be useful for folks to have backup and/or "improved" versions.) I was flatly informed that all files in a binary distribution were, by definition, binary (not text). So, we had nothing to discuss. I never did manage to break through that legal stonewall. # Sun The other story (fortunately!) ended more successfully. You see, Sun's legal staff had drafted a "minimal" agreement that folks _donating_ bits were expected to sign off on. IIRC, it ran on for about twenty pages (!). John and I were both appalled and tried to tell the lawyer we met with that the authors and organizations involved wouldn't sign off on anything like this; indeed, they wouldn't even read it. Since John was pretty incensed, I got to play Good Cop. IIRC, I asked John to list the issues that he thought reasonable for the agreement to cover. 
With some grumbling, he came up with a set of issues whose legalese filled one side of a (letter-size :-) page. I then turned to the Sun lawyer (who was actually trying to be helpful and make things work) and asked him to tell us about any _critical_ issues John had left out. However, I only gave him a one-page budget. Adding the mutually acceptable issues brought the agreement up to two pages, which I got John (with a bit more grumbling) to accept. The lawyer then had the (unenviable) task of getting Sun to sign off on it. He did, and SUG was able to get donation sign-offs and produce a useful tape (whew!). I'm sure it helped that the folks doing the first tape were working at LLNL... -r From drb at msu.edu Wed Feb 17 07:06:32 2021 From: drb at msu.edu (Dennis Boone) Date: Tue, 16 Feb 2021 16:06:32 -0500 Subject: [TUHS] cut, paste, join, etc. In-Reply-To: (Your message of Tue, 16 Feb 2021 14:33:05 -0600.) <3987726c-db35-79fc-00cb-5d979cfaf53a@gmail.com> References: <3987726c-db35-79fc-00cb-5d979cfaf53a@gmail.com> Message-ID: <20210216210632.B7AF1339771@yagi.h-net.msu.edu> > To my knowledge, I've never used paste or join outside of initially > coming across them. But, they seem to 'fit' with cut. My question for > y'all is, was there a subset of related utilities that these were > part of that served some common purpose? On a related note, join > seems like part of an aborted (aka never fully realized) attempt at a > text based rdb to me... My copy is hiding from me, so I can't be sure, but iirc Bourne's _The Unix System_ (978-0-201-13791-0) had a section on this sort of "text database" and may have discussed the `join` command. De From will.senn at gmail.com Wed Feb 17 07:15:45 2021 From: will.senn at gmail.com (Will Senn) Date: Tue, 16 Feb 2021 15:15:45 -0600 Subject: [TUHS] cut, paste, join, etc. 
In-Reply-To: References: <3987726c-db35-79fc-00cb-5d979cfaf53a@gmail.com> Message-ID: On 2/16/21 3:02 PM, Dave Horsfall wrote: > On Tue, 16 Feb 2021, Will Senn wrote: > >> To my knowledge, I've never used paste or join outside of initially >> coming across them. But, they seem to 'fit' with cut. My question for >> y'all is, was there a subset of related utilities that these were >> part of that served some common purpose? On a related note, join >> seems like part of an aborted (aka never fully realized) attempt at a >> text based rdb to me... > > I use "cut" a fair bit, rarely use "paste", but as for "join" and > RDBs, just look at the man page: "join — relational database > operator".  As for future use, who knows?  Could be a fun project for > someone with time on their hands (not me!). > > -- Dave, who once implemented a "join" operation with BDB Oh brother! RTFM... properly... :). Still, I'm curious about the history. Will -------------- next part -------------- An HTML attachment was scrubbed... URL: From dave at horsfall.org Wed Feb 17 07:26:24 2021 From: dave at horsfall.org (Dave Horsfall) Date: Wed, 17 Feb 2021 08:26:24 +1100 (EST) Subject: [TUHS] cut, paste, join, etc. In-Reply-To: References: <3987726c-db35-79fc-00cb-5d979cfaf53a@gmail.com> Message-ID: On Tue, 16 Feb 2021, Will Senn wrote: > Oh brother! RTFM... properly... :). Still, I'm curious about the > history. We all have our moments :-) Yes, I'd like to know the history too; those tools definitely have a database-ish look about them. All the bits seem to be there; they just have to be, ahem, joined together... -- Dave From coppero1237 at gmail.com Wed Feb 17 07:59:23 2021 From: coppero1237 at gmail.com (Tyler Adams) Date: Tue, 16 Feb 2021 23:59:23 +0200 Subject: [TUHS] What would "a unix restaurant" look like? Message-ID: I've been writing about unix design principles recently and tried explaining "The Rule of Silence" by imagining unix as a restaurant . Do you agree with how I presented it? 
Would you do it differently? Tyler -------------- next part -------------- An HTML attachment was scrubbed... URL: From steffen at sdaoden.eu Wed Feb 17 08:46:43 2021 From: steffen at sdaoden.eu (Steffen Nurpmeso) Date: Tue, 16 Feb 2021 23:46:43 +0100 Subject: [TUHS] Abstractions In-Reply-To: <202102151956.11FJuRIh3079869@darkstar.fourwinds.com> References: <202102151956.11FJuRIh3079869@darkstar.fourwinds.com> Message-ID: <20210216224643.p5_uK%steffen@sdaoden.eu> Jon Steinhart wrote in <202102151956.11FJuRIh3079869 at darkstar.fourwinds.com>: |Was thinking about our recent discussion about system call bloat and such. |Seemed to me that there was some argument that it was needed in order to |support modern needs. As I tried to say, I think that a good part of the |bloat stemmed from we-need-to-add-this-to-support-that thinking instead |of what's-the-best-way-to-extend-the-system-to-support-this-need thinking. | |So if y'all are up for it, I'd like to have a discussion on what abstrac\ |tions |would be appropriate in order to meet modern needs. Any takers? Proper program exit integer status codes. Now that "set -o pipefail" is a standardized feature of POSIX shells all that is needed are programs which properly handle errors and also report that to the outside. This is very hard, especially when put over existing codebases. But also new code. 
For example i use BTRFS (with a long term perspective to switch to ZFS, because of restartable snapshot sends, and also because of ZFS encrypted partitions to replace my several encfs-encrypted on-demand storages, these now can even be shared in between FreeBSD and Linux), (i use it at all because it ships with the Linux kernel, can be compiled-in, is copyright-compatible, that is i wanted to test that coming from over two decades of ext2/3/4 on Linux and of course the default of FreeBSD, and i really drive the entire thing with subvolumes, only the EFI boot partition is truly separate), anyhow, receiving snapshots can fail but the snapshot counts as having been properly received, and no exit status whatsoever will report the failure. (At least in my practical experiences.) Easy scriptability with proper (also meaning automatically interpretable) error reports. --steffen | |Der Kragenbaer, The moon bear, |der holt sich munter he cheerfully and one by one |einen nach dem anderen runter wa.ks himself off |(By Robert Gernhardt) From woods at robohack.ca Wed Feb 17 08:55:09 2021 From: woods at robohack.ca (Greg A. Woods) Date: Tue, 16 Feb 2021 14:55:09 -0800 Subject: [TUHS] Macs and future unix derivatives In-Reply-To: <13ded1a4-d717-c57c-5168-0f1f44ca4b5b@gmail.com> References: <5372.1612853750@hop.toad.com> <13ded1a4-d717-c57c-5168-0f1f44ca4b5b@gmail.com> Message-ID: Henry Bent wrote: > Apple loves to move quickly and > abandon compatibility, and in that respect it's an interesting > counterpoint to Linux or a *BSD where you can have decades old > binaries that still run. Nothing in the open-source (OS) world churns and moves around and grows and wiggles as much as Linux. The rate of change of the kernel is simply incomprehensibly staggering. Linux userland isn't much better off. In the commercial (OS) world Microsoft might be churning at a similar rate. 
Apple on the other hand is relatively stable by comparison -- too stable at times and not always regularly picking up fixes from third-party projects they make use of. Apple is guilty of extreme churn in their GUI though -- at least from a user's perspective (perhaps their APIs are a bit more stable, but somehow from observing them from afar I highly doubt it). Apple's ABIs seem relatively stable -- I still run a few binary apps (albeit quite simple ones) I installed over a decade ago and haven't updated since. At Mon, 8 Feb 2021 22:05:57 -0900, Michael Huff wrote: Subject: Re: [TUHS] Macs and future unix derivatives > > I don't think there's any change on NetBSD, no idea about OpenBSD but > I assume they're the same. Indeed, NetBSD/i386 is still a "tier 1" port as of the 9.1 release last fall: http://wiki.NetBSD.org/ports/ Note there is a caveat with regard to true 80386: "Any i486 or better CPU should work - genuine Intel or a compatible such as Cyrix, AMD, or NexGen." Also NetBSD comes with a good group of compatability and emulation modules for its kernel ABI, including support for both all of its own older releases, as well as for port-specific third-party ABI emulations, such as Linux and SCO Unix. See for example: https://wiki.NetBSD.org/guide/linux/ I've only once ever needed to run a Linux binary, and quite a long time ago, so I'm not so up to date on these things, but it may well be that NetBSD/i386 can run old 32-bit Linux i386 binaries better than any current release of Linux. Personally I work in a world where there's source code for every application I use, which means I generally only need backward compatability for the earliest release I might be running at any given time -- I.e. just enough to keep things running after an upgrade while I get it all re-compiled and tested. > In all honest, I don't think that backwards compatibility has ever > been that great on Linux -at least not for the last twenty or so > years, in my (limited) experience. 
Really good backward compatibility for older kernel and library ABIs is a cornerstone of NetBSD release engineering. It is very well designed and implemented and it is pretty much guaranteed to work or get fixed. Unlike Linux it doesn't rely on shared libraries to work, and the system shared libraries have very good backward compatibility support as well. I.e. NetBSD backward compatibility is far more complete and reliable at all levels (including ABIs and their APIs). -- Greg A. Woods Kelowna, BC +1 250 762-7675 RoboHack Planix, Inc. Avoncote Farms -------------- next part -------------- A non-text attachment was scrubbed... Name: not available Type: application/pgp-signature Size: 195 bytes Desc: OpenPGP Digital Signature URL: From m.douglas.mcilroy at dartmouth.edu Wed Feb 17 11:08:13 2021 From: m.douglas.mcilroy at dartmouth.edu (M Douglas McIlroy) Date: Tue, 16 Feb 2021 20:08:13 -0500 Subject: [TUHS] cut, paste, join, etc. Message-ID: Will Senn wrote, > join seems like part of an aborted (aka never fully realized) attempt at a text based rdb to me As the original author of join, I can attest that there was no thought of parlaying join into a database system. It was inspired by databases, but liberated from them, much as grep was liberated from an editor. Doug From will.senn at gmail.com Wed Feb 17 11:16:17 2021 From: will.senn at gmail.com (Will Senn) Date: Tue, 16 Feb 2021 19:16:17 -0600 Subject: [TUHS] cut, paste, join, etc. In-Reply-To: References: Message-ID: <26484818-2f05-37d3-adff-6e34d383e117@gmail.com> On 2/16/21 7:08 PM, M Douglas McIlroy wrote: > Will Senn wrote, >> join seems like part of an aborted (aka never fully realized) attempt at a text based rdb to me > As the original author of join, I can attest that there was no thought > of parlaying join into a database system. It was inspired by > databases, but liberated from them, much as grep was liberated from an > editor. > > Doug Nice! Thanks Doug. Too bad, though...
one gets ever tired of having to log into db's and a simple text db system would be useful. Even sqlite, which I love, requires login to get at information... I'm already logged in, why can't I just ask for my info and have it returned? Will -------------- next part -------------- An HTML attachment was scrubbed... URL: From rdm at cfcl.com Wed Feb 17 11:33:19 2021 From: rdm at cfcl.com (Rich Morin) Date: Tue, 16 Feb 2021 17:33:19 -0800 Subject: [TUHS] What would "a unix restaurant" look like? In-Reply-To: References: Message-ID: > On Feb 16, 2021, at 13:59, Tyler Adams wrote: > > I've been writing about Unix design principles recently and tried explaining > "The Rule of Silence" by imagining Unix as a restaurant. Do you agree with > how I presented it? Would you do it differently? Apple's A/UX team used to joke about Mac versus Unix restaurant ordering, eg: W: What would you like to order? U: I'd like a green salad, with blue cheese dressing on the side, no croutons. W: Would you like to order an appetizer, dessert, entree, or side dish? M: I'd like to order a side dish. W: Would you like to order cottage cheese, french fries, a salad, or ... M: ... All of us actually ordered our meals in Unix style, but all too often we heard folks ordering in Mac style. -r From gtaylor at tnetconsulting.net Wed Feb 17 11:43:07 2021 From: gtaylor at tnetconsulting.net (Grant Taylor) Date: Tue, 16 Feb 2021 18:43:07 -0700 Subject: [TUHS] cut, paste, join, etc. In-Reply-To: <26484818-2f05-37d3-adff-6e34d383e117@gmail.com> References: <26484818-2f05-37d3-adff-6e34d383e117@gmail.com> Message-ID: <399f2cdc-d790-c4fe-18e3-0cb6b4c76554@spamtrap.tnetconsulting.net> On 2/16/21 6:16 PM, Will Senn wrote: > Nice! Thanks Doug. Too bad, though... one gets ever tired of having to > log into db's and a simple text db system would be useful. Even sqlite, > which I love, requires login to get at information... I'm already logged > in, why can't I just ask for my info and have it returned? 
What do you mean by "log into db's" in relation to SQLite? I've never needed to enter a username and password to access SQLite. If you /do/ mean username and password, I believe that some DBs will allow you to authenticate using Kerberos. Thus you should be able to streamline DB access along with access to many other things. If you /don't/ mean username and password, then what do you mean? Are you referring to needing to run a command to open and access the SQLite DB? Taking a quick gander at sqlite3 --help makes me think that you can append the SQL(ite) command that you want to run to the command line. -- Grant. . . . unix || die -------------- next part -------------- A non-text attachment was scrubbed... Name: smime.p7s Type: application/pkcs7-signature Size: 4013 bytes Desc: S/MIME Cryptographic Signature URL: From will.senn at gmail.com Wed Feb 17 12:26:11 2021 From: will.senn at gmail.com (Will Senn) Date: Tue, 16 Feb 2021 20:26:11 -0600 Subject: [TUHS] cut, paste, join, etc. In-Reply-To: <399f2cdc-d790-c4fe-18e3-0cb6b4c76554@spamtrap.tnetconsulting.net> References: <26484818-2f05-37d3-adff-6e34d383e117@gmail.com> <399f2cdc-d790-c4fe-18e3-0cb6b4c76554@spamtrap.tnetconsulting.net> Message-ID: <55d60220-c22d-c99f-f40c-68a741183213@gmail.com> On 2/16/21 7:43 PM, Grant Taylor via TUHS wrote: > On 2/16/21 6:16 PM, Will Senn wrote: >> Nice! Thanks Doug. Too bad, though... one gets ever tired of having >> to log into db's and a simple text db system would be useful. Even >> sqlite, which I love, requires login to get at information... I'm >> already logged in, why can't I just ask for my info and have it >> returned? > > What do you mean by "log into db's" in relation to SQLite?  I've never > needed to enter a username and password to access SQLite. > > If you /do/ mean username and password, I believe that some DBs will > allow you to authenticate using Kerberos.  Thus you should be able to > streamline DB access along with access to many other things. 
> > If you /don't/ mean username and password, then what do you mean? Are > you referring to needing to run a command to open and access the > SQLite DB?  Taking a quick gander at sqlite3 --help makes me think > that you can append the SQL(ite) command that you want to run to the > command line. > > > Oops. That's right, no username & password, but you still need to bring it up and interact with it... except, as you say, you can enter your SQL as an argument to the executable. OK, I suppose ... grump, grump... Not quite what I was thinking, but I'd be hard pressed to argue the difference between creating a handful of files in the filesystem (vs tables in sqlite) and then using some unix filter utilities to access and combine the file relations (vs passing sql to sqlite), other than that it'd be fun if there were select, col, row (grep?), and join (inner, outer, natural) utils that worked with text without the need to worry about the finickiness of the database (don't stone me as a database unbeliever, I've used plenty in my day). Will -------------- next part -------------- An HTML attachment was scrubbed... URL: From cmhanson at eschatologist.net Wed Feb 17 12:58:15 2021 From: cmhanson at eschatologist.net (Chris Hanson) Date: Tue, 16 Feb 2021 18:58:15 -0800 Subject: [TUHS] CMU Andrew wm/wmc? Message-ID: I was lucky enough to actually have a chance to use wm at Carnegie Mellon before it was fully retired in favor of X11 on the systems in public clusters; it made a monochrome DECstation 3100 with 8MB much more livable. When it was retired, it was still usable for a while because the CMU Computer Club maintained an enhanced version (wmc) that everyone had access to, and Club members got access to its sources. Did anyone happen to preserve the wm or wmc codebase? There's some documentation in the papers that were published about the wm and Andrew API but no code.
-- Chris From cowan at ccil.org Wed Feb 17 13:29:35 2021 From: cowan at ccil.org (John Cowan) Date: Tue, 16 Feb 2021 22:29:35 -0500 Subject: [TUHS] cut, paste, join, etc. In-Reply-To: <26484818-2f05-37d3-adff-6e34d383e117@gmail.com> References: <26484818-2f05-37d3-adff-6e34d383e117@gmail.com> Message-ID: I'm not sure what you're thinking of, but there is no login in SQLite: its only access control is at the DB level, and that's Unix file permissions. Carl Strozzi's NOSQL system (not to be confused with the concept of NoSQL databases) is a relational database built using ordinary Unix utilities and pipelines. Each table is a TSV file with a header line whose fields are the column names prefixed by ^A so that they always sort to the top. It also provides commands like "jointable", which is "join" wrapped in an awk script that collects the column names from the tables and does a natural join. The package can be downloaded from < http://www.strozzi.it/shared/nosql/nosql-4.1.11.tar.gz>. The documentation is shonky, but the code works nicely. On Tue, Feb 16, 2021 at 8:17 PM Will Senn wrote: > On 2/16/21 7:08 PM, M Douglas McIlroy wrote: > > Will Senn wrote, > > join seems like part of an aborted (aka never fully realized) attempt at a text based rdb to me > > As the original author of join, I can attest that there was no thought > of parlaying join into a database system. It was inspired by > databases, but liberated from them, much as grep was liberated from an > editor. > > Doug > > Nice! Thanks Doug. Too bad, though... one gets ever tired of having to log > into db's and a simple text db system would be useful. Even sqlite, which I > love, requires login to get at information... I'm already logged in, why > can't I just ask for my info and have it returned? > > Will > -------------- next part -------------- An HTML attachment was scrubbed... 
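[Editorial sketch] The NOSQL-style approach John describes -- plain TSV tables queried with standard utilities -- can be roughed out with nothing but join(1), sort(1), and tail(1). The table files, column names, and data below are invented for illustration; Strozzi's package wraps the same idea in friendlier commands like "jointable" (and prefixes header fields with ^A so they sort to the top, which this sketch skips).

```shell
# Two hypothetical TSV "tables" with header rows.
printf 'id\tname\n1\talice\n2\tbob\n'  > /tmp/users.tsv
printf 'id\thost\n2\tpdp11\n1\tvax1\n' > /tmp/logins.tsv

TAB="$(printf '\t')"

# A "natural join" on column 1: strip the headers, sort on the key, join.
tail -n +2 /tmp/users.tsv  | sort -t "$TAB" -k1,1 > /tmp/users.sorted
tail -n +2 /tmp/logins.tsv | sort -t "$TAB" -k1,1 > /tmp/logins.sorted
join -t "$TAB" /tmp/users.sorted /tmp/logins.sorted
# -> (tab-separated)
#    1  alice  vax1
#    2  bob    pdp11
```

Projection is then a cut(1) away and selection a grep/awk away -- roughly all a "text based rdb" needs until indexing and concurrent updates enter the picture.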
URL: From cowan at ccil.org Wed Feb 17 13:49:01 2021 From: cowan at ccil.org (John Cowan) Date: Tue, 16 Feb 2021 22:49:01 -0500 Subject: [TUHS] Fwd: [multicians] History of C (with Multics reference) In-Reply-To: <3803.1613457083@hop.toad.com> References: <1607711516.31417164@apps.rackspace.com> <30368.1613327707837544705@groups.io> <3803.1613457083@hop.toad.com> Message-ID: On Tue, Feb 16, 2021 at 1:32 AM John Gilmore wrote: > There was clearly a lot of cross-fertilization between early APL systems > and Bon. (APL was the first computer language I dug deeply into.) Some > of the common elements are the interactive execution environment, > untyped variables, and automatic application of builtin functions (like > +) across all elements of arrays. > In particular, doing gotos by assigning to a variable is very old-school APL. John Cowan http://vrici.lojban.org/~cowan cowan at ccil.org Is a chair finely made tragic or comic? Is the portrait of Mona Lisa good if I desire to see it? Is the bust of Sir Philip Crampton lyrical, epical or dramatic? If a man hacking in fury at a block of wood make there an image of a cow, is that image a work of art? If not, why not? --Stephen Dedalus -------------- next part -------------- An HTML attachment was scrubbed... URL: From tytso at mit.edu Wed Feb 17 14:01:32 2021 From: tytso at mit.edu (Theodore Ts'o) Date: Tue, 16 Feb 2021 23:01:32 -0500 Subject: [TUHS] Abstractions In-Reply-To: <202102161959.11GJxWC5676454@darkstar.fourwinds.com> References: <202102151956.11FJuRIh3079869@darkstar.fourwinds.com> <202102161959.11GJxWC5676454@darkstar.fourwinds.com> Message-ID: It's always useful to talk about requirements as the first part of the design process. At the high level, how important is backwards compatibility? Is the problem of how to support existing applications in scope, or not? Or is the assumption that emulation libraries will always be sufficient?
How about performance, either of applications using the new API, or applications using the legacy APIs? And what are the hardware platforms that this new set of abstractions is going to target? Is the goal only to target small embedded systems? Mobile handsets? Desktop systems? Is it supposed to be able to scale to super computers? Are web front-ends that need to be able to accept thousands of incoming TCP connections per second, and then redirect those connections to application logic servers, in scope? Solutions that involve being able to support interpreting general SQL queries may not scale in terms of performance and the ability to support thousands of file descriptors in a single process. Backwards compatibility is why we have multiple asynchronous I/O interfaces --- from select, poll, epoll, kqueue, and io_uring. And the reason why we've had multiple asynchronous I/O interfaces over the decades is because the performance requirements have changed, and the capability of hardware interfaces for high performance I/O has changed; it's no longer about I/O ports and interrupts, but instead, having multiple request and response queues through memory mapped I/O, and the need to be able to use multiple CPUs and to multiplex multiple network or storage transactions across a single doorbell or system call. If all of this is out of scope, then the design process will be much simpler, and perhaps more elegant; but the resulting design will not be useful for many of the use cases where Linux is used today. And perhaps that's OK. On the other hand, one person's simple, elegant design is another person's toy that isn't fit for their purpose. IBM once said that part of Linux's power is that it scales from wrist watches to super computers. Is that in scope for this theoretical design question?
- Ted From bakul at iitbombay.org Wed Feb 17 14:06:01 2021 From: bakul at iitbombay.org (Bakul Shah) Date: Tue, 16 Feb 2021 20:06:01 -0800 Subject: [TUHS] SQL OS (Re: Abstractions In-Reply-To: References: <202102151956.11FJuRIh3079869@darkstar.fourwinds.com> Message-ID: On Feb 16, 2021, at 12:15 AM, Tom Ivar Helbekkmo via TUHS wrote: > > Jon Steinhart writes: > >> So if y'all are up for it, I'd like to have a discussion on what >> abstractions would be appropriate in order to meet modern needs. Any >> takers? > > A late friend of mine felt strongly that Unix needed an SQL interface to > the kernel. With all information and configuration in a well designed > schema, system administration could be greatly enhanced, he felt, and > could have standard interaction patterns across components -- instead of > all the quirky command line interfaces we have today, and their user > oriented output formats that you need to parse to use the data. Not quite the same, but Arthur Whitney, the author of the K array programming language, did something called kOS, mainly to run K apps. It initially ran on Linux but then on "bare metal". The entire OS + a graphic layer called z (to replace X11) fit in 62kB. But it seems he never released it. No idea why. kdb (built on top of K) is a columnar database. An old article on kOS: http://archive.vector.org.uk/art10501320 Also note that in the mid '80s there was at least one company building Unix with atomic-transaction I/O. I forget their name now -- it was Tolerant Systems or Relational Systems or something. As a contractor I wrote some testing framework for them for regression testing etc. As I recall the OS was quite slow. There were a bunch of Unix workstation startups in Silicon Valley in the '80s. Not sure their stories have been told (I knew only a few bits and pieces that I picked up as a contractor and have long since forgotten).
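[Editorial sketch] The SQL-interface-to-the-kernel idea quoted above can be faked in user space today, which gives a feel for what the late friend was after. This is only an approximation -- it snapshots process state through ps(1) rather than querying the kernel live, and the "query" is spelled in coreutils rather than SQL -- but the shape of the interaction is the same:

```shell
# Roughly:  SELECT comm, COUNT(*) AS n FROM procs
#           GROUP BY comm ORDER BY n DESC LIMIT 3;
# ps supplies the "table", sort|uniq -c does GROUP BY/COUNT,
# sort -rn does ORDER BY DESC, and head does LIMIT.
ps -eo comm= | sort | uniq -c | sort -rn | head -3
```

The quirky part his schema would have fixed is exactly what this pipeline shows: every step is re-parsing user-oriented text output instead of selecting typed columns.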
From gtaylor at tnetconsulting.net Wed Feb 17 14:08:15 2021 From: gtaylor at tnetconsulting.net (Grant Taylor) Date: Tue, 16 Feb 2021 21:08:15 -0700 Subject: [TUHS] cut, paste, join, etc. In-Reply-To: <55d60220-c22d-c99f-f40c-68a741183213@gmail.com> References: <26484818-2f05-37d3-adff-6e34d383e117@gmail.com> <399f2cdc-d790-c4fe-18e3-0cb6b4c76554@spamtrap.tnetconsulting.net> <55d60220-c22d-c99f-f40c-68a741183213@gmail.com> Message-ID: On 2/16/21 7:26 PM, Will Senn wrote: > Oops. That's right, no username & password, but you still need to bring > it up and interact with it... accept, as you say, you can enter your sql > as an argument to the executable. OK, I suppose ... grump, grump... ;-) Take a moment and grump. I know that I've made similar mistakes from unknown options. > Not quite what I was thinking, but I'd be hard pressed to argue the > difference between creating a handful of files in the filesystem > (vs tables in sqlite) and then using some unix filter utilities to > access and combine the file relations (vs passing sql to sqlite) I don't know where the line is in the transition from stock text files to an actual DB. I naively suspect that by the time you need an index, you should have transitioned to a DB. > other than, it'd be fun if there were select, col, row (grep?), join > (inner, outer, natural), utils that worked with text without the need > to worry about the finickiness of the database I'm confident that it's quite possible to do similar types of, if not actually the same, operations with traditional Unix utilities vs SQL, at least for relatively simple queries. The last time I looked, join didn't want to work on more than two inputs at a time. So you're left with something like two different joins, one of them working on the output of the other. I suspect that one of the differences is where the data lives. If it's STDIO, then traditional Unix utilities are king.
If it's something application specific and only accessed by said application, then a DB is probably a better bet. Then there's the fact that some consider file systems to be a big DB that is mounted. }:-) > (don't stone me as a database unbeliever, I've used plenty in my day). Use of something does not implicitly make you a supporter of or advocate for something. ;-) I like SQLite and Berkeley DB in that they don't require a full RDBMS running. Instead, an application can load what it needs and access the DB itself. I don't remember how many files SQLite uses to store a DB. A single (or few) file(s) make it relatively easy to exchange DBs with people. E.g. someone can populate the DB and then send copies of it to coworkers for their distributed use. Something that's harder to do with a typical RDBMS. -- Grant. . . . unix || die -------------- next part -------------- A non-text attachment was scrubbed... Name: smime.p7s Type: application/pkcs7-signature Size: 4013 bytes Desc: S/MIME Cryptographic Signature URL: From dave at horsfall.org Wed Feb 17 14:14:03 2021 From: dave at horsfall.org (Dave Horsfall) Date: Wed, 17 Feb 2021 15:14:03 +1100 (EST) Subject: [TUHS] Fwd: [multicians] History of C (with Multics reference) In-Reply-To: References: <1607711516.31417164@apps.rackspace.com> <30368.1613327707837544705@groups.io> <3803.1613457083@hop.toad.com> Message-ID: On Tue, 16 Feb 2021, John Cowan wrote: > In particular, doing gotos by assigning to a variable is very old-school > APL. I spent a fun year with APL\360 in CompSci, but could you please elaborate on that? I know; this is an ASCII window on my MacBook so you won't be able to show the code :-) ObOF: Some joker once pinned an 80-column card with APL on it (yes, it can be done) with a job card behind it to the Computer Centre's notice board; under it was the inscription "Who said that APL programs weren't transparent?" 
-- Dave, who enjoyed that write-only language From bakul at iitbombay.org Wed Feb 17 15:51:06 2021 From: bakul at iitbombay.org (Bakul Shah) Date: Tue, 16 Feb 2021 21:51:06 -0800 Subject: [TUHS] [multicians] History of C (with Multics reference) In-Reply-To: References: <1607711516.31417164@apps.rackspace.com> <30368.1613327707837544705@groups.io> <3803.1613457083@hop.toad.com> Message-ID: On Feb 16, 2021, at 8:14 PM, Dave Horsfall wrote: > > On Tue, 16 Feb 2021, John Cowan wrote: > >> In particular, doing gotos by assigning to a variable is very old-school APL. > > I spent a fun year with APL\360 in CompSci, but could you please elaborate on that? I know; this is an ASCII window on my MacBook so you won't be able to show the code :-) The goto operator is "-> label", while assignment is "var <- value". In Ken Iverson's 1962 book "A Programming Language" he shows branches as flowchart-like arrows connecting a source statement to its target, but he doesn't use any goto or branch operator symbol, or labels. So not sure what John means. As a grad student I did part-time programming for a prof doing research in cancer epidemiology. I convinced him to let me use APL instead of PL/I, but the "funny money" quickly ran out and it was back to PL/I! From cmhanson at eschatologist.net Wed Feb 17 16:50:40 2021 From: cmhanson at eschatologist.net (Chris Hanson) Date: Tue, 16 Feb 2021 22:50:40 -0800 Subject: [TUHS] Abstractions In-Reply-To: <202102161959.11GJxWC5676454@darkstar.fourwinds.com> References: <202102151956.11FJuRIh3079869@darkstar.fourwinds.com> <202102161959.11GJxWC5676454@darkstar.fourwinds.com> Message-ID: <56624C4D-B5DE-4664-A522-5D8E3BDF320A@eschatologist.net> On Feb 16, 2021, at 11:59 AM, Jon Steinhart wrote: > > The original Apple Macintosh API was published in 1985 in a three-volume > set of books called Inside Macintosh (Addison-Wesley). The > set was over 1,200 pages long. It’s completely obsolete; modern > (UNIX-based) Macs don’t use any of it.
> Why didn’t this API > design last? I think this is a little bit of a red herring; most of the original Macintosh Toolbox APIs would not be considered "system calls" then or now. The Macintosh Operating System APIs were a much more tightly-scoped set on top of which was the Toolbox. For example, in the original filesystem and device driver interfaces, you had _PBOpen, _PBClose, _PBRead, _PBWrite, and _PBControl. Sound familiar? One major difference is that these took a struct full of arguments (a parameter block in Macintosh API terminology) and could be used either synchronously or asynchronously with a callback, unlike the core UNIX filesystem calls. A more oranges-to-oranges comparison would be to look at the Macintosh Operating System and Toolbox API surface compared with, say, the SunOS and SunWindows API surface… And then, of course, there's the question of how long the design lasted: The Carbon API set is a direct descendant of the original Macintosh Operating System and Toolbox API set, and was supported for the entire lifetime of 32-bit executables on the Mac. I ported plenty of OS & Toolbox code to Carbon and it was mostly a matter of updating UI metrics and replacing direct structure accesses with equivalent function calls. -- Chris -------------- next part -------------- An HTML attachment was scrubbed... URL: From gnu at toad.com Wed Feb 17 20:14:14 2021 From: gnu at toad.com (John Gilmore) Date: Wed, 17 Feb 2021 02:14:14 -0800 Subject: [TUHS] cut, paste, join, etc. In-Reply-To: References: <26484818-2f05-37d3-adff-6e34d383e117@gmail.com> <399f2cdc-d790-c4fe-18e3-0cb6b4c76554@spamtrap.tnetconsulting.net> <55d60220-c22d-c99f-f40c-68a741183213@gmail.com> Message-ID: <21803.1613556854@hop.toad.com> Grant Taylor via TUHS wrote: > I don't know where the line is to transition from stock text files and > an actual DB. I naively suspect that by the time you need an index, you > should have transitioned to a DB.
Didn't AT&T Research at some point write a database, called Daytona, that worked like ordinary Unix commands? E.g. it just sat there in disk files when you weren't using it. There was no "database server". When you wanted to do some operation on it, you ran a command, which read the database and did what you wanted and wrote out results and stopped and returned to the shell prompt. How novel! Supposedly it had high performance on large collections of data, with millions or billions of records. Things like telephone billing data. I found a couple of conference papers about it, but never saw specs for it, not even man pages. How did Daytona fit into Unix history? Was it ever part of a Unix release? John From davida at pobox.com Wed Feb 17 22:09:15 2021 From: davida at pobox.com (David Arnold) Date: Wed, 17 Feb 2021 23:09:15 +1100 Subject: [TUHS] Abstractions In-Reply-To: <202102151956.11FJuRIh3079869@darkstar.fourwinds.com> References: <202102151956.11FJuRIh3079869@darkstar.fourwinds.com> Message-ID: <6294B905-1CEB-43C2-AAAF-085339D57EC3@pobox.com> > On 16 Feb 2021, at 06:56, Jon Steinhart wrote: > > Was thinking about our recent discussion about system call bloat and such. > Seemed to me that there was some argument that it was needed in order to > support modern needs. As I tried to say, I think that a good part of the > bloat stemmed from we-need-to-add-this-to-support-that thinking instead > of what's-the-best-way-to-extend-the-system-to-support-this-need thinking. > > So if y'all are up for it, I'd like to have a discussion on what abstractions > would be appropriate in order to meet modern needs. Any takers? Plan9 showed that it’s possible to evolve the Unix model to encompass new needs without compromising the abstraction, although to be fair, it basically addressed only the first 15-20 years of changes since V7. Freedom to break backward compatibility is obviously a key enabler, and difficult to manage for a commercial system. 
Despite its various issues, I think the Mach abstractions also stand up well as an insightful effort for their time. One area that has continued to evolve in Unix, with a trail of (mostly) still-supported-but-no-longer-recommended APIs, is asynchronous event handling. mpx, select, poll, kevents, AIO, /dev/poll, epoll, port_create, inotify, dnotify, FEN, etc. What a mess! Containers, jails, zones, namespaces, etc., are another area with diverse solutions, none of which have been sufficiently the Right Thing to be adopted by everyone else. For today’s uses and hardware, the Unix API does too much: rich, stateful APIs copying everything from userland to kernel and back again — the context switching and data copying time is prohibitive, and so the kernel ends up being bypassed once it’s checked the permissions and allocated the hardware resources. I hesitate to call it a micro-kernel model, but the kernel is used less, and libraries and services take on more of the work. d From andrew at humeweb.com Thu Feb 18 00:52:57 2021 From: andrew at humeweb.com (Andrew Hume) Date: Wed, 17 Feb 2021 06:52:57 -0800 Subject: [TUHS] cut, paste, join, etc. In-Reply-To: <21803.1613556854@hop.toad.com> References: <26484818-2f05-37d3-adff-6e34d383e117@gmail.com> <399f2cdc-d790-c4fe-18e3-0cb6b4c76554@spamtrap.tnetconsulting.net> <55d60220-c22d-c99f-f40c-68a741183213@gmail.com> <21803.1613556854@hop.toad.com> Message-ID: daytona was always a separate commercial product. it was an extremely large, very efficient database. you should think of it as analogous to a large postgres system. rick greer was the primary author; an overview paper is http://www09.sigmod.org/sigmod/sigmod99/eproceedings/papers/greer.pdf for many years, probably now as well, it was the main way that at&t stored per-call information. as of the mid 2000s, it had over 2 trillion calls in it.
> On Feb 17, 2021, at 2:14 AM, John Gilmore wrote: > > Grant Taylor via TUHS wrote: >> I don't know where the line is to transition from stock text files and >> an actual DB. I naively suspect that by the time you need an index, you >> should have transitioned to a DB. > > Didn't AT&T Research at some point write a database, called Daytona, > that worked like ordinary Unix commands? E.g. it just sat there in disk > files when you weren't using it. There was no "database server". When > you wanted to do some operation on it, you ran a command, which read the > database and did what you wanted and wrote out results and stopped and > returned to the shell prompt. How novel! > > Supposedly it had high performance on large collections of data, > with millions or billions of records. Things like telephone billing > data. > > I found a couple of conference papers about it, but never saw specs for > it, not even man pages. How did Daytona fit into Unix history? Was > it ever part of a Unix release? > > John > From emu at e-bbes.com Thu Feb 18 02:07:03 2021 From: emu at e-bbes.com (emanuel stiebler) Date: Wed, 17 Feb 2021 11:07:03 -0500 Subject: [TUHS] AT&T 3B1 - Emulation available In-Reply-To: <202102070732.1177Wd3r014240@freefriends.org> References: <202102070732.1177Wd3r014240@freefriends.org> Message-ID: On 2021-02-07 02:32, arnold at skeeve.com wrote: > Hi. > > Thanks for the update. The speed comparison is interesting. It should actually be the same, as the emulator slows down the CPU, so it is more realistic? From dave at horsfall.org Thu Feb 18 06:49:14 2021 From: dave at horsfall.org (Dave Horsfall) Date: Thu, 18 Feb 2021 07:49:14 +1100 (EST) Subject: [TUHS] cut, paste, join, etc. 
In-Reply-To: References: <26484818-2f05-37d3-adff-6e34d383e117@gmail.com> <399f2cdc-d790-c4fe-18e3-0cb6b4c76554@spamtrap.tnetconsulting.net> <55d60220-c22d-c99f-f40c-68a741183213@gmail.com> Message-ID: On Tue, 16 Feb 2021, Grant Taylor via TUHS wrote: > Then there's the fact that some consider file systems to be a big DB > that is mounted. }:-) It is; it's a hierarchical DB (and is still used as such). -- Dave, who remembers the hierarchical/relational DB wars From erc at pobox.com Thu Feb 18 08:00:06 2021 From: erc at pobox.com (Ed Carp) Date: Wed, 17 Feb 2021 15:00:06 -0700 Subject: [TUHS] AT&T 3B1 - Emulation available In-Reply-To: References: Message-ID: Wasn't the 3B1 the same thing as the 7300? From lm at mcvoy.com Thu Feb 18 08:14:34 2021 From: lm at mcvoy.com (Larry McVoy) Date: Wed, 17 Feb 2021 14:14:34 -0800 Subject: [TUHS] AT&T 3B1 - Emulation available In-Reply-To: References: Message-ID: <20210217221434.GK19472@mcvoy.com> On Wed, Feb 17, 2021 at 03:00:06PM -0700, Ed Carp wrote: > Wasn't the 3B1 the same thing as the 7300? Yes. Nice machine for the time. From crossd at gmail.com Thu Feb 18 09:58:18 2021 From: crossd at gmail.com (Dan Cross) Date: Wed, 17 Feb 2021 18:58:18 -0500 Subject: [TUHS] cut, paste, join, etc. In-Reply-To: <21803.1613556854@hop.toad.com> References: <26484818-2f05-37d3-adff-6e34d383e117@gmail.com> <399f2cdc-d790-c4fe-18e3-0cb6b4c76554@spamtrap.tnetconsulting.net> <55d60220-c22d-c99f-f40c-68a741183213@gmail.com> <21803.1613556854@hop.toad.com> Message-ID: On Wed, Feb 17, 2021 at 5:16 AM John Gilmore wrote: > Grant Taylor via TUHS wrote: > > I don't know where the line is to transition from stock text files and > > an actual DB. I naively suspect that by the time you need an index, you > > should have transitioned to a DB. > > Didn't AT&T Research at some point write a database, called Daytona, > that worked like ordinary Unix commands? E.g. it just sat there in disk > files when you weren't using it. There was no "database server". 
When > you wanted to do some operation on it, you ran a command, which read the > database and did what you wanted and wrote out results and stopped and > returned to the shell prompt. How novel! > > Supposedly it had high performance on large collections of data, > with millions or billions of records. Things like telephone billing > data. > > I found a couple of conference papers about it, but never saw specs for > it, not even man pages. How did Daytona fit into Unix history? Was > it ever part of a Unix release? > It seems that Andrew has addressed Daytona, but there was a small database package called `pq` that shipped with plan9 at one point that I believe started life on Unix. It was based on "flat" text files as the underlying data source, and one would describe relations internally using some mechanism (almost certainly another special file). An interesting feature was that it was "implicitly relational": you specified the data you wanted and it constructed and executed a query internally: no need to "JOIN" tables on attributes and so forth. I believe it supported indices that were created via a special command. I think it was used as the data source for the AT&T internal "POST" system. A big downside was that you could not add records to the database in real time. It was taken to Cibernet Inc (they did billing reconciliation for wireless carriers. That is, you have an AT&T phone but make a call that's picked up by T-Mobile's tower: T-Mobile lets you make the call but AT&T has to pay them for the service. I contracted for them for a short time when I got out of the Marine Corps---the first time) and enhanced and renamed "Eteron" and the record append issue was, I believe, solved. Sadly, I think that technology was lost when Cibernet was acquired. It was kind of cool. - Dan C. -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From erc at pobox.com Thu Feb 18 11:30:05 2021 From: erc at pobox.com (Ed Carp) Date: Wed, 17 Feb 2021 18:30:05 -0700 Subject: [TUHS] AT&T 3B1 - Emulation available In-Reply-To: <20210217221434.GK19472@mcvoy.com> References: <20210217221434.GK19472@mcvoy.com> Message-ID: Yup. I had one. :) On 2/17/21, Larry McVoy wrote: > On Wed, Feb 17, 2021 at 03:00:06PM -0700, Ed Carp wrote: >> Wasn't the 3B1 the same thing as the 7300? > > Yes. Nice machine for the time. > From cowan at ccil.org Thu Feb 18 13:23:26 2021 From: cowan at ccil.org (John Cowan) Date: Wed, 17 Feb 2021 22:23:26 -0500 Subject: [TUHS] [multicians] History of C (with Multics reference) In-Reply-To: References: <1607711516.31417164@apps.rackspace.com> <30368.1613327707837544705@groups.io> <3803.1613457083@hop.toad.com> Message-ID: On Wed, Feb 17, 2021 at 12:52 AM Bakul Shah wrote: > The goto operator is "-> label", while assignment is "var <- value" > I overstated the case. However, goto is in fact "-> expression", where expression is an integer scalar referring to a line (implicitly numbered from 1 upwards) of the current definition; a goto to a nonexistent line such as 0 exits the current definition or program. Labels were added later, and are essentially local variables bound to the line number they appear on. Modern APL uses structured-programming constructs like all other post-Ratfor languages. John Cowan http://vrici.lojban.org/~cowan cowan at ccil.org One of the oil men in heaven started a rumor of a gusher down in hell. All the other oil men left in a hurry for hell. As he gets to thinking about the rumor he had started he says to himself there might be something in it after all. So he leaves for hell in a hurry. --Carl Sandburg -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From arnold at skeeve.com Thu Feb 18 17:59:39 2021 From: arnold at skeeve.com (arnold at skeeve.com) Date: Thu, 18 Feb 2021 00:59:39 -0700 Subject: [TUHS] AT&T 3B1 - Emulation available In-Reply-To: References: Message-ID: <202102180759.11I7xdVc007366@freefriends.org> Ed Carp wrote: > Wasn't the 3B1 the same thing as the 7300? There were differences in the amounts of memory and size of disk. The 3B1 had room for a larger disk and thus its case was shaped differently. In terms of other hardware and the software, they were the same. Arnold From brad at anduin.eldar.org Fri Feb 19 04:07:55 2021 From: brad at anduin.eldar.org (Brad Spencer) Date: Thu, 18 Feb 2021 13:07:55 -0500 Subject: [TUHS] AT&T 3B1 - Emulation available In-Reply-To: <202102180759.11I7xdVc007366@freefriends.org> (arnold@skeeve.com) Message-ID: arnold at skeeve.com writes: > Ed Carp wrote: > >> Wasn't the 3B1 the same thing as the 7300? > > There were differences in the amounts of memory and size of disk. The 3B1 > had room for a larger disk and thus its case was shaped differently. > In terms of other hardware and the software, they were the same. > > Arnold Hmm... I think I used one of those 7300, a.k.a. Unix PC, systems when I was an undergrad a long time ago. It looked like the images I find on the Net, in any case, but it was a long time ago. Whatever it was that we had, I remember that the floppy drive was 5.25 inch and used 512-byte sectors. I had a Radio Shack Color Computer 3 at the time, and the disk controller on that would read a double-density disk with 512-byte sectors just fine. I had gotten pretty good at reading foreign disks on the CC3, so I put a copy of /bin/sh onto the floppy on the Unix PC and then used the CC3 to adjust the ownership and mode to make the copy of the sh binary setuid root. Since the Unix PC would allow anyone to mount the floppy (at least on the one we had), and since it didn't restrict setuid on the mounted floppy, I ended up with a root shell. Fun times...
I used it for some class work instead of the PDP11/44 running BSD that we also had at the university. -- Brad Spencer - brad at anduin.eldar.org - KC8VKS - http://anduin.eldar.org From tuhs at cuzuco.com Fri Feb 19 06:20:01 2021 From: tuhs at cuzuco.com (Brian Walden) Date: Thu, 18 Feb 2021 15:20:01 -0500 (EST) Subject: [TUHS] cut, paste, join, etc. Message-ID: <202102182020.11IKK1dt006545@cuzuco.com> The last group I was on before I left the labs in 1992 was the POST team. pq stood for "post query," but POST consisted of -
- mailx: (from SVR3.1) as the mail user agent
- UPAS: (from research UNIX) as the mail delivery agent
- pq: the program to query the database
- EV: (pronounced like the biblical name) the database (and the genesis program to create indices)
- post: the program that combined all the above to read email and to send mail via queries

pq by default would look up people:
pq lastname: find all people with lastname, same as pq last=lastname
pq first.last: find all people with first last, same as pq first=first/last=last
pq first.m.last: find all people with first m last, same as pq first=first/middle=m/last=last

This is how email to dennis.m.ritchie @ att.com worked to send it on to research!dmr. You could send mail to a whole department via /org=45267, or the whole division via /org=45, or a whole location via /loc=mh, or just the two people in a specific office via /loc=mh/room=2f-164. These are "AND"s; an "OR" is just another query after it on the same line. There were some special extensions -
- prefix, e.g. pq mackin* got all mackin, mackintosh, mackinson, etc.
- soundex, e.g. pq mackin~ got all with a last name sounding like mackin, so names such as mackin, mckinney, mckinnie, mickin, mikami, etc. (mackintosh and mackinson did not match the soundex, and were therefore not included)

The EV database was general and fairly simple. It was a directory with files called "Data" and "Proto" in it.
"Data" was plain text: pipe-delimited fields, newline-separated records -

123456|ritchie|dennis|m||r320|research!dmr|11273|mh|2c-517|908|582|3770

(using data from that preserved at https://www.bell-labs.com/usr/dmr/www/) "Proto" defined the fields in a record (I don't remember the exact syntax anymore) -

id n i
last a i
first a i
middle a -
suffix a -
soundex a i
email a i
org n i
loc a i
room a i
area n i
exch n i
ext n i

"n" means a number, so 00001 was the same as 1, and "a" means alpha; the "i" or "-" told genesis whether or not an index should be generated. I think it had more, but that has faded with the years. If indices were generated they would point to the block number in Data, so an lseek(2) could get to the record quickly. I believe there were two levels of block-pointing indices (sort of like inode block pointers had direct and indirect blocks). So every time you added records to Data you had to regenerate all the indices, and that was very time consuming. The nice thing about Data being plain text was that grep(1) worked just fine, as did cut -d'|' or awk -F'|', but pq was much faster with a large number of records. -Brian Dan Cross wrote: > It seems that Andrew has addressed Daytona, but there was a small database > package called `pq` that shipped with plan9 at one point that I believe > started life on Unix. It was based on "flat" text files as the underlying > data source, and one would describe relations internally using some > mechanism (almost certainly another special file). An interesting feature > was that it was "implicitly relational": you specified the data you wanted > and it constructed and executed a query internally: no need to "JOIN" > tables on attributes and so forth. I believe it supported indices that were > created via a special command. I think it was used as the data source for > the AT&T internal "POST" system. A big downside was that you could not add > records to the database in real time.
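Since the Data file was just pipe-delimited text, the flavor of those pq field queries is easy to mimic with awk(1), as Brian notes. A toy illustration only; the field layout here (id|last|first|org|loc) is invented for the example and is not the real EV schema:

```shell
# Toy stand-in for pq-style queries over a pipe-delimited flat file.
# The field layout (id|last|first|org|loc) is made up for illustration.
cat > Data <<'EOF'
1|ritchie|dennis|1127|mh
2|thompson|ken|1127|mh
3|mackintosh|alex|4526|ho
EOF

# pq last=ritchie: exact match on the last-name field
awk -F'|' '$2 == "ritchie"' Data

# pq mackin*: prefix match on the last name
awk -F'|' '$2 ~ /^mackin/' Data

# pq /org=1127/loc=mh: ANDed field tests
awk -F'|' '$4 == "1127" && $5 == "mh"' Data
```

The soundex matching (pq mackin~) is the one feature with no such one-liner equivalent; the indexed lseek(2) lookups are, of course, what made the real pq fast.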
> > It was taken to Cibernet Inc (they did billing reconciliation for wireless > carriers. That is, you have an AT&T phone but make a call that's picked up > by T-Mobile's tower: T-Mobile lets you make the call but AT&T has to pay > them for the service. I contracted for them for a short time when I got out > of the Marine Corps---the first time) and enhanced and renamed "Eteron" and > the record append issue was, I believe, solved. Sadly, I think that > technology was lost when Cibernet was acquired. It was kind of cool. > > - Dan C. > From ality at pbrane.org Fri Feb 19 06:41:45 2021 From: ality at pbrane.org (Anthony Martin) Date: Thu, 18 Feb 2021 12:41:45 -0800 Subject: [TUHS] cut, paste, join, etc. In-Reply-To: <202102182020.11IKK1dt006545@cuzuco.com> References: <202102182020.11IKK1dt006545@cuzuco.com> Message-ID: The Plan 9 version of pq can be found here: https://9p.io/sources/extra/pq.tgz Cheers, Anthony From m.douglas.mcilroy at dartmouth.edu Sun Feb 21 09:09:42 2021 From: m.douglas.mcilroy at dartmouth.edu (M Douglas McIlroy) Date: Sat, 20 Feb 2021 18:09:42 -0500 Subject: [TUHS] Abstractions Message-ID: > - separation of code and data using read-only and read/write file systems I'll bite. How do you install code in a read-only file system? And where does a.out go? My guess is that /bin is in a file system of its own. Executables from /etc and /lib are probably there too. On the other hand, I guess users' personal code is still read/write. I agree that such an arrangement is prudent. I don't see a way, though, to update bin without disrupting most running programs. Doug From otto at drijf.net Sun Feb 21 18:15:29 2021 From: otto at drijf.net (Otto Moerbeek) Date: Sun, 21 Feb 2021 09:15:29 +0100 Subject: [TUHS] Abstractions In-Reply-To: References: Message-ID: On Sat, Feb 20, 2021 at 06:09:42PM -0500, M Douglas McIlroy wrote: > > - separation of code and data using read-only and read/write file systems > > I'll bite. 
How do you install code in a read-only file system? And > where does a.out go? > > My guess is that /bin is in a file system of its own. Executables from > /etc and /lib are probably there too. On the other hand, I guess > users' personal code is still read/write. > > I agree that such an arrangement is prudent. I don't see a way, > though, to update bin without disrupting most running programs. > > Doug I always wonder how to distinguish data and programs when people want to separate them. One person's data is another person's program and vice versa. Think scripting, config files, grammar definitions, postscript files, executables to be fed to emulators, compilers, linkers, code analysis tools; the examples are endless. Turing already saw that from the theoretical point of view, others (like Von Neumann) more from the practical perspective. Data = Programs. -Otto From pnr at planet.nl Sun Feb 21 20:47:39 2021 From: pnr at planet.nl (Paul Ruizendaal) Date: Sun, 21 Feb 2021 11:47:39 +0100 Subject: [TUHS] Abstractions Message-ID: <0D0EC7CA-0014-44D5-BABD-CF799F9D4418@planet.nl> To quote from Jon’s post: > There have been heated discussions on this list about kernel API bloat. In my > opinion, these discussions have mainly been people grumbling about what they > don't like. I'd like to flip the discussion around to what we would like. > Ken and Dennis did a great job with initial abstractions. Some on this list > have claimed that these abstractions weren't sufficient for modern times. > Now that we have new information from modern use cases, how would we rethink > the basic abstractions? I’d like to add the constraint of things that would have been implementable on the hardware of the late 1970’s, let’s say a PDP11/70 with Datakit or 3Mbps Ethernet or Arpanet; maybe also Apple 2 class bitmap graphics. 
And quote some other posts: > Because it's easy pickings, I would claim that the socket system call is out > of line with the UNIX abstractions; it exists because of practical political > considerations, not because it's needed. I think that it would have fit > better folded into the open system call. >> >> Somebody once suggested a filesystem interface (it certainly fits the Unix >> philosophy); I don't recall the exact details. > > And it was done, over 30 years ago; see Plan 9 from Bell Labs.... I would argue that quite a bit of that was implementable as early as 6th Edition. I was researching that very topic last Spring [1] and back ported Peter Weinberger’s File System Switch (FSS) from 8th to 6th Edition; the switch itself bloats the kernel by about half a kilobyte. I think it may be one of the few imaginable extensions that do not dilute the incredible bang/buck ratio of the V6 kernel. With that change in place a lot of other things become possible: - a Kilian style procfs - a Weinberger style network FS - a text/file based ioctl - a clean approach to named pipes - a different starting point to sockets Each of these would add to kernel size of course, hence I’m thinking about a split I/D kernel. To some extent it is surprising that the FSS did not happen around 1975, as many ideas around it were 'in the air' at the time (Heinz Lycklama’s peripheral Unix, the Spider network Filestore, Rand ports, Arpanet Unix, etc). With the benefit of hindsight, it isn’t a great code leap from the cdev switch to the FSS - but probably the ex ante conceptual leap was just too big at the time. 
Paul [1] Code diffs here: https://1587660.websites.xs4all.nl/cgi-bin/9995/vdiff?from=fab15b88a6a0f36bdb41f24f0b828a67c5f9fe03&to=b95342aaa826bb3c422963108c76d09969b1de93&sbs=1 From rdm at cfcl.com Sun Feb 21 21:08:25 2021 From: rdm at cfcl.com (Rich Morin) Date: Sun, 21 Feb 2021 03:08:25 -0800 Subject: [TUHS] Abstractions In-Reply-To: References: Message-ID: <4375BA78-1F3A-4106-8FAB-1AFC77B8630B@cfcl.com> > On Feb 20, 2021, at 15:09, M Douglas McIlroy wrote: > >> - separation of code and data using read-only and read/write file systems > > I'll bite. How do you install code in a read-only file system? Disclaimer: I haven't actually used Nerves myself, just watched some presentations, read various web pages, etc. So anything I say about it is quite unreliable. And, although that item was (sort of) true, it was obviously rather misleading if interpreted too broadly. So, I'll try to provide some context to explain what I meant by it. As I understand it, Nerves is intended as a build and delivery mechanism for IoT system software. It's supposed to be possible to upgrade a deployed device without blowing away its persistent saved state. And, if the upgrade fails, to back down to the previous version. Also, the running code on the device should not be able to trash the system software. To support this, they use multiple file systems, with various updating attributes. For example, they might have two file systems for the system software and a third one for the persistent saved state. This lets a developer upload and boot a new copy of the system software, but fall back to the old version if something goes wrong. -r From dave at horsfall.org Mon Feb 22 08:40:57 2021 From: dave at horsfall.org (Dave Horsfall) Date: Mon, 22 Feb 2021 09:40:57 +1100 (EST) Subject: [TUHS] Abstractions In-Reply-To: References: Message-ID: On Sat, 20 Feb 2021, M Douglas McIlroy wrote: >> - separation of code and data using read-only and read/write file >> systems > > I'll bite. 
How do you install code in a read-only file system? And where > does a.out go? I once worked for a place that reckoned /bin and /lib etc. ought to be in an EEPROM; I reckon that he was right (Penguin/OS dumps everything under /usr/bin, for example). > My guess is that /bin is in a file system of its own. Executables from > /etc and /lib are probably there too. On the other hand, I guess users' > personal code is still read/write. That's how we ran our RK-05 11/40s since Ed 5... Good fun writing a DJ-11 driver from the DH-11 source; even more fun when I wrote a UT-200 driver from the manual alone (I'm sure that "ei.c" is Out There Somewhere), junking
ᐧ -------------- next part -------------- An HTML attachment was scrubbed... URL: From usotsuki at buric.co Mon Feb 22 09:01:34 2021 From: usotsuki at buric.co (Steve Nickolas) Date: Sun, 21 Feb 2021 18:01:34 -0500 (EST) Subject: [TUHS] Abstractions In-Reply-To: References: Message-ID: On Mon, 22 Feb 2021, Dave Horsfall wrote: > I once worked for a place who reckoned that /bin and /lib etc ought to be in > an EEPROM; I reckon that he was right (Penguin/OS dumps everything under > /usr/bin, for example). I have used distributions in the past that maintained the traditional distinction. While I've been stuck regarding bringing up a kernel, C compiler and libc all together, (keeping in mind my desire to avoid gcc and glibc for the project) the conceptual distribution I've been working on for some time uses more or less the same abstraction as the BSDs, with distinct /bin and /sbin vs. /usr/bin and /usr/sbin as I personally believe it should be, that the stuff in /bin should be enough to bring up and/or run diagnostics on a system, and everything else go in /usr. -uso. From wkt at tuhs.org Mon Feb 22 10:13:44 2021 From: wkt at tuhs.org (Warren Toomey) Date: Mon, 22 Feb 2021 10:13:44 +1000 Subject: [TUHS] Abstractions In-Reply-To: References: Message-ID: <20210222001344.GA26914@minnie.tuhs.org> On Mon, Feb 22, 2021 at 09:40:57AM +1100, Dave Horsfall wrote: > That's how we ran our RK-05 11/40s since Ed 5... Good fun writing a DJ-11 > driver from the DH-11 source; even more fun when I wrote a UT-200 driver > from the manual alone (I'm sure that "ei.c" is Out There Somewhere), junking > IanJ's driver. https://minnie.tuhs.org/cgi-bin/utree.pl?file=AUSAM/sys/dmr/ei.c Cheers, Warren From will.senn at gmail.com Mon Feb 22 12:34:55 2021 From: will.senn at gmail.com (Will Senn) Date: Sun, 21 Feb 2021 20:34:55 -0600 Subject: [TUHS] Proliferation of options is great simplification of pipes, really? Message-ID: All, So, we've been talking low-level design for a while. 
I thought I would ask a fundamental question. In days of old, we built small single-purpose utilities and used pipes to pipeline the data and transformations. Even back in the day, it seemed that there was tension to add yet another option to every utility. Today, as I was marveling at groff's abilities with regard to printing my man pages directly to my printer in 2021, I read the groff(1) page: example here: https://linux.die.net/man/1/groff What struck me (the wrong way) was the second paragraph of the description: The groff program allows to control the whole groff system by command line options. This is a great simplification in comparison to the classical case (which uses pipes only). Here is the current plethora of options: groff [-abcegilpstzCEGNRSUVXZ] [-d cs] [-f fam] [-F dir] [-I dir] [-L arg] [-m name] [-M dir] [-n num] [-o list] [-P arg] [-r cn] [-T dev] [-w name] [-W name] [file ...] Now, I appreciate groff, don't get me wrong, but my sensibilities were offended by the idea that a kazillion options was in any way simpler than pipelining single-purpose utilities. What say you? Is this the perfected logical extension of the unix pioneers' work, or have we gone horribly off the trail. Regards, Will -------------- next part -------------- An HTML attachment was scrubbed... URL: From g.branden.robinson at gmail.com Mon Feb 22 13:32:19 2021 From: g.branden.robinson at gmail.com (G. Branden Robinson) Date: Mon, 22 Feb 2021 14:32:19 +1100 Subject: [TUHS] Proliferation of options is great simplification of pipes, really? In-Reply-To: References: Message-ID: <20210222033217.dkqavclp22sa77ln@localhost.localdomain> At 2021-02-21T20:34:55-0600, Will Senn wrote: > All, > > So, we've been talking low-level design for a while. I thought I would > ask a fundamental question. In days of old, we built small > single-purpose utilities and used pipes to pipeline the data and > transformations. 
Even back in the day, it seemed that there was > tension to add yet another option to every utility. Today, as I was > marveling at groff's abilities with regard to printing my man pages > directly to my printer in 2021, I read the groff(1) page: > > example here: https://linux.die.net/man/1/groff A more up to date copy is available at the Linux man-pages site. https://man7.org/linux/man-pages/man1/groff.1.html > What struck me (the wrong way) was the second paragraph of the > description: > > The groff program allows to control the whole groff system by command > line options. This is a great simplification in comparison to the > classical case (which uses pipes only). What strikes _me_ about the above is the awful Denglish in it. I fixed this back in 2017 and the correction shipped as part of groff 1.22.4 in December 2018. > Here is the current plethora of options: > groff [-abcegilpstzCEGNRSUVXZ] [-d cs] [-f fam] [-F dir] [-I dir] [-L arg] > [-m name] [-M dir] [-n num] [-o list] [-P arg] [-r cn] [-T dev] [-w name] > [-W name] [file ...] > > Now, I appreciate groff, don't get me wrong, but my sensibilities were > offended by the idea that a kazillion options was in any way simpler > than pipelining single-purpose utilities. What say you? Is this the > perfected logical extension of the unix pioneers' work, or have we > gone horribly off the trail. I'd say it's neither, and reflects (1) the limitations of the Unix filter model, or at least the linear topology of Unix pipelines[1]; and (2) an arbitrary set of rules determined by convention and common practice with respect to sequencing. Consider the first the question of which *roff preprocessor languages should be embeddable in another preprocessor's language. Should you be able to embed equations in tables? What about tables inside equations (not too insane an idea--consider matrix literals)? Nothing in the Unix filter model implies a choice between these decisions, but an ordering decision must be made. 
V7 Unix tbl(1)'s man page[3] took a moderately strong position on preprocessor ordering based on more practical concerns (I suppose loading on shared systems). When it is used with .I eqn or .I neqn the .I tbl command should be first, to minimize the volume of data passed through pipes. Another factor is ergonomics. As the number of preprocessors expands, the number of potential orderings of a document processing pipeline also grows--combinatorially. Here's the chunk of the groff front-end program that determines the ordering of the pipeline it constructs for the user. // grap, chem, and ideal must come before pic; // tbl must come before eqn const int PRECONV_INDEX = 0; const int SOELIM_INDEX = PRECONV_INDEX + 1; const int REFER_INDEX = SOELIM_INDEX + 1; const int GRAP_INDEX = REFER_INDEX + 1; const int CHEM_INDEX = GRAP_INDEX + 1; const int IDEAL_INDEX = CHEM_INDEX + 1; const int PIC_INDEX = IDEAL_INDEX + 1; const int TBL_INDEX = PIC_INDEX + 1; const int GRN_INDEX = TBL_INDEX + 1; const int EQN_INDEX = GRN_INDEX + 1; const int TROFF_INDEX = EQN_INDEX + 1; const int POST_INDEX = TROFF_INDEX + 1; const int SPOOL_INDEX = POST_INDEX + 1; Sure, you could have a piece of paper with the above ordering taped to the wall near your terminal, but why? Isn't it better to have a tool to keep track of these arbitrary complexities instead? groff, as a front-end and pipeline manager, is much smaller than the actual formatter. According to sloccount, it's 1,195 lines to troff's 23,023 (measurements taken on groff Git HEAD, where I spend much of my time). If you need to alter the pipeline or truncate it, to debug an input document or resequence the processing order, you can, and groff supplies the -V flag to help you do so. A traditionalist need never type the groff command if it offends one's sensibilities--it would be a welcome change from people grousing about copyleft. All the pieces of the pipeline are still there and can be directly invoked. 
For an alternative approach to *roff document interpretation and rendering, albeit in a limited domain, see the mandoc project[4]. It interprets the man(7) and mdoc(7) macro languages, a subset of *roff, and tbl(1)'s mini-language with, as I understand it, a single parser. Regards, Branden [1] Tom Duff noted this a long time ago in his paper presenting the rc shell[2]; see §9. [2] https://archive.org/details/rc-shell/page/n2/mode/1up [3] https://minnie.tuhs.org/cgi-bin/utree.pl?file=V7/usr/man/man1/tbl.1 [4] https://mandoc.bsd.lv/ -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 833 bytes Desc: not available URL: From drsalists at gmail.com Mon Feb 22 14:32:27 2021 From: drsalists at gmail.com (Dan Stromberg) Date: Sun, 21 Feb 2021 20:32:27 -0800 Subject: [TUHS] Proliferation of options is great simplification of pipes, really? In-Reply-To: <20210222033217.dkqavclp22sa77ln@localhost.localdomain> References: <20210222033217.dkqavclp22sa77ln@localhost.localdomain> Message-ID: On Sun, Feb 21, 2021 at 7:33 PM G. Branden Robinson > > The groff program allows to control the whole groff system by command > > line options. This is a great simplification in comparison to the > > classical case (which uses pipes only). > > What strikes _me_ about the above is the awful Denglish in it. I fixed > this back in 2017 and the correction shipped as part of groff 1.22.4 in > December 2018. > I like the easy composability of pipes, but I don't mind some options. I don't like the huge, all-purpose applications called web browsers nearly as much. They strike me as Very un-unixy. But much can be justified by not having to get users to download a client application, and not having to get sysadmins to punch a hole through their firewalls. 
I'd say it's neither, and reflects (1) the limitations of the Unix > filter model, or at least the linear topology of Unix pipelines[1] > I don't think they have to be linear: http://joeyh.name/code/moreutils/ (see the unfortunately-named "pee" utility) and: https://stromberg.dnsalias.org/~strombrg/mtee.html Full disclosure: I wrote mtee. -------------- next part -------------- An HTML attachment was scrubbed... URL: From rly1 at embarqmail.com Mon Feb 22 14:13:49 2021 From: rly1 at embarqmail.com (Ron Young) Date: Sun, 21 Feb 2021 20:13:49 -0800 Subject: [TUHS] UNSW batch availability (was Re: Abstractions) Message-ID: Hi:     I've been following the discussion on abstractions and the recent messages have been talking about a ei200 batch driver (ei.c: https://minnie.tuhs.org/cgi-bin/utree.pl?file=AUSAM/sys/dmr/ei.c). I have access to DtCyber (CDC Cyber emulator) that runs all/most of the cdc operating system. I'm toying with the idea of getting ei200 running. In looking at things, I ran across the following in https://minnie.tuhs.org/cgi-bin/utree.pl?file=AUSAM/READ_ME > The UNSW batch system has not been provided with this > distribution, because of its limited appeal. > If you are unfortunate enough to have a CYBER to talk to, > please contact us and we will forward it to you. Does anyone happen to know if the batch system is still around? thanks -ron -------------- next part -------------- An HTML attachment was scrubbed... URL: From will.senn at gmail.com Mon Feb 22 14:34:10 2021 From: will.senn at gmail.com (Will Senn) Date: Sun, 21 Feb 2021 22:34:10 -0600 Subject: [TUHS] Proliferation of options is great simplification of pipes, really? In-Reply-To: <20210222033217.dkqavclp22sa77ln@localhost.localdomain> References: <20210222033217.dkqavclp22sa77ln@localhost.localdomain> Message-ID: <6621ba33-1a45-b074-a3ef-26671360d949@gmail.com> On 2/21/21 9:32 PM, G. 
Branden Robinson wrote: > At 2021-02-21T20:34:55-0600, Will Senn wrote: >> All, >> >> So, we've been talking low-level design for a while. I thought I would >> ask a fundamental question. In days of old, we built small >> single-purpose utilities and used pipes to pipeline the data and >> transformations. Even back in the day, it seemed that there was >> tension to add yet another option to every utility. Today, as I was >> marveling at groff's abilities with regard to printing my man pages >> directly to my printer in 2021, I read the groff(1) page: >> >> example here: https://linux.die.net/man/1/groff > A more up to date copy is available at the Linux man-pages site. > > https://man7.org/linux/man-pages/man1/groff.1.html I just picked the first hit in google :) shoulda known better. However, it's the same text that's in my mac's install (Mojave). > >> What struck me (the wrong way) was the second paragraph of the >> description: >> >> The groff program allows to control the whole groff system by command >> line options. This is a great simplification in comparison to the >> classical case (which uses pipes only). > What strikes _me_ about the above is the awful Denglish in it. I fixed > this back in 2017 and the correction shipped as part of groff 1.22.4 in > December 2018. Mac Mojave: Groff Version 1.19.2              3 July 2005                         GROFF(1) >> Here is the current plethora of options: >> groff [-abcegilpstzCEGNRSUVXZ] [-d cs] [-f fam] [-F dir] [-I dir] [-L arg] >> [-m name] [-M dir] [-n num] [-o list] [-P arg] [-r cn] [-T dev] [-w name] >> [-W name] [file ...] >> >> Now, I appreciate groff, don't get me wrong, but my sensibilities were >> offended by the idea that a kazillion options was in any way simpler >> than pipelining single-purpose utilities. What say you? Is this the >> perfected logical extension of the unix pioneers' work, or have we >> gone horribly off the trail. 
> I'd say it's neither, and reflects (1) the limitations of the Unix > filter model, or at least the linear topology of Unix pipelines[1]; and > (2) an arbitrary set of rules determined by convention and common > practice with respect to sequencing. snip... Very informative post, Branden. I appreciate the details. I gotta read more code :). Will From g.branden.robinson at gmail.com Mon Feb 22 15:45:01 2021 From: g.branden.robinson at gmail.com (G. Branden Robinson) Date: Mon, 22 Feb 2021 16:45:01 +1100 Subject: [TUHS] Proliferation of options is great simplification of pipes, really? In-Reply-To: <6621ba33-1a45-b074-a3ef-26671360d949@gmail.com> References: <20210222033217.dkqavclp22sa77ln@localhost.localdomain> <6621ba33-1a45-b074-a3ef-26671360d949@gmail.com> Message-ID: <20210222054459.atcfnojgkbm37hra@localhost.localdomain> At 2021-02-21T22:34:10-0600, Will Senn wrote: > On 2/21/21 9:32 PM, G. Branden Robinson wrote: > > What strikes _me_ about the above is the awful Denglish in it. I > > fixed this back in 2017 and the correction shipped as part of groff > > 1.22.4 in December 2018. > Mac Mojave: Groff Version 1.19.2              3 July > 2005                         GROFF(1) Yikes. Yeah, every once in a while a macOS user reports a known defect to the groff list, one we've fixed years ago. Apple's insistence on shipping a 15-year old version is pretty frustrating. I'm given to understand that "brew" can be used straightforwardly to obtain much more recent groff builds, and I know for sure that we have macOS users contributing reports when something in the toolchain goes wrong and we need to accommodate it. Here's a recent example[1]. I've been soliciting help from Windows users to keep our build in good shape over there, to no effect lately. This may have something to do with Microsoft's latest Unix compatibility effort being a bundled Ubuntu distribution--I don't know the details. 
It may be that going forward there will simply be no audience for "native" Windows support in groff. > Very informative post, Branden. I appreciate the details. I gotta read > more code :). Thank you! TUHS has been a tremendously useful resource in helping me to document where things came from, as well as to figure out when some element of surprising behavior is just a bug versus a historical compatibility feature. V9 sources sure would be nice to have, as would DWB versions other than 3.3... [1] https://savannah.gnu.org/bugs/?60035 -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 833 bytes Desc: not available URL: From rtomek at ceti.pl Mon Feb 22 15:57:17 2021 From: rtomek at ceti.pl (Tomasz Rola) Date: Mon, 22 Feb 2021 06:57:17 +0100 Subject: [TUHS] cut, paste, join, etc. In-Reply-To: <55d60220-c22d-c99f-f40c-68a741183213@gmail.com> References: <26484818-2f05-37d3-adff-6e34d383e117@gmail.com> <399f2cdc-d790-c4fe-18e3-0cb6b4c76554@spamtrap.tnetconsulting.net> <55d60220-c22d-c99f-f40c-68a741183213@gmail.com> Message-ID: <20210222055717.GA28147@tau1.ceti.pl> On Tue, Feb 16, 2021 at 08:26:11PM -0600, Will Senn wrote: [...] > Oops. That's right, no username & password, but you still need to > bring it up and interact with it... except, as you say, you can > enter your sql as an argument to the executable. OK, I suppose ... > grump, grump... Not quite what I was thinking, but I'd be hard > pressed to argue the difference between creating a handful of files > in the filesystem (vs tables in sqlite) and then using some unix > filter utilities to access and combine the file relations (vs > passing sql to sqlite) other than, it'd be fun if there were select, > col, row (grep?), join (inner, outer, natural), utils that worked > with text without the need to worry about the finickiness of the > database (don't stone me as a database unbeliever, I've used plenty > in my day).
I am not sure if this is what you are looking for, but sections 3 and 4 of "The AWK Programming Language" (by Aho, Kernighan and Weinberger) have a description of very nice data processing scripts written in AWK. Might even work in gawk. Might even work, actually - I had no time to write the code into files and give it a try. Personally, I would rather use awk for this than multiple command-line utilities. It might also be a bit nicer to a modern system with process accounting enabled (I once wrote a shell script processing mailbox files, plenty of echos and greps, but since then have seen the light and I repented). On the other hand, on a multiprocessor computer, each part of a pipe runs in parallel, but I guess this had been said already. Also, found this in my notes - if you, or anybody from the future, would like a quick glimpse of "what awk": :: Drinking coffee with AWK https://lobste.rs/s/hdljia/drinking_coffee_with_awk https://opensource.com/article/19/2/drinking-coffee-awk :: Using AWK and R to parse 25tb https://lobste.rs/s/kgah5l/using_awk_r_parse_25tb https://livefreeordichotomize.com/2019/06/04/using_awk_and_r_to_parse_25tb/ -- Regards, Tomasz Rola -- ** A C programmer asked whether computer had Buddha's nature. ** ** As the answer, master did "rm -rif" on the programmer's home ** ** directory. And then the C programmer became enlightened... ** ** ** ** Tomasz Rola mailto:tomasz_rola at bigfoot.com ** From rdm at cfcl.com Mon Feb 22 17:20:26 2021 From: rdm at cfcl.com (Rich Morin) Date: Sun, 21 Feb 2021 23:20:26 -0800 Subject: [TUHS] Proliferation of options is great simplification of pipes, really? In-Reply-To: References: Message-ID: <501CBA6E-B242-4B46-8779-DAE4AB0A9EB5@cfcl.com> I've been happily using pipes since I found out about pipes, back in the early 80's (Thanks, Doug!). However, until recently I didn't write applications in a programming language which supported them "internally".
Recently, however, I've been using Elixir, which does: Pipe Operator https://elixirschool.com/en/lessons/basics/pipe-operator/ Note that, although the basic pipe implementation simply does composition of functions with error handling, the Stream variant offers lazy evaluation: Enumerables and Streams https://elixir-lang.org/getting-started/enumerables-and-streams.html Yes, I know that F# (and probably other languages) had pipes first, but I still give points to José Valim for stealing wisely and well. Various folks then built onto the basic pipe mechanism, e.g.: - https://github.com/batate/elixir-pipes - extension library for using pattern matching with pipes, etc. - https://hexdocs.pm/broadway/Broadway.html - concurrent, multi-stage tool for building data ingestion and data processing pipelines with back pressure, etc. fun stuff... -r From jpl.jpl at gmail.com Tue Feb 23 01:49:52 2021 From: jpl.jpl at gmail.com (John P. Linderman) Date: Mon, 22 Feb 2021 10:49:52 -0500 Subject: [TUHS] Proliferation of options is great simplification of pipes, really? In-Reply-To: <20210222033217.dkqavclp22sa77ln@localhost.localdomain> References: <20210222033217.dkqavclp22sa77ln@localhost.localdomain> Message-ID: I can imagine a simple perl (or python or whatever) script that would run through groff input, determine which preprocessors are *actually* needed, and set up a pipeline to run through (only) the needed preprocessors in the proper order. I wouldn't have to tell groff what preprocessors I think are needed, and groff wouldn't have to change (although my script would) when another preprocessor comes into existence. Modern processors are fast enough, and groff input small enough, that the "extra" pass wouldn't be burdensome. And it would take the burden off me to remember exactly which preprocessors are essential. -- jpl -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From imp at bsdimp.com Tue Feb 23 02:02:40 2021 From: imp at bsdimp.com (Warner Losh) Date: Mon, 22 Feb 2021 09:02:40 -0700 Subject: [TUHS] Proliferation of options is great simplification of pipes, really? In-Reply-To: References: <20210222033217.dkqavclp22sa77ln@localhost.localdomain> Message-ID: On Mon, Feb 22, 2021 at 8:50 AM John P. Linderman wrote: > I can imagine a simple perl (or python or whatever) script that would run > through groff input, determine which preprocessors are *actually* needed, > and set up a pipeline to run through (only) the needed preprocessors in the > proper order. I wouldn't have to tell groff what preprocessors I think are > needed, and groff wouldn't have to change (although my script would) when > another preprocessor comes into existence. Modern processors are fast > enough, and groff input small enough, that the "extra" pass wouldn't be > burdensome. And it would take the burden off me to remember exactly which > preprocessors are essential. -- jpl > Yea, that's the main benefit of extra flags to commands: you can optimize the number of filters that data passes through, or you can do things with 'hidden state' that's hard to do in another phase of the output. ls is a good example. ls -lt is relatively easy to do the sorting of times and the formatting of times inside ls, but harder to do as a filter since times are hard to sort... Warner -------------- next part -------------- An HTML attachment was scrubbed... URL: From jpl.jpl at gmail.com Tue Feb 23 02:03:43 2021 From: jpl.jpl at gmail.com (John P. Linderman) Date: Mon, 22 Feb 2021 11:03:43 -0500 Subject: [TUHS] Proliferation of options is great simplification of pipes, really? In-Reply-To: References: <20210222033217.dkqavclp22sa77ln@localhost.localdomain> Message-ID: On a brute-forcier note, when I was doing a lot more troff, I wrote a command that ran input through *ALL* the preprocessors I might need. 
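Such a command might have looked something like the following sketch (a reconstruction of the idea, not the original; the -ms macro package is an assumption, and the order follows the usual soelim/refer/pic/tbl/eqn convention):

```shell
# troffall: brute-force preprocessing -- run the input through every
# classical preprocessor whether the document needs it or not, then
# format it.  Order matters: tbl before eqn, since tables may contain
# equations.
troffall() {
    soelim "$@" | refer | pic | tbl | eqn | troff -ms
}
```

With modern groff the same brute-force effect is available in a single command as `groff -Rpet -ms`, as comes up later in the thread.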
Even on 70's processors, it was fast enough, and made my life a tiny bit better. -- jpl On Mon, Feb 22, 2021 at 10:57 AM William Cheswick wrote: > This proposal reminds me of Paul Glick’s lp command. It took whatever > file you gave it, and processed > it as necessary for whatever printer you chose. It was very useful, > simple AI. > > > On Feb 22, 2021, at 10:49 AM, John P. Linderman > wrote: > > > > I can imagine a simple perl (or python or whatever) script that would > run through groff input, determine which preprocessors are actually needed, > and set up a pipeline to run through (only) the needed preprocessors in the > proper order. I wouldn't have to tell groff what preprocessors I think are > needed, and groff wouldn't have to change (although my script would) when > another preprocessor comes into existence. Modern processors are fast > enough, and groff input small enough, that the "extra" pass wouldn't be > burdensome. And it would take the burden off me to remember exactly which > preprocessors are essential. -- jpl > > -------------- next part -------------- An HTML attachment was scrubbed...
I wouldn't have to tell groff what preprocessors I think are needed, and groff wouldn't have to change (although my script would) when another preprocessor comes into existence. Modern processors are fast enough, and groff input small enough, that the "extra" pass wouldn't be burdensome. And it would take the burden off me to remember exactly which preprocessors are essential. -- jpl From fuz at fuz.su Tue Feb 23 02:12:14 2021 From: fuz at fuz.su (Robert Clausecker) Date: Mon, 22 Feb 2021 17:12:14 +0100 Subject: [TUHS] Proliferation of options is great simplification of pipes, really? In-Reply-To: References: <20210222033217.dkqavclp22sa77ln@localhost.localdomain> Message-ID: > I can imagine a simple perl (or python or whatever) script that would run > through groff input, determine which preprocessors are *actually* needed, > and set up a pipeline to run through (only) the needed preprocessors in the > proper order. I wouldn't have to tell groff what preprocessors I think are > needed, and groff wouldn't have to change (although my script would) when > another preprocessor comes into existence. Modern processors are fast > enough, and groff input small enough, that the "extra" pass wouldn't be > burdensome. And it would take the burden off me to remember exactly which > preprocessors are essential. -- jpl I'm not sure if it would be that simple. With preprocessors like soelim, your script would have to be able to open arbitrary external files to find out what preprocessors are needed. And perhaps other preprocessors too could trigger dependencies on additional preprocessors depending on how they are used. 
Yours, Robert Clausecker -- () ascii ribbon campaign - for an 8-bit clean world /\ - against html email - against proprietary attachments From jay-tuhs9915 at toaster.com Tue Feb 23 02:41:17 2021 From: jay-tuhs9915 at toaster.com (Jay Logue) Date: Mon, 22 Feb 2021 08:41:17 -0800 Subject: [TUHS] retro-fuse project Message-ID: <20210222164738.7381E93D39@minnie.tuhs.org> Lately, I've been playing around in v6 unix and mini-unix with a goal of better understanding how things work and maybe doing a little hacking.  As my fooling around progressed, it became clear that moving files into and out of the v6 unix world was a bit tedious.  So it occurred to me that having a way to mount a v6 filesystem under linux or another modern unix would be kind of ideal.  At the same time it also occurred to me that writing such a tool would be a great way to sink my teeth into the details of old Unix code. I am aware of Amit Singh's ancientfs tool for osxfuse, which implements a user-space v6 filesystem (among other things) for MacOS.  However, being read-only, it's not particularly useful for my problem.  So I set out to create my own FUSE-based filesystem capable of both reading and writing v6 disk images.  The result is a project I call retro-fuse, which is now up on github for anyone to enjoy (https://github.com/jaylogue/retro-fuse). A novel (or perhaps just peculiar) feature of retro-fuse is that, rather than being a wholesale re-implementation of the v6 filesystem, it incorporates the actual v6 kernel code itself, "lightly" modernized to work with current compilers, and reconfigured to run as a Unix process.  Most of the file-handling code of the kernel is there, down to a trivial block device driver that reflects I/O into the host OS.  There's also a filesystem initialization feature that incorporates code from the original mkfs tool. Currently, retro-fuse only works on linux. But once I get access to my mac again in a couple weeks, I'll port it to MacOS as well.  
I also hope to expand it to support other filesystems as well, such as v7 or the early BSDs, but we'll see when that happens. As I expected, this was a fun and very educational project to work on.  It forced me to really understand what was going on in the kernel (and to really pay attention to what Lions was saying).  It also gave me a little view into what it was like to work on Unix back in the day.  Hopefully someone else will find my little self-education project useful as well. --Jay From will.senn at gmail.com Tue Feb 23 03:10:29 2021 From: will.senn at gmail.com (Will Senn) Date: Mon, 22 Feb 2021 11:10:29 -0600 Subject: [TUHS] retro-fuse project In-Reply-To: <20210222164738.7381E93D39@minnie.tuhs.org> References: <20210222164738.7381E93D39@minnie.tuhs.org> Message-ID: <07665269-ef0d-ca9a-ecfa-cb68e89bbf4b@gmail.com> On 2/22/21 10:41 AM, Jay Logue via TUHS wrote: > Lately, I've been playing around in v6 unix and mini-unix with a goal > of better understanding how things work and maybe doing a little > hacking. As my fooling around progressed, it became clear that moving > files into and out of the v6 unix world was a bit tedious.  So it > occurred to me that having a way to mount a v6 filesystem under linux > or another modern unix would be kind of ideal.  At the same time it > also occurred to me that writing such a tool would be a great way to > sink my teeth into the details of old Unix code. > ... > As I expected, this was a fun and very educational project to work > on.  It forced me to really understand what was going on in the kernel > (and to really pay attention to what Lions was saying).  It also gave > me a little view into what it was like to work on Unix back in the > day.  Hopefully someone else will find my little self-education > project useful as well. > > --Jay > Yay! I for one, will appreciate this! Will -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From cowan at ccil.org Tue Feb 23 03:15:52 2021 From: cowan at ccil.org (John Cowan) Date: Mon, 22 Feb 2021 12:15:52 -0500 Subject: [TUHS] Proliferation of options is great simplification of pipes, really? In-Reply-To: References: <20210222033217.dkqavclp22sa77ln@localhost.localdomain> Message-ID: On Mon, Feb 22, 2021 at 11:17 AM Robert Clausecker wrote: > I'm not sure if it would be that simple. With preprocessors like > soelim, your script would have to be able to open arbitrary external > files to find out what preprocessors are needed. And perhaps other > preprocessors too could trigger dependencies on additional > preprocessors depending on how they are used. > True enough, but you'd be no better off in principle if you were using an explicit pipeline, especially if you are sourcing files that you didn't write: your knowledge of the content may be vague. John Cowan http://vrici.lojban.org/~cowan cowan at ccil.org Heckler: "Go on, Al, tell 'em all you know. It won't take long." Al Smith: "I'll tell 'em all we *both* know. It won't take any longer." -------------- next part -------------- An HTML attachment was scrubbed... URL: From jon at fourwinds.com Tue Feb 23 04:27:29 2021 From: jon at fourwinds.com (Jon Steinhart) Date: Mon, 22 Feb 2021 10:27:29 -0800 Subject: [TUHS] Proliferation of options is great simplification of pipes, really? In-Reply-To: References: Message-ID: <202102221827.11MIRTKQ4069723@darkstar.fourwinds.com> Will Senn writes: > This is a multi-part message in MIME format. > > All, > > So, we've been talking low-level design for a while. I thought I would > ask a fundamental question. In days of old, we built small > single-purpose utilities and used pipes to pipeline the data and > transformations. Even back in the day, it seemed that there was tension > to add yet another option to every utility. 
Today, as I was marveling at > groff's abilities with regard to printing my man pages directly to my > printer in 2021, I read the groff(1) page: > > example here: https://linux.die.net/man/1/groff > > What struck me (the wrong way) was the second paragraph of the description: > > The groff program allows to control the whole groff system by command > line options. This is a great simplification in comparison to the > classical case (which uses pipes only). > > Here is the current plethora of options: > groff [-abcegilpstzCEGNRSUVXZ] [-d cs] [-f fam] [-F dir] [-I dir] [-L > arg] [-m name] [-M dir] [-n num] [-o list] [-P arg] [-r cn] [-T dev] [-w > name] [-W name] [file ...] > > Now, I appreciate groff, don't get me wrong, but my sensibilities were > offended by the idea that a kazillion options was in any way simpler > than pipelining single-purpose utilities. What say you? Is this the > perfected logical extension of the unix pioneers' work, or have we gone > horribly off the trail. > > Regards, > > Will I'm 99% happy with groff and its many options. Why? Because the various programs (troff, pic, tbl, eqn, ...) are still available and can be composed into pipelines of my own choosing. The 1% unhappiness is because I think that groff should be a shell script which it doesn't appear to be. In my opinion, if groff was a bad thing then one would have to question things like scripts and aliases in general. Groff is a composer, and composability is a core UNIXism to me. It would be way wrong if it replaced all of the programs that it invoked, but it doesn't. As an interesting example of the composability of the troff system, I did the diagrams for my book using pic because pic is awesome. But, despite what it says in the No Starch Press author guidelines, they really only accept material in word format. I could have rendered each image as a bitmap, but that just seemed so 80s. 
Turns out that while it doesn't do a great job, word will accept vector graphics in SVG format. So I ran each image through pic, through groff, through ps2pdf (embedding fonts), through pdf2svg, and finally through inkscape to crop the image. A tad cumbersome, but it works, and wouldn't be easy to do on any other system. I also did my original draft in troff and wrote a script to convert it into openoffice XML format so that it could be word-ified. Only part that I couldn't figure out was how to include the figures; I could generate the XML but it didn't work and there were no useful diagnostics so I had to import them by hand. Since Rob is on the list and (in)famous for the "cat -v" argument, I would agree with him that that is not the "right" way. Being consistent with my position on groff, I would go for a separate show-nonprinting utility and then, if widely used, a script that composed that and cat. Jon From rich.salz at gmail.com Tue Feb 23 05:30:41 2021 From: rich.salz at gmail.com (Richard Salz) Date: Mon, 22 Feb 2021 14:30:41 -0500 Subject: [TUHS] Proliferation of options is great simplification of pipes, really? In-Reply-To: <202102221827.11MIRTKQ4069723@darkstar.fourwinds.com> References: <202102221827.11MIRTKQ4069723@darkstar.fourwinds.com> Message-ID: Is anyone upset with CC and its options and internal pipelines? -------------- next part -------------- An HTML attachment was scrubbed... URL: From robpike at gmail.com Tue Feb 23 06:13:59 2021 From: robpike at gmail.com (Rob Pike) Date: Tue, 23 Feb 2021 07:13:59 +1100 Subject: [TUHS] retro-fuse project In-Reply-To: <07665269-ef0d-ca9a-ecfa-cb68e89bbf4b@gmail.com> References: <20210222164738.7381E93D39@minnie.tuhs.org> <07665269-ef0d-ca9a-ecfa-cb68e89bbf4b@gmail.com> Message-ID: Please let us know how you go with the Macs. The system interfaces have become more refractory lately, with virtual file systems a particular concern. 
-rob On Tue, Feb 23, 2021 at 4:11 AM Will Senn wrote: > On 2/22/21 10:41 AM, Jay Logue via TUHS wrote: > > Lately, I've been playing around in v6 unix and mini-unix with a goal of > better understanding how things work and maybe doing a little hacking. As > my fooling around progressed, it became clear that moving files into and > out of the v6 unix world was a bit tedious. So it occurred to me that > having a way to mount a v6 filesystem under linux or another modern unix > would be kind of ideal. At the same time it also occurred to me that > writing such a tool would be a great way to sink my teeth into the details > of old Unix code. > ... > As I expected, this was a fun and very educational project to work on. It > forced me to really understand what was going in the kernel (and to really > pay attention to what Lions was saying). It also gave me a little view > into what it was like to work on Unix back in the day. Hopefully someone > else will find my little self-education project useful as well. > > --Jay > > Yay! I for one, will appreciate this! > > Will > -------------- next part -------------- An HTML attachment was scrubbed... URL: From ality at pbrane.org Tue Feb 23 06:40:24 2021 From: ality at pbrane.org (Anthony Martin) Date: Mon, 22 Feb 2021 12:40:24 -0800 Subject: [TUHS] retro-fuse project In-Reply-To: <20210222164738.7381E93D39@minnie.tuhs.org> References: <20210222164738.7381E93D39@minnie.tuhs.org> Message-ID: On Plan 9: http://9p.io/magic/man2html/4/tapefs On Unix: https://9fans.github.io/plan9port/man/man4/tapefs.html Cheers, Anthony From g.branden.robinson at gmail.com Tue Feb 23 07:14:29 2021 From: g.branden.robinson at gmail.com (G. Branden Robinson) Date: Tue, 23 Feb 2021 08:14:29 +1100 Subject: [TUHS] Proliferation of options is great simplification of pipes, really? 
In-Reply-To: References: <20210222033217.dkqavclp22sa77ln@localhost.localdomain> Message-ID: <20210222211427.dpdkjxv72ojnmpuu@localhost.localdomain> Hi John, At 2021-02-22T10:49:52-0500, John P. Linderman wrote: > I can imagine a simple perl (or python or whatever) script that would > run through groff input, determine which preprocessors are *actually* > needed, and set up a pipeline to run through (only) the needed > preprocessors in the proper order. This is _almost_ what the groff grog(1) command does. It's been present as far back as our history goes, to groff 1.02 in June 1991. * It's a Perl script. * It uses pattern-matching heuristics to infer which arguments groff(1) will need to format the document (not just for preprocessors, but macro packages as well). * Depending on its own options, it writes the constructed command to stderr, executes it, or both. The only thing it doesn't handle is ordering, because groff(1) already takes care of that. > I wouldn't have to tell groff what preprocessors I think are needed, > and groff wouldn't have to change (although my script would) when > another preprocessor comes into existence. Modern processors are fast > enough, and groff input small enough, that the "extra" pass wouldn't > be burdensome. And it would take the burden off me to remember exactly > which preprocessors are essential. -- jpl We don't get a lot of bug reports about grog. Maybe it's not given enough prominence in groff's own documentation. Regards, Branden -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 833 bytes Desc: not available URL: From g.branden.robinson at gmail.com Tue Feb 23 07:16:32 2021 From: g.branden.robinson at gmail.com (G. Branden Robinson) Date: Tue, 23 Feb 2021 08:16:32 +1100 Subject: [TUHS] Proliferation of options is great simplification of pipes, really? 
In-Reply-To: References: <20210222033217.dkqavclp22sa77ln@localhost.localdomain> Message-ID: <20210222211631.bbf7as6r76kxoowt@localhost.localdomain> At 2021-02-22T11:03:43-0500, John P. Linderman wrote: > On a brute-forcier note, when I was doing a lot more troff, I wrote a > command that ran input through *ALL* the preprocessors I might need. > Even on 70's processors, it was fast enough, and made my life a tiny > bit better. -- jpl No crime in that. An alias or shell function to call "groff -Rpet" would take care of all the V7 Unix preprocessors. Regards, Branden -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 833 bytes Desc: not available URL: From steffen at sdaoden.eu Tue Feb 23 10:24:41 2021 From: steffen at sdaoden.eu (Steffen Nurpmeso) Date: Tue, 23 Feb 2021 01:24:41 +0100 Subject: [TUHS] Proliferation of options is great simplification of pipes, really? In-Reply-To: References: <20210222033217.dkqavclp22sa77ln@localhost.localdomain> Message-ID: <20210223002441.a8dF2%steffen@sdaoden.eu> Robert Clausecker wrote in : |> I can imagine a simple perl (or python or whatever) script that would run |> through groff input, determine which preprocessors are *actually* needed, |> and set up a pipeline to run through (only) the needed preprocessors \ |> in the |> proper order. I wouldn't have to tell groff what preprocessors I think are |> needed, and groff wouldn't have to change (although my script would) when |> another preprocessor comes into existence. Modern processors are fast |> enough, and groff input small enough, that the "extra" pass wouldn't be |> burdensome. And it would take the burden off me to remember exactly which |> preprocessors are essential. -- jpl | |I'm not sure if it would be that simple. With preprocessors like |soelim, your script would have to be able to open arbitrary external |files to find out what preprocessors are needed.
And perhaps other |preprocessors too could trigger dependencies on additional |preprocessors depending on how they are used. Newer incarnations of man(1) support a shebang-alike control line <^'\" >followed by concat of [egprtv]+ and include $MANROFFSEQ content into this list, then do case "${preproc_arg}" in e) pipeline="$pipeline | $EQN" ;; g) GRAP ;; # Ignore for compatibility. p) pipeline="$pipeline | $PIC" ;; r) pipeline="$pipeline | $REFER" ;; t) pipeline="$pipeline | $TBL" ;; v) pipeline="$pipeline | $VGRIND" ;; *) usage ;; esac (I copied all this from 2014 text, do not ask me no questions.) It would make very much sense to extend this syntax for roff usage, so that document creators can define how manual consumers generate the result. This should/could include specification and thus automatic adjustment of the used character set. The problem with pipes is that they are academic. You can write wrapper scripts or shell functions or for simple cases even aliases to give the desire a name, but it does not fit pretty well the all-graphical shader-improved wiping experience people now have. You also want good manuals and a shell with a good history feature and a nice line editor and possibly tabulator completion, just in case you have forgotten something or made an error or are too lazy to type that much. --steffen | |Der Kragenbaer, The moon bear, |der holt sich munter he cheerfully and one by one |einen nach dem anderen runter wa.ks himself off |(By Robert Gernhardt) From wobblygong at gmail.com Tue Feb 23 10:25:51 2021 From: wobblygong at gmail.com (Wesley Parish) Date: Tue, 23 Feb 2021 13:25:51 +1300 Subject: [TUHS] Abstractions In-Reply-To: References: Message-ID: I've just checked Slackware 14.* and it's still got a few binaries in /bin, unlike the RedHat* group which has indeed sent them all to /usr/bin. I don't know about the Debian* group, or if the Mandrake* group have gone with the RedHat* or not. Let alone all the other distros. 
Wesley Parish On 2/22/21, Dave Horsfall wrote: > On Sat, 20 Feb 2021, M Douglas McIlroy wrote: > >>> - separation of code and data using read-only and read/write file >>> systems >> >> I'll bite. How do you install code in a read-only file system? And where >> does a.out go? > > I once worked for a place who reckoned that /bin and /lib etc ought to be > in an EEPROM; I reckon that he was right (Penguin/OS dumps everything > under /usr/bin, for example). > >> My guess is that /bin is in a file system of its own. Executables from >> /letc and /lib are probably there too. On the other hand, I guess users' >> personal code is still read/write. > > That's how we ran our RK-05 11/40s since Ed 5... Good fun writing a DJ-11 > driver from the DH-11 source; even more fun when I wrote a UT-200 driver > from the manual alone (I'm sure that "ei.c" is Out There Somewhere), > junking IanJ's driver. > > The war stories that I could tell... > >> I agree that such an arrangement is prudent. I don't see a way, though, >> to update bin without disrupting most running programs. > > Change is inevitable; the trick is to minimise the disruption. > > -- Dave, who carried RK-05s all over the UNSW campus > From usotsuki at buric.co Tue Feb 23 10:38:21 2021 From: usotsuki at buric.co (Steve Nickolas) Date: Mon, 22 Feb 2021 19:38:21 -0500 (EST) Subject: [TUHS] Abstractions In-Reply-To: References: Message-ID: On Tue, 23 Feb 2021, Wesley Parish wrote: > I've just checked Slackware 14.* and it's still got a few binaries in > /bin, unlike the RedHat* group which has indeed sent them all to > /usr/bin. I don't know about the Debian* group, or if the Mandrake* > group have gone with the RedHat* or not. Let alone all the other > distros. Debian links /bin to /usr/bin. -uso. 
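Whether a given installation has been /usr-merged is easy to check directly: on a merged system the classical top-level directories are symlinks into /usr. A quick portable probe (output varies by system, of course):

```shell
# check_usrmerge: report whether the classical top-level directories
# are real directories or symlinks (as on a /usr-merged system).
check_usrmerge() {
    for d in /bin /sbin /lib; do
        if [ -L "$d" ]; then
            printf '%s -> %s\n' "$d" "$(readlink "$d")"
        elif [ -d "$d" ]; then
            printf '%s is a real directory\n' "$d"
        else
            printf '%s does not exist\n' "$d"
        fi
    done
}
check_usrmerge
```

On a merged Debian or Fedora this reports symlinks such as `/bin -> usr/bin`; on a non-merged system like the Slackware releases mentioned above, they are real directories.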
From tytso at mit.edu Tue Feb 23 11:47:49 2021 From: tytso at mit.edu (Theodore Ts'o) Date: Mon, 22 Feb 2021 20:47:49 -0500 Subject: [TUHS] Abstractions In-Reply-To: References: Message-ID: On Tue, Feb 23, 2021 at 01:25:51PM +1300, Wesley Parish wrote: > I've just checked Slackware 14.* and it's still got a few binaries in > /bin, unlike the RedHat* group which has indeed sent them all to > /usr/bin. I don't know about the Debian* group, or if the Mandrake* > group have gone with the RedHat* or not. Let alone all the other > distros. More information about the /usr migration, can be found at: * https://wiki.debian.org/UsrMerge * https://www.freedesktop.org/wiki/Software/systemd/TheCaseForTheUsrMerge/ One of the interesting points made in the above is that merging /bin and /usr/bin, et. al., was first done by Solaris 11 (ten years ago) and so one of the arguments for Linux distributions for proceeding with the /usr merge was to improve cross compatibility with legacy commercial Unix systems. So obviously, like so many other things, it's all Oracle's fault. :-) - Ted From m.douglas.mcilroy at dartmouth.edu Tue Feb 23 12:47:18 2021 From: m.douglas.mcilroy at dartmouth.edu (M Douglas McIlroy) Date: Mon, 22 Feb 2021 21:47:18 -0500 Subject: [TUHS] Proliferation of options is great simplification of pipes, really? Message-ID: > I can imagine a simple perl (or python or whatever) script that would run > through groff input [and] determine which preprocessors are actually > needed ... Brian imagined such and implemented it way back when. Though I used it, I've forgotten its name. One probably could have fooled it by tricks like calling pic only in a .so file and perhaps renaming .so. But I never heard of it failing in real life. It does impose an extra pass over the input, but may well save a pass compared to the defensive groff -pet that I often use or to the rerun necessary when I forget to mention some or all of the filters. 
From tytso at mit.edu Tue Feb 23 12:50:22 2021 From: tytso at mit.edu (Theodore Ts'o) Date: Mon, 22 Feb 2021 21:50:22 -0500 Subject: [TUHS] Abstractions In-Reply-To: References: Message-ID: On Mon, Feb 22, 2021 at 07:38:21PM -0500, Steve Nickolas wrote: > On Tue, 23 Feb 2021, Wesley Parish wrote: > > > I've just checked Slackware 14.* and it's still got a few binaries in > > /bin, unlike the RedHat* group which has indeed sent them all to > > /usr/bin. I don't know about the Debian* group, or if the Mandrake* > > group have gone with the RedHat* or not. Let alone all the other > > distros. > > Debian links /bin to /usr/bin. New installs of Debian will use a /usr merged configuration. However, for pre-existing installations, we are not yet forcing, or even strongly recommending, system administrators to install the usrmerge package which will transition a legacy directory hierarchy to be /usr merged. So at the moment, Debian packages need to support both merged and non-merged configurations, which is not ideal from a package maintainer's POV. - Ted From imp at bsdimp.com Tue Feb 23 13:19:44 2021 From: imp at bsdimp.com (Warner Losh) Date: Mon, 22 Feb 2021 20:19:44 -0700 Subject: [TUHS] Abstractions In-Reply-To: References: Message-ID: On Mon, Feb 22, 2021, 7:50 PM Theodore Ts'o wrote: > On Mon, Feb 22, 2021 at 07:38:21PM -0500, Steve Nickolas wrote: > > On Tue, 23 Feb 2021, Wesley Parish wrote: > > > > > I've just checked Slackware 14.* and it's still got a few binaries in > > > /bin, unlike the RedHat* group which has indeed sent them all to > > > /usr/bin. I don't know about the Debian* group, or if the Mandrake* > > > group have gone with the RedHat* or not. Let alone all the other > > > distros. > > > > Debian links /bin to /usr/bin. > > New installs of Debian will use a /usr merged configuration.
However, > for pre-existing installations, we are not yet forcing, or even > strongly recommending, system administrators to install the usrmerge > package which will transition a legacy directory hierarchy to be /usr > merged. So at the moment, Debian packages need to support both merged > and non-merged configurations, which is not ideal > I anticipate needing a /usr/bin/bash soon on my FreeBSD system for the same reason I have a /bin/bash pointing at /usr/local/bin/bash. Progress :) Warner > -------------- next part -------------- An HTML attachment was scrubbed... URL: From andreww591 at gmail.com Tue Feb 23 13:31:49 2021 From: andreww591 at gmail.com (Andrew Warkentin) Date: Mon, 22 Feb 2021 20:31:49 -0700 Subject: [TUHS] Abstractions In-Reply-To: References: Message-ID: On 2/21/21, Steve Nickolas wrote: > > While I've been stuck regarding bringing up a kernel, C compiler and libc > all together, (keeping in mind my desire to avoid gcc and glibc for the > project) the conceptual distribution I've been working on for some time > uses more or less the same abstraction as the BSDs, with distinct /bin and > /sbin vs. /usr/bin and /usr/sbin as I personally believe it should be, > that the stuff in /bin should be enough to bring up and/or run diagnostics > on a system, and everything else go in /usr. > I don't see much of a point in maintaining the separation these days. /bin and /usr/bin were originally separated because it wasn't possible to fit everything on one disk, and (AFAIK) the separation was mostly maintained after that to reduce the chance of filesystem corruption rendering the system unbootable (which is much less of a problem nowadays because of journalled and log-structured filesystems). Under UX/RT, the OS I'm writing, all commands (administrative or otherwise) will appear to be in /bin, and all daemons will appear to be in /sbin (with corresponding symlinks in /usr).
The separation into administrative and regular commands will be meaningless since the traditional root/non-root security model will be completely eliminated in favor of role-based access control. The / and /usr separation will be useless since it will be impossible to have a separate /usr partition (the contents of the root will be dynamically bound from a collection of individual package directories, and won't correspond to the root of the system volume). From jaapna at xs4all.nl Tue Feb 23 20:42:30 2021 From: jaapna at xs4all.nl (Jaap Akkerhuis) Date: Tue, 23 Feb 2021 11:42:30 +0100 Subject: [TUHS] Proliferation of options is great simplification of pipes, really? In-Reply-To: References: Message-ID: > On Feb 23, 2021, at 3:47, M Douglas McIlroy wrote: > >> I can imagine a simple perl (or python or whatever) script that would run >> through groff input [and] determine which preprocessors are actually >> needed ... > > Brian imagined such and implemented it way back when. Though I used > it, I've forgotten its name. One probably could have fooled it by > tricks like calling pic only in a .so file and perhaps renaming .so. > But I never heard of it failing in real life. It does impose an extra > pass over the input, but may well save a pass compared to the > defensive groff -pet that I often use or to the rerun necessary when I > forget to mention some or all of the filters. If I remember correctly, it was an awk script printing out the suggested pipeline to use. One could then cut and paste that line. jaap -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 267 bytes Desc: Message signed with OpenPGP URL: From brantley at coraid.com Tue Feb 23 23:23:59 2021 From: brantley at coraid.com (Brantley Coile) Date: Tue, 23 Feb 2021 13:23:59 +0000 Subject: [TUHS] Proliferation of options is great simplification of pipes, really? 
In-Reply-To: References: Message-ID: <47E2CC0C-83C7-4C49-B80D-5510F50B7655@coraid.com>

#!/bin/rc
# doctype: synthesize proper command line for troff
troff=troff
eqn=eqn
prefer=prefer
opt=''
dev=''
while(~ $1 -*){
	switch($1){
	case -n; troff=nroff eqn=neqn prefer='prefer -n'
	case -T
		dev=$1
	case -*
		opt=$opt' $1'
	}
	shift
}
ifs='
'{ files=`{echo $*} }
grep -h '\$LIST|\|reference|Jp|^\.(EQ|TS|\[|PS|IS|GS|G1|GD|PP|BM|LP|BP|PI|cstart|begin|TH...|TI)|^\.P$' $* |
sort -u |
awk '
BEGIN { files = "'$"files'" }
/\$LIST/ { e++ }
/^\.PP/ { ms++ }
/^\.LP/ { ms++ }
/^\.EQ/ { eqn++ }
/^\.TS/ { tbl++ }
/^\.PS/ { pic++ }
/^\.IS/ { ideal++ }
/^\.GS/ { tped++ }
/^\.G1/ { grap++; pic++ }
/^\.GD/ { dag++; pic++ }
/^\.\[/ { refer++ }
/\|reference/ { prefer++ }
/^\.cstart/ { chem++; pic++ }
/^\.begin +dformat/ { dformat++; pic++ }
/^\.TH.../ { man++ }
/^\.BM/ { lbits++ }
/^\.P$/ { mm++ }
/^\.BP/ { pictures++ }
/^\.PI/ { pictures++ }
/^\.TI/ { mcs++ }
/^\.ft *Jp|\\f\(Jp/ { nihongo++ }
END {
	x = ""
	if (refer) {
		if (e) x = "refer/refer -e " files " | "
		else x = "refer/refer " files "| "
		files = ""
	}
	else if (prefer) { x = "cat " files "| '$prefer'| "; files = "" }
	if (tped) { x = x "tped " files " | "; files = "" }
	if (dag) { x = x "dag " files " | "; files = "" }
	if (ideal) { x = x "ideal -q " files " | "; files = "" }
	if (grap) { x = x "grap " files " | "; files = "" }
	if (chem) { x = x "chem " files " | "; files = "" }
	if (dformat) { x = x "dformat " files " | "; files = "" }
	if (pic) { x = x "pic " files " | "; files = "" }
	if (tbl) { x = x "tbl " files " | "; files = "" }
	if (eqn) { x = x "'$eqn' '$dev' " files " | "; files = "" }
	x = x "'$troff' "
	if (man) x = x "-man"
	else if (ms) x = x "-ms"
	else if (mm) x = x "-mm"
	if (mcs) x = x " -mcs"
	if (lbits) x = x " -mbits"
	if (pictures) x = x " -mpictures"
	if (nihongo) x = x " -mnihongo"
	x = x " '$opt' '$dev' " files
	print x
}'

> On Feb 23, 2021, at 5:42 AM, Jaap Akkerhuis wrote: > > > >> On Feb 23, 2021, at 3:47, M Douglas McIlroy wrote: >>
>>> I can imagine a simple perl (or python or whatever) script that would run >>> through groff input [and] determine which preprocessors are actually >>> needed ... >> >> Brian imagined such and implemented it way back when. Though I used >> it, I've forgotten its name. One probably could have fooled it by >> tricks like calling pic only in a .so file and perhaps renaming .so. >> But I never heard of it failing in real life. It does impose an extra >> pass over the input, but may well save a pass compared to the >> defensive groff -pet that I often use or to the rerun necessary when I >> forget to mention some or all of the filters. > > > If I remember correctly, it was an awk script printing out the > suggested pipeline to use. One could then cut and paste that line. > > jaap From ralph at inputplus.co.uk Tue Feb 23 23:49:59 2021 From: ralph at inputplus.co.uk (Ralph Corderoy) Date: Tue, 23 Feb 2021 13:49:59 +0000 Subject: [TUHS] Proliferation of options is great simplification of pipes, really? In-Reply-To: References: Message-ID: <20210223134959.0EEC2219D9@orac.inputplus.co.uk> Hi Doug, > > I can imagine a simple perl (or python or whatever) script that > > would run through groff input [and] determine which preprocessors > > are actually needed ... > > Brian imagined such and implemented it way back when. Though I used > it, I've forgotten its name. Was it ‘doctype’? That's what it's called in Kernighan & Pike's ‘Unix Programming Environment’, pp. 306-8. Groff had something similar called grog(1) which had flaws when I rewrote it in sh and mainly awk back in 2002. (My version never made it in because an FSF copyright assignment was required and their answers to some of my questions meant I wouldn't sign.) Someone else since rewrote Groff's in Perl. > defensive groff -pet that I often use I've taken to putting the information needed as a comment at the start of the main source file whence it's picked up by a generic run-off script. 
Similar to man(1) looking for a «'\"» comment with code letters: ‘p’ for pic, ‘v’ for vgrind, etc. -- Cheers, Ralph. From steve at quintile.net Wed Feb 24 01:04:10 2021 From: steve at quintile.net (Steve Simon) Date: Tue, 23 Feb 2021 15:04:10 +0000 Subject: [TUHS] Proliferation of options is great simplification of pipes, really? Message-ID: <3cef07a184264eb505de433b2e95a287@quintile.net> its written in rc(1) and uses plan9 regex which sometimes differ from unix ones a little but there is doctype: http://9p.io/magic/man2html/1/doctype http://9p.io/sources/plan9/rc/bin/doctype -Steve From woods at robohack.ca Wed Feb 24 03:29:29 2021 From: woods at robohack.ca (Greg A. Woods) Date: Tue, 23 Feb 2021 09:29:29 -0800 Subject: [TUHS] /usr separation (was: Abstractions) In-Reply-To: References: Message-ID: At Mon, 22 Feb 2021 20:31:49 -0700, Andrew Warkentin wrote: Subject: Re: [TUHS] Abstractions > > On 2/21/21, Steve Nickolas wrote: > > > > While I've been stuck regarding bringing up a kernel, C compiler and libc > > all together, (keeping in mind my desire to avoid gcc and glibc for the > > project) the conceptual distribution I've been working on for some time > > uses more or less the same abstraction as the BSDs, with distinct /bin and > > /sbin vs. /usr/bin and /usr/sbin as I personally believe it should be, > > that the stuff in /bin should be enough to bring up and/or run diagnostics > > on a system, and everything else go in /usr. > > I don't see much of a point in maintaining the separation these days. > /bin and /usr/bin were originally separated because it wasn't possible > to fit everything on one disk, and (AFAIK) the separation was mostly > maintained after that to reduce the chance of filesystem corruption > rendering the system unbootable (which is much less of a problem > nowadays because of journalled and log-structured filesystems). Maybe there isn't any impetus to _create_ a separate /usr these days of large software but even larger disks. 
However I think there are at least two good reasons to _maintain_ a separate /usr. At least for ostensibly POSIX and Unix compatible systems, that is. For one there's a huge amount of deeply embedded lore, human (finger and brain) memory, actual code, documentation, and widespread practices that use this separation and rely on it, effectively making it a requirement. As Steve mentions above there's also the concept of knowing the minimum requirements for bringing up a system capable of the most basic tasks. Of course there's likely going to be some variance in what any given person might define as "most basic tasks", but that's mostly a separate issue. However I will give one example of why this might be a good thing to know and preserve: it is highly useful for those creating "embedded" systems, or application specific systems. They can start with just the minimal root filesystem, and then know exactly what they have to add in order to meet their application's requirements precisely. (and the reasons for doing that can be much wider than many might assume) Also the basic idea of having a root filesystem that contains just and only what's necessary for the system to boot and run, and putting everything else that makes the system usable to users into /usr, is also still a worthwhile concept even just on its own. The maintenance of an illusion of a separate /usr can of course be easily done with a farm of symlinks, thus preserving any dependencies in anyone's memory, documentation, or code. However the reality of maintaining a separate minimal toolset for system bring-up is that it cannot be reliably done without constant and pervasive testing; and the very best (and perhaps only) way to achieve this, especially in any smaller open-source project, is for everyone to use it that way as much of the time as possible.
I say this from decades long experience of slowly moving systems to having just one partition for both root and /usr and then on occasion testing with separate root and /usr, and every time I do this testing I find dependencies have crept in on something in /usr for basic booting. (and that's even when I base my system on a platform that still tries hard to maintain this separation of root and /usr!) BTW, I think it was Sun that first did some of this merging of root and /usr a very long time ago. -- Greg A. Woods Kelowna, BC +1 250 762-7675 RoboHack Planix, Inc. Avoncote Farms -------------- next part -------------- A non-text attachment was scrubbed... Name: not available Type: application/pgp-signature Size: 195 bytes Desc: OpenPGP Digital Signature URL: From gtaylor at tnetconsulting.net Wed Feb 24 04:28:13 2021 From: gtaylor at tnetconsulting.net (Grant Taylor) Date: Tue, 23 Feb 2021 11:28:13 -0700 Subject: [TUHS] /usr separation In-Reply-To: References: Message-ID: <78fede43-bf9b-5a56-5e59-e6ee5a0ee23d@spamtrap.tnetconsulting.net> On 2/23/21 10:29 AM, Greg A. Woods wrote: > Maybe there isn't any impetus to _create_ a separate /usr these days > of large software but even larger disks. I'm undecided. Part of me likes the / (root) and /usr split. But another part of me questions /if/ and (if so) /why/ it is (still) /needed/. > However I think there are at least two good reasons to _maintain_ > a separate /usr. At least for ostensibly POSIX and Unix compatible > systems, that is. Does /usr actually /need/ to be a /separate/ file system? Or would a wholesale link from /usr to / (root) suffice? Or perhaps a collection of sym-links from /usr/ to / suffice? > For one there's a huge amount of deeply embedded lore, human > (finger and brain) memory, actual code, documentation, and widespread > practices that use this separation and rely on it, effectively making > it a requirement. Are they relying on the /separation/ of separate file systems? 
Or are they simply relying on rote memory for the path? Ergo sym-links could fulfill the perceived need? > As Steve mentions above there's also the concept of knowing the > minimum requirements for bringing up a system capable of the most > basic tasks. The pat response to this in the Linux community is "That's what the initrd / initramfs is for!" What that fails to take into account is if the system actually uses an initrd / initramfs or not. Many of the systems I maintain do /not/ use an initrd / initramfs. Thus the systems have /some/ actual /need/ to be able to bring up a minimal system to repair file system problems. Even if the so called problem is simply that the extent file system needs an fsck with human interaction (time since last check and / or maximum number of mounts). If you do use an initrd / initramfs, then you can reasonably safely lump everything* in the / (root) file system. */boot still tends to be its own file system on Linux, mostly because that's where the initrd / initramfs images live which contain drivers for more fancy things (software RAID, LVM, ZFS, SAN, etc.) which are needed to bring up / (root). > Of course there's likely going to be some variance in what any > given person might define as "most basic tasks", but that's mostly a > separate issue. Agreed. However, I posit that "most basic tasks" be what is necessary to transition from single user mode to multi-user mode. Including any and all utilities required to fix file systems, work with logical volumes, SAN, etc. > However I will give one example of why this might be a good thing to > know and preserve: it is highly useful for those creating "embedded" > systems, or application specific systems. They can start with just the > minimal root filesystem, and then know exactly what they have to add > in order to meet their application's requirements precisely.
(and the > reasons for doing that can be much wider than many might assume) Please elaborate on what that has to do with the / (root) vs /usr split? I feel like you're differentiating between a minimal install vs a kitchen sink install. Which seems to me to be independent of how the underlying file system(s) is (are) arranged. > Also the basic idea of having a root filesystem that contains just > and only what's necessary for the system to boot and run, and putting > everything else that makes the system usable to users into /usr, > is also still a worthwhile concept even just on its own. Many in the Linux community think this is the job of the initrd / initramfs. I personally believe that this is the job of the / (root) file system. Aside: In the event that /usr is on the / (root) file system, then the system should still be able to come up as if /usr didn't exist b/c it had been renamed or was on a separate file system. > The maintenance of an illusion of a separate /usr can of course be > easily done with a farm of symlinks, thus preserving any dependencies > in anyone's memory, documentation, or code. Agreed. With things like bind mounts, we don't even need to use sym-links. }:-) Though, one potential danger is that people see duplication between /bin/ and /usr/bin/ and decide to remove one of them. Doing so will ultimately remove both and cause someone to have a not good day. Aside: Perhaps these not good days are not something to be avoided, but instead something to be treated as a learning opportunity. Much like young kids need to learn that fire is hot for themselves. > However the reality of maintaining a separate minimal toolset for > system bring-up is that it cannot be reliably done without constant > and pervasive testing; and the very best (and perhaps only) way to > achieve this, especially in any smaller open-source project, is for > everyone to use it that way as much of the time as possible. 
I say > this from decades long experience of slowly moving systems to having > just one partition for both root and /usr and then on occasion testing > with separate root and /usr, and every time I do this testing I find > dependencies have crept in on something in /usr for basic booting. > (and that's even when I base my system on a platform that still tries > hard to maintain this separation of root and /usr!) I have a different conundrum regarding */bin. Why do I need nine different (s)bin directories in my path? I -- possibly naively -- believe that we have the technology to have all commands in /one/ directory, namely /bin. Quickly after that thought, I realize that I want different things in my path than other people do. So I end up with custom /bin directories. Which usually ends up with sym-links that reference variables or custom mounts (possibly via auto-mount applying some logic). > BTW, I think it was Sun that first did some of this merging of root > and /usr a very long time ago. Agreed. Though I'm far from authoritative. -- Grant. . . . unix || die -------------- next part -------------- A non-text attachment was scrubbed... Name: smime.p7s Type: application/pkcs7-signature Size: 4013 bytes Desc: S/MIME Cryptographic Signature URL: From tytso at mit.edu Wed Feb 24 04:57:53 2021 From: tytso at mit.edu (Theodore Ts'o) Date: Tue, 23 Feb 2021 13:57:53 -0500 Subject: [TUHS] /usr separation In-Reply-To: <78fede43-bf9b-5a56-5e59-e6ee5a0ee23d@spamtrap.tnetconsulting.net> References: <78fede43-bf9b-5a56-5e59-e6ee5a0ee23d@spamtrap.tnetconsulting.net> Message-ID: On Tue, Feb 23, 2021 at 11:28:13AM -0700, Grant Taylor via TUHS wrote: > > What that fails to take into account is if the system actually uses an > initrd / initramfs or not. Many of the systems I maintain do /not/ use an > initrd / initramfs. Thus the systems have /some/ actual /need/ to be able > to bring up a minimal system to repair file system problems. 
Even if the so > called problem is simply that the extent file system needs an fsck with > human interaction (time since last check and / or maximum number of mounts). There are two reasons why you might want to have an initramfs. One is you are using a distribution-provided generic kernel, in which case the device driver / kernel modules needed to access the root file system need to be loaded from *somewhere*, and that's the in-memory initramfs/initrd. The other reason is how you run fsck on the root file system. That won't be needed if hardware is perfect, the kernel is bug-free(tm), and the root file system has journalling support, as all modern file systems tend to have. However, if it is needed, there are two ways to do this. One is the traditional way, which is to mount the root file system read/only, repair the file system, and if any changes were required to the root file system, force a reboot; otherwise, remount the root file system read-write, and proceed. The other way of doing this is to include the fsck program in the initramfs, and run fsck on the root file system before it is mounted. Now you never have to worry about rebooting if any changes were made, since the root file system wasn't mounted and so there is no danger of invalid metadata being cached in memory. That being said, it's certainly possible to skip using an initramfs; it's generally not required, and if you're building your own kernel, with the device drivers you need for your hardware compiled into the kernel, most distributions will support skipping the initramfs. (Debian certainly does, in any case.) > */boot still tends to be its own file system on Linux, mostly because > that's where the initrd / initramfs images live which contain drivers for > more fancy things (software RAID, LVM, ZFS, SAN, etc.) which are needed to > bring up / (root). /boot needs to exist due to limitations to the firmware and/or boot loader being used.
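[The two boot-time orderings described here come down to the same decision on fsck's exit status. A sketch with fsck stubbed out, so the control flow can be shown without a real disk; the messages and the stub name are illustrative, while the status meanings follow fsck(8): 0 clean, 1 errors corrected, 2 reboot required, 4 errors left uncorrected.]

```shell
#!/bin/sh
# Boot-time fsck decision with fsck stubbed out.  Status meanings follow
# fsck(8); everything else here is an illustrative sketch.

fsck_stub() { return "$1"; }      # stand-in returning a chosen fsck status

check_root() {
    fsck_stub "$1"                # a real boot would run: fsck -p /dev/root
    case $? in
    0) echo "root clean: remount read-write and continue" ;;
    1) echo "errors corrected: safe to continue" ;;
    2) echo "errors corrected: reboot required" ;;
    *) echo "unrecoverable: drop to a single-user shell" ;;
    esac
}

check_root 0    # prints: root clean: remount read-write and continue
check_root 2    # prints: errors corrected: reboot required
```

[In the initramfs variant the check runs before the root mount, so the "reboot required" branch disappears; in the traditional variant it runs with root mounted read-only, which is what forces the reboot.]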
If the boot loader is using the legacy PC Bios interfaces to read the kernel and initial ramdisk/file system, then those files need to be in a low-numbered LBA disk space, due to legacy BIOS/firmware limitations. It could also be a concern if you are using some exotic file system (say, ZFS), and the bootloader doesn't support that file system due to copyright licensing incompatibilities, or the boot loader just not supporting that bleeding-edge file system. In that case, you might have to keep /boot as an ext4 file system. Other than that, there is no reason why /boot needs to be its own file system, except that most installers will create one just because it's simpler to use the same approach for all cases, even if it's not needed for a particular use case. - Ted P.S. Oh, and if you are using UEFI, you might need to have yet another file system which is a Microsoft FAT file system, typically mounted as /boot/efi, to keep the UEFI firmware happy.... From beebe at math.utah.edu Wed Feb 24 05:37:57 2021 From: beebe at math.utah.edu (Nelson H. F. Beebe) Date: Tue, 23 Feb 2021 12:37:57 -0700 Subject: [TUHS] Abstractions Message-ID: The recent discussions on the TUHS list of whether /bin and /usr/bin are different, or symlinked, brought to mind the limited disk and tape sizes of the 1970s and 1980s. Especially the lower-cost tape technologies had issues with correct recognition of an end-of-tape condition, making it hard to span a dump across tape volumes, and strongly suggesting that directory tree sizes be limited to what could fit on a single tape. 
I made an experiment today across a broad range of operating systems (many with multiple versions in our test farm), and produced these two tables, where version numbers are included only if the O/S changed practices:

------------------------------------------------------------------------
Systems with /bin a symlink to /usr/bin (or both to yet another common directory) [42 major variants]:

    ArchLinux        Kali              RedHat 8
    Arco             Kubuntu 19, 20    Q4OS
    Bitrig           Lite              ScientificLinux 7
    CentOS 7, 8      Lubuntu 19        Septor
    ClearLinux       Mabox             Solaris 10, 11
    Debian 10, 11    Magiea            Solydk
    Deepin           Manjaro           Sparky
    DilOS            Mint 20           Springdale
    Dyson            MXLinux 19        Ubuntu 19, 20, 21
    Fedora           Neptune           UCS
    Gnuinos          Netrunner         Ultimate
    Gobolinux        Oracle Linux      Unleashed
    Hefftor          Parrot 4.7        Void
    IRIX             PureOS            Xubuntu 19, 20

------------------------------------------------------------------------
Systems with separate /bin and /usr/bin [60 major variants]:

    Alpine           Hipster           OS108
    AltLinux         KaOS              Ovios
    Antix            KFreeBSD          PacBSD
    Bitrig           Kubuntu 18        Parrot 4.5
    Bodhi            LibertyBSD        PCBSD
    CentOS 5, 6      LMDE              PCLinuxOS
    ClonOS           Lubuntu 17        Peppermint
    Debian 7--10     LXLE              Salix
    DesktopBSD       macOS             ScientificLinux 6
    Devuan           MidnightBSD       SlackEX
    DragonFlyBSD     Mint 18--20       Slackware
    ElementaryOS     MirBSD            Solus
    FreeBSD 9--13    MXLinux 17, 18    T2
    FuryBSD          NetBSD 6-1010     Trident
    Gecko            NomadBSD          Trisquel
    Gentoo           OmniOS            TrueOS
    GhostBSD         OmniTribblix      Ubuntu 14--18
    GNU/Hurd         OpenBSD           Xubuntu 18
    HardenedBSD      OpenMandriva      Zenwalk
    Helium           openSUSE          Zorinos

------------------------------------------------------------------------

Some names appear in both tables, indicating a transition from separate directories to symlinked directories in more recent O/S releases. Many of these system names are spelled in mixed lettercase, and if I've botched some of them, I extend my apologies to their authors.
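[The per-system test behind such a survey can be scripted along these lines. A sketch only, exercised here against a scratch tree so the result is deterministic; pointing it at / would inspect the live system.]

```shell
#!/bin/sh
# Classify a root hierarchy the way the tables above do: "symlinked" when
# /bin resolves to the same directory as /usr/bin, "separate" otherwise.

classify() {
    if [ -L "$1/bin" ] &&
       [ "$(readlink -f "$1/bin")" = "$(readlink -f "$1/usr/bin")" ]
    then echo symlinked
    else echo separate
    fi
}

root=$(mktemp -d)
mkdir -p "$root/usr/bin"
ln -s usr/bin "$root/bin"     # merged layout
classify "$root"              # prints: symlinked

rm "$root/bin"
mkdir "$root/bin"             # traditional layout
classify "$root"              # prints: separate
rm -rf "$root"
```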
Some of those systems run on multiple CPU architectures, and our test farm exploits that; however, I found no instance of the CPU type changing the separation or symbolic linking of /bin and /usr/bin. ------------------------------------------------------------------------------- - Nelson H. F. Beebe Tel: +1 801 581 5254 - - University of Utah FAX: +1 801 581 4148 - - Department of Mathematics, 110 LCB Internet e-mail: beebe at math.utah.edu - - 155 S 1400 E RM 233 beebe at acm.org beebe at computer.org - - Salt Lake City, UT 84112-0090, USA URL: http://www.math.utah.edu/~beebe/ - ------------------------------------------------------------------------------- From gtaylor at tnetconsulting.net Wed Feb 24 06:29:11 2021 From: gtaylor at tnetconsulting.net (Grant Taylor) Date: Tue, 23 Feb 2021 13:29:11 -0700 Subject: [TUHS] /usr separation In-Reply-To: References: <78fede43-bf9b-5a56-5e59-e6ee5a0ee23d@spamtrap.tnetconsulting.net> Message-ID: <3d2d7b46-41e8-92d7-3a7b-d0f3006bc761@spamtrap.tnetconsulting.net> On 2/23/21 11:57 AM, Theodore Ts'o wrote: > There are two reasons why you might want to have an initramfs. Rather than getting into a tit for tat debate, I'll agree that we have both proposed reasons why you /might/ want to use an initramfs. The operative words are "you" and "might". Each person probably wants slightly different things. It's far from one size fits all. > The other reason is how you run fsck on the root file system. The same way that it's been done for years. Root is mounted read only and you run fsck to repair damage. If it's severe damage, you will likely need to boot off of something else. I've had both situations happen multiple times. The quintessential max mount count / max days since last check have happily been fixed while root was mounted read only. > That won't be needed if hardware is perfect, the kernel is > bug-free(tm), and the root file system has journalling support, > as all modern file systems tend to have. I wouldn't bet on that. 
I've had to run fsck on journalling file systems at boot / mount time multiple times. > However, if it is needed, there are two ways to do this. One is the > traditional way, which is to mount the root file system read/only, > repair the file system, and if any changes were required to the root > file system, force a reboot; otherwise, remount the root file system > read-write, and proceed. This is what happened in /most/ of the cases that I've needed to interact with fsck of a root file system. > The other way of doing this is to include the fsck program in the > initramfs, and run fsck on the root file system before it is mounted. > Now you never have to worry about rebooting if any changes were made, > since the root file system wasn't mounted and so there is no danger > of invalid metadata being cached in memory. Oh ... I would definitely *NOT* say /never/. There are ways that a file system can get corrupted that will cause fsck to stop and require manual intervention. > That being said, it's certainly possible to skip using an initramfs; > it's generally not required, and if you're building your own kernel, > with the device drivers you need for your hardware compiled into > the kernel, most distributions will support skipping the initramfs. > (Debian certainly does, in any case.) And if you're building a minimal kernel, removing support for modules and what's required for swing-root saves space. ;-) > /boot needs to exist due to limitations to the firmware and/or boot > loader being used. Not necessarily. E.g. one single partition containing /boot and / (root). > If the boot loader is using the legacy PC Bios interfaces to read the > kernel and initial ramdisk/file system, then those files need to be in > a low-numbered LBA disk space, due to legacy BIOS/firmware limitations. So make sure said /boot & / (root) partition stays within that limitation. I don't recall exactly what that is. I think it's ~8 GB.
But it's definitely possible to have small installations in that space. > It could also be a concern if you are using some exotic file system > (say, ZFS), and the bootloader doesn't support that file system due > to copyright licensing incompatibilities, or the boot loader just not > supporting that bleeding-edge file system. In that case, you might > have to keep /boot as an ext4 file system. That scenario is definitely a possibility. Though such scenarios are not a requirement and tend to be antithetical to minimal installations, like the type that would be used in embedded devices and possibly copied to ROM as indicated in a different post. > Other than that, there is no reason why /boot needs to be its own > file system, except that most installers will create one just because > it's simpler to use the same approach for all cases, even if it's > not needed for a particular use case. As Steve Gibson is famous for saying; The tyranny of the default. > P.S. Oh, and if you are using UEFI, you might need to have yet > another file system which is a Microsoft FAT file system, typically > mounted as /boot/efi, to keep the UEFI firmware happy.... Yes, the file system needs to exist. But that's part of the firmware, not the operating system. I also question if that FAT file system needs to be mounted or not. -- I don't know how GRUB et al. deal with a non-mounted UEFI file system. But even if it does need to be mounted, you can still get away with two partitions; / (root) and /boot/efi. I suspect UEFI does away with the LBA issue you mentioned. -- Grant. . . . unix || die -------------- next part -------------- A non-text attachment was scrubbed... Name: smime.p7s Type: application/pkcs7-signature Size: 4013 bytes Desc: S/MIME Cryptographic Signature URL: From sauer at technologists.com Wed Feb 24 07:02:00 2021 From: sauer at technologists.com (Charles H. 
Sauer) Date: Tue, 23 Feb 2021 15:02:00 -0600 Subject: [TUHS] Abstractions In-Reply-To: References: Message-ID: <91696417-3233-232e-e1f4-3cb914202801@technologists.com> To add to the inventory below: Dell SVR4 /bin is a symlink to /usr/bin NEXTSTEP/486 3.3 /bin and /usr/bin are separate On 2/23/2021 1:37 PM, Nelson H. F. Beebe wrote: > The recent discussions on the TUHS list of whether /bin and /usr/bin > are different, or symlinked, brought to mind the limited disk and tape > sizes of the 1970s and 1980s. Especially the lower-cost tape > technologies had issues with correct recognition of an end-of-tape > condition, making it hard to span a dump across tape volumes, and > strongly suggesting that directory tree sizes be limited to what could > fit on a single tape. > > I made an experiment today across a broad range of operating systems > (many with multiple versions in our test farm), and produced these two > tables, where version numbers are included only if the O/S changed > practices: > > ------------------------------------------------------------------------ > Systems with /bin a symlink to /usr/bin (or both to yet another common > directory) [42 major variants]: > > ArchLinux Kali RedHat 8 > Arco Kubuntu 19, 20 Q4OS > Bitrig Lite ScientificLinux 7 > CentOS 7, 8 Lubuntu 19 Septor > ClearLinux Mabox Solaris 10, 11 > Debian 10, 11 Magiea Solydk > Deepin Manjaro Sparky > DilOS Mint 20 Springdale > Dyson MXLinux 19 Ubuntu 19, 20, 21 > Fedora Neptune UCS > Gnuinos Netrunner Ultimate > Gobolinux Oracle Linux Unleashed > Hefftor Parrot 4.7 Void > IRIX PureOS Xubuntu 19, 20 > > ------------------------------------------------------------------------ > Systems with separate /bin and /usr/bin [60 major variants]: > > Alpine Hipster OS108 > AltLinux KaOS Ovios > Antix KFreeBSD PacBSD > Bitrig Kubuntu 18 Parrot 4.5 > Bodhi LibertyBSD PCBSD > CentOS 5, 6 LMDE PCLinuxOS > ClonOS Lubuntu 17 Peppermint > Debian 7--10 LXLE Salix > DesktopBSD macOS ScientificLinux 6 > Devuan 
MidnightBSD SlackEX > DragonFlyBSD Mint 18--20 Slackware > ElementaryOS MirBSD Solus > FreeBSD 9--13 MXLinux 17, 18 T2 > FuryBSD NetBSD 6-1010 Trident > Gecko NomadBSD Trisquel > Gentoo OmniOS TrueOS > GhostBSD OmniTribblix Ubuntu 14--18 > GNU/Hurd OpenBSD Xubuntu 18 > HardenedBSD OpenMandriva Zenwalk > Helium openSUSE Zorinos > > ------------------------------------------------------------------------ > > Some names appear in both tables, indicating a transition from > separate directories to symlinked directories in more recent O/S > releases. > > Many of these system names are spelled in mixed lettercase, and if > I've botched some of them, I extend my apologies to their authors. > > Some of those systems run on multiple CPU architectures, and our test > farm exploits that; however, I found no instance of the CPU type > changing the separation or symbolic linking of /bin and /usr/bin. > > ------------------------------------------------------------------------------- > - Nelson H. F. Beebe Tel: +1 801 581 5254 - > - University of Utah FAX: +1 801 581 4148 - > - Department of Mathematics, 110 LCB Internet e-mail: beebe at math.utah.edu - > - 155 S 1400 E RM 233 beebe at acm.org beebe at computer.org - > - Salt Lake City, UT 84112-0090, USA URL: http://www.math.utah.edu/~beebe/ - > ------------------------------------------------------------------------------- > -- voice: +1.512.784.7526 e-mail: sauer at technologists.com fax: +1.512.346.5240 Web: https://technologists.com/sauer/ Facebook/Google/Skype/Twitter: CharlesHSauer From henry.r.bent at gmail.com Wed Feb 24 07:15:52 2021 From: henry.r.bent at gmail.com (Henry Bent) Date: Tue, 23 Feb 2021 16:15:52 -0500 Subject: [TUHS] Abstractions In-Reply-To: <91696417-3233-232e-e1f4-3cb914202801@technologists.com> References: <91696417-3233-232e-e1f4-3cb914202801@technologists.com> Message-ID: On Tue, 23 Feb 2021 at 16:03, Charles H. 
Sauer wrote: > To add to the inventory below: > Dell SVR4 /bin is a symlink to /usr/bin > NEXTSTEP/486 3.3 /bin and /usr/bin are separate > > On 2/23/2021 1:37 PM, Nelson H. F. Beebe wrote: > > The recent discussions on the TUHS list of whether /bin and /usr/bin > > are different, or symlinked, brought to mind the limited disk and tape > > sizes of the 1970s and 1980s. Especially the lower-cost tape > > technologies had issues with correct recognition of an end-of-tape > > condition, making it hard to span a dump across tape volumes, and > > strongly suggesting that directory tree sizes be limited to what could > > fit on a single tape. > > > > I made an experiment today across a broad range of operating systems > > (many with multiple versions in our test farm), and produced these two > > tables, where version numbers are included only if the O/S changed > > practices: > > > > ------------------------------------------------------------------------ > > Systems with /bin a symlink to /usr/bin (or both to yet another common > > directory) [42 major variants]: > > > > ArchLinux Kali RedHat 8 > > Arco Kubuntu 19, 20 Q4OS > > Bitrig Lite ScientificLinux 7 > > CentOS 7, 8 Lubuntu 19 Septor > > ClearLinux Mabox Solaris 10, 11 > > Debian 10, 11 Magiea Solydk > > Deepin Manjaro Sparky > > DilOS Mint 20 Springdale > > Dyson MXLinux 19 Ubuntu 19, 20, 21 > > Fedora Neptune UCS > > Gnuinos Netrunner Ultimate > > Gobolinux Oracle Linux Unleashed > > Hefftor Parrot 4.7 Void > > IRIX PureOS Xubuntu 19, 20 > > > > ------------------------------------------------------------------------ > > Systems with separate /bin and /usr/bin [60 major variants]: > > > > Alpine Hipster OS108 > > AltLinux KaOS Ovios > > Antix KFreeBSD PacBSD > > Bitrig Kubuntu 18 Parrot 4.5 > > Bodhi LibertyBSD PCBSD > > CentOS 5, 6 LMDE PCLinuxOS > > ClonOS Lubuntu 17 Peppermint > > Debian 7--10 LXLE Salix > > DesktopBSD macOS ScientificLinux 6 > > Devuan MidnightBSD SlackEX > > DragonFlyBSD Mint 18--20 
Slackware > > ElementaryOS MirBSD Solus > > FreeBSD 9--13 MXLinux 17, 18 T2 > > FuryBSD NetBSD 6-1010 Trident > > Gecko NomadBSD Trisquel > > Gentoo OmniOS TrueOS > > GhostBSD OmniTribblix Ubuntu 14--18 > > GNU/Hurd OpenBSD Xubuntu 18 > > HardenedBSD OpenMandriva Zenwalk > > Helium openSUSE Zorinos > > > > ------------------------------------------------------------------------ > > > > Some names appear in both tables, indicating a transition from > > separate directories to symlinked directories in more recent O/S > > releases. > > > > Many of these system names are spelled in mixed lettercase, and if > > I've botched some of them, I extend my apologies to their authors. > > > > Some of those systems run on multiple CPU architectures, and our test > > farm exploits that; however, I found no instance of the CPU type > > changing the separation or symbolic linking of /bin and /usr/bin. > > > > Solaris /bin was a symlink to /usr/bin as early as 2.5.1. It's also worth pointing out that NetBSD, in addition to having a separate /bin and /usr/bin, has /rescue which has a large selection of statically linked binaries. -Henry -------------- next part -------------- An HTML attachment was scrubbed... URL: From woods at robohack.ca Wed Feb 24 11:51:27 2021 From: woods at robohack.ca (Greg A. Woods) Date: Tue, 23 Feb 2021 17:51:27 -0800 Subject: [TUHS] Abstractions In-Reply-To: References: Message-ID: At Tue, 23 Feb 2021 12:37:57 -0700, "Nelson H. F. Beebe" wrote: Subject: Re: [TUHS] Abstractions > > The recent discussions on the TUHS list of whether /bin and /usr/bin > are different, or symlinked, brought to mind the limited disk and tape > sizes of the 1970s and 1980s. Especially the lower-cost tape > technologies had issues with correct recognition of an end-of-tape > condition, making it hard to span a dump across tape volumes, and > strongly suggesting that directory tree sizes be limited to what could > fit on a single tape. Hmmmm... 
you may just be mixing up the names of the archive tools you mean, but on the other hand maybe you don't know that "dump" does whole filesystems, not just sub-directories. That of course doesn't take anything away from what you were saying about making sure you could do a full dump onto a single tape with some types of less high-end and high-quality tape devices. But that's a "newer" problem. Original Unix dump(1m) had no trouble asking for additional tapes to be mounted when the filesystem required multiple tapes. So it has nothing to do with the legacy of the original root and /usr split. > I made an experiment today across a broad range of operating systems > (many with multiple versions in our test farm), and produced these two > tables, where version numbers are included only if the O/S changed > practices: An interesting compilation, but sadly (to me at least) it is mostly a mess of GNU/Linux which, rightly or wrongly, I categorize all under one (extremely opaque) umbrella. BTW, when I said "long ago" for Solaris, I meant a REALLY long time ago. /bin has been a symlink to /usr/bin since Solaris 2.0 and yet /usr could/can still be a separate filesystem on a Solaris installation. This is accomplished by putting everything necessary to boot the system up to the point where other additional filesystems can be mounted using just the programs found in /sbin. Of course this wasn't done smoothly and completely all in one go. IIRC /sbin/sh didn't exist until Solaris-9, and (also IIRC) it is just a copy of /usr/bin/sh. So Sun pushed everything in /bin to /usr/bin, then copied a few things back to /sbin as they found they needed them. Kind of a half-assed hack that wasn't well thought out, and had very poor motivations. I'd forgotten that IRIX was following Solaris on this track. -- Greg A. Woods Kelowna, BC +1 250 762-7675 RoboHack Planix, Inc. Avoncote Farms -------------- next part -------------- A non-text attachment was scrubbed...
Name: not available Type: application/pgp-signature Size: 195 bytes Desc: OpenPGP Digital Signature URL: From beebe at math.utah.edu Wed Feb 24 12:23:01 2021 From: beebe at math.utah.edu (Nelson H. F. Beebe) Date: Tue, 23 Feb 2021 19:23:01 -0700 Subject: [TUHS] Abstractions In-Reply-To: Message-ID: Greg Woods responds to my posting: >> Hmmmm... you may just be mixing up the names of the archive tools you >> mean, but on the other hand maybe you don't know that "dump" does whole >> filesystems, not just sub-directories. I meant "dump" as a generic verb, not specifically the Unix dump utility. Many sites also used tar to backup directory trees: after all, tar means Tape ARchiver. >> Original Unix dump(1m) had no trouble asking for additional tapes ... That was, however, contingent on a reliable signal from the tape unit, and my strong recollection is that when we moved to various types of cheap cassette tapes, the end-of-tape indicator was unreliable. Thus, we paid attention to both disk and tape sizes. Today, with 10TB+ on LTO-8 tapes, it isn't an issue for us, and we also tend to have many different ZFS volumes representing various parts of the filesystem, allowing different backup and snapshotting policies. Besides tapes and snapshots, we also have a live SAN mirror, and a remote snapshot server, giving plenty of data replication, and the warm fuzzy feelings from that. After 20 years of ZFS, I don't recall us ever losing data. We have also gone through two generations of major fileserver upgrades and complete data migrations without service interruptions (except for a brief interval for each user account to synchronize data on old and new servers). ------------------------------------------------------------------------------- - Nelson H. F. 
Beebe Tel: +1 801 581 5254 - - University of Utah FAX: +1 801 581 4148 - - Department of Mathematics, 110 LCB Internet e-mail: beebe at math.utah.edu - - 155 S 1400 E RM 233 beebe at acm.org beebe at computer.org - - Salt Lake City, UT 84112-0090, USA URL: http://www.math.utah.edu/~beebe/ - ------------------------------------------------------------------------------- From m.douglas.mcilroy at dartmouth.edu Wed Feb 24 12:42:30 2021 From: m.douglas.mcilroy at dartmouth.edu (M Douglas McIlroy) Date: Tue, 23 Feb 2021 21:42:30 -0500 Subject: [TUHS] Proliferation of options is great simplification of pipes, really? In-Reply-To: <3cef07a184264eb505de433b2e95a287@quintile.net> References: <3cef07a184264eb505de433b2e95a287@quintile.net> Message-ID: doctype it was. Thanks, Doug On Tue, Feb 23, 2021 at 10:12 AM Steve Simon wrote: > > its written in rc(1) and uses plan9 regex which > sometimes differ from unix ones a little but > there is doctype: > > http://9p.io/magic/man2html/1/doctype > > http://9p.io/sources/plan9/rc/bin/doctype > > -Steve From woods at robohack.ca Wed Feb 24 12:47:07 2021 From: woods at robohack.ca (Greg A. Woods) Date: Tue, 23 Feb 2021 18:47:07 -0800 Subject: [TUHS] Abstractions In-Reply-To: References: <91696417-3233-232e-e1f4-3cb914202801@technologists.com> Message-ID: At Tue, 23 Feb 2021 16:15:52 -0500, Henry Bent wrote: Subject: Re: [TUHS] Abstractions > > It's also worth > pointing out that NetBSD, in addition to having a separate /bin and > /usr/bin, has /rescue which has a large selection of statically linked > binaries. Indeed. However /rescue is really just a hack to avoid the problems that occur when basic tools are dynamic-linked. My vastly preferred alternative is to static-link everything. Of course with C libraries these days that means the binaries can be rather large -- albeit still relatively small in comparison to modern disks.
In any case I've also built NetBSD such that all of the base system binaries are linked together into one binary (we call this "crunchgen", but Linux usually calls it "Busybox(tm)"). I decided to put all the bin directories together into one for the ultimate savings of space and time and effort, but it would be trivial to keep the root and /usr split for better managing application-specific embedded systems. This hard-static-linking of everything into one binary results in a surprisingly small, indeed very tiny, system. For i386 (32-bit) it could probably boot multiuser in about 16mb of RAM. What I've got so far is a bootable image file of a "complete" NetBSD-5/i386 systems that's just a tiny bit over 7Mb. It contains a kernel and a ramdisk image with a 12Mb filesystem containing a crunchgen binary with almost everything in it (247 system programs, including all the networking tools, but no named, and no toolchain, no mailer, and no manual pages -- not atypical of what was delivered with some commercial unix systems of days gone by, but of course updated with modern things like ssh, etc..) -- Greg A. Woods Kelowna, BC +1 250 762-7675 RoboHack Planix, Inc. Avoncote Farms -------------- next part -------------- A non-text attachment was scrubbed... Name: not available Type: application/pgp-signature Size: 195 bytes Desc: OpenPGP Digital Signature URL: From andreww591 at gmail.com Wed Feb 24 13:12:23 2021 From: andreww591 at gmail.com (Andrew Warkentin) Date: Tue, 23 Feb 2021 20:12:23 -0700 Subject: [TUHS] /usr separation (was: Abstractions) In-Reply-To: References: Message-ID: On 2/23/21, Greg A. Woods wrote: > > For one there's a huge amount of deeply embedded lore, human (finger and > brain) memory, actual code, documentation, and widespread practices that > use this separation and rely on it, effectively making it a requirement. 
> That is only a justification for keeping the /usr hierarchy around (and using symlinks/binding to make stuff appear in both places), not for arbitrarily separating programs and libraries between the two. > > However the reality of maintaining a separate minimal toolset for system > bring-up is that it cannot be reliably done without constant and > pervasive testing; and the very best (and perhaps only) way to achieve > this, especially in any smaller open-source project, is for everyone to > use it that way as much of the time as possible. I say this from > decades long experience of slowly moving systems to having just one > partition for both root and /usr and then on occasion testing with > separate root and /usr, and every time I do this testing I find > dependencies have crept in on something in /usr for basic booting. (and > that's even when I base my system on a platform that still tries hard to > maintain this separation of root and /usr!) > With a system-wide package manager a set of basic packages can be maintained without having an arbitrary separation into root and usr. The reference distribution of UX/RT will have several nested sets of packages rather than a separation of binaries between root and usr. The smallest will be what is included in the supervisor image (the equivalent of a kernel image and initramfs combined into one), which will be what is required to mount the system volume. Above that will be the minimal system, which will be the set of packages required to boot to a multi-user login. All of this will be in the base system repository, along with a few other optional groups of packages (including a full desktop environment). Most optional third-party application packages will be in a separate repository (like ports or pkgsrc under BSD, but using the same package manager as the base system and available by default without any special configuration).
On 2/23/21, Theodore Ts'o wrote: > > /boot needs to exist due to limitations to the firmware and/or boot > loader being used. If the boot loader is using the legacy PC Bios > interfaces to read the kernel and initial ramdisk/file system, then > those files need to be in a low-numbered LBA disk space, due to legacy > BIOS/firmware limitations. It could also be a concern if you are > using some exotic file system (say, ZFS), and the bootloader doesn't > support that file system due to copyright licensing incompatibilities, > or the boot loader just not supporting that bleeding-edge file system. > In that case, you might have to keep /boot as an ext4 file system. > The BIOS addressing limitations only happen with CHS-only BIOSes, which haven't really been a thing since the mid-to-late 90s. The only reason to have a separate /boot partition for anything newer than that is because of bootloader limitations. On 2/23/21, Grant Taylor via TUHS wrote: > > I have a different conundrum regarding */bin. Why do I need nine > different (s)bin directories in my path? I -- possibly naively -- > believe that we have the technology to have all commands in /one/ > directory, namely /bin. > > Quickly after that thought, I realize that I want different things in my > path than other people do. So I end up with custom /bin directories. > Which usually ends up with sym-links that reference variables or custom > mounts (possibly via auto-mount applying some logic). > UX/RT will solve the issue of different sets of programs in the path in different user or application contexts with per-process and per-user namespaces (since fine-grained security will be deeply integrated into the system and neither on-disk device files nor setuid binaries will exist, there shouldn't be any security concerns with letting regular users bind and mount stuff for themselves). $PATH will just be set to "/bin" in the vast majority of cases. 
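The single-/bin idea above can be roughly approximated even without kernel namespace support (Plan 9 binds, Linux mount namespaces): build each user a private bin directory as a union of whatever tool directories they care about. A hedged sketch, with invented function and directory names, where the first directory listed wins, mirroring $PATH ordering:

```shell
# build_bin <target> <dir>... -- populate <target> with symlinks to
# every tool found in the listed directories; earlier directories win.
build_bin() {
    target=$1; shift
    mkdir -p "$target"
    for dir in "$@"; do
        for tool in "$dir"/*; do
            [ -e "$tool" ] || continue        # skip empty directories
            name=$(basename "$tool")
            # Keep the first provider of each name, like $PATH lookup.
            [ -e "$target/$name" ] || ln -s "$tool" "$target/$name"
        done
    done
}

# A user could then run with a one-entry path, e.g.:
#   build_bin "$HOME/bin" "$HOME/tools" /usr/local/bin /usr/bin
#   PATH=$HOME/bin
```

With real per-process namespaces the same effect comes from bind mounts rather than symlinks, which is the point being made above.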
From imp at bsdimp.com Wed Feb 24 13:20:55 2021 From: imp at bsdimp.com (Warner Losh) Date: Tue, 23 Feb 2021 20:20:55 -0700 Subject: [TUHS] Abstractions In-Reply-To: References: <91696417-3233-232e-e1f4-3cb914202801@technologists.com> Message-ID: On Tue, Feb 23, 2021, 7:47 PM Greg A. Woods wrote: > At Tue, 23 Feb 2021 16:15:52 -0500, Henry Bent > wrote: > Subject: Re: [TUHS] Abstractions > > > > It's also worth > > pointing out that NetBSD, in addition to having a separate /bin and > > /usr/bin, has /rescue which has a large selection of statically linked > > binaries. > > Indeed. However /rescue is really just a hack to avoid the problems > that occur when basic tools are dynamic-linked. > > My vastly preferred alternative is to static-link everything. > > Of course with C libraries these days that means the binaries can be > rather large -- albeit still relatively small in comparison to modern > disks. > > In any case I've also built NetBSD such that all of the base system > binaries are linked together into one binary (we call this "crunchgen", > but Linux usually calls it "Busybox(tm)"). I decided to put all the bin > directories together into one for the ultimate savings of space and time > and effort, but it would be trivial to keep the root and /usr split for > better managing application-specific embedded systems. > > This hard-static-linking of everything into one binary results in a > surprisingly small, indeed very tiny, system. For i386 (32-bit) it > could probably boot multiuser in about 16mb of RAM. > I booted a FreeBSD/i386 4 system, sans compilers and a few other things, off a 16MB CF card in the early 2000s. I did both static (one binary) and dynamic and found dynamic worked a lot better for the embedded system... I also did an 8MB PoC router and data logger image that was stripped to the bone. PicoBSD fit onto a 1.44MB floppy as late as FreeBSD 4 and made a good firewall...
Warner What I've got so far is a bootable image file of a "complete" > NetBSD-5/i386 systems that's just a tiny bit over 7Mb. It contains a > kernel and a ramdisk image with a 12Mb filesystem containing a crunchgen > binary with almost everything in it (247 system programs, including all > the networking tools, but no named, and no toolchain, no mailer, and no > manual pages -- not atypical of what was delivered with some commercial > unix systems of days gone by, but of course updated with modern things > like ssh, etc..) > > -- > Greg A. Woods > > Kelowna, BC +1 250 762-7675 RoboHack > Planix, Inc. Avoncote Farms > -------------- next part -------------- An HTML attachment was scrubbed... URL: From rudi.j.blom at gmail.com Wed Feb 24 14:18:07 2021 From: rudi.j.blom at gmail.com (Rudi Blom) Date: Wed, 24 Feb 2021 11:18:07 +0700 Subject: [TUHS] Abstractions Message-ID: Some additions: Systems with /bin a symlink to /usr/bin Digital UNIX 4.0 Tru64 UNIX 5.0 to 5.1B HP-UX 11i 11.23 and 11.31 Systems with separate /bin and /usr/bin SCO UNIX 3.2 V4.0 to V4.2 -- The more I learn the better I understand I know nothing. From tytso at mit.edu Thu Feb 25 00:14:11 2021 From: tytso at mit.edu (Theodore Ts'o) Date: Wed, 24 Feb 2021 09:14:11 -0500 Subject: [TUHS] /usr separation In-Reply-To: <3d2d7b46-41e8-92d7-3a7b-d0f3006bc761@spamtrap.tnetconsulting.net> References: <78fede43-bf9b-5a56-5e59-e6ee5a0ee23d@spamtrap.tnetconsulting.net> <3d2d7b46-41e8-92d7-3a7b-d0f3006bc761@spamtrap.tnetconsulting.net> Message-ID: On Tue, Feb 23, 2021 at 01:29:11PM -0700, Grant Taylor via TUHS wrote: > On 2/23/21 11:57 AM, Theodore Ts'o wrote: > > There are two reasons why you might want to have an initramfs. > > Rather than getting into a tit for tat debate, I'll agree that we have both > proposed reasons why you /might/ want to use an initramfs. The operative > words are "you" and "might". Each person probably wants slightly different > things. It's far from one size fits all. 
Sure, I was trying to enumerate the reasons why initramfs, for some combinations of hardware / configurations, might be necessary. > > /boot needs to exist due to limitations to the firmware and/or boot > > loader being used. > > Not necessarily. E.g. one single partition containing /boot and / (root). Sorry, I should have written, "/boot MAY need to exist". > > Other than that, there is no reason why /boot needs to be its own file > > system, except that most installers will create one just because it's > > simpler to use the same approach for all cases, even if it's not needed > > for a particular use case. > > As Steve Gibson is famous for saying; The tyranny of the default. I wouldn't say that; I'd rather say that if you have a huge combination of configurations that you have to test, those configurations which aren't regularly tested will tend to bitrot, or have odd failures in various error cases. The more corners that you have, the more corner cases. And this is where it's all about *who* gets to pay, either via money, or via their labor, to support these various cases. Weren't people just complaining, in other TUHS threads, of "bloat" in Linux? Well, this is how you get bloat. It's just that if it's a feature *you* want, then it's not bloat, but an essential feature, and if it's not provided, you whine mightily. And when you have a large number of enterprise customers paying $$$ to enterprise distribution vendors, each with their own set of essential features, and where *binary* backwards compatibility is considered an essential feature, then that's how you get what others will call "bloat". I would call this the "Tyranny of Gold", as in the reformulated Golden Rule, "The ones with the Gold, makes the Rules". > > P.S. Oh, and if you are using UEFI, you might need to have yet > > another file system which is a Microsoft FAT file system, typically > > mounted as /boot/efi, to keep the UEFI firmware happy.... > > Yes, the file system needs to exist.
But that's part of the firmware, not > the operating system. I also question if that FAT file system needs to be > mounted or not. -- I don't know how GRUB et al. deal with a non-mounted > UEFI file system. GRUB doesn't care. But various system administration utilities that want to manage the UEFI boot menu (as distinct from the GRUB boot menu) need to modify the files that are read by the UEFI firmware. So it's convenient if it's mounted *somewhere*. Also, even if it's not mounted, it's still a partition that has to be around, and one reason to keep it mounted is to avoid a system administrator from saying, "hmmm, what's this unused /dev/sda1 partition? I guess I can use it as an extra swap partition!" And then the system won't boot, and then they call the enterprise distro's help desk, and unnecessary calls into the help desk cost $$$, and distros tend to optimize away unnecessary cost. (Plus lots of unhappy customers who are down, even if it is their own d*mned fault, is not good for business.) > But even if it does need to be mounted, you can still get away with two > partitions; / (root) and /boot/efi. I suspect UEFI does away with the LBA > issue you mentioned. Yes, in another 5 or 10 years, we can probably completely deprecate the MBR-based boot sequence. At which point there will be another series of whiners on TUHS à la the complaint that distributions are dropping support for i386.... But since most TUHS posters aren't paying $$$ to enterprise distributions, most enterprise distro engineers are going to give precisely zero f*cks. But hey, if you want to volunteer to provide the hard work for supporting these configurations to the community distribution, like Debian, those distros will be happy to accept the volunteer help.
:-) - Ted From arnold at skeeve.com Thu Feb 25 02:01:46 2021 From: arnold at skeeve.com (arnold at skeeve.com) Date: Wed, 24 Feb 2021 09:01:46 -0700 Subject: [TUHS] retro-fuse project In-Reply-To: <20210222164738.7381E93D39@minnie.tuhs.org> References: <20210222164738.7381E93D39@minnie.tuhs.org> Message-ID: <202102241601.11OG1klH017560@freefriends.org> Jay Logue via TUHS wrote: > ... The result is a project I call retro-fuse, which is now up on github > for anyone to enjoy (https://github.com/jaylogue/retro-fuse). Very cool! > Currently, retro-fuse only works on linux. But once I get access to my > mac again in a couple weeks, I'll port it to MacOS as well. I also hope > to expand it to support other filesystems as well, such as v7 or the > early BSDs, but we'll see when that happens. I note that Linux already has the 'sysv' kernel module which supports System V short-filename file systems. Enhancing that for V7 and early BSD may be a faster route to having such file system images be mountable. (But perhaps less fun than a FUSE filesystem that uses original Unix code.) An enhanced version of the sysv kernel module that supports filesystems from the AT&T Unix PC / 3B1 is available at https://github.com/dgesswein/s4-3b1-pc7300. Arnold From gtaylor at tnetconsulting.net Thu Feb 25 03:50:03 2021 From: gtaylor at tnetconsulting.net (Grant Taylor) Date: Wed, 24 Feb 2021 10:50:03 -0700 Subject: [TUHS] /usr separation In-Reply-To: References: <78fede43-bf9b-5a56-5e59-e6ee5a0ee23d@spamtrap.tnetconsulting.net> <3d2d7b46-41e8-92d7-3a7b-d0f3006bc761@spamtrap.tnetconsulting.net> Message-ID: <3e41de9a-aaa3-0501-12e4-a99b589192f4@spamtrap.tnetconsulting.net> On 2/24/21 7:14 AM, Theodore Ts'o wrote: > I wouldn't say that; I'd rather say that if you have a huge combination > of configurations that you have to test, those configurations which > aren't regularly tested will tend to bitrot, or have odd failures
The more corners that you have, the more corner cases. Fair enough. > I would call this the "Tyranny of Gold", as in the reformulated Golden > Rule, "The ones with the Gold, makes the Rules". Being a fan of the golden rule, I would not make, much less use, that derivation. I think it completely changes the meaning of the spirit behind the golden rule. I don't fault your logic. I just dislike where it ended up. > GRUB doesn't care. But various system administration utilities that > want to manage the UEFI boot menu (as distinct from the GRUB boot menu) > need to modify the files that are read by the UEFI firmware. Valid distinction. > So it's convenient if it's mounted *somewhere*. Also, even if it's not > mounted, it's still a partition that has to be around, and one reason > to keep it mounted is to avoid a system administrator from saying, > "hmmm, what's this unused /dev/sda1 partition? I guess I can use it > as an extra swap partition!" I seem to recall hearing about a problem where a rogue rm could accidentally wipe out part of the UEFI. Maybe it was the contents of the /boot/efi partition. So, I'd suggest a happy medium of mounting it Read-Only. That way it's known to be used /and/ it's protected from a simple rogue rm. It can relatively easily be re-mounted as Read-Write when necessary. As well as subsequently re-mounted back to Read-Only. > Yes, in another 5 or 10 years, we can probably completely deprecate > the MBR-based boot sequence. At which point there will be another > series of whiners on TUHS ala the complaint that distributions are > dropping support for i386.... I feel like we've already abandoned i386 as in 80386 (or compatible) architecture. I think we now require Pentium (586?) or better. At some point, we'll completely remove 32-bit support from mainstream Linux distributions, thus requiring something from the 21st century.
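The mount-it-read-only compromise suggested above can be expressed directly in /etc/fstab. A sketch only; the UUID is a placeholder for whatever blkid(8) actually reports for the ESP:

```
# EFI system partition: visibly in use, but protected from a stray rm
UUID=ABCD-1234  /boot/efi  vfat  ro,umask=0077  0  1
```

Updating GRUB or the UEFI boot entries then becomes a deliberate act: `mount -o remount,rw /boot/efi` beforehand, and `mount -o remount,ro /boot/efi` when done.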
> But since most TUHS posters aren't paying $$$ to enterprise > distributions, most enterprise distro engineers are going to give > precisely zero f*cks. But hey, if you want to volunteer to provide > the hard work for supporting these configurations to the community > distribution, like Debian, those distros will be happy to accept the > volunteer help. :-) ~chuckle~ -- Grant. . . . unix || die -------------- next part -------------- A non-text attachment was scrubbed... Name: smime.p7s Type: application/pkcs7-signature Size: 4013 bytes Desc: S/MIME Cryptographic Signature URL: From brad at anduin.eldar.org Thu Feb 25 03:40:36 2021 From: brad at anduin.eldar.org (Brad Spencer) Date: Wed, 24 Feb 2021 12:40:36 -0500 Subject: [TUHS] retro-fuse project In-Reply-To: <202102241601.11OG1klH017560@freefriends.org> (arnold@skeeve.com) Message-ID: arnold at skeeve.com writes: > Jay Logue via TUHS wrote: > >> ... The result is a project I call retro-fuse, which is now up on github >> for anyone to enjoy (https://github.com/jaylogue/retro-fuse). > > Very cool! > >> Currently, retro-fuse only works on linux. But once I get access to my >> mac again in a couple weeks, I'll port it to MacOS as well. I also hope >> to expand it to support other filesystems as well, such as v7 or the >> early BSDs, but we'll see when that happens. > > I note that Linux already has the 'sysv' kernel module which supports > System V short-filename file systems. Enhancing that for V7 and early > BSD may be a faster route to having such file system images be mountable. > (But perhaps less fun than a FUSE filesystem that uses original Unix code.) > > An enhanced version of the sysv kernel module that supports filesystems > from the AT&T Unix PC / 3B1 is available at > > https://github.com/dgesswein/s4-3b1-pc7300. > > Arnold NetBSD has v7fs which claims to be able to deal with the 7th Edition filesystem.
-- Brad Spencer - brad at anduin.eldar.org - KC8VKS - http://anduin.eldar.org From tytso at mit.edu Thu Feb 25 04:37:51 2021 From: tytso at mit.edu (Theodore Ts'o) Date: Wed, 24 Feb 2021 13:37:51 -0500 Subject: [TUHS] /usr separation In-Reply-To: <3e41de9a-aaa3-0501-12e4-a99b589192f4@spamtrap.tnetconsulting.net> References: <78fede43-bf9b-5a56-5e59-e6ee5a0ee23d@spamtrap.tnetconsulting.net> <3d2d7b46-41e8-92d7-3a7b-d0f3006bc761@spamtrap.tnetconsulting.net> <3e41de9a-aaa3-0501-12e4-a99b589192f4@spamtrap.tnetconsulting.net> Message-ID: On Wed, Feb 24, 2021 at 10:50:03AM -0700, Grant Taylor via TUHS wrote: > > I would call this the "Tyranny of Gold", as in the reformulated Golden > > Rule, "The ones with the Gold make the Rules". > > Being a fan of the golden rule, I would not make, much less use, that > derivation. I think it completely changes the meaning of the spirit behind > the golden rule. Oh, sure. I agree completely that it's 180 degrees from the original golden rule; it was intended to be a joke. Unfortunately, years of living in a country where the ones with the Gold really do make all of the Rules has gotten me to the point where if I don't laugh at it, I would have to cry.... > I seem to recall hearing about a problem where a rogue rm could accidentally > wipe out part of the UEFI. Maybe it was the contents of the /boot/efi > partition. So, I'd suggest a happy medium of mounting it Read-Only. That > way it's known to be used /and/ it's protected from a simple rogue rm. It > can relatively easily be re-mounted as Read-Write when necessary. As well > as subsequently re-mounted back to Read-Only. So technically it doesn't wipe out UEFI; it will just destroy the ability to boot the system. (e.g., this is where Grub lives, and if you delete it, UEFI will no longer be able to launch Grub, and hence, not boot Linux.) Fortunately, if you have a rescue CD / USB Thumb drive, it's relatively easy to recover from this.
A rogue rm which deletes /bin (even if /bin is a symlink to /usr/bin, all of the shell scripts and /etc/passwd entries probably still refer to /bin/sh) is going to make the system similarly unbootable. As far as making a system more robust against rogue rm's, I really like the scheme used by ChromeOS, where the entire file system is not only read-only, but protected by a cryptographic Merkle Tree such that if malware attempts to modify it, the system will crash. This is combined with firmware which will only load a kernel with a valid digital signature, and the user data is stored on an encrypted file system mounted on /mnt/stateful_partition and it is the only file system mounted read/write on a ChromeOS system. It violates a lot of expectations about where files should live on a "normal" Unix or Linux system, but it's definitely way more safe and secure. > I feel like we've already abandoned i386 as in 80386 (or compatible) > architecture. I think we now require Pentium (586?) or better. At some > point, we'll completely remove 32-bit support from mainstream Linux > distributions, thus requiring something from the 21st century. For now, as far as I know, Debian still supports a 486 (or i386 with an i387 co-processor, which was my first Linux system). But yes, it is very likely, absent people showing up to volunteer to support 32-bit userspace at Debian (e.g., ongoing security updates, support for the i386 build farm, reporting and triaging build failures of packages on i386, etc.), that the i386 arch will probably get dropped after the Debian Bullseye release (which will probably happen sometime in mid-2021 if I had to guess). I'm not sure there are any 486's around any more, and it's likely most uses of systems with i386 binaries are on 64-bit processors running in 32-bit mode, so 486 vs 586 is probably not all that important in the grand scheme of things.
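The read-only-plus-Merkle-tree scheme Ted describes above can be illustrated with a toy shell sketch. This is only the principle, not dm-verity's real on-disk hash-tree format, and all file names here are made up: hash every data block, keep the hashes in a parent block, and trust nothing but the hash of that parent, so a tampered block can never vouch for itself.

```shell
#!/bin/sh
# Toy illustration of the idea: hash each data block, keep the hashes in
# a parent block, and treat only the hash of that parent as trusted.
set -e
dir=$(mktemp -d)
cd "$dir"

# Fake a filesystem image made of 4KiB blocks.
printf 'some immutable system file contents\n' > image
split -b 4096 image block.

# Parent block: the checksum of every data block, in order.
sha256sum block.* > parent
root=$(sha256sum parent | awk '{print $1}')
echo "trusted root: $root"

# Verification pass: recompute everything and compare to the trusted root.
if [ "$(sha256sum parent | awk '{print $1}')" = "$root" ] \
   && sha256sum -c parent >/dev/null; then
    echo verified
fi

# "Malware" flips a byte in a data block; verification now fails.
printf 'X' | dd of=block.aa bs=1 seek=0 conv=notrunc 2>/dev/null
sha256sum -c parent >/dev/null 2>&1 || echo "tamper detected"
```

In the real scheme the root hash is anchored outside the filesystem (in the signed kernel image), which is what turns "detect" into "refuse to run".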
- Ted From gtaylor at tnetconsulting.net Thu Feb 25 04:48:28 2021 From: gtaylor at tnetconsulting.net (Grant Taylor) Date: Wed, 24 Feb 2021 11:48:28 -0700 Subject: [TUHS] /usr separation In-Reply-To: References: <78fede43-bf9b-5a56-5e59-e6ee5a0ee23d@spamtrap.tnetconsulting.net> <3d2d7b46-41e8-92d7-3a7b-d0f3006bc761@spamtrap.tnetconsulting.net> <3e41de9a-aaa3-0501-12e4-a99b589192f4@spamtrap.tnetconsulting.net> Message-ID: <72c21fbb-7477-9b42-741b-88da1ae8919f@spamtrap.tnetconsulting.net> On 2/24/21 11:37 AM, Theodore Ts'o wrote: > Oh, sure. I agree completely that it's 180 degrees from the original > golden rule; it was intended to be a joke. Unfortunately, years of > living in a country where the ones with the Gold really do make all > of the Rules has gotten me to the point where if I don't laugh at it, > I would have to cry.... When colleagues would say "you would think" or "I've been thinking" or the like while dealing with IBM shenanigans, I'd answer with "We don't do that! The logo does it for us!" Again, laugh, lest I cry. > So technically it doesn't wipe out UEFI; it will just destroy the > ability to boot the system. (e.g., this is where Grub lives, and if > you delete it, UEFI will no longer be able to launch Grub, and hence, > not boot Linux.) ACK Either way, it causes someone to have a Bad Day™. > Fortunately, if you have a rescue CD / USB Thumb drive, it's relatively > easy to recover from this. And now we're back towards the start of this (sub)thread of a system being able to bootstrap itself or not. > A rogue rm which deletes /bin (even if /bin is a symlink to /usr/bin, > all of the shell scripts and /etc/passwd entries probably still refer > to /bin/sh) is going to make the system similarly unbootable. Agreed. Though I think there is a difference between containing the damage to the OS and going beyond it to damage the firmware configuration.
> As far as making a system more robust against rogue rm's, I really > like the scheme used by ChromeOS, where the entire file system is > not only read-only, but protected by a cryptographic Merkle Tree > such that if malware attempts to modify it, the system will crash. > This is combined with firmware which will only load a kernel with a > valid digital signature, and the user data is stored on an encrypted > file system mounted on /mnt/stateful_partition and it is the only > file system mounted read/write on a ChromeOS system. It violates > a lot of expectations about where files should live on a "normal" > Unix or Linux system, but it's definitely way more safe and secure. I've not looked at Chrome OS or how it does things because of my dislike for actually /using/ it. However, it sounds like it's worth popping the hood and looking at things. > For now, as far as I know, Debian still supports a 486 (or i386 with > an i387 co-processor, which was my first Linux system). But yes, > it is very likely, absent people showing up to volunteer to support > 32-bit userspace at Debian (e.g., ongoing security updates, support > for the i386 build farm, reporting and triaging build failures of > packages on i386, etc.), that the i386 arch will probably get dropped > after the Debian Bullseye release (which will probably happen sometime > in mid-2021 if I had to guess). I don't know how quickly 32-bit will disappear. I think the embedded market and other non-i386 32-bit platforms will likely keep 32-bit code around for a while yet. At least user space application code. Maybe the i386 kernel code will languish ~> bit rot. Or worse, get in the way of maintaining 64-bit code and thereby be ejected.
unix || die -------------- next part -------------- A non-text attachment was scrubbed... Name: smime.p7s Type: application/pkcs7-signature Size: 4013 bytes Desc: S/MIME Cryptographic Signature URL: From norman at oclsc.org Thu Feb 25 05:38:00 2021 From: norman at oclsc.org (Norman Wilson) Date: Wed, 24 Feb 2021 14:38:00 -0500 Subject: [TUHS] Proliferation of options is great simplification of pipes, really? Message-ID: <1614195484.3256.for-standards-violators@oclsc.org> To fill out the historical record, the earliest doctype I know of was a shell (not rc) script. From my basement heater that happens to run 10/e: b$ man doctype | uniq DOCTYPE(1) DOCTYPE(1) NAME doctype - guess command line for formatting a document SYNOPSIS doctype [ option ... ] [ file ] DESCRIPTION Doctype guesses and prints on the standard output the command line for printing a document that uses troff(1), related preprocessors like eqn(1), and the ms(6) and mm macro packages. Option -n invokes nroff instead of troff. Other options are passed to troff. EXAMPLES eval `doctype chapter.?` | apsend Typeset files named chapter.0, chapter.1, ... SEE ALSO troff(1), eqn(1), tbl(1), refer(1), prefer(1), pic(1), ideal(1), grap(1), ped(9.1), mcs(6), ms(6), man(6) BUGS It's pretty dumb about guessing the proper macro package. Page 1 Tenth Edition (printed 2/24/2021) doctype(1) is in the 8/e manual, so it existed in early 1985; I bet it's actually older than that. The manual page is on the V8 tape, but, oddly, not the program; neither is it in the V10 pseudo-tape I cobbled together for Warren long ago. I'm not sure why not. The version in rc is, of course, a B-movie remake of the original. Norman Wilson Toronto ON From woods at robohack.ca Thu Feb 25 06:05:38 2021 From: woods at robohack.ca (Greg A.
Woods) Date: Wed, 24 Feb 2021 12:05:38 -0800 Subject: [TUHS] Abstractions In-Reply-To: References: <91696417-3233-232e-e1f4-3cb914202801@technologists.com> Message-ID: At Tue, 23 Feb 2021 20:20:55 -0700, Warner Losh wrote: Subject: Re: [TUHS] Abstractions > > I booted a FreeBSD/i386 4 system, sans compilers and a few other things, > off 16MB CF card in the early 2000s. I did both static (one binary) and > dynamic and found dynamic worked a lot better for the embedded system... I guess it may depend on your measure of "better"? With a single static-linked binary on a modern demand paged system with shared text pages, the effect is that almost all instructions for any and all programs (and of course all libraries) are almost always paged in at any given time. The result is that program startup requires so few page-in faults that it appears to happen instantaneously. My little i386 image feels faster at the command line (e.g. running on an old Soekris box, even when the root filesystem is on a rather slow flash drive) than on any of the fastest non-static-linked systems I've ever used because of this -- that is of course until it is asked to do any actual computing or other I/O operations. :-) So, in an embedded system there will be many influencing factors, including such as how many exec()s there are during normal operations. For machines with oodles of memory and very fast and large SSDs (and using any kernel with a decently tuneable paging system) one can simply static-link all binaries separately and achieve similar results, at least for programs that are run relatively often. For example the build times of a full system build of, e.g. NetBSD, with a fully static-linked host system and toolchain are remarkably lower than on a fully dynamic-linked system since all the extra processing (and esp. any extra I/Os) done by the "stupid" dynamic linker (i.e. the one that's ubiquitous in modern unixy systems) are completely and forever eliminated. 
I haven't even measured the difference in years now because I find fully dynamic-linked systems too painful to use for intensive development of large systems. Taking this to the opposite extreme, one need only use modern macOS on a machine with an older spinning-rust hard drive that has a loud seek arm to hear and feel how incredibly slow even the simplest tasks can be, e.g. typing "man man" after a reboot or a few days of not running "man". This is because on top of the "stupid" dynamic linker that's needed to start the "man" program, there's also a huge stinking pile of additional wrappers that has been added to all of the toolchain command-line tools, requiring even more gratuitous I/O operations (as well as running perhaps millions more gratuitous instructions) for infrequent invocations (luckily these wrappers seem to cache some of the most expensive overhead). (note: "man" is not in the same boat as, e.g. the toolchain progs, and I'm not quite sure why it churns so much on first invocations) My little static-linked i386 system can run "man man" several (many?) thousand times before my old iMac can display even the first line of output. And that's for a simple small program -- just imagine the immense atrocities necessary to run a program that links to several dozen libraries (e.g. the typical GUI application like a web browser, with the saving grace that we don't usually restart browsers in a loop like we restart compilers; but, e.g. /usr/bin/php on macos links to 21 libraries, and even the linker (ld) needs 7 dynamic libraries). BTW, a non-stupid dynamic linker would work the way Multics did (and to some extent I think that's more how dynamic linking worked in AT&T UNIX (SysVr3.2) on the 3B2s), but such things are so much more complicated in a flat address space.
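Greg's point about dynamic-linker startup cost is easy to probe on a typical glibc-based Linux box. This is a rough probe rather than a benchmark, and the exact counts and timings will differ from system to system:

```shell
#!/bin/sh
# How many shared objects must the runtime linker locate and map just to
# start the shell?  Each ldd line is roughly one object (plus the vdso).
ldd /bin/sh

# glibc's runtime linker can report its own startup work (relocations,
# time spent in the loader) for any dynamic binary:
LD_DEBUG=statistics /bin/true 2>&1 | head -n 20

# The per-exec cost adds up when a build fork+execs many small programs:
time sh -c 'i=0; while [ $i -lt 200 ]; do /bin/true; i=$((i+1)); done'
```

A static-linked /bin/true skips all of that loader work on every exec, which is the effect Greg describes for whole-system builds.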
Pre-binding, such as I think macOS and IRIX do (and maybe can be done with the most modern binutils), is somewhat like Multics "bound segments" (though still less flexible and perhaps less performant). -- Greg A. Woods Kelowna, BC +1 250 762-7675 RoboHack Planix, Inc. Avoncote Farms -------------- next part -------------- A non-text attachment was scrubbed... Name: not available Type: application/pgp-signature Size: 195 bytes Desc: OpenPGP Digital Signature URL: From usotsuki at buric.co Thu Feb 25 06:25:52 2021 From: usotsuki at buric.co (Steve Nickolas) Date: Wed, 24 Feb 2021 15:25:52 -0500 (EST) Subject: [TUHS] /usr separation In-Reply-To: References: <78fede43-bf9b-5a56-5e59-e6ee5a0ee23d@spamtrap.tnetconsulting.net> <3d2d7b46-41e8-92d7-3a7b-d0f3006bc761@spamtrap.tnetconsulting.net> <3e41de9a-aaa3-0501-12e4-a99b589192f4@spamtrap.tnetconsulting.net> Message-ID: On Wed, 24 Feb 2021, Theodore Ts'o wrote: > On Wed, Feb 24, 2021 at 10:50:03AM -0700, Grant Taylor via TUHS wrote: >> Being a fan of the golden rule, I would not make, much less use, that >> derivation. I think it completely changes the meaning of the spirit behind >> the golden rule. > > Oh, sure. I agree completely that it's 180 degrees from the original > golden rule; it was intended to be a joke. Unfortunately, years of > living in a country where the ones with the Gold really do make all of > the Rules has gotten me to the point where if I don't laugh at it, I > would have to cry.... I first heard this form used in the movie "Aladdin" (the 1992 Disney one, with Robin Williams). >> I seem to recall hearing about a problem where a rogue rm could accidentally >> wipe out part of the UEFI. Maybe it was the contents of the /boot/efi >> partition. So, I'd suggest a happy medium of mounting it Read-Only. That >> way it's known to be used /and/ it's protected from a simple rogue rm. It >> can relatively easily be re-mounted as Read-Write when necessary. As well >> as subsequently re-mounted back to Read-Only.
> As far as making a system more robust against rogue rm's, I really > like the scheme used by ChromeOS, where the entire file system is not only > read-only, but protected by a cryptographic Merkle Tree such that if > malware attempts to modify it, the system will crash. This is > combined with firmware which will only load a kernel with a valid > digital signature, and the user data is stored on an encrypted file > system mounted on /mnt/stateful_partition and it is the only file > system mounted read/write on a ChromeOS system. It violates a lot of > expectations about where files should live on a "normal" Unix or Linux > system, but it's definitely way more safe and secure. It may not be as much of a protection, but I replaced the system rm on my Debian with one based on 4.4BSD (since I already had the code lying around) to which I added a bit of protection against attempts to "rm -rf /" after a worm got in and ran an obfuscated version of that...thankfully it didn't run as the superuser. I do get occasional "invalid switch" errors from it while using apt, so it probably uses a gnuism (since afaict, the code I used was strictly conformant to Posix). Otherwise, it hasn't caused any issues. -uso. From steffen at sdaoden.eu Thu Feb 25 08:08:36 2021 From: steffen at sdaoden.eu (Steffen Nurpmeso) Date: Wed, 24 Feb 2021 23:08:36 +0100 Subject: [TUHS] /usr separation In-Reply-To: References: <78fede43-bf9b-5a56-5e59-e6ee5a0ee23d@spamtrap.tnetconsulting.net> <3d2d7b46-41e8-92d7-3a7b-d0f3006bc761@spamtrap.tnetconsulting.net> <3e41de9a-aaa3-0501-12e4-a99b589192f4@spamtrap.tnetconsulting.net> Message-ID: <20210224220836.IpoLL%steffen@sdaoden.eu> Steve Nickolas wrote in : |On Wed, 24 Feb 2021, Theodore Ts'o wrote: ...
|> As far as making a system more robust against rogue rm's, I really |> like the scheme used by ChromeOS, where the entire file system is not only |> read-only, but protected by a cryptographic Merkle Tree such that if |> malware attempts to modify it, the system will crash. This is |> combined with firmware which will only load a kernel with a valid |> digital signature, and the user data is stored on an encrypted file |> system mounted on /mnt/stateful_partition and it is the only file |> system mounted read/write on a ChromeOS system. It violates a lot of |> expectations about where files should live on a "normal" Unix or Linux |> system, but it's definitely way more safe and secure. | |It may not be as much of a protection, but I replaced the system rm on my |Debian with one based on 4.4BSD (since I already had the code lying |around) to which I added a bit of protection against attempts to "rm -rf |/" after a worm got in and ran an obfuscated version of that...thankfully |it didn't run as the superuser. | |I do get occasional "invalid switch" errors from it while using apt, so it |probably uses a gnuism (since afaict, the code I used was strictly |conformant to Posix). Otherwise, it hasn't caused any issues. Just this week i finished my move from BSD compatibility to plain Linux-only (which you seem to run) for my "web" and my "web with credentials" user accounts; the accounts are gone now, instead i, as "i", execute the corresponding overlays. pstree for example now says [sudo..] box-browse.sh---box-browse.sh---unshare--- su---.box-browse-gui-+-firefox-bin-+-Web Content when i browse totally boxed and unprivileged. (Still not CPU and memory restricted, but other than that.) The / root is the low level of an overlayfs, the upper level is a tmpfs that may not use more than 5 percent of RAM. It has its own minimal /dev (with audio even) and has read/write access to one shared folder.
Ditto with credentials, but that runs in the global network namespace, whereas the unprivileged one even runs isolated from that. It is a bit messy if you want to be portable to Linux distributions which use busybox unshare etc., because there you need to use chroot(1) yourself, and therefore mount /proc also yourself thereafter (ie unshare(1)'s --mount-proc is effectively useless). Also it would be nice to be able to execute a few commands before you switch aka map user and group IDs in the containment (if you do so). But for open source software the answer there usually is "shut up and hack", thus. Of course with this approach the containers need to live in the same X11 session, therefore the one mounts only /tmp/.X11-unix (it is tremendous that Linux can "mount" a normal directory now!), the other just the plain /tmp. So an rm -rf could destroy the shared folder (it lives on a filesystem with snapshot support though). For the credential account it could even wipe /tmp/ and the --bind mounted .mozilla encfs that is in the containment there (but ditto, plus specific backups). I have not looked at the overlayfs code, but i think the whiteouts of the upper layer will be saved in the "work" layer, so an rm -rf / could possibly even fail to finish because the 5 percent RAM limit could be exceeded earlier? Happy to (un)share ~150 lines of sh(1) script. Yes Mr. Cole, thanks for the work on overlay (less so union) filesystems, it is tremendous! (P.S.: that .box-browse-gui.sh is a condom to prevent firefox-bin as compiled by Mozilla locking me out of the system. Have seen this twice already when browsing serious German and Austrian magazines, needing a reboot. So i now have my browser session protected by a shell guard, and my window manager menu has a "TOUCH" entry. If the timestamp of the touch file becomes older than 300 seconds i hear the first gong of Big Ben and need to touch .. after the fifth a "kill -TERM -1" happens.)
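For anyone who wants to poke at the approach Steffen describes, here is a heavily stripped-down sketch of the core of it: one unprivileged user+mount namespace with a size-capped tmpfs as the writable upper layer of an overlay over /. The 5% cap and the /mnt target are illustrative choices, and unprivileged user namespaces and unprivileged overlayfs are disabled on many kernels, so the sketch degrades to a message instead of failing:

```shell
#!/bin/sh
# Stripped-down sketch of the containment above: new user+mount namespace,
# tmpfs capped at 5% of RAM as the overlay upper layer, real root as the
# lower layer.  Needs util-linux unshare(1) and a kernel that permits
# unprivileged user namespaces and unprivileged overlayfs mounts.
msg=$(unshare --user --map-root-user --mount sh -c '
        upper=$(mktemp -d) &&
        mount -t tmpfs -o size=5% tmpfs "$upper" &&
        mkdir "$upper/up" "$upper/work" &&
        mount -t overlay overlay \
              -o "lowerdir=/,upperdir=$upper/up,workdir=$upper/work" /mnt &&
        echo contained
      ' 2>/dev/null) || msg="containment unavailable on this kernel"
echo "$msg"
```

Once the overlay is mounted, an interactive shell started inside the namespace sees a throwaway copy of /: writes and rm's land in the tmpfs upper layer and vanish with the namespace, which is exactly the rm -rf damage containment discussed in this thread.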
--steffen | |Der Kragenbaer, The moon bear, |der holt sich munter he cheerfully and one by one |einen nach dem anderen runter wa.ks himself off |(By Robert Gernhardt) From steffen at sdaoden.eu Thu Feb 25 08:20:27 2021 From: steffen at sdaoden.eu (Steffen Nurpmeso) Date: Wed, 24 Feb 2021 23:20:27 +0100 Subject: [TUHS] Seeking wisdom from Unix Greybeards In-Reply-To: References: <9c1595cc-54a1-8af9-0c2d-083cb04dd97c@spamtrap.tnetconsulting.net> <20201125172255.83D252146F@orac.inputplus.co.uk> <20201126145134.GB394251@mit.edu> <20201126214825.bDDjr%steffen@sdaoden.eu> Message-ID: <20210224222027.kghDx%steffen@sdaoden.eu> Greg A. Woods wrote in : |At Thu, 26 Nov 2020 22:48:25 +0100, Steffen Nurpmeso \ |wrote: |Subject: Re: [TUHS] Seeking wisdom from Unix Greybeards |> |> ANSI escape sequences aka ISO 6429 came via ECMA-48 i have |> learned, and that appeared first in 1976 (that via Wikipedia). | |Wikipedia is a bit misleading here. This is one case where ANSI and |ECMA worked together quite closely (and another example of where ISO |took the result more or less directly, though on a different schedule). | |As it happens one can read about it much more directly from the original |sources. | |First we can find that FIPS-86 is "in whole" ANSI-X3.64-1979 | | https://nvlpubs.nist.gov/nistpubs/Legacy/FIPS/fipspub86-1981.pdf | |Thus giving us "free" access to the original ANSI standard in a "new" |digital (PDF) form. Here's the full copy of ANSI-X3.64-1979 verbatim |(including cover pages): | | https://nvlpubs.nist.gov/nistpubs/Legacy/FIPS/fipspub86.pdf | |See in particular "Appendix H" in the latter. Interesting that ANSI did not include the colour specifications. And that with Jimmy Carter. What is so interesting when looking at this: how typewriter output surrounded us decades ago, it was everywhere, and taken for granted (i was born in 1972), and how long that time has passed.
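The sequences under discussion are easy to try from any shell: in ECMA-48/X3.64 the Control Sequence Introducer is ESC [, and SGR (Select Graphic Rendition) parameter strings end in the letter m. The colour parameters (SGR 30-37 foreground, 40-47 background) are among those noted above as absent from the ANSI text:

```shell
#!/bin/sh
# Raw ECMA-48 SGR sequences: ESC [ <params> m.  SGR 1 = bold, 4 = underline,
# 31 = red foreground, 0 = reset.  Whether they render depends on the terminal.
printf '\033[1mbold\033[0m \033[4munderline\033[0m \033[31mred\033[0m\n'

# The curses/terminfo layer mentioned above emits whatever sequence the
# current terminal actually understands instead of hard-coding ECMA-48:
tput bold 2>/dev/null; printf 'bold via terminfo'; tput sgr0 2>/dev/null; echo
```

On a VT100-descended terminal (which today means nearly everything) the raw printf and the tput route produce the same bytes, which is Greg's point about emitting anything but ECMA-48 being rather pointless now.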
|X3.64 also gives a good list of all the people and organisations which |cooperated to create this standard (though interestingly only mentions |ECMA-48 in that last appendix). | |There is also corroborating evidence of this cooperation in the preface |("BRIEF HISTORY") to the 2nd Edition of ECMA-48: | | https://www.ecma-international.org/wp-content/uploads/ECMA-48_2nd_edition_august_1979.pdf | |Note though that the link to the 1st Edition of ECMA-48 here is wrong, so |as yet I've not seen if there's any history given in that 1st edition: | | https://www.ecma-international.org/publications-and-standards/standards/ecma-48/ | |As an aside, the DEC VT100 terminal was an early (it came out a year |before X3.64) and relatively complete (for a video terminal application) |implementation of X3.64. | |BTW, I would in general agree with Steffen that implementing an |application to output anything but X3.64/ECMA-48/ISO-6429 is rather |pointless these days, _unless_ one wants to take advantage of any |particular implementation's additional "private" features, and/or work |around any annoying but inevitable bugs in various implementations. |Also the API provided by, e.g. libcurses, often makes for much easier |programming than direct use of escape sequences, or invention and |maintenance of one's own API. | |-- | Greg A. Woods | |Kelowna, BC +1 250 762-7675 RoboHack
Avoncote Farms --End of --steffen | |Der Kragenbaer, The moon bear, |der holt sich munter he cheerfully and one by one |einen nach dem anderen runter wa.ks himself off |(By Robert Gernhardt) From tytso at mit.edu Thu Feb 25 13:38:13 2021 From: tytso at mit.edu (Theodore Ts'o) Date: Wed, 24 Feb 2021 22:38:13 -0500 Subject: [TUHS] /usr separation In-Reply-To: <72c21fbb-7477-9b42-741b-88da1ae8919f@spamtrap.tnetconsulting.net> References: <78fede43-bf9b-5a56-5e59-e6ee5a0ee23d@spamtrap.tnetconsulting.net> <3d2d7b46-41e8-92d7-3a7b-d0f3006bc761@spamtrap.tnetconsulting.net> <3e41de9a-aaa3-0501-12e4-a99b589192f4@spamtrap.tnetconsulting.net> <72c21fbb-7477-9b42-741b-88da1ae8919f@spamtrap.tnetconsulting.net> Message-ID: On Wed, Feb 24, 2021 at 11:48:28AM -0700, Grant Taylor via TUHS wrote: > I've not looked at Chrome OS or how it does things because of my dislike for > actually /using/ it. However, it sounds like it's worth popping the hood > and looking at things. If you don't like using Chromebooks, the same scheme is used for Google's Container Optimized OS (intended for use in cloud VM's running docker images): Container-Optimized OS is an operating system image for your Compute Engine VMs that is optimized for running Docker containers. With Container-Optimized OS, you can bring up your Docker containers on Google Cloud Platform quickly, efficiently, and securely. Container-Optimized OS is maintained by Google and is based on the open source Chromium OS project. https://cloud.google.com/container-optimized-os/docs Cheers, - Ted From dave at horsfall.org Fri Feb 26 08:45:49 2021 From: dave at horsfall.org (Dave Horsfall) Date: Fri, 26 Feb 2021 09:45:49 +1100 (EST) Subject: [TUHS] Macs and future unix derivatives In-Reply-To: <2d8643d8ab50f65d14db0fb53933a148@firemail.de> References: <2d8643d8ab50f65d14db0fb53933a148@firemail.de> Message-ID: On Mon, 8 Feb 2021, Thomas Paulsen wrote: > I'm a UNIX man since the days of Sys4R4. 
Since then I run Linux and > nothing else than Linux, no dual, triple, quadruple boot into anything > else. To be honest I don't like Apple because it's an elitist system: > People pay at least one thousand bucks just for the feel like being part > of a superior elite. Under Sys4R4 me and 50 others shared one big > resource: no place for any elitist movement. I've been a Unix bod since Edition 5, and I run a few systems (Mac, FreeBSD, Penguin); I don't consider myself to be an elitist (unless you consider "Anything But Windoze" to be elitist). I also paid only about $300 for my 2nd-hand MacBook Pro, and it works fine. -- Dave From dave at horsfall.org Sat Feb 27 12:47:30 2021 From: dave at horsfall.org (Dave Horsfall) Date: Sat, 27 Feb 2021 13:47:30 +1100 (EST) Subject: [TUHS] Abstractions In-Reply-To: <20210222001344.GA26914@minnie.tuhs.org> References: <20210222001344.GA26914@minnie.tuhs.org> Message-ID: On Mon, 22 Feb 2021, Warren Toomey wrote: >> That's how we ran our RK-05 11/40s since Ed 5... Good fun writing a >> DJ-11 driver from the DH-11 source; even more fun when I wrote a UT-200 >> driver from the manual alone (I'm sure that "ei.c" is Out There >> Somewhere), junking IanJ's driver. > > https://minnie.tuhs.org/cgi-bin/utree.pl?file=AUSAM/sys/dmr/ei.c Nah; that's the IanJ rubbish. Mine was written from scratch (and actually worked) but I don't think that it ever left UNSW. -- Dave From stu at remphrey.net Sat Feb 27 18:54:11 2021 From: stu at remphrey.net (Stuart Remphrey) Date: Sat, 27 Feb 2021 16:54:11 +0800 Subject: [TUHS] FreeBSD behind the times? (was: Favorite unix design principles?) 
In-Reply-To: <20210206025553.GY13701@mcvoy.com> References: <20210130222854.GN4227@mcvoy.com> <20210130231119.GA33905@eureka.lemis.com> <20210131022500.GU4227@mcvoy.com> <9504e27d-d976-9681-6b97-aa87d124fc43@gmail.com> <20210206025553.GY13701@mcvoy.com> Message-ID: Hi Larry et al, Just curious about this: was there any feedback from Jeff Bonwick and/or Bill Moore re the ARC -vs- page cache? Or would any of the design notes document the reasoning behind the decision? Surely it must have come up and been justified or got an exception in the Solaris architecture review (SARC "20 Q's", wasn't it called?) Since AFAICS it affected Solaris O/S interface (former-)guarantees. Although those notes are probably lost / inaccessible now... There's also the monthly OpenZFS leadership meeting, Matt Ahrens et al are in there: I wonder if they would have access to some of the original reasoning; how it was justified / why it was permitted. Dave, btw: check out the high-level structure of ZFS metadata -- every block is checksummed, and the checksum is kept in the parent block (i.e. *not* kept together with the data), applicable to both data and metadata blocks, and at least two copies are kept of metadata (but you can request more depending on your paranoia, see also "ditto" blocks). Compression is optional at the filesystem level (not held at the pool aka volume level; a pool may contain multiple filesystems). When compression is enabled it affects newly created files, and the same applies if it is unset or changed to another algorithm; the filesystem handles a mix of files (blocks, even; I forget offhand) existing with various or no compression. Rgds, Stuart.
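The two properties Stuart highlights, the checksum living in the parent block pointer and the redundant "ditto" copies, combine into ZFS's self-healing reads. The idea can be mimicked in a few lines of shell; this is a toy model with made-up file names, since real block pointers are binary structures inside the pool:

```shell
#!/bin/sh
# Toy model: the parent record holds the child's checksum (a block cannot
# vouch for itself), and a second "ditto" copy lets a bad read be healed.
set -e
dir=$(mktemp -d); cd "$dir"

printf 'superblock-ish metadata\n' > copy1
cp copy1 copy2
sum=$(sha256sum copy1 | awk '{print $1}')   # checksum lives in the parent

# Corrupt the first copy on "disk".
printf 'garbage\n' > copy1

# Read path: verify against the parent's checksum, heal from the ditto copy.
for c in copy1 copy2; do
    if [ "$(sha256sum "$c" | awk '{print $1}')" = "$sum" ]; then
        [ "$c" = copy1 ] || { cp "$c" copy1; echo "healed copy1 from $c"; }
        cat "$c"
        break
    fi
    echo "$c failed checksum"
done
```

Because the expected checksum comes from above rather than from the block itself, a stale or corrupt block can never pass as good, which is the property that makes the ditto-copy repair trustworthy.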
On Sat, 6 Feb 2021 at 10:56, Larry McVoy wrote: > On Fri, Feb 05, 2021 at 06:22:32PM -0800, Rico Pajarola wrote: > > On Fri, Feb 5, 2021 at 12:51 PM Dave Horsfall wrote: > > > Thanks; I'd heard that ZFS was a compressed file system, so I stopped > > > right there (I had lots of experience in recovering from corrupted > RK05s, > > > and didn't need any more trouble). > > > > > That's funny, for me this is the main reason to use ZFS... What really > sets > > ZFS apart from everything else is the lack of trouble and its resilience > to > > failures. > > I'm gonna call Bill tomorrow and get his take again, that's Bill Moore, > one of the two main guys who did ZFS. > > This whole thread is sort of silly. There are the users of ZFS who love > it for what it does for them. I have no argument with them. Then there > are the much smaller, depressingly so, group of people who care about OS > design that think ZFS took a step backwards. > > I think Dennis might have stepped in here, if he was still with us, and > had some words. > > I think Dennis would have brought us back to let's talk about the kernel > and what is right. ZFS is useful, no doubt, but it is not right from > a kernel guy's point of view. > > I miss Dennis. > -------------- next part -------------- An HTML attachment was scrubbed... URL: